The European Union (EU) is setting stricter rules on the use of Artificial Intelligence (AI). These measures were outlined in new guidelines from the European Commission, aimed at ensuring responsible use of AI in workplaces and online services.
The new restrictions are part of the EU’s Artificial Intelligence Act, which became legally binding last year. While the full law will be enforced by August 2026, some rules, including bans on specific AI applications, took effect on February 2.
The guidelines ban AI practices that are manipulative, discriminatory, or invasive. Some of the prohibited uses include:
- Emotion Tracking at Work: Employers can no longer use AI-powered webcams or voice recognition to monitor their employees’ emotions.
- Deceptive AI Tactics: Online services are barred from using AI to manipulate users into making significant financial decisions.
- Exploitation of Vulnerable Groups: AI must not take advantage of people based on age, disability, or financial situation.
- Social Scoring Systems: AI can’t use personal data such as race or nationality to rank people, including in decisions about social welfare.
- AI-Powered Predictive Policing: Law enforcement agencies may not rely on biometric data alone to predict criminal behavior; such predictions must be supported by additional evidence.
- AI Facial Recognition in Public Spaces: Law enforcement use of AI-powered facial recognition through mobile CCTV is banned, except in rare cases with strict safeguards.
EU countries have until August 2 to appoint market surveillance authorities to oversee compliance with these rules. Companies that violate the AI Act could face fines ranging from 1.5% to 7% of their global revenue.
The EU’s new AI rules are designed to protect people from unfair monitoring and manipulation. By setting clear boundaries, the EU aims to ensure that AI is used responsibly and ethically.