
New EU AI Regulations: 4 Data Points

In his 2010 dystopian novel Super Sad True Love Story, Gary Shteyngart portrays a near future in which “credit poles” on city sidewalks continually ping pedestrians with updates on their credit scores. All citizens also wear pendants equipped with “RateMe Plus” technology, a crowd-sourcing algorithm that continually ranks people based on their credit scores, others’ judgments of their physical appearance, and other data.


While that near future feels uncomfortably like the present in 2021, the European Union (EU) is attempting to pump the brakes on runaway algorithms with its new proposal for rules governing artificial intelligence (AI).


To repeat: these rules are proposed. They are also far-reaching and will take time – at least months, possibly years – to be finalized. The proposal “faces a long road – and potential changes – before it becomes law,” emphasizes The Wall Street Journal, given that these types of laws require approval by the European Council (which represents 27 EU member countries) and the elected European Parliament.


The proposal includes a set of AI-specific rules divided into 85 sections (or “articles”) that are detailed over 108 pages. It also calls for adjustments and amendments to existing legislative acts to maintain consistency between the new AI rules and current data security and privacy rules. This harmonization is in line with the EU’s comprehensive data strategy. The proposal signals that AI rules are definitely on the horizon. From a practical standpoint, four points about the EU’s document are noteworthy:

  1. The proposed rules would apply to providers of AI systems and capabilities – regardless of where those companies are based (inside or outside the EU).
  2. The rules would also apply to users of AI systems and data who are based in the EU, as well as to both providers and users based anywhere when the output is used in the EU.
  3. As has been the case with GDPR, the EU’s AI rules likely will be used by other global (and state) policymakers to shape their own AI regulations.
  4. As the proposal progresses toward approval, other EU data privacy and security rules will be changed to ensure consistency. These changes may have implications for a much broader range of outsourcers and third parties – including those that do not currently use AI.


It is also important to note that the EU proposal bans certain AI applications – those that “manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behavior of minors) and systems that allow ‘social scoring’ by governments” – while calling for stricter restrictions on “high-risk” AI systems. The high-risk systems are described in fairly broad terms, including AI technology used in the following areas:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents); and
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
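For TPRM teams beginning to inventory where vendors use AI, it can help to see the proposal’s tiers roughed out in code. The sketch below is purely illustrative: the tier names, area labels and classify_use_case helper are shorthand for the categories listed above, not definitions taken from the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely following the proposal's framing."""
    PROHIBITED = "prohibited"              # e.g. government 'social scoring'
    HIGH_RISK = "high-risk"                # the eight areas listed above
    LIMITED_OR_MINIMAL = "limited-or-minimal"

# Paraphrased labels for the high-risk areas named in the proposal.
HIGH_RISK_AREAS = {
    "critical infrastructure",
    "education and vocational training",
    "product safety components",
    "employment and worker management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def classify_use_case(area: str, social_scoring_by_government: bool = False) -> RiskTier:
    """Rough triage of a vendor AI use case into a risk tier (illustrative only)."""
    if social_scoring_by_government:
        return RiskTier.PROHIBITED
    if area.lower().strip() in HIGH_RISK_AREAS:
        return RiskTier.HIGH_RISK
    return RiskTier.LIMITED_OR_MINIMAL

print(classify_use_case("Employment and worker management"))  # RiskTier.HIGH_RISK
```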


“On artificial intelligence, trust is a must, not a nice to have,” the European Commission’s Margrethe Vestager said in a statement.  “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”


Vestager currently serves as executive vice president of the European Commission for A Europe Fit for the Digital Age. She has also served as the European Commissioner for Competition since 2014, a role in which she has gone toe-to-toe with some of the world’s largest technology companies (among others) over their global tax strategies.


The proposal has significant data-privacy repercussions. On that count, this rundown of the rules from the IAPP’s Jetty Tielemans is helpful. “The commission takes a risk-based but overall cautious approach to AI and recognizes the potential of AI and the many benefits it presents,” she notes, “but at the same time is keenly aware of the dangers these new technologies present to the European values and fundamental rights and principles.”


AI’s risks and benefits are two sides of a coin that matters increasingly to third-party risk management (TPRM) professionals. “The advancement of AI technology has helped companies conduct more proactive and predictive analytics to detect fraud,” according to a report by the Anti-Fraud Collaboration (AFC). “Traditional risk-based analysis can evolve and incorporate machine learning algorithms to identify anomalies, which then inform and refine risk algorithms.” The AFC also cautions that one of AI’s most formidable challenges is its fluidity – it is designed to continually change, adapt and improve.
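For readers wondering what “machine learning algorithms to identify anomalies” feeding a risk model can look like in practice, here is a minimal sketch using scikit-learn’s IsolationForest. The feature names, the toy data and the 70/30 blend of rule-based and anomaly scores are illustrative assumptions, not anything prescribed by the AFC report.

```python
# Minimal sketch: unsupervised anomaly scores feeding a simple vendor risk score.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy vendor-transaction features: [invoice_amount, days_to_payment, change_requests]
normal = rng.normal(loc=[10_000, 30, 1], scale=[2_000, 5, 1], size=(500, 3))
unusual = rng.normal(loc=[80_000, 2, 9], scale=[5_000, 1, 2], size=(5, 3))
X = np.vstack([normal, unusual])

# Fit an isolation forest; decision_function is higher for "normal"-looking points.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
anomaly = -model.decision_function(X)                        # higher = more anomalous
anomaly = (anomaly - anomaly.min()) / (anomaly.max() - anomaly.min() + 1e-9)

# Blend with an existing rule-based score (stubbed as a constant for the sketch).
rule_based = np.full(len(X), 0.4)
risk_score = 0.7 * rule_based + 0.3 * anomaly                # illustrative weighting

print("Five highest-risk records:", np.argsort(risk_score)[-5:])
```

In a real TPRM workflow, the flagged records would be reviewed by analysts and used to refine the rule-based thresholds – the feedback loop the AFC describes, and also the source of the “fluidity” it warns about.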


Judging from its proposed AI rules, the EU appears aware of the technology’s dystopian risks as well as its substantial rewards – and intent on preventing a future defined by credit poles and AI-driven social-scoring wearables (the proposal calls for restrictions and/or prohibitions on most facial recognition applications). That’s why TPRM and privacy pros should continue to read up on all of the EU’s ongoing data strategy developments.


Looking for more on AI as it pertains to TPRM? Check out this post on why AI requires stronger data governance. 
