EU AI Act Passes: How CIOs Can Prepare

The EU’s AI Act will become the world’s most comprehensive regulatory framework for artificial intelligence. Companies operating in the EU will need to comply or face massive penalties.

Shane Snider, Senior Writer, InformationWeek

March 13, 2024

The European Union on Wednesday passed its sweeping AI Act, which will establish tough guidelines and penalties for businesses using artificial intelligence.

The EU will roll out the new regulations in phases between 2024 and 2027, with the strictest requirements targeting “high-risk” AI applications. Companies running afoul of the new rules could face fines of up to 7% of global annual turnover or about $38 million (€35 million), whichever is higher.

The measure passed with 523 votes in favor, 46 against, and 49 abstentions. The law will enter into force in May, after approval from the European Council and final legal-language checks. The AI Act, first proposed in 2021, sorts AI applications into risk tiers, from “unacceptable” -- a designation that earns an outright ban -- down through “high,” “limited,” and “minimal” risk.

“Europe is NOW a global standard-setter in AI,” Thierry Breton, the European Commissioner for Internal Market, said on X (formerly Twitter).

Dragoș Tudorache, a member of the European Parliament’s Civil Liberties Committee, lauded the AI Act’s passage but said the work is just beginning. “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies,” he said in a statement. “AI will push us to rethink the social contract at the heart of our democracies, our education models, labor markets, and the way we conduct warfare.”

What Businesses Can Expect

Much like the EU’s General Data Protection Regulation (GDPR) set the standard for how businesses collect and protect data, the AI Act is expected to have a sweeping, worldwide impact on businesses. Many companies around the world already tailor their data practices to GDPR as a foundational governance framework.

“This law is the most definitive stab at governing this AI monster,” Nitish Mittal, partner with IT research firm Everest Group, tells InformationWeek in an interview. He says that, as influential as GDPR has been for businesses worldwide, the AI Act’s influence could be even more profound. “I believe this is going to have a ‘halo effect’ for other regions. GDPR deals with a very specific point around data privacy. But AI is such a big, broad existential issue for our society and businesses right now … every country is trying to see what they can do to get inspired by this law in some shape or form.”

Jonathan Dambrot, CEO of AI security and trust firm Cranium, says the regulations give businesses a needed framework as they race to adopt new AI tools. “When we look at AI, there’s still a lot of confusion,” he says. “You have CEOs saying that the risk of not using AI is higher than the risk of using AI. There’s so much investment and so much experimentation happening…”

Steep fines will be an effective motivator, Dambrot says.

“The fines that are being proposed are significantly higher than the fines associated with GDPR, so there will be a lot of attention put on this,” he says.

Sam Li, founder and CEO of compliance platform developer Thoropass, says the AI Act’s precedent will be a launchpad for organizations’ AI roadmaps. “This is the gold standard,” he tells InformationWeek via video chat. “This is the foundation of any additional frameworks that other countries will adopt. To my delight as a practitioner, they took a risk-based approach. That’s good because you will have frameworks that are very specific and very prescriptive.”

How CIOs Should Respond

CIOs and other IT leaders will be tasked with keeping their organizations compliant with the new law or risk costing their companies millions of dollars. Everest’s Mittal says CIOs should focus on data hygiene, identify the AI use cases that deliver the most value, and talk to partners and other stakeholders about the new regulations.

“You are not alone in this journey,” Mittal says. “Talk to your partners, your CIO buddies, and other peers in the industry to figure out what they’re doing and learn from them.”

Starting with clean data is key, he says. “If you think you have an AI problem, it’s probably not an AI problem, it’s probably a data problem … A lot of things that go wrong come down to the quality of data models and data governance. You can’t build an AI castle on a weak foundation.”

Cranium’s Dambrot says in many organizations, CIOs are being joined by other executives as companies form AI governance groups. “It’s about bringing those people together and answering questions around how you’re going to manage those AI systems, how to secure it …  If I’m talking to a CIO, I’m going to put them on that path if they haven’t started setting up an AI governance group already.”

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
