How Tech Companies Are Responding To The EU’s AI Act

Catch up: What is the AI Act?

The big picture: The European Union passed the AI Act, the first major regulatory framework for AI, in mid-March 2024.

It states that “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.”

The legislation introduces new transparency rules and bans certain AI uses, including:

  • Using AI to influence or change behaviors in ways that are harmful

  • Biometric classification to ascertain race, sexual orientation, beliefs, or trade union membership

  • “Social scoring” systems that track a person’s behavior and could lead to discrimination

  • Real-time facial recognition in public places

While it may take up to 36 months to go into full effect, companies operating in the EU that are not in compliance could face fines much sooner.

Why it matters: The regulation has immediate implications for global companies with a presence in the EU, and could set the tone for how AI is governed elsewhere.

  • The AI Act applies to any company that does business within the European Union, not just companies based there. 

  • Much like the EU’s implementation of GDPR in 2018, the AI Act will compel organizations to move quickly to assess their AI strategy, deployment and risk.

What else? Several other countries have introduced or are considering AI legislation.

  • Last year, the Biden administration signed an executive order requiring companies to notify the government when developing AI models that could pose serious risks.

  • Brazil has a draft AI law that outlines the rights of users interacting with AI systems and categorizes different types of AI based on the risk they pose to society.

  • China implemented a law specifically to regulate generative AI.

How have tech companies responded?

Amy Matsuo, KPMG’s US regulatory insights leader, said recently that “the time for simply establishing sound risk governance and risk management AI programs is quickly passing – the time for implementing, operationalizing, demonstrating and sustaining effective risk practices is now.”

Below is a look at how some major tech players have responded to the AI Act. 

Amazon

  • A spokesperson for the company said: “We are committed to collaborating with the EU and industry to support the safe, secure, and responsible development of AI technology.”

IBM

  • IBM applauded the EU’s decision to adopt the AI Act and stated in a press release that it “believes the regulation will foster trust and confidence in AI systems while promoting innovation and competitiveness.”

  • Christina Montgomery, VP and chief privacy & trust officer at IBM, said she “commend[s] the EU for its leadership in passing comprehensive, smart AI legislation” and that the “approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems.”

Meta 

  • Meta has backed the AI Act, publishing information about its testing and stating that the law “could serve as a key piece of legislation that helps build and deploy trustworthy AI.”

  • Marco Pancini, head of EU affairs, said “it is critical we don't lose sight of AI's huge potential to foster European innovation and enable competition, and openness is key here.”

Microsoft 

  • Microsoft vice chair and president Brad Smith wrote a chapter for the company’s report, AI in Europe: Meeting the opportunity across the EU. In the report, Smith suggests “there are enormous opportunities to harness the power of AI,” but that it's equally important to focus “on the challenges and risks that AI can create.”

OpenAI

  • Before the AI Act was passed, OpenAI CEO Sam Altman warned that “the details really matter. We will try to comply, but if we can’t comply we will cease operating” in the EU. The company has not made a statement since the law passed.

Salesforce

  • Salesforce expressed approval of the AI Act and, according to Eric Loeb, EVP of government affairs, “applauds EU institutions for taking leadership in this domain.”

What should strategic communicators do next?

The EU’s AI Act is one of many signals that organizations must continue to think through the broad implications of AI adoption. AI’s impact on society, business and media is transformative.

Communications leaders can look at the AI Act as a forcing function to reconvene internal stakeholders and examine the short- and long-term implications. 

Consider:

  1. Has the organization set up an AI steering committee?

  2. How has the organization updated its core messaging about AI?

  3. Is the communications team monitoring for AI developments that could prompt questions from key stakeholders?

  4. How can the company map perception on topics where audiences may look to it for AI leadership? 

  5. What capital does the organization have connected to AI: human, financial, innovation or other?

  6. What reputational risk does the organization’s AI implementation and/or product transformation present?