Regulating Artificial Intelligence: Comparing the EU AI Act and the U.S. SAFE Innovation Framework

Key Takeaways:

  • The EU AI Act and the U.S. SAFE Innovation Framework share a common focus: prioritizing human safety, ensuring transparency, and addressing bias and discrimination in artificial intelligence systems. 
  • While both aim to regulate AI, the EU has taken a legislative approach, categorizing AI systems and imposing specific reporting and oversight requirements, whereas the U.S. framework is still only a proposal and emphasizes a bipartisan approach. 
  • The EU AI Act places greater emphasis on preventing AI's potential surveillance capabilities and safeguarding privacy, while the U.S. framework highlights considerations related to liability, intellectual property, and likeness in AI applications. 
  • The development of these AI regulations signifies increasing global attention to AI governance, with efforts underway to balance innovation, accountability, and the protection of individuals and society.

On June 8, 2023, the European Union (EU) introduced the AI Act to create a framework for how artificial intelligence (AI) will be regulated and monitored across all EU member states. The legislation focuses on five main priorities: AI use should be safe, transparent, traceable, non-discriminatory, and environmentally friendly. It also requires that AI systems be overseen by people rather than by automation, establishes a technology-neutral, uniform definition of what constitutes AI, and would apply to AI systems that have already been developed as well as to future ones. 

Major Provisions of the European Union’s AI Act

Different Rules for Different Levels of Risk

The legislation creates different sets of obligations for different types of AI systems, so that systems posing less risk to society are not treated the same as those that could pose a threat (referred to as "high risk" and "unacceptable risk"). For example, the regulations place an outright ban on certain types of AI that regulators believe pose an unacceptable risk to society, such as systems that use cognitive behavioral manipulation of people or of specific vulnerable groups (such as children), social scoring (classifying people based on social or personal characteristics), or real-time remote biometric identification (such as facial recognition). AI systems classified as high risk are further divided into two groups: systems used in products that fall under the purview of the EU's product safety legislation, and systems in eight specific areas that would be allowed but must be registered in an EU database:

  • Biometric identification and categorization of people
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment and worker management
  • Access to and enjoyment of essential private and public services
  • Law enforcement
  • Migration, asylum, and border control
  • Assistance in legal interpretation and application of the law
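
To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of how the Act's risk tiers might be modeled in software. The tier names and examples paraphrase the summary above (plus the Act's lower "limited" and "minimal" tiers); they are not drawn from the legal text itself.

from enum import Enum

class RiskTier(Enum):
    # Obligations paraphrased from the article's summary, not the legal text.
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted, but must be registered in an EU database and assessed"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "largely unregulated"

# Example systems for each tier, drawn from the article's description.
EXAMPLES = {
    RiskTier.UNACCEPTABLE: [
        "cognitive behavioral manipulation of vulnerable groups",
        "social scoring based on personal characteristics",
        "real-time remote biometric identification",
    ],
    RiskTier.HIGH: [
        "biometric identification and categorization of people",
        "management of critical infrastructure",
        "employment and worker management",
        "law enforcement",
    ],
}

# Print each tier with its obligations and any example systems.
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
    for example in EXAMPLES.get(tier, []):
        print(f"  e.g., {example}")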

Assessment and Guidelines for High-Risk AI Systems

The legislation also requires all high-risk AI systems to be assessed before they are put on the market, as well as throughout the product's life cycle. Recognizing the sudden popularity of generative AI (such as ChatGPT) and of other AI tools that let people manipulate video or audio, the legislation also creates guidelines for these types of products. 

Business Pushback

In July, members of the European Parliament adopted Parliament's negotiating position on the AI Act and began negotiating the final version of the law. The proposed legislation has not been without criticism. A few weeks later, over 150 business executives sent a letter to the EU asking it to rethink the upcoming AI Act because it would cause "disproportionate compliance costs and disproportionate liability risks," adding that the rules could lead to "highly innovative companies moving their activities abroad, investors withdrawing their capital from the development of European Foundation Models and European AI in general." 

Similar Conversations in the U.S.: SAFE Innovation Framework and NIST Working Group

Similar debates are occurring in the U.S. In June, U.S. Senate Majority Leader Chuck Schumer (D-NY) announced his plans for the "SAFE Innovation Framework," arguing that "comprehensive AI legislation will secure both U.S. national security and American jobs; support responsible systems in the areas of misinformation, bias, copyright, liability and intellectual property; require AI tools to align with democratic values; and determine what level of transparency the federal government and private citizens require from AI companies." Any resulting bill will focus on how AI is used in the areas of misinformation, bias, copyright, liability, and intellectual property (IP). 

The following day, the Biden Administration announced a new NIST Public Working Group on AI to “tackle risks of rapidly advancing generative AI.” We’ll learn more about the SAFE Innovation Framework once bill language is released.

Similarities and Differences

There are several similarities between the EU AI Act and the SAFE Innovation Framework. Both aim to make human safety the centerpiece of any framework regulating AI. Both also pursue transparency and accountability in how and when AI is used, ensuring that people are aware of those uses. Further, both aim to prevent bias and discrimination in how AI systems use, monitor, view, create, or store information about people. 

There are also several differences. The EU's AI Act is already making its way through the legislative process, whereas the U.S. has only discussed basic frameworks. The EU AI Act splits AI into various categories based on intended use, creates specific reporting and oversight requirements, and requires certain systems to be assessed both before and while an AI product is available to the public. The EU also appears to place more of a priority on preventing AI from being used for surveillance and on protecting individual privacy. The U.S.'s SAFE Innovation Framework, by contrast, emphasizes how AI relates to liability, IP, and likeness. For example, what type of data can be used to train an AI system? Can only freely available data be used, or can data provided by a paid subscription service be used as well? Similarly, if AI creates a song based on an existing musician's lyrics and voice, should that be allowed? 

Recent Federal Action

On July 21, 2023, President Biden announced an agreement with seven of the largest AI companies on a set of eight rules that will "help move toward safe, secure, and transparent development of AI technology." While the agreement is non-binding, the companies agreed to:

  • Ensure their products are safe before introducing them to the public, committing to internal and external security testing
  • Share information on managing AI risks across the industry and with government, civil society, and academia
  • Support third-party discovery and reporting of vulnerabilities
  • Create technical mechanisms to let users know when content is generated with AI
  • Prioritize research on the societal risks that AI can create
  • Develop AI to help address society's greatest challenges
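
The commitment to "technical mechanisms" is the most concrete of these. As a purely illustrative sketch under stated assumptions, the Python below shows one simple way AI-generated content could carry a signed provenance label declaring its origin; the actual mechanisms under discussion (such as watermarking or content-credential manifests) are far more sophisticated, and every name, key, and function here is hypothetical.

import hashlib
import hmac
import json

# Hypothetical signing key; a real deployment would use managed cryptographic keys.
SIGNING_KEY = b"demo-key"

def label_ai_content(content: bytes, model_name: str) -> dict:
    """Attach a signed provenance record declaring the content AI-generated."""
    record = {
        "ai_generated": True,
        "model": model_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check that the signature is valid and the label matches the content."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

# Usage: label hypothetical generated output, then verify the label.
image = b"...generated image bytes..."
label = label_ai_content(image, "example-model-v1")
assert verify_label(image, label)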

The administration also announced that it is working with, and has already consulted, other countries on this topic. It is also discussing the issue with the UN and through the G-7 Hiroshima Process, which is being led by Japan. Going forward, expect the UN to adopt policies or frameworks similar to the agreement announced between President Biden and the AI companies, though with more restrictions on certain types of AI, especially those that use facial or biometric data. We should also expect hearings in the U.S. Senate in the fall, as Majority Leader Schumer has discussed. 

State Legislation Regulating Artificial Intelligence

States and localities are also exploring the impact of AI. We recently wrote about the proliferation of laws and regulations governing the use of AI in hiring and promotions, with many states considering related bills in 2023. MultiState's team is actively identifying and tracking this issue so that businesses and organizations have the information they need to navigate and effectively engage with the emerging laws and regulations addressing artificial intelligence. If your organization would like to track state artificial intelligence legislation or other technology issues, please contact us.