Although artificial intelligence (AI) has existed in some form for decades, the recent Generative AI (GenAI) boom has brought it back into the limelight. With the sudden popularity and prevalence of systems such as ChatGPT, we have seen GenAI, and even AI more broadly, enter new industries and take on new use cases seemingly daily. New technological development, however, invites regulation. This is especially true for technologies that develop rapidly and at scale, carrying both the potential for positive impact and growth and the potential for harmful misuse, as AI does.
Given the current state of the AI industry, 2024 saw a record number of AI bills introduced or decided upon at both the federal and state levels. In 2024, at least 45 states and Washington D.C. introduced AI bills, while 31 states adopted resolutions or enacted legislation. At the same time, the federal government introduced two of its own comprehensive AI bills. With this landscape in mind, a representative sample of the top legislative AI developments of 2024 includes (1) two federal bills, (2) several bills both enacted and rejected in California, (3) one Colorado bill that is similar to a bill that did not pass in California, and (4) a swathe of enacted bills creating state AI task forces.
Federal Level
In 2024, two bills related to the development and integration of AI were introduced at the federal level—one by the House of Representatives and one by the United States Senate.
H.R. 6936: Federal Artificial Intelligence Risk Management Act of 2024. Introduced in January, H.R. 6936 would require the National Institute of Standards and Technology (NIST) to develop guidance for federal agencies to incorporate into their AI risk management efforts. The NIST Guidelines would have to include, among other things, standards for reducing the risk of developing or using AI in federal agencies, cybersecurity strategies and tools for AI use, and standards that AI suppliers must meet in order to provide AI to federal agencies. Beyond these Guidelines, H.R. 6936 would also require the Administrator for Federal Procurement Policy to provide draft contract language requiring AI suppliers to conform to specific conduct and to provide access to data, models, and parameters for sufficient testing, evaluation, verification, and validation.
Although H.R. 6936 was introduced early in the year, it has not made significant progress. After introduction, the bill was referred to the Committee on Oversight and Accountability and the Committee on Science, Space, and Technology, but neither Committee has commented on or amended it. Any further progress on H.R. 6936 will therefore have to come in 2025.
S. 4178: Future of Artificial Intelligence Innovation Act of 2024. Introduced in April, S. 4178 would create the AI Safety Institute within NIST. The purpose of S. 4178 is to establish AI standards, metrics, and tools, as well as to support research and development activities. To do so, the AI Safety Institute would take on certain tasks, including research and evaluation on topics like AI model safety, as well as the development of voluntary standards for detecting synthetic content, preventing privacy-rights violations, and promoting dataset transparency. Furthermore, S. 4178 would establish an AI testbed program for public-private collaboration on evaluating AI systems' capabilities, developing tests for AI systems, and conducting general research, development, testing, and risk assessment. Finally, S. 4178 would require the Secretary of Commerce, the Secretary of State, and the Director of the Office of Science and Technology Policy to cooperate on forming an alliance to develop international AI standards.
Most recently, in July, the Committee on Commerce, Science, and Transportation advanced S. 4178 through markup. Given this progress, more is likely to be heard about the bill in 2025.
California
California was one of the most active states in introducing and enacting AI legislation. In September, the California government announced that Governor Gavin Newsom had signed 18 AI bills into law. However, two bills that garnered significant attention when introduced did not become law: Newsom vetoed one, and the California State Senate did not pass the other.
AB-2885: Artificial Intelligence. Signed by Newsom, this bill amends California law to provide a uniform definition for “Artificial Intelligence.” Now, California law defines Artificial Intelligence as: “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” In plain language, California now defines AI as a system that has some sense of autonomy and can generate outputs based on inferences derived from input data.
AB-2013: Artificial Intelligence Training Data Transparency. Signed by Newsom and taking effect January 1, 2026, this bill requires developers of GenAI systems or services to make certain public disclosures about their training data. More specifically, developers must post on their websites a high-level summary of the datasets used to develop the GenAI, including information such as (1) the sources or owners of the datasets, (2) the number of data points included, (3) a description of the types of data points, (4) whether the datasets include any protected intellectual property, (5) whether the datasets were purchased or licensed, (6) whether the datasets include personal information, and (7) whether the datasets were cleaned, processed, or modified, and the intended purpose of doing so. As a result, developers of GenAI systems will face significant disclosure requirements in California. However, AB-2013 does not apply to GenAI systems or services whose sole purpose is helping ensure security and integrity, the operation of aircraft in the U.S., or national security, military, or defense.
SB-942: California AI Transparency Act. Signed by Newsom and taking effect January 1, 2026, SB-942 enacts provisions that require developers of GenAI systems that have over 1 million monthly visitors or users to take actions that help the public differentiate between AI-generated and non-AI-generated materials. First, these developers must provide a free AI detection tool to the public that allows users to determine whether content was created or altered by the developer’s GenAI system. Additionally, the free AI detection tool must allow users to upload or link content and must provide system provenance data detected within the content (not including any personal provenance data).
Second, these developers must provide users the option of including a clear and conspicuous disclosure in the content that identifies it as AI-generated and that is permanent or difficult to remove. Furthermore, the developers must provide a latent disclosure in the content that includes the developer’s name, the version of the GenAI system that generated or altered the content, the time and date of the generation or alteration, and a “unique identifier.” This disclosure must be detectable by the developer’s tool and must be permanent or difficult to remove. Given the bill’s consistent use of “image, video, or audio content,” SB-942 does not apply to GenAI models that do not output one of these types of content.
SB-1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. SB-1047 aimed to prevent the risk of certain “critical harms” associated with AI systems, including those related to chemical, biological, radiological, or nuclear weapons. However, the bill was limited to covering only AI models that met certain computational and training cost thresholds. These thresholds were exactly why Newsom vetoed SB-1047. In his veto letter, Newsom stated, “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.” Although Newsom recognized the need to mitigate the risk of a “major catastrophe” before AI causes one, he concluded that SB-1047 did not strike the necessary balance, and California has yet to enact AI legislation aimed at this specific purpose.
AB-2930: Automated Decision Systems. AB-2930 was not passed by the California State Senate this legislative session. The bill aimed to prevent the “algorithmic discrimination” that can occur in AI models. To do so, AB-2930 attempted to introduce requirements for both developers and deployers of AI processes or systems. For example, developers would have been required to perform an impact assessment and provide it to deployers before deployment, and annually thereafter, including information such as the types of personal characteristics that the AI process or system will assess. And deployers of an AI process or system that makes “consequential decisions” would have been required to inform affected individuals that the AI system is being used, along with other information about the nature of its use. Although California did not pass this bill, Colorado passed a very similar one.
Colorado
Colorado enacted three AI bills, one of which is similar to California's AB-2930. With that bill, Colorado became the first state to enact legislation aimed at the algorithmic discrimination that can occur within AI systems.
CO SB205: Consumer Protections for Artificial Intelligence. In May, Governor Jared Polis signed CO SB205 into law. Just like California AB-2930, this law aims to prevent “algorithmic discrimination” in AI systems. The law defines algorithmic discrimination as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
To prevent this, like California AB-2930, the Colorado law enacts different requirements for developers and deployers of AI systems that make or are a substantial factor in making a “consequential decision.” First, developers have several requirements, including that they must (1) make specific types of information available to deployers and other developers, including reasonably foreseeable risks of algorithmic discrimination, and (2) provide a public statement including how the developer manages reasonably foreseeable risks. Second, deployers have an even longer list of requirements, including that they must (a) implement and maintain a risk management policy and program, (b) complete impact assessments annually and after major modifications to the AI system, and (c) provide consumers affected by the consequential decision-making with a statement informing them of the use of the AI system.
Task Forces
Lastly, outside of legislation aimed at specific uses or implementations of AI, several states enacted laws creating AI task forces. The states that created some form of AI task force in 2024 are Colorado, Illinois, Indiana, Massachusetts (by executive order), Oregon, Washington, and West Virginia. Although the language creating each task force is unique, at a high level their purpose is to recommend protections for consumers, workers, or the general public against the risks of AI. The creation of these task forces can thus be seen as a sign that 2025 will likely bring even more legislation or executive rulemaking governing the use of AI.
Lots of Legislation to Come
In conclusion, 2024 saw a large number of significant developments in AI legislation. Although the federal government did not enact AI legislation this year, the continued introduction of AI bills and Committees' decisions to advance them through markup suggest that we will see some form of federal AI legislation in 2025 or later. At the same time, although many states did not enact AI legislation, nearly every state introduced some this year, suggesting that states will continue to evaluate their need for AI legislation in 2025 and beyond.