It’s been more than two years since the term “generative AI” entered the everyday lexicon, and we’re still figuring out how to regulate the technology. We’ve seen several bills over the years, but no federal law has been enacted just yet. Now, a new bill would establish an AI safety review office within the U.S. Department of Commerce, and this might help with regulating AI.
This new office would aim to do what most bills surrounding AI aim to do: push AI safety regulations while ensuring that the U.S. maintains its position as the world leader in AI. Right now, many of the world’s biggest AI companies are based in California, so the States do have a bit of an advantage. This is also why proper regulation is so important.
The new AI Safety Review Office would offer an AI safety take-home test
A new bill has surfaced called the Preserving American Dominance in AI Act of 2024. It was sponsored by Senator Mitt Romney, R-Utah, and co-sponsored by Jerry Moran, R-Kan., Angus King, I-Maine, Jack Reed, D-R.I., and Maggie Hassan, D-N.H. The act would require the new office to establish a safety test for AI model developers. The test would help identify any critical risks a model poses before the company deploys it.
A test like this would be a good idea, as it would make it easier for companies to comply with government standards. Other bills that attempt to govern AI essentially tell companies, “Don’t make your AI dangerous!”, but there isn’t a clear indicator of what that means, because the line between safe and dangerous AI is still pretty blurry. If this test establishes a concrete standard that a model needs to meet in order to pass, it will be much easier to know whether a model runs afoul of regulations.
The act would also require the office to work with the AI Safety Institute to provide assistance to AI companies. This type of synergy could be instrumental to AI development in the U.S. Companies developing frontier AI models would receive help red-teaming against the areas where AI could prove the most dangerous: biological, chemical, nuclear, radiological, and cyber risks. AI will undoubtedly touch industries that deal with those areas.
90 days, and the hammer comes down
As with other bills and regulations, there are harsh consequences for failing to comply with the rules. If the bill becomes law, AI developers would have 90 days to have the office review their models’ safeguards. Those who don’t comply could face criminal penalties of up to 10 years in prison, as well as civil fines of up to $1 million per day.
However, the penalties could scale depending on the company. A $1 million/day fine seems like it’d be reserved for billion- or trillion-dollar companies like Google, Microsoft, and Meta. A small company making a cute little AI app wouldn’t be hit quite as hard.