Crafting sensible laws governing AI is one of the most difficult challenges facing governments today, and it's something that governmental bodies are still wrapping their heads around. Well, today, we got news that the Biden Administration has taken another step forward in this endeavor. The U.S. government has unveiled strict safety requirements governing federal agencies' use of AI tools and systems.
Right now, artificial intelligence is one of the most sensitive and controversial topics in tech. We got our first taste of generative AI back in November of 2022, and since then, several governments have been scrambling to construct legislation surrounding it. However, it's been a long process.
Recently, the EU took the first major step in making laws covering AI, making it the first governmental body to do so. Companies and countries within the EU will have the next couple of years to comply with these laws before facing repercussions.
Also, the U.S. Patent Office just released an update letting the public know that it plans to publish solid guidance governing copyright and AI by the end of 2024.
The U.S. government released requirements for federal agencies using AI
Right now, these new guardrails are still in the early stages of going into place, so we're all still wondering what sort of ramifications to expect. As with any governmental regulation, federal agencies have several months to put the wheels in motion. So, this piece will be updated if the U.S. government comes out with any more AI requirements.
First requirement: Safety
Firstly, federal agencies using AI tools need to make sure that these tools do not negatively impact American citizens' rights or safety. So, a tool cannot suppress any American citizen's constitutional rights or potentially lead to the injury or loss of life of any citizen.
How can an AI system impact user rights?
One major example of this is using AI systems to compromise a person's privacy. Agencies are not allowed to use AI tools to, say, collect and analyze biometric or facial data on people.
Another prime example is keeping AI systems away from anything dealing with voting. It's 2024, and the American presidential election is about to kick off. So, agencies cannot use AI tools to collect, count, or otherwise affect ballots.
How can an AI system affect safety?
When it comes to safety, AI tools cannot be used in any real-world application where a failure could endanger human safety or lives. So, agencies controlling electrical grids, emergency systems, water, transportation, medicine, and so on are not allowed to use AI tools for critical purposes.
At this point, generative AI is still developing, so there's a high possibility of AI systems making mistakes. For example, if an AI system controlling a town's water supply malfunctions, it could have devastating effects on that town and the people who live there.
There is an exception
If an agency or entity wants to use AI systems for these functions, then it will need to have concrete safeguards in place to ensure that there will be no negative impact. If the agency can properly justify using the tool, it may use it.
The choice of safeguards will be up to the federal agencies themselves, so the ball is in their court. However, they will have until December to ensure that everything is in place and that whatever systems they use do not impact human safety or rights.
Second requirement: Transparency
The second of these requirements hits on something the U.S. government and citizens alike have worried about regarding AI: transparency. The affected agencies cannot withhold critical information about the systems they use or their potential impact on American citizens.
Each year, federal agencies will have to "publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed," said U.S. Vice President Kamala Harris. This will not only ensure that the government knows what each agency is using, but also that American citizens know what these agencies are doing.
There is a major problem with companies and agencies keeping their activities behind closed doors. The people whose data is training these AI models need to know what these entities are doing with it. So, agencies will need to be completely transparent about their AI tools.
There is an exception, however. If publishing the information would put any citizen's privacy or security at risk, then the agency will not have to post a full report. It will still need to post metrics, though.
Third requirement: AI insight
The last of the U.S. requirements pushes federal agencies to have proper knowledge of and insight into AI. So, each department will need to appoint a Chief AI Officer to oversee exactly how its agency is using AI. These Chief AI Officers will have deep, professional knowledge of artificial intelligence, which could help the government learn about and mitigate potential risks.
Along with that, several agencies will also need to establish AI governance boards, which must be in place by May 27th.
Are these official laws?
Technically, no. A while back, the President of the United States signed an Executive Order to address major companies and their use of AI. These requirements further expand on the Executive Order and outline more specifics. However, these are not official laws.
It remains to be seen what sort of punishment, if any, agencies or entities will receive if they fail to comply with the guardrails outlined. While these are not official laws, they are still requirements from the highest authority in the United States government, so these agencies will need to comply with them regardless.