Regulation, regulation, regulation! This is the word that’s been buzzing around the subject of AI for months, and we’re still waiting for the government to come forth and start making some rules. Well, according to a new report, The White House is starting to crack down on AI and lay the groundwork for regulations.
Right now, major AI companies hold a ton of power in their hands, as they’re developing technology that can have a large impact on the tech industry. Since the generative AI explosion, governmental bodies have been trying to find ways of keeping AI safe. As much as we like Google, Microsoft, and other companies, we don’t trust these profit-driven companies to truly be responsible with AI. So, this is why regulation is the name of the game from now on.
The White House is introducing some rules to regulate AI
On Monday, the Biden administration unveiled a new executive order to regulate AI companies developing the next generation of AI. This executive order “…stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law.”, said a senior administration official.
The thing is that AI could have a major impact on consumers, national security, and the health and well-being of millions of people if not billions. This order will act as the first set of rules and guidelines to make sure that AI doesn’t get out of control. These are a bit vague, as we’re all still getting a feel for what AI can do and the types of precautions we should put into place.
What does the EO expect companies to do?
So, let’s go through what this order will have these companies do.
Report to the federal government
So, for those larger companies, if their models have the potential to impact national security, national economic security, or national public safety and health (which they most likely do), they need to inform the federal government before they train them. This gives the government a heads-up as to what the company is doing.
Share red team data
Next, they’re required to share all of their red-team test results. If you don’t know what red teams are, they’re essentially teams of people who are tasked with the duty of making AI chatbots say harmful or offensive things. They force the chatbots to say these things so that they can know where to improve.
The federal government will want to keep an eye on what sort of harmful responses these chatbots can give. This way, it can get a better idea of how bad their responses could be.
Share training runs
Also, the companies should share the training runs for their LLMs. This will give the federal government a timeline showing how the tests progressed over time. The training run will give the government officials an idea of how the training progressed.
As you can imagine, if the government doesn’t like the results or what they’re training their AI on, then it will take action. All of this is required before the company makes the AI model available to the public. If the AI model has dangerous potential, and it gets released to the public, then there’s no turning back.
What are the exceptions?
This executive order targets a broad spectrum of AI technology, but there are limits to its order. It points to major AI companies; companies whose AI models have the potential to impact national security or economic security. We’re talking about multi-million or multi-billion-dollar models with the largest reach.
These include products from the likes of OpenAI, Google, Meta, Anthropic, etc. Smaller models and start-ups are exempt from the order for the moment. “This is not going to catch AI systems trained by graduate students, or even professors,” said an administration official. So, if you start making an LLM in your basement right now, you won’t be subject to the rules.
Also, they don’t extend to products that are on the market today. This means that major products like Bing AI, ChatGPT, Bard, (the jury is out on Gemini), Snapchat My AI, and others aren’t under the rules. However, future AI models with a capacity of 10^26 petaflops are under the rules.
The EO is targeting more than just AI companies
This executive order is looking to leave no stone unturned. The order turns to the Department of Energy and Homeland Security. This department is being coaxed to address the potential threats to “critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks,”. This just shows how deep the influence of AI could go.
We’re not sure how the department will address these risks, but it’s extremely important that it does. AI is being used for business applications, and there’s no doubt that it could be used in governmental industries in the near future. We need to make sure that it won’t cause any major issues.
Cybersecurity
Onto the subject of cybersecurity, the order seeks to create a cybersecurity program that will use AI to autonomously find and weed out cybersecurity threats.
This could be a major boon to cybersecurity, but potential results remain to be seen. This is only expressing the intention to make the program. We don’t know when the program will come about or who will be involved. This EO is still new, so nothing is really set in stone.
Watermarking
This next subject hits on a more personal front. It’s getting harder to know whether the content you see on the internet is human-made or AI-generated. Back in the early days of DALL-E, it was as plain as the uncanny noses on AI-generated faces. Now, it’s becoming indistinguishable.
It’s pretty easy for people to create deepfaked and deceptive content and fool the masses. This comes to a head during the election season.
This is why the EO is showing an interest in creating a watermarking system so that people can know that AI-generated content is indeed AI-generated.
While this is the case, the order didn’t point to any specific plans or methods for creating the system. It basically just stated that it’s on the government’s mind. We’ll need to wait for more information on that front.
Civil rights
Another hot topic of AI has to do with discrimination. This is a major issue with AI. As smart as LLMs are, they don’t have a moral compass. They’re trained on information across the internet and all of the biased information that comes with it. This means that AI models could inadvertently be biased against certain groups of people. If that’s the case, that bias will translate to whatever applications the AI is being used for.
The EO aims to combat this by extending to “landlords, federal benefits programs and federal contractors”. These are all areas where racial and other forms of discrimination have a major influence. Say, you’re a landlord, and you use AI to go through applications of hopeful tenants. Well, if the AI is trained on biased information regarding African American people, then it will automatically reject people of color and pass them up even if they’re perfectly qualified.
This also includes “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.” These are areas that need human influence and not algorithmically-fed data. This push will ensure that AI won’t lead to unlawful prosecution or discrimination.
Privacy
The EO also touched on the all-important subject of privacy. This one is also pretty vague, as it’s still in its early stages. What we know is that it’s going to call for prioritization of methods for collecting data that don’t put users’ information at risk. LLMs are just monumental pools of information just waiting to be leaked.
This order will strive to ensure better security over personal data. There aren’t any solid plans available to the public. Just like other parts of this order, we only have word on the intent. What we do know is that solutions may include “cryptographic tools that preserve individuals’ privacy,”.
Workers’ rights
This is the area of AI that hits the closest to home. We’ve long known the fear of being booted from our jobs when technology makes us obsolete. Well. AI has proven to be extremely capable of replacing human beings with several tasks.
This is something that the EO aims to tackle. It will bring “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” says an administrative official. These guidelines will aim to keep jobs across different industries from laying off large portions of their staff in lieu of AI technology. It will also help protect higher-level and executive positions from being automated with AI.
This is just the first step
Right now, these can’t be thought of as regulations or laws. Even now, nearly a year after ChatGPT hit the market, we’re still only starting to realize the potential of AI technology- whether good or bad. So, it’s hard to really say what AI companies can and can’t do.
However, this EO is the first step to creating a rulebook for companies to follow. As time goes on, there will be more rules and regulations to come forth. All we have to do is wait.
2023-11-01 15:08:11