
With the US presidential elections approaching in November 2024, major players are hesitant to release advanced AI systems and models. The reluctance stems primarily from fears of spreading misinformation or influencing election outcomes, which could invite stringent regulations if these systems disrupt the process in any way.

OpenAI CTO Mira Murati recently confirmed that the elections were a major factor in the timing of GPT-5’s release. “We will not be releasing anything that we don’t feel confident on when it comes to how it might affect the global elections or other issues,” she said last month.

And it doesn’t help that the company’s Voice Engine was cited as another potential source of voter misinformation. “We recognise that generating speech that resembles people’s voices has serious risks, which are especially top of mind in an election year,” the company said.

Recently, Elon Musk’s AI chatbot Grok falsely reported that ‘PM Modi was ejected from the Indian government’, sparking controversy for spreading misinformation. The examples are aplenty. 

In the US, a deepfake video falsely showed President Joe Biden claiming that Russia had occupied Kyiv for ten years, confusing it with Crimea. Similarly, an audio deepfake circulated in which the US president falsely urged voters not to participate in the primaries, claiming they were rigged.

AI Regulations also Take a Back Seat

While misuse runs rampant, regulations are moving at a snail’s pace, likely because it is an election year. Movement is expected early next year, shaped by which party comes into power.

Recently, Stanford Institute for Human-Centred AI senior fellow Erik Brynjolfsson warned that over-regulation of AI could itself become a problem.

“Smart regulation can be helpful, it can even speed up the adoption of technologies and protect people from harm. But, at the same time, over-regulating can be harmful and slow down adoption,” he said while speaking to CNBC.

However, the reality might be that we’re rushing toward over-regulation at this point, which could make compliance all but a pipe dream for industry players.

AI Regulations So Far 

Last week, Representative Adam Schiff introduced a bill into the US House of Representatives requiring companies to report any use of copyrighted materials in training AI systems.

“AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives. We must balance the immense potential of AI with the crucial need for ethical guidelines and protections,” said Schiff on its introduction.

And balance they have tried. Schiff’s bill isn’t the first piece of AI regulation to be introduced, and it certainly won’t be the last. Last year alone saw a flurry of activity related to AI regulation within the US government.

While an overarching act may not currently be in place, state legislatures have been quick to introduce and pass bills related to AI, with over 18 states enacting legislation and over 400 AI bills introduced in 2023.

Meanwhile, several federal measures have been introduced, including the Blueprint for an AI Bill of Rights and an executive order on the usage of AI in the country.

The sentiment that AI might be difficult to regulate seems to have prompted everyone to at least take a shot at it.

However, the goal has always seemed to be a federal law rather than a patchwork of state legislation. California state senator Scott Wiener said, “I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law.”

The lack of a federal law could have drastic effects on the industry as a whole as companies try to adapt to multiple state laws, not to mention international regulations. So the demand for such a law comes from both the states and the companies themselves.

Future AI Regulations 

A federal law seems likely, but when it might happen is hard to tell. With the elections coming up, both the Democratic and Republican parties hold similar stances on AI.

The Democrats have promised to “mobilise public and private actors to ensure that new products and new discoveries are bound by law, ethics and civil liberties protections”.

Similarly, the Republican Party has shown its support for AI regulation, though the two parties differ on the grounds on which it should be regulated. According to studies, while both parties support AI regulation, Democrats have shown a marked concern about ethics, while Republicans are more concerned about AI capabilities and data rights.

Both 2024 presidential candidates have also responded actively to the industry’s growing importance over the last few years. During his time in office, Republican candidate Donald Trump introduced an executive order in 2020 that pushed for innovation within the industry.

This was shortly followed by the launch of the National Artificial Intelligence Initiative Office.

Likewise, the Biden administration also introduced an executive order on ensuring “safe, secure and trustworthy” AI last year. The order outlines safety standards for the usage of AI.

However, both of these are executive orders, not legislation. Neither party has committed to a policy that would be easy to both implement and navigate, nor has there been talk of an overarching policy so far.

While, understandably, the consensus is that AI is hard to regulate, continued inaction could have drastic effects on the ecosystem as a whole.

With major updates coming to the industry, like OpenAI’s GPT-5, the new administration could make or break the industry depending on how it chooses to regulate.

The post GPT-5 Likely to be Released After the US Elections appeared first on Analytics India Magazine.