Most AI Doomers Have Never Trained An ML Model in Their Lives
The mid-20th-century science-fiction writer Isaac Asimov, known for his Robot series, formulated the "Three Laws of Robotics," which still play an essential role when we discuss ethical AI. The first states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Seems obvious. The second states: "A robot must obey orders given to it by human beings except where such orders would conflict with the First Law." Pretty important. The third: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
According to the programme details of the five-day Artificial Intelligence Ethics course at the University of Oxford, these three laws are one of the focuses of the course. Interestingly, the list of prerequisites makes no mention of needing detailed knowledge of AI/ML, or of how these systems work, before governing them and drawing ethical boundaries around them. Seems kind of incomplete. Anyone with any qualification can take this course and tout themselves as an "AI Ethicist".
This brings up the question: if a person has never built a single ML model in their life, what makes them capable of putting guardrails around these highly capable, albeit scary, systems?
AI Doomers are AI Boomers
To put things in the perspective of the AI doomers: many of them are not in the field of AI and are influenced by the decades of movies about "machines taking over the world." It is pretty obvious they fear that the systems big tech is building, increasingly touted as approaching sentience, would end up taking over humanity.
But even if such a person takes a course on AI ethics without ever learning how machines actually learn, what makes them qualified to make laws about AI?
Recently, the Israeli historian and philosopher Yuval Noah Harari expressed his scepticism about the development of AI models like ChatGPT. "In the future, we might see the first cults and religions in history whose revered texts were written by a non-human intelligence," said Harari. Seems like a far-fetched idea.
This is similar to Warren Buffett "getting worried" about the dangers of AI and comparing it to the creation of the atomic bomb. Even the Pope has called for the ethical use of AI.
Interestingly, Geoffrey Hinton, after leaving Google, has also been comparing AI with the atomic bomb. In 2018, he dismissed the need for explainable AI, and he disagreed with Timnit Gebru, a former AI ethicist at Google, over the existential risks these LLMs pose. In the past, Sam Altman has also compared the potential of the technology he is trying to develop to the atomic bomb.

On the other hand, when AI experts like Hinton or Yann LeCun, the godfathers of AI who have been in this field since the beginning, raise concerns about the capabilities of these models, the conversation starts to get interesting, and the questions around ethics start to stir.
Hinton's most important reason for leaving Google was to speak freely about the ethical implications of these AI models. In hindsight, he also regrets his part in building them, and has said he should have started speaking about these dangers much sooner.
Still Not on the Same Page
Since the heads of Microsoft, Google, and OpenAI met with the Biden administration at the White House last week, there has been increasing talk about the ethical implications of these products. Though there is no official account of what was discussed, it likely involved placing responsibility on these leaders to make their AI ethical.
But on the flipside, ever since the AI chatbot race started, the companies behind these "bullshit generators" have been laying off their ethical and responsible AI teams. It seems as though big tech has decided it does not actually need an ethics team to build guardrails around its products. One possibility is that an ethics-minded person on the team might hinder or question the steps the company is taking and the growth it is making with its product.
Moreover, when big tech gets into a race to get ahead of its rivals, it might overlook the ethical side of these models. There is a possibility that the tech giants might never come onto the same page as governments about these concerns.

Before getting convinced by this argument, it is also important to understand that ethical AI matters to big tech as well. The problem arises when ethics teams get fixated on solving the biases in these systems instead of making them "safe". To put it in the words of Elon Musk from his BBC interview, "Who decides what is hateful content?" Interestingly enough, Musk is one of the top voices who called for a pause on giant AI training experiments, and he is now building his own AI systems to rival OpenAI.
Even ChatGPT's creator, OpenAI, has been increasingly vocal about its fears around these AI models. In an interview with ABC News, Altman said, "We've got to be careful here. I think people should be happy that we are a little bit scared of this." Altman believes that as long as the technology is in his hands, it is safe and will stay ethical. He has also said that society has limited time to adapt and put safety limits on AI.
Coming back to big tech laying off its ethics teams, it might be safe to say that these companies want to win the AI race rather than build "ethical" robots. Or maybe the people who were laid off simply weren't aligned with the companies they worked for. Who knows who is in the right?
Not All Ethicists Restrict AI
To separate the wheat from the chaff, it would be wrong to say that no AI ethicists understand AI systems and how they work. Ethicists like Timnit Gebru and Alex Hanna, who have been part of the big-tech companies building these AI systems, are working towards solving the AI alignment problem at the Distributed AI Research Institute (DAIR).
Describing itself as "a space for independent and community-rooted AI research", DAIR addresses the bias problems within these systems while also examining how these models might invade the privacy of users and citizens around the world. Maybe Gebru and Hanna parted ways with Google after encountering some serious ethical concerns.
Moreover, there is a new breed of ethicists in the field of AI who talk about the rights of the AI itself. This echoes Asimov's third law of robotics, under which a robot protects its own existence. Jacy Reese Anthis, co-founder of the Sentience Institute, speaking with AIM, said that we need an AI rights movement: "even if they currently have no inner life, they might in the future." Clearly, the conversation is moving in the right direction.
This suggests that what the current crop of AI ethicists lacks is not a sociological understanding of the world but knowledge of AI itself. While the former is hugely important, the absence of the latter gets their stances overlooked and dismissed. Big tech needs more ethicists who know how AI systems actually work. When that is the case, maybe we will be able to make AI "ethical". Till then, big tech makes the moves.