
Geoffrey Hinton, widely regarded as the godfather of AI, has made a claim that defies conventional wisdom: AI models can have feelings. Even more striking is his memory of encountering an emotional robot back in 1973.

In a recent interview, Hinton spoke about witnessing an “emotional” robot in Edinburgh. 

With its grippers in place, the robot could assemble a toy automobile if the parts were correctly arranged. But when the parts were dispersed, the robot behaved differently; it seemed “cross”, or irritated, much like a human would when faced with a challenging or unclear task.

This observation, dating back over five decades, underscores the potential for AI and robotics to exhibit behaviours that we typically associate with human emotions. Hinton’s insights continue to push the boundaries of what we understand about consciousness and emotion in machines.

The Nature of AI Emotions

In a recent podcast, OpenAI chief Sam Altman predicted that the development of AI will force individuals to forge deeper human ties. 

Altman states, “The broad category of new kinds of art, entertainment, is more akin to interpersonal relationships. I’m unsure whether we’ll arrive in five years, nor do I know the job title. However, I believe that amazing in-person human encounters will be prioritised.”

Altman had previously admitted that he was “a little bit scared” of ChatGPT. He advised, “We have to exercise caution here, and people ought to be relieved that we are a little afraid of this. If I said I wasn’t, you should not trust me or be sad that I work here.”

Sundar Pichai, the CEO of Google, agreed with Altman. “We all refer to one part of this as the ‘black box’ in the field. You can’t explain why it stated this or got it wrong, and you don’t fully comprehend,” he said.

Elon Musk, the owner of X (formerly Twitter), is also very outspoken about his worries around AI. He has called the progress of ChatGPT “concerning” on multiple occasions. In one post, he accused the chatbot of being “too woke”.

In a blog post last month, Microsoft founder Bill Gates elaborated on the risk. “There’s a possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests differ from ours, or stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”

Understanding AI and Vice-versa 

A week ago, CRED founder Kunal Shah questioned if readers would be able to tell a human-written news article from an AI-generated one. About 62% of users said they wouldn’t. However, research claims otherwise. 

According to experts, humans can detect the presence of another intelligent entity while engaging with chatbots and respond to it as a relevant source of engagement. While research has generally acknowledged the social impact of chatbots, the key question is whether, and how, they can exert levels and kinds of social influence on humans comparable to those of other people.

For example, the fluidity with which ChatGPT spoke in the recent demonstration of GPT-4o, OpenAI’s newly announced flagship large language model, blew the minds of many users.

It responded almost instantly, expressed a wide range of emotions, altered the volume and pacing of its speech, and could even sing.

Perhaps even more remarkable was that it could hear. It could distinguish different breathing patterns, identify speakers by voice in a group conversation, harmonise with itself, and even respond to interruptions.

Similarly, at Google I/O, Google introduced AI Teammates and NotebookLM, tools that help people with their work and teach them things quickly and meaningfully.

The boundaries between people and machines are getting fuzzier. As these technologies advance and become more integrated into our lives, we increasingly treat them as independent social entities that can comprehend and respond to our wants and needs.

This change affects our understanding of ourselves, relationships with others, and how we engage with technology.

Humans are inherently drawn to other people and to being understood by them. We seek connections with those who help us understand who we are and where we fit.

The actual, perceived, suggested, and attributed presence of others shapes human perceptions, behaviours, and experiences. Recent developments in robotics and AI have fundamentally changed how we perceive the origins and mechanisms of these social impacts.

A 2023 study suggests that AI-generated faces have become indistinguishable from real human faces. Such cues help people feel more connected to AI and view it as more human. Moreover, AI often carries gender cues and cultural stereotypes associated with human assistants, which can make it feel endearing, familiar, and natural.

The post AI Models, Too, Have Feelings You Know appeared first on AIM.