LLM Chatbots Are Humanity’s Biggest Mistake
How do we browse the internet? Most of us start with a Google Search, if not a social media platform. Most of the world is now fairly proficient at using a search engine – type whatever you want into the box staring at you. It is interesting to note that modern-day chatbots such as ChatGPT or Bard – or, for that matter, any other ‘chatbot’ released in the future – all incorporate the same empty-text-box approach.
When Google launched its search engine in 1998, no one knew how to use it perfectly. Some would start with a “hi” or “hello” salutation and then proceed with the query. Some would simply type in a word or a full sentence. With time, people realised how search engines actually work – by crawling and indexing the internet – and got used to interacting with them as technology, not as humans.
When it comes to current chatbots, not much has changed. That does not mean every way of prompting is equally good. Clearly, phrasing a prompt well makes chatbots hallucinate less – which is why we now have courses on prompt engineering.
Simply put, humans are so used to typing into white boxes in search engines and messaging apps that talking to AI the same way seemed like a natural transition.
Though OpenAI is increasingly trying to fix these breakdowns and hallucinations, the contrast with human conversation is telling: a spelling or grammatical mistake in a message to a person would never derail the exchange the way a poorly phrased prompt can.
Now, how natural are these natural language chatbots, really, if we have to learn how to interact with them? And if they are not natural, is there any other way to interact with LLMs apart from chatbots?
Making easy, easier
We have grown used to talking to each other over text, which is possibly why we think it is the only way to interact with an entity, even an artificial one. But if we think about it, talking to an LLM is fundamentally different from a human interaction, because it lacks the shared context that is naturally present in a normal conversation.
Chatbots seem like a way to mimic human interaction rather than to actually make AI useful. It feels like a forced attempt to make AI appear sentient, when it is clearly not. A lot of this confusion also stems from a basic misunderstanding about doing something versus delegating it.
For example, ChatGPT can write very good text for you, and Midjourney can generate an awesome picture for you. That counts as delegation: the user hands full responsibility for a task over to the AI program, much like assigning the task of “write an email” to a person.
On the other hand, GitHub Copilot feels like a more proactive model that works coherently with its human counterpart, predicting what the user wants next from the surrounding code – at just the press of a button – instead of requiring a prompt every single time code needs to be generated. This is why code assistants and chatbot assistants fundamentally differ, even though the technology they are built on is the same.
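The distinction can be sketched in a few lines of toy code. Everything here is hypothetical – the lookup tables simply stand in for an LLM – but it shows the interface difference: delegation needs an explicit, well-phrased request every time, while completion infers a suggestion from context alone.

```python
def delegate(prompt: str) -> str:
    """Chatbot-style: the user must phrase a full request every time."""
    canned = {
        "write an email": "Dear team, ...",
        "write a haiku": "Autumn moonlight ...",
    }
    # An imprecise prompt gets a breakdown, not a result.
    return canned.get(prompt.lower(), "Sorry, could you rephrase that?")


def complete(surrounding_code: str) -> str:
    """Copilot-style: infer the next line from context, no prompt needed."""
    if surrounding_code.rstrip().endswith("def add(a, b):"):
        return "    return a + b"
    return ""


print(delegate("write an email"))   # the user had to ask explicitly
print(complete("def add(a, b):"))   # a suggestion appears from context alone
```

In the first model the interface is the product; in the second, the interface all but disappears behind the editor.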
Then should we get rid of chatbots?
Even when people type into white boxes on Reddit, Twitter, or WhatsApp, the responses are far better and more human-like, because those messages are not prompts. Your friend wouldn’t start generating code on Messenger simply because you told them to.
GitHub Copilot is the perfect example of how the interface should eventually be eliminated, whereas with chatbots like ChatGPT, the interface is the end product. It is the same story as when voice assistants launched and were touted as the interface of the future. The fundamental limitation remains: it is still a black box, where most people don’t know what it can do and are met with a blank slate of what to ask.
That said, there is nothing wrong with what ChatGPT delivers – it is quite literally a chatbot. The same goes for text-to-image models like Midjourney or DALL-E, though arguably Adobe, with Firefly, is getting past the “blank-box” interface by giving users more control over what they want and where they want it.
It is not that chatbots are a limitation in themselves; they just have a few. Varun from Stanford University argues for suggestions built on top of a single text input – for example, understanding the context and offering users follow-up question options. There could also be simple starting options such as ELI5 (explain like I’m 5) to begin the conversation.
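As a rough illustration of that suggestion idea, here is a hypothetical sketch: after a single text input, the interface surfaces canned follow-up options instead of another blank box. In a real product the LLM itself would propose these options; the rules below are placeholders.

```python
def followup_options(user_input: str) -> list[str]:
    """Offer next-step buttons for a single text input (toy heuristics)."""
    options = ["ELI5 (explain like I'm 5)"]
    if "?" in user_input:
        options.append("Show sources for this answer")
    else:
        options.append("Turn this into a question")
    options.append(f"Summarise: '{user_input[:30]}'")
    return options


for option in followup_options("How do transformers work?"):
    print("-", option)
```

The point is not the heuristics but the interaction: the user is guided from a blank slate toward what the system can actually do.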
Equipped with a suitable interface, LLMs could work alongside us as seamlessly as a human colleague – only faster and smarter – and even surpass our expectations, becoming actual human-like AI.
The post LLM Chatbots Are Humanity’s Biggest Mistake appeared first on Analytics India Magazine.


