Healthcare was already going through tough times with AI adoption, and LLMs have now added to the list of challenges. The models have shown notable downsides, such as providing false information, manipulating data, and promoting plagiarism.

“We are deliberately not using LLMs, because they are not suited for what we are building,” said Anuj Gupta, founder and director of AI4Rx, a health tech startup, and a software architect with over three decades of experience. He has also served as the CTO of EasyGov, an Aadhaar-based full-stack aggregator of government welfare schemes that is now part of Jio Platforms.

LLMs Are Not the Only Way

“We also use LLMs, but not as our core AI model,” said Gupta, adding that they have developed a proprietary graph-based model, which they call an inference graph.

Founded in 2019 by Gupta and healthcare veteran Monika Agarwal, the company has launched two AI-based applications, MedBeat HealthConnect and MedBeat HealthConnect Plus, for patients and clinicians, respectively. Recently, the company announced support for four more languages for patients to check their symptoms. The application now supports English, Hindi, Malayalam, Kannada, Tamil, and Telugu.

“What we realised is that in healthcare, the main issue is that you don’t have enough quality data, and whatever data is available comes from the US or other countries, which may or may not be applicable to Indian patients,” said Gupta.

Adopting a Hybrid Approach

AI4Rx is building a system in which doctors from various specialisations vet the content, which helps in building quality datasets. “We started something where a doctor can build a model without data. We created a graphical studio where you can model a disease through your knowledge, and it can be done individually. So each specialist can do it for their area of expertise, and since we are doing it for each disease, it is easy for testing and verification.”
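AI4Rx has not published the internals of its inference graph, so the sketch below is only a rough illustration of the idea it describes: a specialist encodes a disease as weighted symptom relationships drawn from clinical knowledge rather than from training data, and every prediction carries the symptoms that matched so a doctor can verify it. The names here (DiseaseModel, rank_differential, the symptoms and weights) are hypothetical.

```python
# Illustrative sketch only; not AI4Rx's actual design or API.
from dataclasses import dataclass, field


@dataclass
class DiseaseModel:
    """A single disease authored by one specialist, without any training data."""
    name: str
    # symptom -> weight assigned from clinical knowledge (0..1), hypothetical values
    symptom_weights: dict[str, float] = field(default_factory=dict)


def rank_differential(presenting: set[str], models: list[DiseaseModel]):
    """Score each doctor-authored disease model against the reported symptoms.

    Returns (disease, score, matched symptoms) so every prediction is
    explainable and can be checked by a clinician.
    """
    ranked = []
    for m in models:
        matched = presenting & m.symptom_weights.keys()
        total = sum(m.symptom_weights.values()) or 1.0
        score = sum(m.symptom_weights[s] for s in matched) / total
        ranked.append((m.name, round(score, 2), sorted(matched)))
    return sorted(ranked, key=lambda r: r[1], reverse=True)


if __name__ == "__main__":
    dengue = DiseaseModel("dengue", {"fever": 0.9, "rash": 0.6, "joint pain": 0.7})
    typhoid = DiseaseModel("typhoid", {"fever": 0.8, "abdominal pain": 0.7, "headache": 0.5})
    print(rank_differential({"fever", "rash"}, [dengue, typhoid]))
```

Because each disease is modelled separately, one specialist can author and test their own area of expertise in isolation, which is consistent with the per-disease verification Gupta describes.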

While it is not possible to completely eliminate LLMs when building an AI model that helps both clinicians and patients, a hybrid approach is a suitable one. At AI4Rx, LLMs are used for summarisation, including generating summaries based on a patient’s symptoms, lab investigations and previous prescriptions. When it comes to asking follow-up questions for differential diagnosis, however, the company relies on its own model.
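As a rough, hypothetical sketch of that division of labour (none of these function names come from AI4Rx, and llm_summarise stands in for whatever hosted or local LLM a team might call), a thin router could send summarisation requests to an LLM while follow-up questioning stays with the doctor-authored graph:

```python
# Hedged sketch of the hybrid split described above; placeholder logic only.
from typing import Callable


def llm_summarise(symptoms, labs, prescriptions) -> str:
    """Placeholder for an LLM call that condenses patient history for a clinician."""
    return (f"Patient reports {', '.join(symptoms)}. "
            f"Labs: {', '.join(labs) or 'none'}. Prior Rx: {', '.join(prescriptions) or 'none'}.")


def graph_follow_up(symptoms) -> str:
    """Placeholder for the doctor-authored graph choosing the next question to ask."""
    unexplored = {"rash", "joint pain", "abdominal pain"} - set(symptoms)
    return f"Do you also have {sorted(unexplored)[0]}?" if unexplored else "No further questions."


def handle(task: str, **payload):
    """Route summarisation to the LLM, diagnostic questioning to the graph model."""
    routes: dict[str, Callable[..., str]] = {
        "summarise": lambda p: llm_summarise(p["symptoms"], p["labs"], p["prescriptions"]),
        "follow_up": lambda p: graph_follow_up(p["symptoms"]),
    }
    return routes[task](payload)


print(handle("summarise", symptoms=["fever", "rash"], labs=["CBC"], prescriptions=[]))
print(handle("follow_up", symptoms=["fever", "rash"]))
```

The point of such a split is that the free-text generation the LLM handles never feeds directly into the diagnostic step, which stays with the model that doctors can inspect and verify.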

The model’s advantage is clear: it learns directly from doctors, ensuring a controlled environment, and its explainable nature allows doctors to verify predictions, preventing the incorporation of incorrect data. “We have a controlled process to ensure that whatever goes into production is all verified and tested,” said Gupta. 

Though the method combines the best of both worlds, the training process becomes extremely lengthy. Dependence on doctors for testing and verification slows it down. The startup took over two years to build the model, as multiple doctors from different specialities were required for the process. “The doctors who are experts are very busy,” said Gupta.

LLMs in Healthcare 

Google recently released Articulate Medical Intelligence Explorer (AMIE), an AI system based on PaLM that is optimised for diagnostic medical conversations. Big tech companies such as Google have invested extensively in healthcare and have built LLMs such as Med-PaLM 2 that are designed to answer medical questions. Microsoft, Oracle, and OpenAI have also made significant strides in the sector.

While big tech companies will collaborate with large hospital chains, the kind of models built by AI4Rx will cater to the smaller segment, which also allows them to experiment with hybrid models over a longer timeline. “I know that big chains such as Apollo are going to invest and build something similar, but our focus is on clinics and individual doctors,” said Gupta.
