The Luigi Warning: Can Indian Insurance Escape the AI Trap?
Luigi Mangione, a 26-year-old Ivy League graduate, created a national storm when he allegedly shot UnitedHealthcare CEO Brian Thompson. While some view him as a troubled individual, others see him as a symbol of rage against a system they believe is broken.
According to an NYPD intelligence report obtained by CNN, Mangione’s actions were allegedly fueled by deep resentment toward the health insurance industry and its perceived prioritisation of corporate profit over human care.
The tragedy has renewed scrutiny on lawsuits alleging that UnitedHealthcare uses AI tools like nH Predict to deny Medicare Advantage claims—supposedly overriding physicians’ judgments and rejecting patient care due to flawed algorithms, which are reported to have error rates as high as 90%.
This moment forces a hard question: Could India face a similar crisis as AI becomes deeply embedded in its health insurance and IT systems? Transparency, accountability, and ethical oversight in algorithmic decision-making are no longer optional; they’re necessary.
Indian IT companies are increasingly shaping the global insurance industry with AI. They are automating claims and underwriting, enhancing fraud detection, and improving customer experience. Companies like Infosys, HCLTech, TCS, and Wipro are integrating sophisticated AI technologies into their insurance offerings.
Infosys, for instance, is leading the digital transformation of the Life Insurance Corporation of India (LIC) under its DIVE initiative, delivering end-to-end AI-enabled integration and DevOps services.
HCLTech has expanded its partnership with The Standard, a prominent US insurer, to co-develop AI-led financial protection solutions.
This trend is part of a broader shift in the industry. The market for generative AI solutions in Indian insurance is expected to grow at a CAGR of 38.28% from FY2024 to FY2032, reflecting rapid adoption.
According to Wipro’s “The AI Advantage” report, based on insights from 100 US-based insurers, 81% of companies plan to increase their AI budgets, with underwriting emerging as a key focus area for enhancing efficiency and accuracy. Nearly all surveyed leaders believe AI is vital to customer experience and personalisation.
However, a Genpact-AWS study reveals that only 36% of US customers feel their digital experience has improved, despite 69% of insurers deploying AI, highlighting a need for better scaling and alignment with user expectations.
That said, concerns about fairness and transparency persist. Examples from around the world, such as the Optum healthcare algorithm that reportedly underestimated Black patients’ health risks, demonstrate how biased data can distort AI results.
As AI plays a larger role in decision-making, questions arise about the transparency of these solutions and whether they ensure fairness and equity.
In India, Star Health and Allied Insurance came under scrutiny from insurance watchdog IRDAI after the regulator found “serious lapses in the claim settlement practices” at the company, according to reports.
The insurance regulator has not documented any cases directly attributing claim denials to AI. However, last year, Star Health introduced a new AI-driven tool called Star Health Face Scan. The company claimed it would remotely gauge 18 parameters, such as blood pressure, pulse rate, heart rate, haemoglobin levels and stress levels.
“The company will continue expanding its technological capabilities to provide even more advanced and user-friendly solutions for its customers,” Anand Roy, MD & CEO of Star Health and Allied Insurance, had said in a press statement.
Lack of Guidelines
To keep such risks from creeping into Indian insurance services, Dr KS Uplabdh Gopal, associate fellow (health initiative) at the Observer Research Foundation, told AIM that insurers and regulators need to mandate bias audits and regular checks to assess how well AI models perform across various social groups.
He insisted that the country needs to test algorithms for discriminatory outcomes based on gender, income, geography and caste, even when these attributes are not explicitly used as inputs.
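The kind of audit Gopal describes can be illustrated with a short sketch: compare claim-approval rates across groups and flag any group whose rate falls below 80% of the best-off group's (the "four-fifths" rule commonly used in disparate-impact testing). The data, group names, and threshold below are illustrative assumptions, not figures from any insurer.

```python
# Minimal disparate-impact check over claim decisions, grouped by a
# social attribute. All inputs here are synthetic and illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Return groups whose approval rate is below threshold x the best rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Synthetic audit: urban claims approved 90% of the time, rural only 60%
decisions = [("urban", True)] * 90 + [("urban", False)] * 10 \
          + [("rural", True)] * 60 + [("rural", False)] * 40
print(disparate_impact_flags(decisions))  # ['rural']
```

Run weekly over real decision logs, a check like this makes drift in group-level outcomes visible even when the sensitive attribute never enters the model.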
Gopal emphasised the need for explainable AI (XAI) models that not only make decisions but also explain why. If a claim is denied or classified as high-risk by an AI tool, the individual involved must be given a reason and the right to appeal.
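The "reason and right to appeal" requirement translates naturally into decision objects that carry machine-readable reason codes alongside the verdict. The rules, field names, and codes in this sketch are hypothetical, chosen only to show the pattern.

```python
# Sketch: every automated claim decision carries reason codes that can be
# shown to the claimant and attached to an appeal. Rules are illustrative.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list = field(default_factory=list)  # codes the claimant can contest

def assess_claim(claim):
    reasons = []
    if claim.get("policy_active") is False:
        reasons.append("POLICY_LAPSED: premium not paid at date of treatment")
    if claim.get("amount", 0) > claim.get("sum_insured", 0):
        reasons.append("OVER_LIMIT: claimed amount exceeds sum insured")
    return Decision(approved=not reasons, reasons=reasons)

d = assess_claim({"policy_active": True, "amount": 50000, "sum_insured": 30000})
print(d.approved, d.reasons)
# prints: False ['OVER_LIMIT: claimed amount exceeds sum insured']
```

Even when the underlying model is a black box, wrapping its output in an auditable structure like this gives the individual something concrete to appeal against.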
He expressed regret that India is only beginning to engage in this conversation.
“While the Digital Personal Data Protection Act, 2023 provides some safeguards around consent and data use, we currently have no required guidelines for AI explainability, fairness, or redressal in insurance,” Gopal remarked. He added that the regulatory sandbox of IRDAI permits innovation, but compliance has to keep pace. This is to ensure that AI advances inclusion rather than perpetuating current imbalances.
The Wipro report mentioned earlier also emphasises that a primary challenge in AI adoption within insurance companies involves both external and internal risks. It acknowledged that while AI enables faster and more accurate decisions, it also introduces risks of bias and reputational damage.
HCLTech and TCS, meanwhile, did not respond to the queries sent by AIM.
However, responding to a user’s query on LinkedIn about how agentic AI is transforming the insurance sector, Sukriti Jalali, principal consultant at TCS, stated that the company is enabling all roles in the claims process, from policyholders to investigators, with both generative AI and traditional machine learning.
Indian Companies’ Way
Saurabh Arora, co-founder and CTO of Plum, an insurtech startup, told AIM that his company does not feed sensitive fields such as race, religion or gender into its claims or pricing models, with age and geography appearing only where regulation explicitly requires it.
The company conducts weekly lightweight audits, keeping potential bias visible and easy to correct, while tailoring explanations for claimants, employers, and regulators. Arora acknowledged they do not claim perfect bias elimination but aim for transparency and simplicity.
Plum said its document-deficiency AI system, ClaimLens, ingests every bill, discharge summary, and lab report, using GPT-4o-based OCR plus medical-language models to pull out structured fields (patient ID, ICD-10 code, procedure, line-item costs) and assigning a confidence score to each extraction. The company also uses Anthropic’s Claude for summarising long policy documents and Sarvam’s model for local-language voice bots.
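The per-field confidence scores Plum describes typically feed a gating step: fields extracted with high confidence are auto-accepted, while anything below a threshold is routed to a human reviewer. The sketch below shows that pattern with illustrative field names and an assumed threshold; it is not Plum's actual code.

```python
# Simplified confidence-gating over OCR/LLM extractions: each field carries
# a confidence score, and low-confidence values go to a human-review queue.
# Threshold and field names are assumptions for illustration.
REVIEW_THRESHOLD = 0.85

def triage(extractions, threshold=REVIEW_THRESHOLD):
    """Split extracted fields into auto-accepted and human-review queues."""
    accepted, review = {}, {}
    for fld, (value, conf) in extractions.items():
        (accepted if conf >= threshold else review)[fld] = value
    return accepted, review

extractions = {
    "patient_id": ("P-10421", 0.99),
    "icd10_code": ("E11.9", 0.91),
    "line_item_total": ("18,450", 0.62),  # blurry scan -> low confidence
}
accepted, review = triage(extractions)
print(sorted(review))  # ['line_item_total']
```

The design choice matters for fairness too: a claim is never auto-denied off a low-confidence extraction; it is escalated instead.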
Meanwhile, Mphasis recently entered into a strategic partnership with Sixfold, an AI underwriting company focused on how insurers assess risk.
As an implementation partner, Mphasis will integrate Sixfold’s AI platform to help insurers accelerate their underwriting process, speeding up submission intake and equipping underwriters with the contextual risk insights they need to make faster, more confident decisions.
In an email response to AIM, the mid-tier company acknowledged that maintaining ethical integrity in AI-driven underwriting is non-negotiable, especially in a regulated industry like insurance.
“We have implemented bias insulation protocols that include regularly reviewing and updating our models to guard against unintended discrimination and to preserve fairness,” it said.
It also claimed to monitor model drift and hallucinations, which can occur as models evolve over time. “We also ensure human oversight remains central to the process, with underwriters equipped to review, question, and override AI-generated recommendations whenever necessary,” the company said.
Insurers racing to adopt AI must not lose sight of what matters most: trust. Mphasis rightly emphasises that “AI is only as valuable as it is trustworthy.”
The post The Luigi Warning: Can Indian Insurance Escape the AI Trap? appeared first on Analytics India Magazine.