How Neo4j Uses Graph Technology With LLMs to Replace SaaS Applications
“These [AI] models, they’re stupid. But they have a lot of data, and they have a lot of processing. And therefore they look much more intelligent than they actually are,” said Stephen Chin, the VP of developer relations at Neo4j, at the Great International Developer Summit.
In his talk ‘Enhancing LLMs With Graph Technology’, Chin introduced a framework that, he argues, gives the LLM a complete brain so it can produce reliable, accurate, and information-rich results. He began by explaining the problems with an LLM that lacks this framework, and then highlighted the benefits of integrating it with graph technology.
The Current Problem With LLMs for Business Use Cases
Chin posed an indeterminate problem, one without enough information for a definite answer, to OpenAI’s o3 reasoning model.
The problem described a school of 36 students with three elective course choices open to boys and girls. Chin then supposed that 75% of the students failed and asked the audience to guess how many new students would pick home economics as their elective.
To tackle a problem like this, people would ask for more information and impose constraints before approaching a solution. The LLM, however, simply calculated the failure rate and implied that the girls would opt for home economics, an unfair bias.
Chin explained that LLMs try to invent a relationship, or a story, to come up with an answer, instead of reasoning the way humans do. He highlighted that LLMs think in word vectors rather than words: statistical probabilities in a multidimensional space, where the model calculates the relationships between different objects and determines the most likely related subject.
He also remarked that humans are exposed to a comparatively limited number of words and can still do far more than an AI model exposed to 500 million words, which continues to struggle to reason.
Enhance LLMs by Adding a Left Brain With Graph Technology

Chin proposed pairing an LLM with a knowledge graph, which is more logical and structured. In his framing, an LLM on its own is working with only half a brain; with a knowledge graph, it gets to work with the whole brain.
“Knowledge graphs have been around for a while. It’s a different way of representing data structures, where you’d use nodes and relationships. And it allows you to create a human-understandable representation of data,” he said.
“And we can pass this to the LLM as the context that it uses to then solve the problem. So, kind of grounding it in a data set which we and the LLM can understand”.
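That nodes-and-relationships idea is straightforward to sketch in code. The example below is a minimal illustration rather than anything shown in the talk: it uses the official neo4j Python driver to store a couple of facts as nodes and relationships, then serialises them as plain-text triples that could be passed to an LLM as grounding context. The connection details, labels, and relationship types are assumptions made up for the example.

```python
# Minimal sketch (not from the talk): encode facts as nodes and
# relationships in Neo4j, then serialise a slice of the graph as
# plain-text triples for an LLM prompt. The URI, credentials, labels
# and relationship types are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def build_graph(tx):
    # One fact: a student enrolled in a course
    tx.run(
        "MERGE (s:Student {name: $name}) "
        "MERGE (c:Course {title: $course}) "
        "MERGE (s)-[:ENROLLED_IN]->(c)",
        name="Alex",
        course="Home Economics",
    )

def graph_as_context(tx):
    # Read the subgraph back as human-readable subject-predicate-object lines
    result = tx.run(
        "MATCH (s:Student)-[r]->(c:Course) "
        "RETURN s.name AS subj, type(r) AS pred, c.title AS obj"
    )
    return "\n".join(f"{r['subj']} {r['pred']} {r['obj']}" for r in result)

with driver.session() as session:
    session.execute_write(build_graph)
    context = session.execute_read(graph_as_context)

prompt = f"Answer using only these facts:\n{context}\n\nQuestion: ..."
driver.close()
```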
He explained that graph databases are designed to enable the discovery of hidden relationships in data, which makes them a powerful means of understanding it.
Chin noted that these are commonly used for fraud detection, supply chain management, and pharmaceutical applications, such as ontologies. “And they’re also really good at encoding information for AIs,” he said.
Chin explained that knowledge graphs give LLMs access to facts and explicit, explainable insights about the data. This is implemented using GraphRAG, which combines a knowledge graph with generative AI.
In this architecture, the GenAI application or LLM asks the database for additional information and returns a grounded response, drawing on graph retrieval, graph similarity, community algorithms, and vector similarity.
With GraphRAG integrated, the LLM can answer a wide range of questions, and it can expand on them depending on how tightly the parameters on the model are set. The graph database gives developers an architecture that is much more grounded, with greater control over the results.
Developers can apply different GraphRAG patterns to improve the results, and Neo4j is one of the databases that can act as a memory for an agentic architecture.
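One common GraphRAG pattern, roughly sketched below, uses vector similarity to find entry-point nodes and then traverses the graph to gather connected facts for the prompt. The index name, node properties, and embedding step are illustrative assumptions rather than Neo4j’s or Klarna’s actual implementation; the vector search relies on Neo4j 5.x’s db.index.vector.queryNodes procedure.

```python
# Rough sketch of one GraphRAG retrieval pattern: vector similarity to
# find entry-point nodes, then a one-hop graph expansion to collect
# connected facts. Assumes a Neo4j 5.x vector index named
# 'doc_embeddings' and nodes with a 'text' property; the question
# embedding is computed elsewhere. None of this is a real schema.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

RETRIEVAL_QUERY = """
CALL db.index.vector.queryNodes('doc_embeddings', $k, $question_embedding)
YIELD node, score
MATCH (node)-[r]-(neighbour)
RETURN node.text AS hit, type(r) AS rel, neighbour.text AS related, score
ORDER BY score DESC
"""

def retrieve_context(question_embedding, k=5):
    with driver.session() as session:
        records = session.run(
            RETRIEVAL_QUERY, k=k, question_embedding=question_embedding
        )
        return "\n".join(
            f"{r['hit']} -[{r['rel']}]- {r['related']}" for r in records
        )

# The connected context is then placed in the LLM prompt, grounding the
# answer in explicit relationships rather than statistical guesswork.
```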
Organisations like Klarna, a payment solutions company similar to Stripe, have adopted the GraphRAG approach using Neo4j and replaced over 1,000 SaaS applications, including Salesforce.
“They’re feeding in the customer wikis, enterprise systems, internal documentation, HR systems, putting that all in a big knowledge graph. Then it becomes connected data where you can actually see things like where HR systems and internal documentation and enterprise are related across different patterns,” Chin said.
He mentioned that the system has answered 250,000 employee questions and processes 2,000 daily. Consequently, 85% of Klarna’s employees now use the tool daily to address real issues in customer service, sales, and product problem-solving, making it a powerful asset.
Why Isn’t Everyone Adopting This?
When AIM asked Chin why AI companies that build the LLMs haven’t yet adopted this, he said, “I think the companies that build the models are basically in an arms race to produce general purpose models, which can be used for a wide variety of different use cases with the best quality, given the available technology.”
He mentioned that general-purpose models on their own are not enough for enterprises. Within an organisational setting, with requirements like managing a company’s supply chain, the traditional approach of pairing LLMs with vector databases has its limitations.
To help organisations use this technology, Neo4j offers integrations with LangChain, LlamaIndex, Haystack, Pinecone, and Weaviate. Additionally, it has partnered with Docker to provide Neo4j MCP servers.
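As a rough illustration of what the LangChain integration looks like in practice, the sketch below wires a Neo4jGraph into a GraphCypherQAChain, which asks the LLM to translate a natural-language question into Cypher, runs it against the graph, and answers from the result. Import paths and parameters vary across LangChain versions, and the connection details and question are placeholders.

```python
# Illustrative sketch of the LangChain + Neo4j integration; exact import
# paths and parameters differ between LangChain versions, and the
# connection details and question below are placeholders.
from langchain_community.graphs import Neo4jGraph
from langchain_community.chains.graph_qa.cypher import GraphCypherQAChain
from langchain_openai import ChatOpenAI

graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)

# The chain asks the LLM to translate a natural-language question into
# Cypher, runs it against the graph, and answers from the query result.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(model="gpt-4o", temperature=0),
    graph=graph,
    verbose=True,
    allow_dangerous_requests=True,  # recent versions require opting in to executing generated Cypher
)

print(chain.invoke({"query": "Which internal systems reference the HR wiki?"}))
```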