How CRED’s Use of Claude Code Signals the Future of Software Development
One of India’s leading fintech platforms, CRED, recently shared how it has transformed its engineering teams’ workflows using AI, particularly Anthropic’s command-line coding tool, Claude Code.
The company reported that it doubled its execution speed for delivering features and fixes and achieved a 10% increase in test coverage across its codebases. Teams also successfully delivered projects previously shelved as low-priority initiatives.
The company stated that its approach to AI was to ‘rethink core workflows’ to speed up research and decision-making cycles and deliver high-quality execution.
“CRED needed an agentic coding solution to help them streamline processes across the entire software development lifecycle, from solution discovery and design to implementation and testing,” read the blog post from Anthropic.
Developers at CRED now rely on Claude Code to identify incremental solutions for writing, testing, and committing code across both new and existing projects. The tool is also used to generate documentation for existing codebases and to break down complex problems into manageable steps.
“Next, our goal is to move toward agentic execution, where developers primarily focus on reviewing pull requests while Claude Code takes on the bulk of coding and testing,” the startup said in the blog post.
Claude Code’s Growth
CRED’s roadmap also includes implementing repository-level knowledge indexing, which allows AI systems to gather context of complex, multi-repository requirements.
Since the latest series of Claude 4 models was announced in May, Adam Wolff from the Claude Code team told VentureBeat that the tool’s revenue has increased five and a half times. The report also added that the platform has experienced 300% growth in active users. Last month, it was revealed that Claude Code is used by over 115,000 developers.
“With many assumptions, this implies a $130 million revenue business,” read a LinkedIn post from Deedy Das of Menlo Ventures, an investor in Anthropic.
CRED’s story isn’t an isolated one. Teams across the industry are using agentic command-line tools, whether Claude Code, OpenAI’s Codex CLI, or Gemini CLI, to accelerate development.
Recently, Jared Zoneraich, founder of PromptLayer, said on X, “Claude Code has changed our whole engineering prioritisation philosophy. All of our customers have told us that our production has 10x’d [increased by 10 times] in the last two months.”
“The best cases of us using Claude Code are for small one-off feature updates or fixes that would otherwise not be able to be prioritised, but Claude Code could bang it out in one shot,” Zoneraich told AIM.
Earlier, he shared an example of one such use case, where Claude Code was used to read every commit his team had made over the last 30 days, figure out what changed, and write a product update email to users.
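The workflow Zoneraich describes starts by gathering the raw commit history for the agent to read. A minimal sketch of that first step, assuming git (the output format and the 30-day window are illustrative, not PromptLayer’s actual setup):

```shell
# Collect the last 30 days of commits into a plain-text file that an
# agent such as Claude Code can read and summarise into an update email.
git log --since="30 days ago" --pretty=format:'%h %ad %s' --date=short \
  > recent_commits.txt
```

Because Claude Code can run commands like this itself, the developer only needs to ask for the summary; the history-gathering step happens inside the agent’s session.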
He said that Claude has improved developer velocity on his teams, and that it is superior to other agents on the market because it has access to bash commands and can extend its own capabilities by writing scripts, covering a long tail of functionality, whereas other agents are limited to the tools their developers give them.
Skill Benefits Cut Execution Times
When AIM approached him to understand where these agentic tools excel, Ashish Kumar, chief data scientist at Indium, a US-based consulting firm, said the benefits are most significant at the start of a project or when applying finishing touches.
As an example, Kumar pointed to the stage after coding is complete, when a developer must deploy the application on Kubernetes and often struggles to write the YAML manifest correctly.
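This is the kind of boilerplate Kumar is referring to. A minimal Kubernetes Deployment manifest looks like the following sketch (the app name, image, and port are placeholders, not anything CRED or Indium uses):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Getting the indentation right and keeping the `selector` in sync with the pod template labels is exactly where hand-written YAML tends to go wrong, and where an agent that can generate and validate the file saves time.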
However, he noted that the power of Copilot-style coding tools is limited when they must navigate a large codebase mid-project. “But if you go to terminal-based coding agents, you can define agents of your own,” he said. “It has a lot of context of your code, and does it much better.”
Kumar said that his team has deployed ‘peer review agents’ with the help of Claude Code. “It reviews both the code generated by the coding agent or written by the human. We check it for security, coding standards, and more.”
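Claude Code lets teams define custom subagents as markdown files with YAML frontmatter under a `.claude/agents/` directory. A hypothetical reviewer along the lines Kumar describes might look like the following (the agent name, tool list, and checklist are illustrative assumptions, not Indium’s actual configuration):

```markdown
---
name: peer-reviewer
description: Reviews agent- or human-written code for security issues and coding-standard violations.
tools: Read, Grep, Glob
---

You are a peer-review agent. For every changed file, check for:
- hard-coded secrets or credentials
- missing input validation
- deviations from the team's coding standards

Report findings as a prioritised list. Do not modify code.
```

Restricting the agent to read-only tools, as sketched here, keeps the reviewer from editing the code it is meant to critique.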
He said that he hasn’t seen any significant time benefits yet, but he has observed a lot of skill benefits, which in turn lead to efficient execution times.
“For example, there are people who have been data scientists all their careers. They know their models well and their data science workflow, but they lack skills in shell scripting and CI/CD coding. Imagine a data scientist writing [with AI] shell scripts fluently. These agents enable that,” he said.
The ecosystem around these tools continues to evolve and isn’t free of limitations. One recurring frustration is the tools’ tendency to interpret instructions rather than execute them exactly.
John O’Nolan, founder and CEO of Ghost, said on X that the “single most annoying thing” about Claude Code is how it “consistently does what it believes you meant, based on what you said”, rather than following precise directions. This interpretive approach, while designed to be helpful, can lead to unexpected outcomes when developers need an exact implementation.
Another engineer said on X, “With Claude Code, I notice that the model tends to ignore code that already exists, but focuses more on writing new code that works. So you end up with duplicate code and code bloat,” adding that it takes more iterations to clean the code.
One developer echoed the common view that it’s all about reviewing the code at the right moment and guiding the model in the correct direction.
“The more bad code accumulates, the harder it is to salvage it,” he said.
The post How CRED’s Use of Claude Code Signals the Future of Software Development appeared first on Analytics India Magazine.



