Cohere Releases Command-R, a Model Focused on Scalability and RAG
Cohere recently introduced Command-R, a large language model focused on enterprise applications. The model is designed to understand, interpret, and generate human-like text based on the data it has been trained on.
The model can perform tasks like Retrieval-Augmented Generation (RAG) and tool use to improve efficiency and accuracy at scale. It targets businesses seeking to deploy AI solutions broadly, offering a 128k-token context window, fluency in 10 languages, improved performance over its predecessor, and adherence to privacy and security standards.
“Compared to the last Command, Command-R is smarter, longer context, cheaper (same as the latest gpt3.5t), and you can access the weights directly,” Aidan Gomez, the company’s CEO, posted on X.
Command-R is now available on Cohere’s hosted API and will be on major cloud platforms soon. Its model weights are also accessible on Hugging Face for research.
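For researchers working with the open weights, a minimal sketch along the following lines should work with the Hugging Face transformers library. The repository id used below is an assumption; confirm the exact name and licence terms on Cohere’s Hugging Face page before use.

```python
# Minimal sketch: loading the research-release weights with Hugging Face
# transformers. The repository id below is assumed; check Cohere's
# Hugging Face page for the exact name and licence.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-v01"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Summarise the key terms of this contract in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```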
The model is optimised for RAG, enabling better information retrieval and ranking with Cohere’s Embed and Rerank models. It supports automation of complex tasks through tool integration and is versatile across 10 major languages, expanding its utility globally.
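As a rough illustration of the RAG workflow the model is built for, the sketch below grounds an answer in caller-supplied documents through Cohere’s hosted chat API. The exact SDK surface may vary between client versions, and the document fields shown (“title”, “snippet”) are illustrative.

```python
# Minimal sketch of a RAG-style call against Cohere's hosted chat API.
# The documents passed in act as the retrieval context the model grounds
# its answer in; field names within each document are illustrative.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

docs = [
    {"title": "Refund policy", "snippet": "Customers may request refunds within 30 days of purchase."},
    {"title": "Shipping policy", "snippet": "Orders ship within 2 business days."},
]

response = co.chat(
    model="command-r",
    message="How long do customers have to request a refund?",
    documents=docs,
)
print(response.text)
```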
Cohere released Command-R amidst a potential $1 billion fundraising round. The launch marks a notable step in AI for enterprise use, highlighting the market’s appetite for practical AI solutions.
In contrast to OpenAI’s broader, consumer-oriented ChatGPT, Cohere’s Command-R launch directly targets the enterprise market. The company recently partnered with Accenture, one of the businesses to which it will offer features such as improved RAG performance and competitive pricing, while emphasising privacy, data security, and freedom from vendor lock-in.