UC Berkeley has released Koala, a dialogue model intended for research purposes.

Here’s a link to the web demo: https://bit.ly/3zxXCrC

Koala was trained by fine-tuning Meta's LLaMA on dialogue data scraped from the web, with a particular focus on responses to user queries from other large language models such as ChatGPT, alongside question-answering and human-feedback datasets. The developers chose to curate a high-quality dataset rather than maximise its size.
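For readers unfamiliar with the approach, supervised fine-tuning of a base LLaMA checkpoint on dialogue data looks roughly like the sketch below. This is a minimal illustration using the Hugging Face transformers and datasets libraries, not Koala's actual training code; the model path, the dialogues.jsonl file, the prompt format, and the hyperparameters are placeholder assumptions.

```python
# Minimal sketch: supervised fine-tuning of a LLaMA checkpoint on dialogue data.
# Illustrative only -- not the Koala training pipeline.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_path = "path/to/llama-7b"  # placeholder path to local LLaMA weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers typically lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_path)

# Assumed data format: one JSON record per line with "prompt" and "response" fields.
dataset = load_dataset("json", data_files="dialogues.jsonl", split="train")

def format_and_tokenize(example):
    # Concatenate prompt and response into a single training sequence.
    text = f"USER: {example['prompt']}\nASSISTANT: {example['response']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="koala-sft", per_device_train_batch_size=1,
                           num_train_epochs=2, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```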

Around 60K dialogues publicly shared by users on ShareGPT were collected using public APIs. Redundant and non-English dialogues were then removed, shrinking the dataset to approximately 30K examples.
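A cleanup step of this kind could look like the following sketch, which drops exact duplicates and keeps only English dialogues. The clean_dialogues helper and the langdetect dependency are illustrative assumptions, not the actual pipeline used by Koala's authors.

```python
# Illustrative cleanup: remove duplicate and non-English dialogues.
from langdetect import detect  # third-party language detection library

def clean_dialogues(dialogues):
    """Return dialogues with exact duplicates and non-English entries removed."""
    seen = set()
    cleaned = []
    for text in dialogues:
        if text in seen:
            continue  # drop redundant dialogues
        seen.add(text)
        try:
            if detect(text) != "en":
                continue  # drop non-English dialogues
        except Exception:
            continue  # drop entries too short or malformed to classify
        cleaned.append(text)
    return cleaned

# Example usage: roughly 60K raw ShareGPT dialogues shrink to ~30K after cleaning.
# cleaned = clean_dialogues(raw_dialogues)
```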

ChatGPT and human responses from the HC3 English dataset, amounting to roughly 87K question-answer examples, were also used.

Open-source data used to train Alpaca, components of the OIG dataset, Anthropic's HH dataset, OpenAI's WebGPT dataset, and OpenAI's summarisation dataset were also used to train the model.

Based on the results of a user study, the developers claim that Koala responds effectively to a wide array of user queries. The blog post also states that Koala's outputs are on par with ChatGPT in half of the cases, while generally exceeding those of Stanford's Alpaca.

An associate professor involved with Koala also tweeted, “this has some interesting implications for how powerful LLMs can be trained on a budget (in terms of weights and compute).”

“I think this is really interesting, because this further supports possibility that in the future very capable LLMs could be “privately owned” (vs hosted and only accessed via APIs),” he added.
