AI2 open sources text-generating AI models — and the data used to train them


The Allen Institute for AI (AI2), the nonprofit AI research institute founded by the late Microsoft co-founder Paul Allen, is releasing several GenAI language models it claims are more “open” than others — and, importantly, licensed in such a way that developers can use them unfettered for training, experimentation and even commercialization.

Called OLMo, an acronym for “Open Language MOdels,” the models, together with Dolma, the data set used to train them and one of the largest public data sets of its kind, were designed to help researchers study the high-level science behind text-generating AI, according to AI2 senior software engineer Dirk Groeneveld.

“‘Open’ is an overloaded term when it comes to [text-generating models],” Groeneveld told TechCrunch in an email interview. “We expect researchers and practitioners will seize the OLMo framework as an opportunity to analyze a model trained on one of the largest public data sets released to date, along with all the components necessary for building the models.”

Open source text-generating models are becoming a dime a dozen, with organizations from Meta to Mistral releasing highly capable models for any developer to use and fine-tune. But Groeneveld makes the case that many of these models can’t really be considered open because they were trained “behind closed doors” and on proprietary, opaque sets of data.

By contrast, the OLMo models, which were created with the help of partners including Harvard, AMD and Databricks, ship with the code that was used to produce their training data as well as training and evaluation metrics and logs.
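That openness extends to the training data itself, which researchers can inspect directly. Below is a minimal sketch of streaming a few Dolma records with the Hugging Face datasets library; the “allenai/dolma” repository ID and the “text” field name are assumptions based on how such corpora are typically published, so check the hub page for the exact ID and any required configuration name.

```python
# A minimal sketch of inspecting Dolma, assuming the data set is published
# under the "allenai/dolma" ID on Hugging Face (an assumption; check the
# hub page for the current ID and any required configuration name).
from datasets import load_dataset

# Stream rather than download: Dolma is terabytes of text.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

# Peek at the first few documents to see what the records look like.
for i, doc in enumerate(dolma):
    print(doc["text"][:200])  # assumes a "text" field, typical of such corpora
    if i >= 2:
        break
```

Streaming avoids materializing the corpus on disk, which matters for a data set of this scale; a full copy is impractical on most research hardware.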

In terms of performance, the most capable OLMo model, OLMo 7B, is a “compelling and strong” alternative to Meta’s Llama 2, Groeneveld asserts — depending on the application. On certain benchmarks, particularly those touching on reading comprehension, OLMo 7B edges out Llama 2. But in others, particularly question-answering tests, OLMo 7B is slightly behind.

The OLMo models have other limitations, like low-quality outputs in languages that aren’t English (Dolma contains mostly English-language content) and weak code-generating capabilities. But Groeneveld stressed that it’s early days.

“OLMo is not designed to be multilingual — yet,” he said. “[And while] at this stage, the primary focus of the OLMo framework [wasn’t] code generation, to give a head start to future code-based fine-tuning projects, OLMo’s data mix currently contains about 15% code.”

I asked Groeneveld whether he was concerned that the OLMo models, which can be used commercially and are performant enough to run on consumer GPUs like the Nvidia 3090, might be leveraged in unintended, possibly malicious ways by bad actors. A recent study by Democracy Reporting International’s Disinfo Radar project, which aims to identify and address disinformation trends and technologies, found that two popular open text-generating models, Hugging Face’s Zephyr and Databricks’ Dolly, reliably generate toxic content — responding to malevolent prompts with “imaginative” harmful content.

Groeneveld believes that the benefits outweigh the harms in the end.

“[B]uilding this open platform will actually facilitate more research on how these models can be dangerous and what we can do to fix them,” he said. “Yes, it’s possible open models may be used inappropriately or for unintended purposes. [However, this] approach also promotes technical advancements that lead to more ethical models; is a prerequisite for verification and reproducibility, as these can only be achieved with access to the full stack; and reduces a growing concentration of power, creating more equitable access.”

In the coming months, AI2 plans to release larger and more capable OLMo models, including multimodal models (i.e., models that understand modalities beyond text), and additional data sets for training and fine-tuning. As with the initial OLMo and Dolma release, all resources will be made available for free on GitHub and the AI project hosting platform Hugging Face.
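For developers who want to try the released checkpoints, the sketch below shows what loading OLMo 7B with the Hugging Face transformers library might look like. The “allenai/OLMo-7B” repository ID and the need for trust_remote_code are assumptions, not confirmed details from AI2; consult the project’s GitHub and Hugging Face pages for the canonical instructions.

```python
# A hedged sketch of running OLMo 7B locally with Hugging Face transformers,
# assuming the checkpoint is published under "allenai/OLMo-7B" (the repo ID
# is an assumption; see AI2's GitHub and Hugging Face pages for the exact name).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B",
    torch_dtype=torch.float16,  # half precision fits a 24 GB card like the 3090
    device_map="auto",          # requires the accelerate package
    trust_remote_code=True,     # the release may ship custom model code
)

inputs = tokenizer("Language modeling is ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Half-precision weights for a 7-billion-parameter model occupy roughly 14 GB, which is why a consumer GPU like the Nvidia 3090 mentioned above can serve the model.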


