Google on Wednesday announced the launch of Gemini 2.0, its most advanced artificial intelligence model to date, as the world’s tech giants race to take the lead in the fast-developing technology.
CEO Sundar Pichai said the new model would mark what the company calls “a new agentic era” in AI development, with AI models designed to understand and make decisions about the world around them.
“Gemini 2.0 is about making information much more useful,” Pichai said in the announcement, emphasizing the model’s enhanced ability to understand context, think multiple steps ahead and take supervised actions on behalf of users.
The developments “bring us closer to our vision of a universal assistant,” he added.
The release sent shares in Google soaring by more than four percent on Wall Street, a day after the stock had already gained 3.5 percent on the release of a breakthrough quantum chip.
The tech giants are furiously taking steps to release more powerful AI models despite their immense cost and some questions about their immediate usefulness to the broader economy.
An AI “agent,” the latest Silicon Valley trend, is a digital helper that is supposed to sense surroundings, make decisions, and take actions to achieve specific goals.
The tech giants promise that agents will be the next stage of an AI revolution that was sparked by the 2022 launch of ChatGPT, which took the world by storm.
Gemini 2.0 is initially being rolled out to developers and trusted testers, with plans for broader integration across Google’s products, particularly in Search and the Gemini platform.
No Nvidia
The technology is powered by Google’s sixth-generation TPU (Tensor Processing Unit) hardware, dubbed Trillium, which the company has now made generally available to customers.
Google emphasized that Trillium processors were used exclusively for both training and running Gemini 2.0.
The market for AI training hardware has been dominated by chip juggernaut Nvidia, which the AI explosion has catapulted into the ranks of the world’s most valuable companies.
Google said that millions of developers are already building applications with Gemini technology, which has been integrated into seven Google products, each serving more than two billion users.
Gemini 2.0’s capabilities are expected to arrive in early 2025 in Google’s search application, still the company’s main money-maker.
The first release from the 2.0 family of models will be Flash, offering faster performance while handling multiple types of input (text, images, video, audio) and output (including generated images and speech).
Gemini users worldwide can already tap into a chat-only version of Flash, the company said, with testers given access to a multimodal version that can interpret images and surroundings.
Google also said it was experimenting with a product that can use software apps, websites and other online tools, much like a human user. OpenAI and Anthropic have unveiled similar features.
The company also teased a new version of Project Astra, a smartphone digital assistant, akin to Apple’s Siri, that responds to images as well as verbal commands.
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)