Two months after the initial launch of its Gemini generative AI model, Google has started to roll out an updated version of its multi-modal model for text, image, and audio interactions.

Gemini comes in three variants: Nano, Pro, and Ultra. Gemini 1.5 Pro is the first of the 1.5 models Google has offered up for early testing.


Described as “a mid-size multimodal model optimized for scaling across a wide range of tasks,” the company said the new 1.5 Pro model shows “dramatic improvements across a number of dimensions,” outperforming 1.0 Pro on 87 percent of the benchmarks Google uses when developing its LLMs and delivering quality comparable to 1.0 Ultra, all while requiring less compute.

Gemini 1.0 Ultra was launched by the company just last week, powering Google’s Bard chatbot, which will now also be known as Gemini. The version of the chatbot powered by Ultra has been dubbed Gemini Advanced.

The 1.5 Pro model has also been built using a Mixture-of-Experts (MoE) architecture, in which the model is divided into multiple specialized subnetworks, or ‘experts,’ and only the experts relevant to a given input are activated to generate outputs. Google says this makes the model faster and more efficient to run.
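The routing idea behind MoE can be illustrated with a toy sketch (this is purely illustrative and not Gemini's actual implementation; the expert functions and gating rule below are invented for the example):

```python
# Toy Mixture-of-Experts routing sketch. A gating function scores the input
# and selects one "expert"; only that expert runs, so the parameters of the
# unselected experts cost nothing for this request.

def expert_double(x):
    # Hypothetical expert handling "small" inputs.
    return x * 2

def expert_square(x):
    # Hypothetical expert handling "large" inputs.
    return x * x

EXPERTS = [expert_double, expert_square]

def gate(x):
    """Toy gating rule: route small inputs to expert 0, larger ones to expert 1."""
    return 0 if x < 10 else 1

def moe_forward(x):
    # Only the chosen expert executes for this input.
    return EXPERTS[gate(x)](x)

print(moe_forward(3))    # routed to expert_double -> 6
print(moe_forward(12))   # routed to expert_square -> 144
```

In a real MoE transformer the gate is itself a learned network and typically routes each token to a small subset of many experts, which is what yields the compute savings Google describes.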

At launch, Gemini 1.5 Pro comes with a standard 128,000-token context window, while a limited group of developers and enterprise customers can try it with a context window of up to one million tokens via AI Studio and Vertex AI in private preview.

Tokens can represent whole words or subsections of words, as well as pieces of images, videos, audio, or code. The full one-million-token context window will be made generally available at an unspecified later date. By comparison, Gemini 1.0 had a 32,000-token context window.
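To put those window sizes in perspective, a rough rule of thumb for English text is about four characters per token (a common heuristic, not Gemini's actual tokenizer):

```python
# Back-of-the-envelope comparison of the context windows mentioned above.
# CHARS_PER_TOKEN is a rough heuristic for English prose, not a property
# of Gemini's tokenizer.
CHARS_PER_TOKEN = 4

def approx_chars(tokens):
    # Approximate text capacity of a context window, in characters.
    return tokens * CHARS_PER_TOKEN

windows = [
    ("Gemini 1.0", 32_000),
    ("Gemini 1.5 Pro (standard)", 128_000),
    ("Gemini 1.5 Pro (private preview)", 1_000_000),
]

for label, tokens in windows:
    print(f"{label}: {tokens:,} tokens ~ {approx_chars(tokens):,} characters")
```

By this rough measure, the one-million-token preview window fits around 4 million characters of text, roughly 30 times what Gemini 1.0 could hold.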

“Gemini 1.5 delivers dramatically enhanced performance. It represents a step change in our approach, building upon research and engineering innovations across nearly every part of our foundation model development and infrastructure,” said Demis Hassabis, CEO of Google DeepMind, in a blog post announcing Gemini 1.5.

“These continued advances in our next-generation models will open up new possibilities for people, developers, and enterprises to create, discover, and build using AI,” he said.