Google’s parent company, Alphabet, has achieved a milestone with the introduction of its most advanced artificial intelligence model yet, named Gemini. The innovative technology is engineered to process various data formats, including video, audio, and text, representing a significant leap forward in AI capabilities.
The highly anticipated Gemini showcases stronger reasoning abilities and a more nuanced understanding of information than Google’s previous AI technologies. Alphabet plans three distinct versions of Gemini, each optimized for a different level of processing power: the most robust is tailored for data-center deployment, while the smallest is designed to run efficiently on mobile devices.
Alphabet CEO Sundar Pichai said in a blog post that the development of these new AI models represents one of the largest scientific and engineering endeavors the company has undertaken. The unveiling follows the launch of OpenAI’s ChatGPT a year ago, which prompted Google to accelerate its efforts to build AI software rivaling that of the Microsoft-backed startup.
Google integrated a portion of Gemini’s technology into its AI assistant, Bard, on the day of the model’s announcement, and said it plans to release the most advanced version of Gemini through Bard early next year, promising enhanced capabilities.

Eli Collins, Vice President of Product at Google DeepMind, said Gemini is the largest AI model the DeepMind unit has developed, and that it is both more capable and considerably more cost-effective to serve to users than prior, larger models. The training process, however, still demands a substantial amount of computing power.
Alphabet also introduced a new generation of its custom-built AI chips, the Cloud Tensor Processing Units (TPUs), which are designed specifically for training large AI models and are organized into pods of 8,960 chips each. The company said the updated processors can train large language models nearly three times faster than the previous generation. As of Wednesday, developers can access the new chips in a “preview” phase.
