We've been thrilled to see the recent enthusiasm for and adoption of Gemini 1.5 Flash, our fastest model to date, optimized for high-volume, high-frequency tasks at scale. Every day, we learn how people are using Gemini to do amazing things like transcribe audio, understand code errors, and build apps in minutes. Companies like Jasper.ai are also building with Gemini to deliver fantastic experiences for their own users:
“As an AI-first company focused on empowering enterprise marketing teams to get work done faster, it is imperative that we use high quality multimodal models that are cost-effective yet fast, so that our customers can create amazing content quickly and easily and reimagine existing assets,” said Suhail Nimji, Chief Strategy Officer at Jasper.ai. “With Gemini 1.5 Pro and now Flash, we will continue raising the bar for content generation, ensuring adherence to brand voice and marketing guidelines all while improving productivity in the process.”
But we also realize that the true value goes beyond just providing great models. It's about giving you a holistic ecosystem that makes it easy to access, evaluate, and deploy these models at scale. That's why we're rolling out updates to help you move into production and expand to global audiences:
- More models, more possibilities: We expanded our Model Garden with open models like Meta's Llama 3.1 and Mistral AI's latest models, and made them available as a fully managed "Model-as-a-Service," so you can find the right fit for your needs without the development overhead.
- Removing language barriers: We're enabling Gemini 1.5 Flash and Gemini 1.5 Pro to understand and respond in 100+ languages, making it easier for our global community to prompt and receive responses in their native languages.
- Predictable performance: We understand how critical reliability and performance are. That's why we're making Provisioned Throughput in Vertex AI, backed by a 99.5% uptime service level agreement (SLA), generally available.
- Scale your AI, not your costs: We've improved Gemini 1.5 Flash to reduce input costs by up to ~85% and output costs by up to ~80%, starting August 12th, 2024. Coupled with capabilities like context caching, this can significantly reduce the cost and latency of your long-context queries. Using the Batch API instead of standard requests can further optimize costs for latency-insensitive tasks (see the sketches after this list). With these advantages combined, you can handle massive workloads and take advantage of our 1 million token context window.
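
To illustrate context caching, here is a minimal sketch using the Vertex AI Python SDK's preview caching module. The project ID, Cloud Storage URI, and model version string are placeholders, and exact module paths may vary by SDK release:

```python
import datetime

import vertexai
from vertexai.preview import caching
from vertexai.preview.generative_models import GenerativeModel, Part

# Placeholder project and region; replace with your own.
vertexai.init(project="your-project-id", location="us-central1")

# Cache a large, frequently reused context (e.g. a long PDF) once,
# so repeated long-context queries don't resend or reprocess it.
cached_content = caching.CachedContent.create(
    model_name="gemini-1.5-flash-001",  # placeholder model version
    system_instruction="You answer questions about the attached report.",
    contents=[
        Part.from_uri(
            "gs://your-bucket/large-report.pdf",  # placeholder URI
            mime_type="application/pdf",
        )
    ],
    ttl=datetime.timedelta(minutes=60),  # how long the cache lives
)

# Build a model handle bound to the cached content and query it.
model = GenerativeModel.from_cached_content(cached_content=cached_content)
response = model.generate_content("Summarize the key findings.")
print(response.text)
```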
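
Similarly, for workloads that don't need immediate responses, you can submit a batch prediction job instead of sending standard online requests. The sketch below assumes the vertexai.batch_prediction helper available in recent SDK versions; the BigQuery table and output prefix are placeholders, and a Cloud Storage JSONL file can also be used as input:

```python
import time

import vertexai
from vertexai.batch_prediction import BatchPredictionJob

# Placeholder project and region; replace with your own.
vertexai.init(project="your-project-id", location="us-central1")

# Submit one batch job over a table of prompts instead of issuing
# an online request per prompt.
job = BatchPredictionJob.submit(
    source_model="gemini-1.5-flash-001",  # placeholder model version
    input_dataset="bq://your-project.your_dataset.prompts",  # placeholder input
    output_uri_prefix="bq://your-project.your_dataset",      # placeholder output
)

# Poll until the job finishes, then check where the results landed.
while not job.has_ended:
    time.sleep(60)
    job.refresh()

if job.has_succeeded:
    print("Results written to:", job.output_location)
else:
    print("Job failed:", job.error)
```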
These enhancements are a direct response to what you, our customers, have been asking for. They represent our ongoing commitment not just to building the best models, but to providing an AI ecosystem that makes enterprise-scale AI accessible. Try Gemini 1.5 Flash today with more languages, Provisioned Throughput in GA, and a new lower price on Vertex AI starting August 12th, 2024.