Romina Mendez

Google I/O Connect 2025 Recap: Gemini, Open Models, and AI in Action

Last month, I had the opportunity to attend Google I/O Connect 🇩🇪 Berlin 2025, an event built around the key announcements from the global I/O. It offered a deeper dive into Google’s latest technologies through workshops, live demos, and direct conversations with experts.

In this article, I’m sharing some of the main highlights I got to experience firsthand. I’ve even included working examples so you can see how these technologies apply in practice.

Unsurprisingly, artificial intelligence was the star of the show: many of the showcased solutions revolved around ✨Gemini, Google’s multimodal model that’s reshaping how we interact with 📝 text, 🏞 images, 📢 audio, and ▶️ video.

Concepts like prompt engineering, prototyping, and the use of open-source models (yes, you read that right… Google is embracing open models too! 🥳) were also recurring themes throughout the sessions and workshops.

But let’s not get ahead of ourselves… Here’s a look at the most relevant takeaways I gathered during the event 👇


Gemini

If you’ve been following AI developments lately, chances are you’ve heard a lot about Gemini, and for good reason. Developed by Google DeepMind, Gemini is a family of large language models (LLMs) designed to handle complex reasoning tasks and work natively across multiple modalities, including text, images, audio, video, and code.

Rather than being a single model, Gemini refers to an evolving set of AI systems optimized for different use cases and performance levels.

As of today, it includes several specialized versions, such as Gemini Flash and Flash-Lite, each tuned to a different balance of speed, cost, and capability.
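Since the family keeps evolving, one practical way to see which versions are currently available is to query the Gemini Developer API. Here's a minimal sketch, assuming the google-genai Python SDK and an API key created in Google AI Studio (both my own assumptions, not something shown at the event):

```python
# Minimal sketch: list the Gemini model versions exposed by the
# Gemini Developer API. Assumes the google-genai SDK
# (pip install google-genai) and an API key in GEMINI_API_KEY.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

for model in client.models.list():
    print(model.name)  # e.g. models/gemini-2.5-flash
```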


AI assistance in Chrome DevTools

The integration of artificial intelligence into Chrome DevTools, powered by Gemini, marks a major shift in how developers can debug and interpret the behavior of their web applications.

It once felt unthinkable to debug a web app by simply asking questions in natural language. Today, that experience is built directly into Chrome DevTools.

By clicking the "AI assistance" button, you can ask a question related to the code or runtime behavior you're analyzing.

On first use, you’ll be prompted to accept the terms of service. Keep in mind that your query and relevant context (such as HTML structure, styles, or stack traces) are processed via Google Cloud infrastructure using Gemini.

Because of this, it’s important for developers to assess the privacy and security implications of sending context-sensitive data, especially when working with confidential or regulated information.


Google AI Studio

Google AI Studio is a web-based platform that lets developers, researchers, and students experiment with generative AI models, such as those in the Gemini family, and build application prototypes around them.

Experiment, prototype, and deploy: that’s the central proposition of Google AI Studio, a fast way to test models, design prompts, and integrate them into applications through the Gemini Developer API.

AI Studio connects to Gemini and other foundation models via the API, which makes it a versatile, accessible tool for quickly building prototypes and later taking them to production.
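To give a sense of how small that integration surface is, here's a minimal sketch of calling a Gemini model through the Gemini Developer API, again assuming the google-genai Python SDK and an AI Studio API key (the model ID and prompt are illustrative):

```python
# Minimal sketch: a single text prompt through the Gemini Developer API.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash",  # any Gemini model available to your key
    contents="Summarize what Google AI Studio offers in two sentences.",
)
print(response.text)
```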


Google AI Studio: Simple Prompt vs. Google Search

During one of the event sessions, an example was shared that clearly demonstrated how a model’s response can vary depending on whether it’s connected to the internet. Inspired by that, I recreated a similar test using Google AI Studio in chat mode, specifically with the “Grounding with Google Search” option enabled. This feature allows the model to retrieve real-time information directly from the web.

The prompt used was:

“⚽ What was Real Madrid's latest match? What was the result, and where and against whom was it played?”

Comparison of results

| Mode | Response Summary |
| --- | --- |
| Chat using model-only knowledge | The model returned outdated information from 2024, based solely on its internal training data. No external sources were consulted. |
| Chat with Grounding enabled | The same prompt returned up-to-date information about a match played on 📅 July 5th, 2025 (this article was written on July 9th), and included a source link indicating where the information was retrieved. |


*Screenshots: chat using model-only knowledge vs. chat with Grounding enabled.*
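If you prefer code to the AI Studio UI, the same comparison can be reproduced through the API. This is a sketch under the assumption that the “Grounding with Google Search” toggle corresponds to attaching the google_search tool in the google-genai SDK:

```python
# Minimal sketch: the same prompt with and without Grounding with
# Google Search (assumption: the AI Studio toggle maps to the
# google_search tool below).
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
prompt = ("What was Real Madrid's latest match? What was the result, "
          "and where and against whom was it played?")

# 1) Model-only knowledge: no tools attached.
ungrounded = client.models.generate_content(
    model="gemini-2.5-flash", contents=prompt)

# 2) Grounded: the model can issue Google Search queries at inference time.
grounded = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())]),
)

print(ungrounded.text)  # likely a stale, training-data-only answer
print(grounded.text)    # should describe a recent match, with sources
```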


Google AI Studio: URL Context

Another standout feature in Google AI Studio is 🔗URL Context, which allows Gemini models to ground their responses using only the content from specific websites defined by the user. Rather than relying solely on training data or broad web searches, the model references only the information available from the URLs provided.

This is especially useful for tasks requiring trusted, domain-specific sources, such as scientific research, regulated content, or official corporate information.

🧪 Real-world example

To demonstrate this functionality, I used the following prompt:

Prompt
I need a comparison table of at least five common metabolic diseases, using only information from pharmaceutical companies. Focus on the following aspects: main symptoms, biological mechanisms or causes, currently available treatments (especially branded drugs), target patient populations, and long-term health impact. Based on this analysis, identify the two diseases that are currently receiving the most research and development investment from the pharmaceutical industry.

For each of these two diseases, provide:
1. A summary of recent innovations (last 5 years) in treatment or drug development.
2. Any active clinical trials or investigational drugs from major pharmaceutical companies.
3. The strategic approach of each company (e.g., Boehringer Ingelheim, Novo Nordisk, Sanofi, Eli Lilly) in addressing that disease.

Use only these websites for your research:
- https://www.boehringer-ingelheim.com/
- https://www.novonordisk.com/
- https://www.sanofi.com/
- https://www.lilly.com/
- https://www.astellas.com/

This feature is ideal for use cases where controlling the information sources matters, such as medical research, institutional content generation, or technical documentation based on approved materials.
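The same experiment can be sketched through the API. Here I'm assuming that the url_context tool in the google-genai SDK is the programmatic counterpart of AI Studio's URL Context option, and I've shortened the prompt for readability:

```python
# Minimal sketch: grounding a response in user-supplied URLs
# (assumption: AI Studio's URL Context maps to the url_context tool).
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

prompt = (
    "Using only these pages, list the metabolic diseases each company "
    "highlights and the branded treatments they mention:\n"
    "https://www.boehringer-ingelheim.com/\n"
    "https://www.novonordisk.com/\n"
    "https://www.lilly.com/"
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=types.GenerateContentConfig(
        tools=[types.Tool(url_context=types.UrlContext())]),
)
print(response.text)
```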


🔥 Firebase Studio

Firebase Studio is a cloud-based platform built on Firebase and Google Cloud that enables developers to quickly create functional prototypes of web and mobile applications.

With Firebase Studio, you can:

  • Build and launch working prototypes using tools like Firestore, Authentication, Cloud Functions, and Hosting from a unified web interface.
  • Integrate generative AI, real-time databases, and backend logic without managing infrastructure.
  • Iterate quickly without needing to set up a local development environment.

The following video showcases a quiz-based prototype app, built from a prompt, which helps identify early symptoms of diabetes through simple, guided questions and answers.


💎 Gemma

Gemma is a family of open language models developed by Google DeepMind and other Google teams. Built on the same underlying technology as the Gemini models, 💎Gemma models offer free access to their weights. While categorized as "open models," their terms of use, redistribution rights, and ownership vary by version and may not conform to standard open-source licenses.

The 💎Gemma family includes variants tailored to different use cases, such as code generation, multimodal processing (📝text and 🩻images), structured data reasoning, and content classification. ❤️MedGemma is a specialized variant optimized for medical applications, with the ability to interpret clinical text and medical images such as mammograms. Other models, such as ShieldGemma 2, focus on detecting harmful content, while DataGemma integrates real-world statistical data through retrieval techniques to improve factual accuracy and reduce hallucinations in model outputs.
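Because the weights are freely downloadable, a Gemma checkpoint can run entirely on your own machine. Here's a minimal sketch using the Hugging Face transformers library; the tooling and checkpoint name (google/gemma-2-2b-it, which requires accepting the Gemma license on the Hub) are my choices for illustration:

```python
# Minimal sketch: running an open-weights Gemma checkpoint locally
# with Hugging Face transformers (pip install transformers torch).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # small instruction-tuned variant
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "In one paragraph, what does 'open weights' mean?"}]
out = pipe(messages, max_new_tokens=150)

# The pipeline returns the chat history with the model's reply appended.
print(out[0]["generated_text"][-1]["content"])
```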

In the following article, you can see an example of how to run the MedGemma model locally on radiography images.


📚 References

  1. Google. (n.d.). Introducing Gemini: Our largest and most capable AI model. Google Blog. https://blog.google/technology/ai/google-gemini-ai/#introducing-gemini
  2. Google. (2024, February). Building open models responsibly in the Gemini era. Google Open Source Blog. https://opensource.googleblog.com/2024/02/building-open-models-responsibly-gemini-era.html
  3. Google DeepMind. (n.d.). Gemini Flash. https://deepmind.google/models/gemini/flash/
  4. Google DeepMind. (n.d.). Gemini Flash Lite. https://deepmind.google/models/gemini/flash-lite/
  5. Google DeepMind. (n.d.). Gemma Models Overview. https://deepmind.google/models/gemma/
  6. Google. (n.d.). Gemma: Open models for the Gemini era. Google Blog. https://blog.google/technology/developers/gemma-open-models/
  7. Google. (n.d.). MedGemma 4B (Instruction-tuned). Hugging Face. https://huggingface.co/google/medgemma-4b-it
  8. Google. (n.d.). AI assistance in Chrome DevTools. Chrome Developers. https://developer.chrome.com/docs/devtools/ai-assistance?hl=es-419
  9. Google Cloud. (n.d.). Gemini 2.5 Flash overview. https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash?hl=es-419
  10. Chrome for Developers. (2025, June). Top 10 web updates from Google I/O 2025 [Video]. YouTube. https://www.youtube.com/watch?v=GSVe6zguiao&t=103s
