DEV Community

We're the Google DeepMind Team building Gemini, Google AI Studio, and more! Ask Us Anything.

Hey DEV community! 👋

We're the team behind Google AI Studio and the Gemini API at Google DeepMind.

We'll be answering your questions live on August 28, 2025 starting at 1PM ET.

Thank you to everyone who participated in our AMA! We'll do our best to keep answering questions asynchronously over the next few weeks, so check back later if your question wasn't answered!


What we work on:

  • 🤖 AI Studio: Our developer platform where you can experiment with Gemini models, including access to our latest experimental releases.
  • 🔧 Gemini API: APIs that serve millions of developers and process trillions of tokens.
  • 🎨 Multi-modal & Open-Source Models: Advanced AI models including Veo (video generation), Imagen (image generation), Lyria (music creation), and Gemma (open-source language models).
  • 📚 Developer Experience: Making Google's most advanced AI models easier to integrate and use.
  • 🌍 Community: Building resources, documentation, and support for the global developer community.

Ask us about:

  • 🚀 AI Studio & Gemini API: Features and how to get started
  • 🎨 Our AI Models: Veo, Imagen, Lyria, Gemma
  • 🔬 Working at Google DeepMind: What it's like being at the intersection of research and developer tools
  • 🛠️ Building AI applications: Best practices, common challenges, scaling tips
  • 💡 Career advice: Breaking into AI/ML, developer relations, product management
  • 🌟 The future of AI development: Where we see the space heading
  • 🏗️ Developer experience: How we think about making AI accessible

Please don't ask us about:

  • Unreleased Google products or detailed internal roadmaps
  • Proprietary technical implementations
  • Confidential business information
  • Personal/private information

Get started with AI Studio:

If you haven't tried AI Studio yet, it's the easiest way to start building with Gemini. You can turn on features like code execution, use extended context (2M+ tokens), and access our latest experimental models - all for free to get started!
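
If you'd rather jump straight to the API, here's a minimal sketch of a first call using the google-genai Python SDK (assuming you've created an API key in AI Studio and exported it as GEMINI_API_KEY; the model ID below is just an example):

```python
# Minimal first call to the Gemini API with the google-genai SDK.
# Assumes GEMINI_API_KEY is set; the model ID is an example -- check
# AI Studio for the currently available models.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash",  # example model ID
    contents="In two sentences, what is a context window?",
)
print(response.text)
```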

We'll be rotating through answers throughout the day, so you might hear from different team members. Let's dive in! 🔥

Top comments (161)

daniele pelleri

Curious if there are upcoming releases for Gemini CLI. In my tests it’s excellent at whole-repo analysis and strategy, but it often stumbles in execution (tools break and it loops).
Are any major releases planned? What kind, and on what timeline?
And will there be multi-agent support?

Paige Bailey Google AI

Hey there! Am glad to hear that you've been using and loving the Gemini CLI (us, too! 😄).

This update is via the Google Cloud folks who are building out the CLI:

Gemini CLI is constantly improving with new releases every week! You can expect broader quality fixes to be landing around the end of September. As for multi-agent support, that’s on the roadmap and is expected to be available in mid to late October -- stay tuned!

Attah Ephraim

Please, how do I get to work for Google? 🙏

Anna Villarreal

Vector Databases and VR Question:

Do you foresee AI/vector databases being gamified in such a way that we can throw on a VR headset and 'swim' through the vector database, so to speak, sort of as a fun way to explore and retrieve data?

I'd like to try that, sounds fun.

Thanks.

Paige Bailey Google AI

I love the idea of being "immersed" in your data, and of using 3D space as a way to spot unexpected relationships in datasets! In addition to the recommendations from other folks on this thread, you might also be interested in checking out the Embedding Projector as a fun way to view and manipulate data in 3D space.
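
If you want to try it with your own vectors, here's a quick sketch of exporting embeddings and labels as TSV files, which is the format the Embedding Projector's "Load" panel accepts (the arrays below are placeholders):

```python
# Sketch: write embeddings + labels to TSV so they can be loaded into the
# Embedding Projector (projector.tensorflow.org). The arrays here are
# random placeholders -- swap in your real vectors and labels.
import numpy as np

embeddings = np.random.rand(100, 512)       # placeholder vectors
labels = [f"item_{i}" for i in range(100)]  # one label per vector

np.savetxt("vectors.tsv", embeddings, delimiter="\t")

# Single-column metadata files are written without a header row.
with open("metadata.tsv", "w") as f:
    f.write("\n".join(labels) + "\n")
```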

Ava Nichols

Are these mostly used for demos, or are they useful for practitioners?

Anna Villarreal

That's awesome!

Devin

Wow

Prema Ananda

Excellent idea!
But the main challenge is how to display 512+ dimensions of embeddings in 3D VR space?
Perhaps through interactive projections or using additional channels (color, sound, vibration).
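
As a rough sketch of the projection idea (PCA here, but any reduction would do), collapsing 512-dimensional vectors into 3D coordinates plus one extra component mapped to color:

```python
# Toy sketch: project 512-dim embeddings to 3D positions for a VR scene,
# with a 4th component mapped to a color channel. Random data as a stand-in.
import numpy as np
from sklearn.decomposition import PCA

embeddings = np.random.rand(1000, 512)   # placeholder embeddings

reduced = PCA(n_components=4).fit_transform(embeddings)
xyz = reduced[:, :3]                     # spatial position of each "molecule"
color = reduced[:, 3]                    # extra dimension shown as color

print(xyz.shape, color.shape)
```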

Anna Villarreal • Edited

Hi Prema, thanks for your response.

I'm assuming it would be approached by taking the overall (x, y, z) of each individual vector and assigning it some set volume in space, with some padding, so a user could navigate through the 'cracks'.

It would essentially be like swimming through a gas, but the molecules are ginormous so they are visible to the user... big enough that a user could select each one to see the details.

But small enough to sneak by each one as they gently nudge out of the way and then return to their normal position.

I think this could be done several ways. In my experience, the tools that come to mind right away are Blender and three.js! Haha.

Could even have a temperature map overlay, so a user could 'jump in' and explore search results based on their custom query and see how closely they are related. Or perhaps a pattern overlay, to accommodate more users?

You know what. This would be really awesome for music exploration.

Jay • Edited

Former game dev here. Blender is 3D modeling software, not really ideal for your use case. I just wanted to say that if you have a big idea like this, you're often better off trying to make it yourself.

There are a couple of game engines that are free to use, such as Unreal and Unity, that provide VR support, and there are plenty of online resources.

I would recommend Unity for this due to a combination of community support regarding tutorials, and it using C# as its primary coding language. Most AI is pretty good at writing C# scripts (as long as you keep them modular), so you don't need to be a master programmer.

You might even enjoy learning how to use the game engine. In regards to visuals, you would also want to learn Blender for the 3D assets.

I don't foresee Google making anything like this, as it's very niche and they prefer broad strokes, not to mention they had a pretty massive failure in the game industry (Stadia) and likely aren't looking to try again.

Anna Villarreal

Thanks for your wide-lensed feedback. I have used Unreal Engine a bit, but not to any major extent. Any reason you would use Unity over Unreal for something like this? Based on your answer, it sounds like my original question is, at the very least, possible.

Jay • Edited

I can sort of picture your concept in my head. Unreal is a lot more complex, for me at least, when it comes to setting up a system like that, because your options are its visual Blueprints or C++, and the engine itself is pretty heavy on resources. Unity is lighter, and I think scripting a system like that would be much easier in C# as long as you can optimize it.

You could probably just instantiate new nodes as you're going along and cull anything that's out of view. Since it's VR, it's going to be a bit heftier to run, so the smaller engine would likely be more stable for the average person :)

My Discord is on my profile if you want to discuss it more over there. I don't really have any other socials lol

Frédéric NERET

When will it be possible to vibe code with Google Apps Script? Thanks

Paige Bailey Google AI

Thanks for the question!

You can already use the Gemini APIs and Gemini in AI Studio to generate Apps Script code, which you can then pull into Google Workspace products (like Sheets). The Google Cloud team also has a few codelabs showing how to use the Gemini APIs with Apps Script (example).
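
As a minimal sketch (the model ID is just an example), here's how you might prompt Gemini for a .gs function that you then paste into the Apps Script editor yourself:

```python
# Sketch: ask Gemini to draft an Apps Script function, then copy the output
# into the Apps Script editor. Assumes GEMINI_API_KEY is set; the model ID
# is an example.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

prompt = (
    "Write a Google Apps Script function that reads column A of the active "
    "sheet and writes the uppercase version of each value into column B."
)
response = client.models.generate_content(model="gemini-2.5-flash", contents=prompt)
print(response.text)  # review the generated .gs code before running it
```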

Jishna M

I have a keen interest in drug development and personalized medicine using AI. My master's thesis was on finding suitable drug candidates for PSP using graph neural networks and other AI techniques, and I also used DeepMind's AlphaFold2 in it. I learned everything for it by myself through online resources, but I feel overwhelmed by the vast number of resources out there, and they aren't that helpful for making a proper plan with tangible results to get better in the domain. So, if I want to one day work at DeepMind and be part of novel drug discovery, what steps do I need to take?

Paige Bailey Google AI

It’s great to hear that you’re interested in AI for drug discovery! Google DeepMind, Isomorphic Labs, and our colleagues in Google Research are all investing very heavily in AI for health and the medical domain.

The skill sets that you would need would depend on the role that you would be interested in taking - for example, engineering, product, research, marketing, and more are all role profiles that we hire for in our AI for health orgs. For each of those focus areas, I would recommend that you continue building your expertise in AI and in the medical / life sciences, and make sure to share your work visibly - either via GitHub for open-source and software projects, or by publishing the research that you've been pursuing.

I'd also recommend building on or evaluating some of the open models that Google has released in the healthcare space, like TxGemma and MedGemma. Good luck, and am looking forward to seeing what you build!
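
For a local starting point, here's a rough sketch of evaluating a Gemma-family model with Hugging Face transformers; it uses the general-purpose gemma-2-2b-it as a stand-in, so swap in the TxGemma or MedGemma ID (and usage terms) from the official model cards:

```python
# Rough sketch: run a quick qualitative check against a Gemma-family model.
# gemma-2-2b-it is a general-purpose stand-in -- replace it with the TxGemma
# or MedGemma ID from its model card (license acceptance may be required).
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2-2b-it")

out = generator(
    "In one sentence, what does 'blood-brain barrier permeability' mean "
    "for a drug candidate?",
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```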

Attah Ephraim

I wish to work for Google as a C++ Developer 🙏.

Jishna M

I am an AI engineer by profession. Are there any specific guidelines I can follow to attain a position at DeepMind in the drug research group? For example, how do I get an interview call, and what should I prepare?

Ha3k

Can we have a virtual hackathon solely focused on building AI apps in ai.studio?

Patrick Loeber Google AI

I love that! We’re planning to run more hackathons later this year and I'll make sure to forward that idea!

Herrmer

Yes!

Herrmer

On DEV!

Attah Ephraim • Edited

I wish to work for Google as a C++ developer. My work with HTML, CSS, and JS: scraplinkecomarket.netlify.app

Vivian Jair Google AI

Stay tuned here - may or may not have something coming soon!

Osinachi Okpara

What would it take to intern as a devrel for DeepMind?

Paige Bailey Google AI

We regularly have engineering and product internship roles available at Google and at Google DeepMind! I recommend checking out our careers pages and searching for "internship".

If you’re interested in a career as a developer relations engineer, I would recommend building in the open - contributing to open-source projects, sharing your work publicly (on social media, and on GitHub) and investing in supporting your local and online developer communities. Many DevRel folks start their careers as software engineers, and then gradually move to a more community-facing role.

Devin

On this subject, how do you think the idea of internships will evolve in the future? There's so much written about how AI is particularly affecting entry-level jobs. What do you think needs to change for employers to be able to best support this kind of work?

Sherry Day

How does 'Search-grounded' mode work under the hood—are citations confidence-weighted and deduplicated? Can we constrain freshness windows, force certain domains, or provide our own corpus for grounding?

Alisa Fortin Google AI

The secret sauce is the same as Google Search because the tool relies on the Google Search Index. Currently, the groundingMetadata does not expose a direct confidence score for each citation. The presence of a citation indicates the model found that source relevant for generating a specific part of the response. In terms of deduping, the system generally attempts to provide unique and relevant sources. While you might see citations from different pages on the same domain if they each contribute distinct information, the goal is to provide a concise set of the most useful sources rather than a long list of redundant links.

For bring-your-own-search scenarios, try using function calling with RAG flows.

In terms of working under the hood, the first thing the tool will do is analyze your query. For example, a prompt like "Who won the F1 race last weekend?" will trigger a search, while "Write a poem about the ocean" likely won't. The model then formulates one or more search queries based on your prompt to find the most relevant information from the Google Search Index. The most relevant snippets and information from the search results are fed into the model's context window along with your prompt. The model uses this retrieved information as its source of truth to generate a "grounded" response. The API returns the response along with groundingMetadata. This metadata includes the source URLs for the information used, to build citation links back to the original content for verification.

We are working on a filter to constrain to date ranges. You cannot force certain domains (use URL Context for that), but you can exclude some domains from search. The "Bring your own search" option is available through Vertex.
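
For reference, here's a minimal sketch of enabling the Google Search tool and reading the returned groundingMetadata with the google-genai Python SDK (the model ID is an example, and the metadata field names follow the current SDK, so double-check the docs):

```python
# Sketch: ground a Gemini response with Google Search and list the cited
# sources from groundingMetadata. Assumes GEMINI_API_KEY is set; the model
# ID is an example and the metadata fields follow the current SDK.
import os
from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Who won the F1 race last weekend?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)

metadata = response.candidates[0].grounding_metadata
if metadata and metadata.grounding_chunks:
    for chunk in metadata.grounding_chunks:
        print(chunk.web.title, chunk.web.uri)  # sources used for grounding
```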

Ava Nichols

How influenced are you by the work done by other companies (e.g., OpenAI releasing GPT-5 recently)?

Paige Bailey Google AI • Edited

It's always inspiring to see the recent surge in AI development – both in the modeling and product space! 😀

At Google, we ensure many different closed (ex: Anthropic) and open models are available to our customers on Google Cloud via the Vertex AI Model Garden. We also support many of the research labs via both our open machine learning frameworks (JAX) and hardware (TPUs and GPUs) for training on GCP, and have been excited to see many startups and enterprises adopt the Gemini and Gemma models.

Our DevX team has also been hard at work adding or improving support for the Gemini APIs and Gemma models in developer tools (like Roo Code, Cline, Cursor, Windsurf, etc.) and frameworks (LangGraph, n8n, Unsloth, etc.). More to come; we all go further when we're working together as one community.

Ben Halpern

What was it like at Google when ChatGPT launched?

Kenneth Brown

What advice do you have for someone who is considering signing up to a CS Bootcamp vs. going all-in on building with AI tools?

Paige Bailey Google AI

Great question, and I know a lot of folks have this top-of-mind. 👍🏻

For programs like a CS Bootcamp or attending a university, I'd say the biggest value you're really getting is the in-person community. Many educational structures are still catching up to the state of the art in AI and in building product-grade software systems, so the coursework you'd be completing might not be aligned with the latest model and product releases - and those features and models are changing at least weekly, if not daily, which makes it a challenge for educators to keep their curriculum up-to-the-minute.

To build up expertise and the skill set for working with AI systems, I would strongly suggest just starting to build: find a problem that really bugs you, use AI to automate it, and then share your work publicly -- via GitHub and social media. This is a really useful way to get product feedback, and to get inspired! There are also AI hackathons happening frequently, either in-person or online (ex: the Major League Hacking events list and DevPost are great places to look).

Jess Lee

You can also check out DEV Challenges 😇
