
A Hands-On Guide to Google AI Studio's New Build Feature
This is super cool! So often I start on my cool new idea, get halfway through, like building out the data layer or something, and lose interest :D
Haha same here!
this is extremely impressive, honestly the self-healing errors thing blew my mind
Do you think developers will actually start relying on AI for full projects soon, or just for quick prototypes?
It would also be nice to have something that could look at a big 2-5+ year old project and suggest optimizations or spot causes of some bug or side effect you describe.
Very Insightful!
Really like how it shows each thinking step instead of being a mystery box. Has anyone tried deploying something other than the card generator yet?
It feels like Bubble and the like will come to an end once these giants enter the market with proper user marketing.
There isn't a single LLM capable of building a production-ready app in 2 minutes. Let alone Gemini models, which are way behind Claude. And even Claude makes stupid mistakes that would never be allowed in production.
More like "From prompt to a somewhat working throwaway prototype in less than 2 minutes"
exactly.
I'm not an expert in coding or full-stack development. I used Google AI Studio to generate an AI app. When it finished, it had not integrated the AI and stated the following to me:
"You've hit on a very important point, and your thinking is absolutely correct.
In a standard development environment, you would indeed use a .env file at the root of your project to store sensitive information like API keys. That file is intentionally not checked into version control (like Git) to keep the key private.
The reason you don't see a .env file in the project structure I've provided is because my environment is a bit different. I'm providing you with the application's source code, but I cannot create or manage external configuration files like .env.
The application is built on the assumption that the environment where it runs will have the API_KEY available.
In your own local setup, you would simply create a file named .env in the main project folder with the following content:
API_KEY="YOUR_GEMINI_API_KEY_HERE"
When you run the development server (like Vite or Create React App), it automatically loads variables from this .env file and makes them accessible to the application through process.env.API_KEY, which is exactly what the geminiService.ts file is expecting.
So, to summarize:
Your understanding is spot-on. A .env file is the standard way to handle this.
The code is written to correctly use that standard practice.
You would just need to create that .env file yourself in a real-world scenario."
Do we need to finish the app ourselves? Or how does it work? Can anyone help me?
It can finish the app for you, but if you want to run it locally on your laptop, maybe through an IDE, you will need to create a .env file in the root of your folder and paste this: API_KEY="YOUR_GEMINI_API_KEY_HERE". The Gemini API key can be obtained from the API key icon in the right corner while building.
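For illustration only, here is a minimal sketch of what a geminiService.ts might look like, assuming the generated project uses the @google/genai SDK and that your build tool exposes process.env.API_KEY. The file name, function name, and model name here are assumptions, not necessarily what AI Studio generated for you:

```typescript
// Minimal sketch of a Gemini service module (assumed file: geminiService.ts).
// Assumes the @google/genai SDK and that the build tool makes
// process.env.API_KEY available at runtime (e.g. loaded from the .env file above).
import { GoogleGenAI } from "@google/genai";

// The key comes from the .env file described above; never hard-code it.
const ai = new GoogleGenAI({ apiKey: process.env.API_KEY });

export async function generateText(prompt: string): Promise<string> {
  // Ask a Gemini model for a text response and return the plain text.
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash", // assumed model name; swap in whatever your project uses
    contents: prompt,
  });
  return response.text ?? "";
}
```

The point is just that the generated code reads the key from the environment, so once your .env file is in place and loaded, no code changes should be needed.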
I tried, but it didn't work for me; it said my idea was too complex.
Impressive tool! Just one question: Does the 'Deploy to Cloud Run' feature incur charges, and if yes, how are they calculated?
The code assistant often fails to edit the files. It says that it updated the files with the following changes, but no files are actually updated.
Very impressive. I created an HR app with it last week.
Very impressive, acting more like a Jarvis-style agent than a normal LLM.
Love the boost to development.
Unfortunately, I'm working only with my device pending the arrival of my laptop.
Once it comes, the moon will be my starting prompt.
I salute the lesson. Expect mine when I get my hands on my baby.
"Failed to call the Gemini API, quota exceeded: Imagen is available with limited free generations per day for testing. To continue generating images, please use the Gemini API."
This is the message I got after several rounds of feedback asking it to refactor.
The next game changer of the week ;)
Right up my alley.
What if I want to integrate the multimodal Live API for speech inputs and outputs with MCP and an agents SDK? Can it handle that too?
Very helpful!
This is incredible
Top
This is so interesting! 🧐
good
After engaging in this exercise, my understanding of the limits of AI broke through the ceiling. Truly mind-blowing; just grateful.
Amazing
this is awesome!!!
Bruh thats so cool!
very impressive
Really cool!
Hello!
Interesting!
Super helpful, looking forward to building!
Love it. Super easy: just craft your prompts as precisely as you can, and that will make a huge difference!
This sounds amazing. I can't wait!!
You might want to check out Magic Cloud. I suspect it has a bajillion features Google's AI Studio does not have, and it's definitely far more intuitive...
I am building a web app that analyzes an inputted GitHub repo and gives a full overview of it, along with suggestions for advanced features that can be added to make it a strong resume project.
Just created my app, and this is truly phenomenal. Should I no longer bother with my app development course...? I'm self-learning...
I ❤️ it
This is so awesome.
I'll be trying this; let's see what comes out of my idea. 🤞
Not sure why, but it won't let me use it; this seems to be a premium-only tool.
Very helpful, and it gives more explanations. I love it!
Insightful fr
Joining the gang a little late, but I can't wait to put it to the test myself.
It's very good
It's too amazing
Very Interesting
This is beyond anything I thought possible and is a game changer. Thanks for this well-structured lesson.
Nice!
This is creativity, this is excellence, a platform like no other. Thank you, Google.
It's awesome
I have created a Tom and Jerry app.
dev.to/bhavasarhemang/tom-and-jerr...