According to GitHub, more than 40% of developers now use Copilot. But are we trusting AI too much, too fast? A new trend called vibe coding is gain...
Well, that's right, AI-generated code has risks as well. Recently I was working on a project and ran into a Git problem while pushing my code to GitHub. I asked an AI model about it and followed the solution it gave me, and I ended up losing my whole project; it got deleted. Thankfully, Recuva helped me recover my files, but someone without any recovery knowledge could lose all their hard work.
Thank you!!
Thanks VenomousCode for your comment! Yep, I’ve had a similar experience while working with AI tools; after I finished what I was doing, I asked the AI to tidy up the code repository and delete any unnecessary files. Turns out, it deleted many important files and broke the repo.
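Anecdotes like these suggest a simple safeguard: never let a tool (AI or otherwise) delete files without a dry run first. Here is a minimal sketch in Python; the `CLEANUP_PATTERNS` list and `preview_cleanup` helper are hypothetical names, and the patterns would need adjusting for a real project:

```python
from pathlib import Path

# Hypothetical "unnecessary file" patterns -- adjust for your own project.
CLEANUP_PATTERNS = ["*.tmp", "*.log", "__pycache__"]

def preview_cleanup(root="."):
    """List candidate paths without deleting anything (a dry run)."""
    candidates = []
    for pattern in CLEANUP_PATTERNS:
        candidates.extend(Path(root).rglob(pattern))
    return candidates

# Review the list first; only delete after a human has confirmed it.
for path in preview_cleanup():
    print(path)
```

The point is the two-step workflow: the script only previews, and the actual deletion happens in a separate, human-confirmed step.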
Wow.
Because AI hallucinates less than developers!
😅😅
We need more articles like this. AI hype is getting out of hand.
I hope reality will not accept the "new standard" of security and quality that comes with the overuse of LLM tools.
Still, I can't relate to your statement that "AI can generate high-quality code." Of course, it depends on how you define quality, but low-quality (as a solution) output is essentially what is wrong with code generation.
Thanks so much for sharing your thoughts, Alex! I really appreciate your insights. It seems like the results are quite mixed—sometimes, AI can come up with really high-quality code. I was especially impressed with Claude Sonnet, particularly how quickly it responds and how neatly it structures solutions for specific problems. I didn't quite see the same level of performance from GPT models; they seem a bit behind in my experience. It’s also interesting to consider that the choice of which LLM to use might depend on what you need it for. Thanks again for sharing your perspective!
AI-generated code sometimes fails various test cases. Many of the India-based mobile game development service providers who have tested AI-generated code noticed that some test cases did not pass.
Thanks for your comment, Abhiwan. Yes, AI-generated code can sometimes fail on test cases if not properly verified by a human, due to the lack of context.
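To illustrate the kind of failure a test catches: AI-generated helpers often look plausible but get an index wrong. The pagination functions below are hypothetical examples, not from any real codebase; the first contains the sort of off-by-one bug a simple test case would flag:

```python
def paginate(items, page, page_size):
    """Plausible AI-generated pagination helper. It looks right at a
    glance, but the start index is wrong for 1-indexed pages."""
    start = page * page_size  # bug: should be (page - 1) * page_size
    return items[start:start + page_size]

def paginate_fixed(items, page, page_size):
    """Corrected version after human review."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

items = list(range(10))
# The buggy version silently skips the first page entirely:
print(paginate(items, 1, 3))        # [3, 4, 5] -- wrong
print(paginate_fixed(items, 1, 3))  # [0, 1, 2] -- what the caller expects
```

A single assertion on page 1 is enough to catch this, which is exactly why human-written test cases remain essential around generated code.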
Interesting. Thanks for sharing!
Love how you balance the excitement of AI with the responsibility it demands...
Thanks Parag! Yes, some tools are great, but it doesn't necessarily mean we should trust them without human verification. We should not shift responsibility to AI to that level; otherwise, things can go pear-shaped. Cheers!
Cool!
Thank you for sharing!
Great article!