
The short answer is no; the long answer is still no, but it's complicated. Let me explain.
I've seen multiple posts and blogs about how the Power Platform ...
and one of the biggest issues I have is that the workload from this would have a massive environmental impact!
Task-specific models or algorithms are super efficient ...
Why choose when you can have both? We've got a kick-ass vibe-coding platform with tons of Low-Code and No-Code features. These are synergistic ideas, not opposing ideas ...
Hmm, but I think vibe coding is still very much dependent on prompts that are engineered after ensuring all kinds of standard practices, proper security features, and an app oriented to both users and machines.
This is exactly how we're doing it. We've got high-level workflows that instruct our own custom-made LLM with template prompts; it uses these as it generates prompts that it sends to the "Hyperlambda Generator", which is the LLM that actually produces the code ...
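A rough sketch of that two-stage idea: a workflow step fills a template prompt, and a second stage (standing in for the "Hyperlambda Generator") turns the prompt into code. All names and the template below are illustrative assumptions, not the actual platform's API; the generator is a stub so the pipeline runs offline.

```python
# Hypothetical two-stage prompt pipeline (assumed names, not the real system).

# Stage-1 template the orchestrating LLM would fill in (assumed wording).
PROMPT_TEMPLATE = (
    "Generate a {language} endpoint that {task}. "
    "Follow secure defaults and validate all inputs."
)

def build_generator_prompt(task: str, language: str = "Hyperlambda") -> str:
    """Stage 1: turn a high-level workflow step into a concrete prompt."""
    return PROMPT_TEMPLATE.format(language=language, task=task)

def generate_code(prompt: str) -> str:
    """Stage 2 stub: in the real system, a code-generating LLM runs here."""
    # Canned output so the sketch is runnable without any model access.
    return f"// generated from prompt:\n// {prompt}\n"

def run_workflow(tasks: list[str]) -> list[str]:
    """Drive the pipeline: one generated artifact per workflow step."""
    return [generate_code(build_generator_prompt(t)) for t in tasks]

artifacts = run_workflow(["lists all customers", "creates a new order"])
print(artifacts[0])
```

The point of the split is that the first LLM only has to reason about workflows and templates, while the second only has to produce code from a tightly scoped prompt.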
That's the approach it should be: combined, with AI used where it adds value and good old low-code where AI is not the right fit.
Agree!
Well said bro. I have coded for more than a decade. I use ChatGPT regularly. I don't use Claude because I am too cheap (heh). The thing is, AI is very expensive, which is the main reason why vibe coding sucks. If you make the AI look at your whole app (more than 1000 files), be prepared to pay a hefty sum. Also, you are screwed when your company wants you to rewrite a lot of things one year from now. The AI might have moved to another model, with no access to the previous model and no consistency or knowledge of your code from one year ago. That won't happen if you build it yourself.
Use AI smartly. Only ask about one file / one algorithm at a time. You must still understand 100% of your app.
Vibe coding feels exciting for quick prototyping, but enterprises will always prioritise governance and stability - which is where low-code is much stronger.
Great job!
You could not have further missed the mark if you tried.
"AI will get more expensive"
It's like Deepseek taught you nothing.
AI will absolutely, unequivocally and without any doubt whatsoever be less expensive.
artificialanalysis.ai/models
Speaking to developers, the majority use Anthropic/OpenAI. Microsoft uses OpenAI, and ChatGPT 5 the most. All of these have shown significant price increases, and with the next generation of models forecast to cost $10 billion to train versus $1 billion for this generation, prices are not going down.
And where is DeepSeek today? Can it compete with paid tools like Claude? It sucks more and more each day; I moved back to ChatGPT.
Sorry, I think you missed my point: the cost of models goes down, but not of the models we actually use. Legacy models have almost no usage, as everyone moves to the latest model. And even the decreasing token cost is misleading, as models now use more tokens. Look at Grok 4: low token cost, but in real-world testing it's one of the most expensive.