DEV Community

Ben Halpern


What do you think about building when AI models get cheaper?

With Gemini 3.1 Flash-Lite launching today, my mind goes to things I wouldn't have thought to build because of the expense.

However, when the most inexpensive models get better and cheaper, it tends to unlock ways of thinking about features we wouldn't have explored before cheap AI-driven tools made them possible.

This is kind of abstract, but is there a way you think about this sort of thing?

Top comments (7)

Nikola Brežnjak

What @heckno said, and then expand it to any (all!?) hardware devices that I own.

The bigger question is: will SaaS eventually (soon?) go away, with everyone (almost everyone?) building the tools they need, specifically tailored to them, in-house?

Ben Halpern

Oh yeah good call

Pascal CESCATO

I don’t think SaaS disappears.

Cheaper AI makes building more accessible, yes. But building something tailored in-house still requires architecture, long-term maintenance, security, evolution, and governance. AI can reduce implementation effort, but it doesn’t remove structural complexity.

In the short term, more teams might build internal tools. In the long term, the real constraint won’t be model cost — it will be software engineering capacity and organizational maturity.

SaaS solves that by externalizing complexity. That doesn’t go away just because tokens are cheaper.

heckno

I'm curious about stuff that can plug into my smartwatch data, where I can pipe in basic questions about the trends and get answers. It's not complicated stuff, but I want to be able to do it in high volume.
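The piping described above could be as simple as summarizing the raw watch data locally and sending only a compact question to a cheap model. Here's a minimal sketch; the function names, the `(day, resting_hr)` sample format, and the prompt wording are all made up for illustration, and no actual model call is shown:

```python
from statistics import mean

def trend_summary(samples):
    """Compute a simple least-squares trend slope over (day, resting_hr) samples."""
    xs = [d for d, _ in samples]
    ys = [hr for _, hr in samples]
    x_bar, y_bar = mean(xs), mean(ys)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    direction = "rising" if slope > 0 else "falling" if slope < 0 else "flat"
    return slope, direction

def build_prompt(samples, question):
    """Turn raw watch data plus a plain question into a short, cheap-to-send prompt."""
    slope, direction = trend_summary(samples)
    return (
        f"My resting heart rate is {direction} by about {abs(slope):.1f} bpm/day "
        f"over {len(samples)} days. {question}"
    )

week = [(1, 62), (2, 63), (3, 63), (4, 65), (5, 66), (6, 66), (7, 68)]
print(build_prompt(week, "Is this something to watch?"))
```

Pre-summarizing like this keeps each question to a sentence or two of tokens, which is what makes the high-volume use case affordable on the cheapest models.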

Amara Graham

I had a slightly different take on this recently, in the corporate setting: do people actually know or pay attention to tokens or other expense calculations in their AI use? Is it obfuscated?

I've noticed the people who consider expenses with models tend to be the same people who tinker and likely used a personal card at some point. Others openly mention they work for insert big company here and have essentially unlimited spending on certain tools and models.
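For anyone who does want to pay attention, the underlying arithmetic is simple. A quick sketch of per-request cost estimation; the prices below are made-up placeholders, not any provider's real rates:

```python
def estimate_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Estimate USD cost of usage, given per-million-token input/output prices."""
    return (input_tokens * price_in_per_m +
            output_tokens * price_out_per_m) / 1_000_000

# Hypothetical workload: 2M input + 0.5M output tokens per day,
# at illustrative prices of $0.10/M in and $0.40/M out.
daily = estimate_cost(input_tokens=2_000_000, output_tokens=500_000,
                      price_in_per_m=0.10, price_out_per_m=0.40)
print(f"~${daily:.2f}/day")  # → ~$0.40/day
```

Numbers like these are exactly what gets obfuscated when a company account has "essentially unlimited" spend attached to it.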

Pascal CESCATO

Lower costs definitely broaden the perspective. But it’s also worth asking whether there’s a real need in the first place — or whether we’re creating one just to take advantage of cheaper AI.

👾 FrancisTRᴅᴇᴠ 👾

There are many factors going into this:

  1. Is it getting cheaper because providers have efficient data centers? For my grad program, we had to research how data centers are currently inefficient and how researchers are finding solutions to that problem.
  2. Is it getting cheaper because there is "so much competition," to the point where we can use AI as open source? For example, with Ollama you can simply download a model and run it locally without an API key or the cloud.
  3. Is it getting cheaper because the AI model uses its tokens more efficiently?

It's something I keep in mind as new models are published and become accessible for us to use. Of course, a major factor with AI is the environmental cost, because you have to build a lot of data centers to run these models. I saw a video explaining that OpenAI simply "made it bigger" and the LLM became smarter; that's pretty much why so many data centers and so much RAM are needed just to use AI. I might be wrong, but let me know!

Thanks for sharing @ben!