Daniel R. Foster for OptyxStack

Every Business Should Engage with AI | The Only Question Is How Deep

Over the past few years, I’ve spoken with many founders, engineering leaders, and business owners about AI.

The conversations often start the same way:

“We’re not sure if we really need AI.”

What’s interesting is that this hesitation usually comes from two very different experiences:

  • some organizations have never seriously engaged with AI and feel comfortable staying that way,
  • others rushed in, overspent, and walked away disappointed.

Both groups often arrive at the same conclusion — “maybe AI isn’t for us.”

That conclusion is understandable.

But it’s also increasingly risky.


The quiet danger of “everything works fine”

I’ve seen companies that still:

  • process documents manually,
  • store operational knowledge in shared drives and paper folders,
  • rely on human review for repetitive classification and reporting,
  • make decisions based on gut feel rather than aggregated data.

Nothing is broken.

Invoices get processed.

Reports get delivered.

Customers don’t complain.

From the outside, everything works.

But over time, a pattern emerges:

  • work takes longer than it should,
  • employees spend most of their time on low-leverage tasks,
  • onboarding new staff becomes painful,
  • scaling operations means hiring more people rather than improving systems.

These organizations often remind me of companies that, years ago, insisted on using paper documents instead of spreadsheets or databases.

Back then, that approach also “worked”.

Until it didn’t.

AI today sits in a similar position.

You can ignore it — and your business may continue to function — but your operational efficiency ceiling becomes lower than your competitors’.


AI is no longer a specialized technology

One of the biggest misconceptions is that engaging with AI means building complex models or hiring a data science team.

That’s no longer true.

With tools like ChatGPT, copilots, and enterprise LLM platforms, AI has become a general-purpose working skill.

Much like:

  • spreadsheets in the 1990s,
  • search engines in the 2000s,
  • cloud collaboration tools in the 2010s.

You didn’t need to build Excel to benefit from it.

But organizations that never trained their employees to use it eventually fell behind.

AI has reached the same stage.


A more realistic way to think about AI adoption

Instead of asking “Should we do AI?”, I’ve found a better question to be:

How deeply should this business engage with AI?

Level 1: AI literacy — the minimum viable engagement

Every organization should be here.

This level doesn’t involve building systems or deploying models. It involves people.

Examples:

  • Training employees to use tools like ChatGPT effectively
  • Teaching basic prompting and verification habits
  • Using AI for drafting documents, summarizing reports, and research
  • Establishing clear rules about sensitive data and privacy

This is low-cost, low-risk, and immediately beneficial.

A company that refuses to do even this is effectively choosing to limit how productive its workforce can be.
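The last bullet, rules about sensitive data, is the one teams most often skip. As a minimal sketch of what such a rule can look like in code, the hypothetical helper below strips obvious identifiers (emails, phone numbers) from text before it is pasted into an external AI tool. The patterns are illustrative only, not an exhaustive PII detector.

```python
import re

# Illustrative patterns only -- not an exhaustive PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the
    text leaves the company (e.g., into an external chatbot)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +1 (555) 123-4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Even a crude filter like this makes the privacy rule enforceable instead of aspirational, which is the point at this level.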


When businesses jump too far, too fast

On the other end of the spectrum, I’ve also seen companies rush headfirst into AI.

They:

  • commission ambitious AI projects,
  • integrate large models into core workflows,
  • expect automation to replace significant portions of human work.

Then reality hits.

Budgets explode due to:

  • underestimated infrastructure costs,
  • continuous inference and retraining expenses,
  • integration and monitoring complexity.

Performance doesn’t match expectations.

The model works in demos but struggles in production.

Leadership starts questioning the entire investment.

In many of these cases, the problem wasn’t AI itself.

The problem was misalignment — between what the business needed and what was built.

They didn’t need a fully autonomous system.

They needed better tooling and assisted workflows.


Level 2: AI-assisted workflows — where most businesses should aim

This is where AI delivers the most consistent value.

At this level, AI supports existing processes instead of replacing them.

Common examples:

  • Internal chatbots over company documentation
  • AI-assisted customer support drafting and triage
  • Sales and marketing content generation
  • Analytical support for reports and decision-making

These systems:

  • improve speed and consistency,
  • reduce cognitive load,
  • don’t require heavy infrastructure or long-term research investment.

For many organizations, this level alone produces tangible ROI — without the risks of overengineering.
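To make the first bullet above less abstract, here is a minimal sketch of the retrieval step behind an internal documentation chatbot: score each snippet by word overlap with the question and return the best match. A real system would typically use embeddings and hand the snippet to an LLM to phrase the answer; the documents and function names here are invented.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def best_snippet(question: str, snippets: list[str]) -> str:
    """Return the snippet sharing the most words with the question."""
    q = tokens(question)
    return max(snippets, key=lambda s: len(q & tokens(s)))

docs = [
    "Expense reports are submitted through the finance portal by the 5th.",
    "VPN access requires a ticket to the IT helpdesk.",
    "New hires complete security training in their first week.",
]

print(best_snippet("How do I get VPN access?", docs))
# -> VPN access requires a ticket to the IT helpdesk.
```

The point is that the workflow stays human-in-the-loop: the system surfaces the right material faster, and a person still decides what to do with it.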


Level 3: AI-driven systems — powerful, but selective

Some businesses will naturally move further.

Here, AI becomes:

  • part of the product itself,
  • embedded in decision-making loops,
  • tied directly to revenue or operational risk.

Examples include:

  • RAG-based knowledge systems,
  • agent-driven workflows,
  • forecasting, personalization, or detection systems.

This level requires real maturity:

  • clean and reliable data,
  • cost and latency controls,
  • evaluation and regression testing,
  • clear ownership after deployment.

I’ve seen many failures here not because AI was incapable, but because organizations skipped the foundational steps.
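Of the foundational steps above, evaluation and regression testing is usually the least familiar, so here is a minimal sketch of what it can mean in practice: a small "golden set" of questions paired with facts each answer must contain, scored on every release. The cases and the stub system are invented for illustration.

```python
# Invented golden set: (question, facts the answer must mention).
GOLDEN_SET = [
    ("What is the refund window?", ["30", "days"]),
    ("Which plan includes SSO?", ["enterprise"]),
]

def evaluate(answer_fn) -> float:
    """Fraction of golden cases whose answer mentions every required fact."""
    passed = 0
    for question, required in GOLDEN_SET:
        answer = answer_fn(question).lower()
        if all(fact in answer for fact in required):
            passed += 1
    return passed / len(GOLDEN_SET)

def stub_answer(question: str) -> str:
    # Stand-in for the system under test; a real harness would call
    # the production pipeline here.
    return {
        "What is the refund window?": "Refunds are accepted within 30 days.",
        "Which plan includes SSO?": "SSO is available on the Enterprise plan.",
    }[question]

print(evaluate(stub_answer))  # -> 1.0
```

When this score is tracked per deployment, a prompt tweak or model upgrade that silently breaks answers shows up as a number going down, not as a customer complaint.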


Why avoiding AI entirely is no longer neutral

Even if AI never becomes part of your product, it will still affect:

  • how fast your competitors move,
  • how efficiently employees work,
  • how customers expect to receive information,
  • how decisions are made.

The largest long-term risk is not failed AI projects.

It’s a workforce that lacks AI literacy while the rest of the market moves forward.

That gap compounds quietly.


Training people matters more than choosing tools

Many AI initiatives begin with vendor selection.

In practice, the higher-leverage starting point is often:

  • training employees to think critically with AI,
  • understanding where AI fails,
  • learning how to validate outputs,
  • knowing when human judgment must override the model.

In multiple cases I’ve observed, organizations gained more value from basic AI training than from complex system deployments.

AI capability grows bottom-up before it scales top-down.


When optimization and rescue become relevant

As companies mature in their AI usage, new challenges emerge:

  • cost spirals,
  • latency issues,
  • inconsistent quality,
  • silent regressions.

At this stage, the question is no longer “should we use AI?” but “how do we operate it responsibly?”
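A first step toward operating AI responsibly is simply measuring each call. The sketch below wraps a model call with latency and cost accounting; the token counts and the per-token price are made-up placeholders, not any provider's real pricing.

```python
import time

# Illustrative price, not a real provider rate.
PRICE_PER_1K_TOKENS = 0.002

def tracked_call(model_fn, prompt: str, log: list) -> str:
    """Call the model and append latency/token/cost metrics to the log."""
    start = time.perf_counter()
    reply, tokens_used = model_fn(prompt)
    log.append({
        "latency_s": round(time.perf_counter() - start, 4),
        "tokens": tokens_used,
        "cost_usd": tokens_used / 1000 * PRICE_PER_1K_TOKENS,
    })
    return reply

def fake_model(prompt):
    # Stand-in for a real model call; returns (reply, tokens consumed).
    return f"echo: {prompt}", len(prompt.split()) + 50

log = []
tracked_call(fake_model, "Summarize this quarter's support tickets", log)
print(log[0])
```

Once numbers like these exist per request, cost spirals and latency regressions become visible on a dashboard instead of on an invoice.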

For those interested in how production AI systems are evaluated and improved in practice, this overview offers useful context:
https://optyxstack.com/ai

And for teams already running RAG or agent-based systems that struggle with cost, quality, or reliability, focused optimization work is often more effective than rebuilding from scratch:
https://optyxstack.com/ai/rag-optimization

These are not entry points into AI.

They are late-stage concerns, once fundamentals are already in place.


Final thoughts

AI adoption is not a binary choice.

It’s a spectrum.

Every business should engage with AI at a basic level.

Some should go deeper: deliberately, cautiously, and with clear ownership.

At that point, a natural follow-up question appears:

“If every business should engage with AI, how do we do it without falling into hype or misuse?”

That question matters more than most organizations realize.

Engaging with AI without basic understanding, treating it as magic rather than a system, often leads to inconsistent results, runaway costs, and loss of trust.

I explore this in more detail in a follow-up post:
How to Enter the AI Era Properly — Without Treating AI as Magic

The short version: AI adoption only works when people understand how it behaves, not just what it can do.

The real mistake today isn’t moving too slowly or too quickly.

It’s moving without understanding where you are on that spectrum.

AI doesn’t replace good judgment.

But ignoring it increasingly replaces competitiveness.

Top comments (1)

Daniel R. Foster (OptyxStack)

I’ve seen teams succeed and fail at very different stages of AI adoption, often not because of the model but because of expectations and operational discipline.

Where are you and your company today: AI literacy, AI-assisted workflows, or production AI systems? What pushed you forward (or held you back)?