
Benedict Isaac

The Systems Behind AI: Understanding LLMs and Their Place in Engineering

You’ve used AI.
You’ve explored it.
You’ve probably even built with it.

But behind many of the AI systems we interact with today is something more fundamental.

The Large Language Model (LLM).

LLMs power a large portion of modern AI systems and AI agents. They act as the core intelligence layer behind many tools and applications.

So before building more systems with AI, an important question becomes:

What exactly are LLMs?

What Are LLMs
A Large Language Model (LLM) is a computational model trained on vast amounts of data and designed for natural language processing, especially language understanding and language generation.

These models learn patterns in language and use those patterns to generate responses, reason through prompts, and assist in solving problems.

At a high level, LLM systems typically involve:

- Encoders, which process and understand input information
- Decoders, which generate responses or outputs

Modern LLMs combine these mechanisms to process context, relationships between words, and the structure of language.

But the real breakthrough that enabled today’s powerful models came from a particular architecture.

What Is a Transformer
Most modern LLMs are built using an architecture called the Transformer.

The Transformer architecture changed how models process sequences of information by introducing a mechanism called attention.

Instead of processing words one at a time in order, as older recurrent networks did, transformers can look at an entire sequence simultaneously and determine which parts are most important.

This allows models to:

- understand long-range relationships in text
- maintain context across large inputs
- generate more coherent and structured responses

Because of this design, transformers became the foundation for modern LLMs.
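The attention idea described above can be sketched in plain NumPy. This is a minimal, illustrative scaled dot-product self-attention only; real transformers add learned projection matrices, multiple heads, positional information, and stacked layers:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every position looks at every other position at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: how much each position matters
    return weights @ V, weights

# Toy "sequence" of 4 tokens, each represented as a 3-dim vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
out, w = attention(x, x, x)  # self-attention: queries, keys, and values all come from x
print(out.shape)             # one context-aware vector per token
```

The key point the sketch shows: the output for each token is a weighted mix of the whole sequence, which is what lets transformers capture long-range relationships without stepping through the text word by word.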

The Three Core Components of an LLM
At a simplified level, most LLM systems can be understood as a combination of three elements:

LLM = Data + Architecture + Training

Data
Large-scale datasets containing books, articles, code, conversations, and other forms of language.

This data teaches the model patterns of human communication and knowledge representation.

Architecture
The structural design of the model itself.

Today, this is primarily the Transformer architecture, which enables models to process complex relationships within data.

Training
The optimization process where the model learns to predict and generate language.

During training, the system adjusts millions or even billions of parameters to improve its predictions and outputs.

Together, these three components create the systems that power modern AI tools.
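The Data + Architecture + Training decomposition can be made concrete with a deliberately tiny sketch. A bigram word model is nowhere near an LLM, but it has the same three ingredients: a corpus (data), a predict-the-next-token structure (architecture), and a pass that fits parameters to the data (training):

```python
from collections import Counter, defaultdict

# Data: a tiny corpus (real LLMs train on vast text collections).
corpus = "the cat sat on the mat the cat ran".split()

# Architecture: a bigram table — predict the next word from the current word.
counts = defaultdict(Counter)

# Training: adjust the model's "parameters" (here, plain counts) from the data.
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Return the next word most often observed after `word` during training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — seen twice after "the", versus "mat" once
```

An LLM replaces the count table with billions of learned weights and the bigram window with attention over long contexts, but the next-token-prediction objective is the same.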

Engineering, AI, and the Missing Bridge
Coding, machine learning, and AI are incredibly valuable skills. But they were originally meant to complement engineering fundamentals, not replace them.

This observation raises an interesting thought.

What happens when we bring AI thinking back into engineering systems rather than keeping it limited to software?

Viewing AI from an Engineering Perspective
In engineering, almost every new technology eventually finds an entry point into physical systems.

So instead of thinking only about AI models running inside applications, another question emerges:

How can engineering systems and robotics systems behave or learn in ways similar to LLM-powered systems?

What would that actually look like?

Instead of only building software agents, we might begin designing systems where learning, decision-making, and adaptive behavior exist within engineered machines.

This idea goes beyond simple automation.

It begins to resemble something closer to an engineering model of intelligence.

From AI Agents to Intelligent Engineering Systems
Today, we often talk about AI agents in the context of software.

Agents can reason about tasks, interact with systems, retrieve information, and perform actions.

But if we extend this thinking into engineering systems, an interesting possibility appears.

Robotics systems and machines could potentially be designed to:

- behave similarly to AI agents
- integrate multiple data sources and signals
- adapt to environments through learning
- reason about tasks and actions

In this perspective, AI is no longer just a software capability.

It becomes something that can interact with and shape engineering systems directly.

This is where the worlds of AI, robotics, and engineering start to converge.

A Practical Question Builders Must Answer
As builders begin integrating AI systems into applications and infrastructure, another important question arises.

Should you build your own model, fine-tune an existing model, or use a pre-existing LLM?

Each approach has its own advantages and trade-offs.

Building Your Own Model
Provides maximum control and customization.

However, it requires extremely large datasets, significant computing resources, and specialized expertise.

Fine-Tuning an Existing Model
Allows developers to adapt a pre-trained model to a specific domain or problem.

This often provides a strong balance between performance and cost.

Using a Pre-Existing LLM
The fastest and most accessible option.

Most modern AI applications today rely on existing models and focus on building systems around them rather than training models from scratch.

The best choice depends on the complexity of the system, the resources available, and the specific problem being solved.
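For the "use a pre-existing LLM" path, most of the engineering is in the system built around the model, not the model itself. A minimal sketch of that pattern, where `FakeClient` and the `complete` method are hypothetical stand-ins for whatever provider SDK you adopt, is wrapping the model behind your own interface so it can be swapped or mocked:

```python
class SummarizerService:
    """Application logic built around an LLM, not inside it."""

    def __init__(self, client):
        self.client = client  # injected, so the provider can be swapped or mocked

    def summarize(self, text: str) -> str:
        prompt = f"Summarize in one sentence:\n{text}"
        return self.client.complete(prompt)

class FakeClient:
    """Stand-in for a real LLM API — useful for tests and local development."""

    def complete(self, prompt: str) -> str:
        return "stub summary"

service = SummarizerService(FakeClient())
print(service.summarize("LLMs are models trained on large text corpora."))
```

Keeping the model behind a seam like this also keeps the later options open: the same service can point at a fine-tuned model, or eventually your own, without rewriting the application.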

AI systems today are evolving rapidly.

But the most interesting opportunities may not only be in building larger models, but also in exploring how these systems integrate with real engineering systems and infrastructure.

Instead of thinking about AI purely as software, we can begin to ask:

How do intelligent models interact with machines, robotics systems, and the engineered world?

The future of AI may not only live in data centers and applications.

It may also live inside the systems we build to interact with the physical world.

Question for builders and engineers:

Is it better to build your own model for a system, fine-tune an existing model, or rely on pre-trained LLMs?
