mortylen

Posted on • Originally published at mortylen.hashnode.dev

When a Precise Specification Is Not Enough

When people talk about requirements for industrial software, most imagine what an operator sees: process screens, alarms, trends, production overviews, or dispatch panels. And that is largely justified. These parts of the system tend to have a relatively clear structure, as they are derived from the technology itself, process diagrams, operational habits, and well-established principles of the SCADA and HMI world.

It would therefore not be accurate to say that customers in industrial environments do not know what they want to see. Operator screens are often quite predictable. They follow established logic, conventions, and expectations.

But that is only the visible half of the system.

The other, less visible half is much harder to define. And it is precisely there that the limitations of traditional thinking about software requirements most often become apparent.

What the Operator Sees Is Only Half of the System

From experience, the greatest uncertainty in industrial projects does not appear on the screens seen by operators, but in the tools used behind the scenes.

In more robust SCADA systems, there is usually not just a single “main” application. Alongside it, an entire ecosystem of service and administrative tools emerges. These are not displayed on the control room video wall, but without them the system would not function properly in the long run.

They include, for example, historical data viewers, tools for working with measurements and data exports, database and archive management utilities, configuration of devices and communication points, alarm setup, communication diagnostics, as well as user management and audit logs.

These tools are typically not seen by operators. They are used by system administrators, service technicians, process engineers, or integrators. They run on service stations, laptops, or administrative workstations. They are not visually impressive, but they are critical.

Because of this, their importance is often underestimated. The operator interface is perceived as the “real product,” while service tools are treated as a mere add-on. In real operations, however, it is often the opposite.

Without them, a well-designed SCADA solution quickly becomes a system that can still be operated, but is difficult to maintain, hard to extend, and unpleasant to service.

Why Precise Requirements Are Often Missing

This brings me to the main idea.

For service and supporting applications, detailed requirements often do not exist. Not because the project is poorly managed or because the customer is not interested. The reason is simpler: customers often do not know what they will truly need when operating a complex system over time.

They can define the goal. They want a system that is stable, maintainable, and extensible. They know they need historical data, configuration management, alarms, or diagnostics. But they cannot precisely describe what tools and details will actually be used in day-to-day operations.

They do not know which filters will be essential when analyzing data, what will need to be changed in bulk, where exports or configuration comparisons will be required, or which information will be missing when troubleshooting issues in production. Many of these needs only become clear once the system is in use, evolving, and expanding.

The customer knows the goal, but not all the tools and processes required to maintain it over time.

In such situations, much of the solution is not based on a precise specification, but on trust in the team and their experience. In essence, the customer is saying: we know what the outcome should be, and we expect the vendor to design the tools that make it possible.

This is not chaos. It is a natural distribution of knowledge.

The customer understands the process. The vendor understands what such a system requires in practice in order to remain maintainable in the long run. And it is at the intersection of these two perspectives that tools emerge which are very difficult to specify in advance.

The Illusion of a Precise Specification

If we try to write a fully detailed specification for these tools at the beginning of a project, the result will often look better on paper than in reality.

A document is created that appears precise and professional, but in fact it only formalizes assumptions. We describe how a configuration editor should look, what filters a trend viewer should have, or how data management should work. However, the correctness of these decisions only becomes clear once someone actually starts using the tools in practice.

This is where the requirements paradox emerges.

The more detailed we try to describe something that has not yet been validated in practice, the more we feel that we understand it. In reality, we are only recording hypotheses with precision.

This is especially visible in service tools:

  • a configuration management tool fulfills the specification but is slow in everyday use
  • a trend viewer displays correct data but lacks the filters needed during troubleshooting
  • a database tool is technically correct but too complex for daily operations
  • an application allows editing individual records but lacks bulk operations
  • a diagnostic view shows a large amount of data, but not the information that matters in critical moments

Formally, everything may be correct. In practice, the solution can still fall short.

Only Usage Reveals What Is Missing

Requirements for service tools often do not emerge at the beginning of a project, but only when the system starts being used in practice.

Some needs only become visible in real operation. When a new technology is added, hundreds of tags may need to be adjusted at once, historical data must be compared, alarm configurations changed, or communication issues diagnosed. Other needs arise when the system owner changes, when the solution is expanded, or during larger interventions such as migrations or shutdowns.

It is at this point that it becomes clear whether the system was designed only for presentation, or also for long-term operation.

This is why industrial software should not be evaluated only by what the operator sees. Its quality is largely reflected in the tools available to the people who will maintain, modify, diagnose, and operate it over the years.

A Better Approach in Practice

For these parts of a system, a different approach tends to work better in practice than trying to create a perfect specification from day one.

It is more effective to rely on experience from previous projects, prepare a reasonable initial design of the service tools, and present it as early as possible to the people who will actually use them.

Only during real use does it become clear where users get stuck, what is unnecessarily complex, and what is missing. Based on this feedback, the design can be gradually refined, simplified, and extended.

This approach is more practical than trying to design everything in advance on paper. When people see a concrete tool for managing measurements, configurations, or data, they can provide much more precise feedback.

Instead of vague statements like “just make it work somehow,” they will say they need bulk operations, saved filters, configuration comparisons, export capabilities for service purposes, or audit trails of changes.
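To make this kind of feedback concrete: a minimal sketch, in Python, of what “bulk operations with an audit trail of changes” might look like inside a tag-configuration tool. Everything here (`TagConfig`, `bulk_update`, the field names) is an illustrative assumption, not the API of any real SCADA product.

```python
from dataclasses import dataclass, replace
from datetime import datetime, timezone

@dataclass(frozen=True)
class TagConfig:
    """One measurement point's configuration (hypothetical structure)."""
    name: str
    scan_rate_ms: int
    unit: str

def bulk_update(tags, predicate, **changes):
    """Apply the same change to every tag matching `predicate`.

    Returns the updated configuration list plus an audit trail
    recording before/after state for every modified tag.
    """
    updated, audit = [], []
    for tag in tags:
        if predicate(tag):
            new_tag = replace(tag, **changes)
            audit.append({
                "tag": tag.name,
                "before": tag,
                "after": new_tag,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            updated.append(new_tag)
        else:
            updated.append(tag)
    return updated, audit

# Usage: slow down every temperature tag's scan rate in one operation,
# instead of editing hundreds of records one by one.
tags = [
    TagConfig("TT-101", 500, "°C"),
    TagConfig("PT-201", 500, "bar"),
    TagConfig("TT-102", 500, "°C"),
]
new_tags, audit = bulk_update(tags, lambda t: t.unit == "°C", scan_rate_ms=2000)
```

The point is not the code itself, but that a request like “we need bulk edits, and we need to see who changed what” only surfaces once users have a concrete tool in front of them.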

At that moment, trust turns into concrete requirements. Not in a discussion over a blank sheet of paper, but through interaction with a real tool.

Trust Still Requires Rules

The fact that detailed specifications often do not exist for these applications does not mean they should be built without structure or responsibility.

Trust in the vendor is not a substitute for craftsmanship. On the contrary, it places even higher demands on the team. The developer is no longer just an executor of requirements, but a co-creator of the solution.

They rely not only on current requirements, but also on experience from similar systems. They must be able to estimate which service tools will actually be needed, avoid unnecessary complexity, distinguish between a prototype and a production-ready solution, and continuously validate whether the design makes sense in real use.

A good team in this context does not wait for a fully detailed specification, but continuously checks whether its assumptions hold up in real operation.

Conclusion

If I were to summarize it in one sentence: in industrial systems, customers often do not explicitly order the tools that ultimately determine whether the system can be used in the long term.

Operator-facing SCADA screens are usually relatively easy to imagine and specify. The real uncertainty, however, is hidden in the background: in service applications, configuration tools, diagnostic utilities, database tools, and all the supporting parts of the system without which a large solution cannot be sustainably operated.

This is precisely where detailed requirements are often missing at the start. There is a goal, there is the vendor’s experience, and there is trust that the final solution will also make sense in practice.

For this reason, it may be more useful to ask not “what is the exact specification?”, but rather “how will the system behave after one, two, or five years in operation?”

Because the quality of industrial software is not revealed only on the operator screen. It is revealed above all in how it feels to the people who have to maintain and operate it every day.
