Roman Nikolaev

Nothing Is Sacred: Rethinking How Engineering Teams Work

The industry is moving at breakneck speed.

In three years, we went from better auto-complete to autonomous agents. The software engineering landscape is rapidly changing.

Don’t get me wrong. It also happened before.

I started programming in BASIC when programs were loaded from compact cassettes. Ten years later, I used a visual IDE to create graphical user interface applications. Then came web applications. Then the smartphone apps market. Virtualization. Cloud. And then GenAI.

Maybe I missed a few things, but the point is that in the span of one person’s career — barely 40 years — the tools have fundamentally changed. The waves of change come more and more frequently.

This creates great pressure on software engineering organizations to adapt. The organizations that don’t change their ways of working will be left behind.

There is a discussion now about whether agile has become obsolete in this fast-moving environment.

I disagree.

Agile Still Matters

Agile thinking is as important as ever. For me, agile is not a framework; it is a mindset.

One of the agile values is continuous learning, and it is precisely what teams need to do now. At a time of disruption, the focus of learning changes from incremental improvement to rethinking processes and tools from scratch.

Nothing is sacred.

Engineering leaders should stop treating inherited best practices as defaults and start treating them as reversible experiments.

  • What is the role of code reviews?
  • Does the team need to be aligned over every technical decision?
  • Can developers design and designers ship code?
  • What is the role of a product owner?
  • Do we need estimates?
  • Is rewriting from scratch still a terrible idea?

There are many, many more.

But what if we try and fail? For example, what if we relax our code review practices and then get an outage?

The concern is valid, but if we don’t try, we will never know.

To fully benefit from new tools, we have to embrace experimentation.

A Simple Experiment Framework

I propose following these steps:

1. Identify a bottleneck

Find a handoff, a meeting, or a step that involves additional people.

2. Define an experiment

What if you remove the handoff, cancel a meeting, or move the decision-making to the person who does the work?

3. Set success and failure metrics

Has quality gone down? Do QA and users report more defects? Is team satisfaction down? Is the number of PRs up? Has the number of bugs decreased?

4. Timebox it

Let’s do it for a month. If it’s successful, continue for another month.

5. Monitor it

Follow the metrics. If something starts to go wrong, cancel the experiment.

6. Conclude

Check after the timebox is over. Decide if you want to continue the experiment, roll it back, or make it permanent.
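The six steps above can be sketched as a small data structure. This is a minimal illustration, not part of the article; every name and field here is hypothetical:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Experiment:
    """One reversible process experiment (illustrative sketch)."""
    bottleneck: str                # step 1: what we are removing
    change: str                    # step 2: what we do instead
    success_metrics: list[str]     # step 3: what "better" means
    failure_metrics: list[str]     # step 3: what "worse" means
    start: date
    timebox_days: int = 30         # step 4: a month by default
    cancelled: bool = False

    def end(self) -> date:
        return self.start + timedelta(days=self.timebox_days)

    def monitor(self, red_flag: bool) -> None:
        # step 5: cancel immediately if a failure metric fires
        if red_flag:
            self.cancelled = True

    def conclude(self, today: date, team_approves: bool) -> str:
        # step 6: after the timebox, decide together
        if self.cancelled:
            return "roll back"
        if today < self.end():
            return "keep running"
        return "make permanent" if team_approves else "roll back"

exp = Experiment(
    bottleneck="synchronous daily meeting",
    change="async status report in Slack",
    success_metrics=["PR throughput up"],
    failure_metrics=["team satisfaction down"],
    start=date(2024, 1, 1),
)
print(exp.conclude(date(2024, 2, 5), team_approves=True))  # prints "make permanent"
```

The point of writing it down this way: an experiment without explicit failure metrics and a timebox is not an experiment, it is just a change.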

Example 1: Meetings

The daily meeting breaks team focus.

We move the meeting to an async report in Slack. Identified blockers are addressed in smaller groups.

We measure PR throughput and team satisfaction before and during the experiment: establish a baseline, then check PR throughput weekly and team satisfaction every two weeks during retrospectives.

Run the experiment for two weeks first. If no red flags appear, continue for another two weeks. If red flags do appear, cancel and return to the previous model.

After one month, we debrief with the whole team and decide together if we keep the new way of working or stop.

Example 2: Process

The team’s UX designer is overloaded with work and cannot concentrate on bigger things because of constant smaller requests.

Empower developers to propose possible solutions to the designer. The designer only needs to approve.

We evaluate whether the designer can concentrate better and move forward on strategic work. How many times per week do developers propose their own solutions instead of asking for design clarification? How many of these solutions are approved?

It is a low-risk experiment, but it requires a culture shift. We run this for two months with monthly checkpoints.

Interview the designer after one month. Count the number of times the developers took the initiative.

Decide after two months if the approach works or if something else needs to be done.

Example 3: AI-Native Ways of Working

Code reviews are a bottleneck. Senior developers context-switch multiple times a day to review PRs, and authors wait hours or days for feedback.

We introduce an AI coding agent as the first reviewer. It checks for bugs, security issues, style violations, and adherence to team conventions. Human reviewers focus only on what the AI flags or on changes to critical paths such as payments, auth, and data migrations.

We measure time from PR opened to PR merged. We track production incidents before and during the experiment. We survey reviewers on whether they feel they can focus on deeper work instead of routine review comments.
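The core metric here, time from PR opened to PR merged, is easy to compute once you have the timestamps. A minimal sketch with made-up data; in practice the timestamps would come from your Git host's API:

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records: (opened, merged) timestamp pairs.
baseline_prs = [
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 3, 15, 0)),
    (datetime(2024, 1, 4, 10, 0), datetime(2024, 1, 4, 18, 0)),
    (datetime(2024, 1, 8, 11, 0), datetime(2024, 1, 10, 9, 0)),
]
experiment_prs = [
    (datetime(2024, 2, 1, 9, 0), datetime(2024, 2, 1, 13, 0)),
    (datetime(2024, 2, 2, 14, 0), datetime(2024, 2, 3, 10, 0)),
    (datetime(2024, 2, 5, 9, 0), datetime(2024, 2, 5, 12, 0)),
]

def median_merge_hours(prs):
    """Median hours from PR opened to PR merged."""
    return median((merged - opened).total_seconds() / 3600
                  for opened, merged in prs)

before = median_merge_hours(baseline_prs)
during = median_merge_hours(experiment_prs)
print(f"baseline: {before:.1f}h, experiment: {during:.1f}h")
# prints "baseline: 30.0h, experiment: 4.0h"
```

The median is a deliberate choice over the mean: one PR that sat open for a week would otherwise dominate the number.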

Run for one month with biweekly checkpoints. If production incidents spike, we stop immediately.

Check production incident rate and PR merge time weekly. Gather reviewer feedback at each biweekly checkpoint. Any quality degradation beyond the baseline means we cancel and return to full human reviews.

After one month, debrief with the whole team. Decide together whether to continue the AI-first review process, roll it back, or make it permanent.

Other Examples I’ve Lived Through

Below are a few more changes I lived through:

  • Dropping manual regression testing
  • Replacing sprint planning and estimation with goal alignment
  • Introducing non-blocking code reviews
  • Replacing synchronous UI walkthroughs with Loom videos
  • Introducing RFCs for asynchronous planning

Some experiments failed. But every successful experiment led to the organization getting a little better.

These small things compound into a step change in the team’s performance.

The Real Risk

Changing well-established best practices feels scary. But the worst thing that can happen is that the experiment fails and you return to where you started. That’s it.

Inaction has no such safety net. It guarantees you fall behind in a quickly changing environment.

Being truly agile — beyond the frameworks, the dogmas, and the best practices of the past — is no longer a nice-to-have. It is how you survive.

Originally posted at: https://highimpactengineering.substack.com/p/nothing-is-sacred-rethinking-how?r=36g804

Subscribe for weekly articles like this at https://highimpactengineering.substack.com/
