ccarcaci

Pull features delivery

Back to the good old problem of deadlines in software development.

Deadlines and team velocity in software development have been THE trending topic since... well, forever.

History

In the beginning, there was waterfall. The waterfall approach comes from manufacturing and building construction: stages are organized sequentially, and it is not possible to go back to a closed phase.

Very soon this became the standard for software development as well.

Before long, the software community understood that software development is not completely predictable and cannot be fully organized beforehand. The waterfall approach was not applicable in this context.

In 2001, the Agile Manifesto was born. The Agile Manifesto inspired several frameworks like Scrum, Kanban, Scrumban, and other Pokemon-like fancy names.

All these frameworks emphasized the feedback cycle and the tradeoffs of the project management triangle: time, scope, and cost (resources).

In any case, whatever the approach, on-time delivery was non-negotiable, and delays are always a source of complaints, tense retrospectives, and unsatisfied middle-to-top management.

Before long, the software community also started to understand that Agile frameworks did not quite fit software development either.

Leaking pipes

No one calls the plumber and asks him to leave leaking pipes because time is up

This quote makes it clear that in some fields we are ready to accept delays rather than shrink the scope (leaking pipes).

Why isn't it the same in software development?

If we go back to the project management triangle, time is fixed most of the time (99%), cost can hardly be increased enough to meet the deadline, and scope is the only card left to play.

This is because software development is not a repetitive, fully predictable task: it cannot be freely parallelized, and adding more resources (cost) slows the team's velocity down even further.

At the same time, middle-to-top management wants a predictable delivery date to beat the competition.

Getting back to our plumber example, reduced scope, AKA "Leaking pipes", in software development means:

  • technical debt that often no one will ever repay
  • an unfinished product delivery that does not satisfy the users

Pull delivery in action

The pull delivery methodology could be summarized as

deployment into production is done by the product team

This is a strong statement.

Anyway, this statement lives within the team's usual development strategy. Scrum teams will continue with Scrum, Kanban teams will Kanban. Pokemon teams will catch'em all!

What this approach introduces is the split of delivery into tiny deployable pieces.

Pull delivery in detail

To illustrate this approach, we take as an example a social network that wants to introduce emoticons in comments and posts.

The task is assigned to a team named "wording team".

The "wording team" brainstorms and details the product and its implementation as usual and starts the first feature iteration which will be 3-week long.

But, this time, the team decides to follow the "pull delivery" approach.

After reading tons of literature (just this article) they put in place what is needed to follow this approach.

Prerequisites are:

  • existing feedback mechanism
  • no deadlines (no ETA)
  • vertical approach
  • the dev team is committed to providing deployable pieces first
  • testing and QA environments are strongly aligned with production
  • test suites alignment
  • the dev team is unaware of when the users will use the new versions of the product

Existing feedback mechanism

There should be a feedback mechanism that allows the team to steer when new findings are discovered. Feedback is often regulated by the chosen framework. For pull delivery, feedback should ideally be able to "go back" and touch all the building phases.

No deadlines

Although the entire team knows that the first iteration should close in three weeks, there is no deadline for the intermediate tasks. And the fine tracking of intermediate tasks makes the three-week deadline irrelevant for the tech team. The focus shifts from "how to respect the big deadline?" to "how to work on fine-grained activities and provide the best feedback to the product team?"

Vertical approach

The dev team should start from a first implementation whose goal is to bring the MVB (Minimum Viable Byte) from the "product entry point" to the "product exit point".

The product entry point is the action the user performs in the product; in this example it is the part of the product owned by the "wording team": the comments. The product exit point is what the user experiences as a result of this action.

From a technical point of view, this means that one byte posted by the user passes through all the intermediate layers, databases, and queues, and is visible back to the user through the product.

Deployable pieces first

The slicing of implementation tasks should be focused on providing small deployable pieces quickly. A deployable piece is a deployment pipeline run that is consistent and stable for the final product: code changes, migrations, and config updates all land together consistently.

Each deployable piece is potentially a new product version.

Tech alignment

To avoid bugs and make the development highly predictable, testing, QA, and even the local dev environment should be aligned with production as much as possible.

This is even more true for the data seeded into the environments. Updating the seed data to reflect production data is one of the most important traits of this framework. An obfuscation process that feeds production data back into the other environments would be the optimum.
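
As a rough sketch of what such an obfuscation step could look like (the User shape and its fields are made up for illustration, not taken from any real product), production records could be anonymized before being seeded into the other environments:

// Hypothetical shape of a production record; field names are illustrative only.
interface User {
  id: string;
  email: string;
  displayName: string;
}

// Replace personally identifiable fields with deterministic fakes so that
// dev/test data keeps the same volume and distribution as production.
function obfuscate(user: User, index: number): User {
  return {
    ...user,
    email: `user${index}@example.com`,
    displayName: `User ${index}`,
  };
}

// Feed an obfuscated copy of the production dump into the dev/test seeding step.
function buildSeed(productionDump: User[]): User[] {
  return productionDump.map(obfuscate);
}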

Test suites alignment

Tests should match real users' behavior: for example, reproducing the same sequence of API calls against an API service, or simulating the real interaction of a user in the mobile app.
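
A minimal sketch of such a test, assuming a Node 18+ runtime with a global fetch, plus a hypothetical BASE_URL and /comments endpoint, could replay the same call sequence a real user triggers: open the page (load the emoticons), then post a comment.

// Replays the call sequence of a real user: page opening, then comment posting.
// BASE_URL and the /comments endpoint are assumptions made for this sketch.
const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000";

async function userPostsACommentWithEmoticon(): Promise<void> {
  // 1. Page opening: the front-end loads the available emoticons.
  const emoticonsResponse = await fetch(`${BASE_URL}/emoticons`);
  if (!emoticonsResponse.ok) throw new Error("GET /emoticons failed");

  // 2. The user submits a comment containing a token, still as plain text.
  const commentResponse = await fetch(`${BASE_URL}/comments`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: "Nice post :)" }),
  });
  if (!commentResponse.ok) throw new Error("POST /comments failed");
}

userPostsACommentWithEmoticon().catch((err) => {
  console.error(err);
  process.exit(1);
});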

Dev team unawareness

Developers release tiny pieces every day and hide them behind feature flags or versioned deployments. The product team decides when to enable the flags or bring the new version to the users.
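
A minimal sketch of what such a flag check might look like (the flag name and the in-memory flag store are assumptions; in practice the flags would come from a service or remote config that the product team controls):

// Hypothetical in-memory flag store; in a real product this would be backed by
// a flag service or remote config controlled by the product team.
const flags: Record<string, boolean> = {
  "emoticons-in-comments": false, // flipped by the product team, not by developers
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

// The new rendering path ships to production, but stays dormant until the flag is on.
function renderComment(text: string): string {
  if (!isEnabled("emoticons-in-comments")) {
    return text; // current behavior, unchanged for the users
  }
  return replaceTokensWithEmoticons(text); // new behavior behind the flag
}

// Placeholder for the replacement logic built in the example tasks below.
function replaceTokensWithEmoticons(text: string): string {
  return text;
}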

Since deployment is done without the supervision of the tech team, there should be a reliable development and testing process: what the developers and testers experience must be the same as what the final user experiences.

Pull delivery example

Chopping the feature

The "emoticons" task is then chopped into tiny deployable tasks.

Task #1

  • create a new endpoint to get the available emoticons and their tokens: GET /emoticons
  • the endpoint will return the smiling face emoticon as SVG data in the JSON response payload, together with the :) token assigned to it
  • the front-end application will call the GET /emoticons endpoint on the page opening
  • the front-end application will replace :) with the smiling face when rendering a comment
{
  "image": "...", // SVG encoded content
  "token": ":)"
}
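
A minimal sketch of the front-end side of this first slice, assuming a browser environment and the single-emoticon payload above (names and rendering details are illustrative):

// Shape of the Task #1 response payload shown above.
interface Emoticon {
  image: string; // SVG encoded content
  token: string; // e.g. ":)"
}

let emoticon: Emoticon | null = null;

// Called on page opening: load the single available emoticon.
async function loadEmoticons(): Promise<void> {
  const response = await fetch("/emoticons");
  emoticon = await response.json();
}

// Replace the token with the SVG content when rendering a comment.
function renderComment(text: string): string {
  if (!emoticon) return text;
  return text.split(emoticon.token).join(emoticon.image);
}

loadEmoticons();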

Task #2

  • the front-end application will parse :) and replace it with the associated SVG image as the user types into the comment box
  • the front-end application will still send the comment as plain text to the back-end

Task #3

  • create a back-office endpoint POST /emoticons that allows storing new emoticons in a JSON file
  • secure it behind a JWT mechanism
  • provide access to the storage to all deployments (dev, test, production)

Task #4

  • improve the GET /emoticons endpoint to make it read the emoticons from the stored JSON
  • return an array of emoticons instead of the single smiling face
{
  "emoticons": [
    {
      "image": "...",
      "token": ":)"
    },
    {
      "image": "...",
      "token": ":("
    },
    {
      "image": "...",
      "token": ":eye-rolling:"
    }
  ]
}
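
A possible sketch of the upgraded endpoint, assuming a plain Node back-end and that Task #3 stores the emoticons as a JSON array in a file (the file path and wiring are assumptions made for illustration):

import { readFile } from "node:fs/promises";
import { createServer } from "node:http";

// Path of the JSON file maintained by the Task #3 back-office endpoint;
// the exact location is an assumption made for this sketch.
const EMOTICONS_FILE = "./emoticons.json";

const server = createServer(async (req, res) => {
  if (req.method === "GET" && req.url === "/emoticons") {
    // Read the stored emoticons on every request; caching arrives in later tasks.
    const stored = await readFile(EMOTICONS_FILE, "utf-8");
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ emoticons: JSON.parse(stored) }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000);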

Task #5

  • the front-end application will improve the parse-and-replace logic to match the upgraded JSON response body

Task #6

  • the emoticon images will be served through CDNs spread all around the world, providing the URL in the JSON response
  • the POST /emoticons back-office endpoint will update the CDNs all around the world to reduce the overhead of fetching the emoticons

Task #7

  • the front-end application will use a local browser cache with an expiration time to store the emoticons

Task #8

  • the front-end application will refresh the emoticons through the GET /emoticons endpoint when a :something: token is not recognized
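
A minimal sketch covering Task #7 and Task #8 together, assuming a browser environment with localStorage (the cache key and the expiration interval are arbitrary choices for illustration):

interface Emoticon {
  image: string;
  token: string;
}

const CACHE_KEY = "emoticons-cache"; // illustrative key name
const CACHE_TTL_MS = 60 * 60 * 1000; // one hour, an arbitrary choice for the sketch

// Fetch from the back-end and refresh the local cache (used by both tasks).
async function fetchEmoticons(): Promise<Emoticon[]> {
  const response = await fetch("/emoticons");
  const body = await response.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify({ fetchedAt: Date.now(), emoticons: body.emoticons }));
  return body.emoticons;
}

// Task #7: serve the emoticons from the browser cache until it expires.
async function getEmoticons(): Promise<Emoticon[]> {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    const { fetchedAt, emoticons } = JSON.parse(cached);
    if (Date.now() - fetchedAt < CACHE_TTL_MS) return emoticons;
  }
  return fetchEmoticons();
}

// Task #8: force a refresh when a :something: token is not recognized.
async function resolveToken(token: string): Promise<Emoticon | undefined> {
  let emoticons = await getEmoticons();
  let found = emoticons.find((e) => e.token === token);
  if (!found) {
    emoticons = await fetchEmoticons();
    found = emoticons.find((e) => e.token === token);
  }
  return found;
}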

Each of these tasks is deployable independently.

The first task involves the whole stack and, by working on it, the team can discover all the technical details that will affect the rest of the development.

Chopped deployments

For each task, the tech team performs full testing and QA. The task is deployment-ready and should be considered "in the hands of the users".

Note how each task is as small as possible: the focus at this level is not on providing value to the user, but on providing stable deployments.

The value for the user is delivered when multiple tasks have been released, or when the full feature is done. But this choice is in the hands of the product team, which has the confidence of releasing stable product pieces.

In case of a bug, break the glass

We don't live in the ideal world.

The product team deploys part of the feature and then a bug is discovered: what to do?

In this case, the feedback mechanism is gold.

The tech team should start tackling the bug; after the mitigation and fix phase, there should be a complete review of the technical prerequisites.

If a bug passed undetected into production, it means that:

  • a vertical layer has not been included in the development, e.g. a production-only gateway tampered with some HTTP headers; include the gateway in the other environments (dev and testing) as well
  • dev and testing environments are not aligned with production
  • the data in the dev and testing environments does not reflect all the production cases
  • the test suite is not covering meaningful cases

Since we have a list of possible causes, it is easy to schedule and work on countermeasures.

Steering

Product has released Task #1 and the dev team is working on Task #3. By looking at the data, the product team discovers that the feature should steer towards a different goal.

Of course, the entire team should gather together and discuss the situation.

Aside from that, the tech team should stop working on new tasks: it's time to use the time-machine capabilities of the VCS (git).

For this reason, deliverables must be consistent at each commit.

Pull delivery expected results

In this way, developers are no longer committed to respecting an unfeasible deadline. They are committed to fine-grained slicing of deployments and to the consistency of environments.

At this point, one could argue that if we apply fine-grained slicing and estimations to a project, we can get a predictable release schedule that developers themselves could commit to.

That's not true. The main reason is that even if you provide fine-grained estimations of tasks, the actual sum of the work rarely matches the sum of the estimations. When you finish a task, you discover or settle details that invalidate previous estimations of the next tasks. Development estimations are most reliable when the task is (nearly) finished, which is exactly when the estimation itself is useless.

With pull delivery, we want to overcome this, generating, by design, some positive side effects:

  • smaller merge requests
  • stable and predictable development and testing environments
  • less "tech debt"
  • smaller investigation perimeter for new bugs
  • early discoveries of blockers during the first task
  • fine-grained feature scope management
  • smaller reworks for scope reduction
  • product has full control over what is released, and when
  • less stress
