
Devang Tomar

Originally published at devangtomar.hashnode.dev

How one line of code caused a $60 million loss 📉😓

60,000 people lost full phone service, half of AT&T's network was down, and 500 airline flights were delayed


On January 15th, 1990, AT&T's New Jersey operations center detected a widespread system malfunction, signalled by a flood of red warnings on its network status display.

Despite attempts to rectify the situation, the network remained compromised for 9 hours, leading to a 50% failure rate in call connections.

AT&T lost over $60 million as a result, and over 60,000 Americans were left with fully disconnected phones.

Furthermore, 500 airline flights were delayed, affecting 85,000 people.

AT&T's long-distance network was supposedly a paragon of efficiency, handling a substantial portion of the nation's calls with its advanced electronic switches and signaling system. This system usually completed call routing within seconds.

However, on this day, a fault originating in a New York switch cascaded through the network. The cause was a recent software update containing a critical bug that affected the network's 114 switches. When the New York switch reset itself and sent out signals, the bug triggered a domino effect, leading to widespread network disruption.

The software patch had already gone through layers of testing without the bug being caught. This was especially surprising because AT&T was known for its rigorous testing.

The Problem 😓

The root cause was traced back to a coding error in a software update deployed across the network's switches.

The error, in a C program, was a misplaced break statement inside nested conditional statements, which led to data overwrites and system resets.

The pseudocode:

while (ring receive buffer not empty and side buffer not empty):

  Initialize pointer to first message in side buffer or ring receive buffer
  get copy of buffer

  switch (message):

    case (incoming_message):

      if (sending switch is out of service):

        if (ring write buffer is empty):
          send "in service" to status map
        else:
          break  // The error was here!
        END IF

      process incoming message, set up pointers to optional parameters
      break

  END SWITCH

do optional parameter work

The problem:

  • If the ring write buffer is NOT empty, the branch that sends "in service" to the status map is skipped and the break in the else branch is executed instead.

  • However, for the program to function properly, execution should have continued past the END IF to the statement that processes the incoming message. In C, a break inside a switch exits the entire switch, not just the surrounding if/else.

  • Because the break is hit, the incoming message is never processed and the pointers to its optional parameters are never set up, so the data those pointers should have held gets overwritten (see the C sketch after this list).

  • The error-correction software detected the data overwrite and shut the switch down for a reset. The issue was compounded because the same flawed software ran on every switch in the network, leading to a chain reaction of resets that ultimately crippled the entire system.
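
Below is a minimal, hypothetical C sketch of that control flow. It is not AT&T's actual switch code; the enum, flags, and printed messages are invented purely to show how a break inside a nested if/else exits the enclosing switch and silently skips the processing step that should have run.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical message type; names are invented for illustration only. */
enum message_type { INCOMING_MESSAGE, OTHER_MESSAGE };

static void handle(enum message_type message,
                   bool sending_switch_out_of_service,
                   bool ring_write_buffer_empty)
{
    switch (message) {
    case INCOMING_MESSAGE:
        if (sending_switch_out_of_service) {
            if (ring_write_buffer_empty) {
                printf("send \"in service\" to status map\n");
            } else {
                /* Intended to leave only the if/else, but in C this break
                   exits the whole switch, so the processing line below
                   never runs for this message. */
                break;
            }
        }
        printf("process incoming message, set up pointers\n");
        break;
    default:
        break;
    }
    printf("do optional parameter work\n");
}

int main(void)
{
    /* Ring write buffer NOT empty: the message is silently left unprocessed,
       yet the optional-parameter work still runs afterwards. */
    handle(INCOMING_MESSAGE, true, false);
    return 0;
}

Run with the ring write buffer non-empty, the sketch prints only "do optional parameter work": the processing step is skipped exactly as described above, which, per the post-mortem, is what left pointer data to be overwritten.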

Despite having a network designed for resilience, one line of code was able to bring down half the country's main line of communication.

The Fix 🔨

It took engineers 9 hours to get AT&T's system fully back online. They did so mostly by rolling back the switches to a previous, working version of the code.

It then took software engineers two weeks of rigorous code reading, testing, and replication to pinpoint where the bug was.

Conclusion 💭

For AT&T, unfortunately, this wasn't even their biggest system crash of the '90s. They encountered many more issues later in the decade.

In reality, it wasn't one line of code that brought down the system; it was a failure in processes.

Today's companies have even better processes in place, and even then, bugs slip through. Google wrote a great retrospective on 20 years of Site Reliability Engineering, where they reflect on YouTube's first global outage in 2016.

The scale of these outages is huge, and there are lessons to be learned from each one. Most, however, come down to human error and gaps in processes.

Connect with Me on social media 📲

🐦 Follow me on Twitter: devangtomar7

🔗 Connect with me on LinkedIn: devangtomar

📷 Check out my Instagram: be_ayushmann

Check out my blogs on Medium: Devang Tomar

Check out my blogs on Hashnode: devangtomar

🧑‍💻 Check out my blogs on Dev.to: devangtomar
