Introduction
Disclaimer: This article compares my expectations as a fresh college grad to the realities of the software development industry – this is not intended as a criticism of my college education, but rather as a guide to suggest additional preparation and awareness for any development job seekers – college educated or not.
Before I began working as a professional software developer, I spent a couple years in college obtaining an AAS in computer programming. After that, I went on to study for a business/tech bachelor’s degree. I wound up getting a full-time programming job right around the time I started my studies for the second degree. Both the bachelor’s degree and the first job are behind me now. When I finished my first degree, I believed I would be ready for real, honest-to-goodness, full-time software development work. In a sense, I was. However, the real, day-in/day-out work of software development held a LOT of surprises for me.
What I did learn in college
I value the time I spent in college; it was useful on several levels. In a very literal sense, college was a big part of teaching me how to program. I learned common language constructs – think control flow, loops, OOP, etc. I learned how to apply those constructs in several languages. As I recall, I studied PHP, Java, C#, C++, JavaScript, VB and Python in the course of my college studies. College also taught me foundational tech concepts, like networking, and how the various operating systems worked. In addition, there were the usual general education requirements – math, language, sociology, psychology, and so on. Much of the information I mentioned was useful in my actual job, and I’m grateful for all that instruction, but the real world of software development goes beyond this – far beyond, in fact.
End users can’t compile code
When I got my first programming job, I had no idea how code actually got executed in an enterprise environment. I’d usually just run the code from within my IDE, be it Visual Studio (no, not VS Code), PyCharm or NetBeans. Some readers may have noticed I also mentioned PHP – yes, I ran my PHP through a WAMP stack, but I didn’t really understand how it worked, or how on Earth that would translate into real software running in a real environment. Here’s how it works at a high level.
When you write code in an IDE, you’re writing source code. It’s just text files (if you know of some obscure language ecosystem besides Scratch where this isn’t exactly the case, I’d love to hear about it, but I think the general principle still stands) – nothing that a computer understands. A programming language is a formalized set of syntax that is human-understandable, but still has consistent enough rules that it can be converted to something that a computer actually understands. When you run the code through your IDE, what it’s doing is some process of interpreting or compiling the language – the differences between the two approaches aren’t really in the scope of this discussion, but, effectively, the code you wrote is being turned into something that a computer understands and can actually execute.
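If you want to see the “source code is just text” idea in action, here’s a tiny Python sketch. It assumes a file named hello.py sits next to it containing ordinary Python – that file name is just for illustration. The script treats the program as plain text first, and only then asks the interpreter to turn it into something executable:

```python
# hello.py is assumed to exist next to this script and contain ordinary Python,
# e.g. print("Hello from plain text!")
with open("hello.py") as f:
    source = f.read()          # at this point it's just a string of characters

print(type(source))            # <class 'str'> -- nothing executable yet

# The interpreter converts that text into bytecode it can actually run.
code_object = compile(source, "hello.py", "exec")
exec(code_object)              # now the "program" actually executes
```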
However, an IDE is a tool for programmers – nothing we expect end users to understand, install and configure! So, in order for code to be used by end users (if that’s the kind of code we’re writing), it has to be able to run without an IDE involved. This is where a process of build and deployment comes into play. The source code you wrote (hand-wave where that lives for now, we’re coming to that) has to get “built” into something that can be run by a computer without an IDE.
The process for all this differs significantly depending on what language you use, and where you want to run it – and in some cases (such as interpreted languages), it may not be necessary at all. Generally, though, there will be some server that builds your code into whatever the desired, runnable state is. This server (hereafter referred to as a “build server”) takes the source code you wrote, and “builds” it into some sort of final product. The exact details of this “building” process are nuanced, language-specific, and beyond the scope of this article, but, generally, the key portions involve pulling together all the dependencies you used along with your code into something that can be interpreted by the platform you’re running on. The result of this build process is often referred to as an “artifact”.
To tie this back together, this process of building source code into a runnable artifact is basically the same process as your IDE would perform when you run your program on your machine as you make changes. There may be various subtle (though important) differences, but the main thing to understand is that there’s usually a separate machine (the build server) somewhere that actually performs the process of building this artifact.
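As a rough sketch of what a build server’s job might boil down to, here’s some illustrative Python. Every path, URL and command below is made up – “buildtool” stands in for whatever your ecosystem actually uses (Maven, MSBuild, webpack, and so on) – and real build servers are driven by dedicated CI tooling rather than hand-written scripts like this:

```python
import shutil
import subprocess
from pathlib import Path

# Every path, URL and command here is hypothetical -- each shop and language
# ecosystem has its own equivalents.
SOURCE_DIR = Path("/build/checkout/my-app")
ARTIFACT_DIR = Path("/build/artifacts")

def build(version: str) -> Path:
    # 1. Pull the latest source code out of source control.
    subprocess.run(
        ["git", "clone", "https://example.com/my-app.git", str(SOURCE_DIR)],
        check=True,
    )

    # 2. Resolve dependencies and compile/bundle everything into a runnable
    #    output. "buildtool" stands in for Maven, MSBuild, webpack, etc.
    subprocess.run(["buildtool", "package"], cwd=SOURCE_DIR, check=True)

    # 3. Copy the finished product -- the "artifact" -- somewhere the
    #    deployment process can pick it up.
    ARTIFACT_DIR.mkdir(parents=True, exist_ok=True)
    artifact = ARTIFACT_DIR / f"my-app-{version}.zip"
    shutil.copy(SOURCE_DIR / "dist" / "my-app.zip", artifact)
    return artifact
```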
Put your code back where it belongs when you’re done
Now that your code has been built into an artifact, your code has to actually “go” somewhere so it can do whatever its job is. It can’t just live as an inanimate artifact on the build server forever. Your program has to go somewhere to be executed. This process of being sent somewhere to be executed is known as deployment. As with many of these things, the details are going to be unique to pretty much any organization, and maybe even to each program that organization maintains, but here are two fairly common approaches that will help illustrate the principles of deployment.
Send your code to a server
In this approach, you’re writing some sort of server-side application. This application is only designed to run on “one” machine, though it may be consumed by many other programs. More on that later. For now, just know that your program only has to go to “one” place – the server. A very common example of this is a webservice that serves up some data from a database. In this sort of example, the build process would create the artifact, then the deployment would move it onto the server, replace the old version of the artifact, and start the new one. This typically involves copying files over a network, running commands to shut down an “application server” (the software that runs your software), running more commands to load your software into the server and remove the old software, then running more commands to start the server back up.
This begins to stray beyond the scope of our discussion, but some of you may be thinking of downtime at this point. If people are constantly using the application that you’re changing, how can we take down the application server so we can deploy the new artifact? There are different approaches to solve this problem, but generally the solutions revolve around having multiple app (short for application) servers, and the new artifact will be deployed to one server at a time, as the other servers, still running the old code, continue to handle any requests to the app until each one of the servers has been updated. If you’re interested in digging deeper into the mechanics of all this, look up “load balancing”, “zero downtime deployments” and “blue-green deployments”.
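To make the deploy-without-downtime idea a bit more concrete, here’s a hedged Python sketch of a rolling deployment. The server names are invented, and “lb-ctl” and “appserver” are stand-ins for whatever load balancer and application server tooling a real shop would use – this is the shape of the process, not a real script:

```python
import subprocess

# A hypothetical fleet of app servers sitting behind a load balancer.
APP_SERVERS = ["app1.internal", "app2.internal", "app3.internal"]
ARTIFACT = "my-app-1.4.2.zip"

def deploy_rolling(artifact: str) -> None:
    for server in APP_SERVERS:
        # Take this one server out of the load balancer, so users are only
        # routed to servers still running the old version.
        subprocess.run(["lb-ctl", "drain", server], check=True)

        # Copy the new artifact over the network, swap it in, and restart.
        subprocess.run(["scp", artifact, f"deploy@{server}:/opt/my-app/"], check=True)
        subprocess.run(
            ["ssh", f"deploy@{server}",
             f"appserver stop && appserver install /opt/my-app/{artifact} && appserver start"],
            check=True,
        )

        # Put the freshly updated server back into rotation before moving on.
        subprocess.run(["lb-ctl", "enable", server], check=True)

deploy_rolling(ARTIFACT)
```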
Send your code to a server again – so it can be pulled onto other machines
With other kinds of applications, the code may not be running on a single server, but may be an application that’s supposed to run on many devices – a classic example would be a desktop app, like an email client or web browser. Another example would be mobile applications. In this case, the deployment process may be a little simpler upfront. All that has to be done is to push the artifact out onto some sort of server where it can be pulled onto the individual machines that want to run it. So, once the artifact is built, the deployment process really just has to copy the artifact onto a remote server, and then possibly perform some command to make sure the artifact is “served up”, or made available in some way to users who want to access it.
Ever heard of the “water bed theory”? It’s this idea that some complexity is unavoidable – if you remove complexity in one area (like squishing a corner of a water bed), it just pops up somewhere else. That’s kind of how this process works. Now that the new artifact is out there, all the different computers that need to run that artifact have to get rid of the old artifact, and install the new artifact. So, even though the initial deployment process was easier (just copying a file, more or less), there’s now the complexity of getting all the computers that use the software on the latest version somehow.
As with all these processes, the mechanics can vary, but one common method is to have a “pull” mechanism on the computer that uses the software – that is, some sort of software to update the other software. It goes out to the server that hosts the software to see if there is a new version, and, if there is, it removes the old version, copies down the new version, and possibly runs some additional commands to install it. This process of checking for updates may only happen when a user specifically tells the software to check for updates, or it may be something that happens automatically whenever the user runs the software.
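Here’s a simplified Python sketch of what such a pull mechanism might look like. The update URL and file layout are invented for illustration, and a real updater would also worry about things like signatures, permissions and partial downloads:

```python
import json
import urllib.request
from pathlib import Path

# Hypothetical update server and install location.
UPDATE_URL = "https://updates.example.com/my-app/latest.json"
INSTALL_DIR = Path.home() / "my-app"
VERSION_FILE = INSTALL_DIR / "version.txt"

def check_for_updates() -> None:
    # Ask the update server what the newest available version is.
    with urllib.request.urlopen(UPDATE_URL) as response:
        latest = json.load(response)   # e.g. {"version": "2.1.0", "url": "https://..."}

    installed = VERSION_FILE.read_text().strip() if VERSION_FILE.exists() else "0.0.0"
    if latest["version"] == installed:
        return  # already up to date, nothing to do

    # Pull down the new artifact, then record the version we now have.
    INSTALL_DIR.mkdir(parents=True, exist_ok=True)
    new_artifact = INSTALL_DIR / f"my-app-{latest['version']}.zip"
    urllib.request.urlretrieve(latest["url"], str(new_artifact))
    VERSION_FILE.write_text(latest["version"])
    # ...followed by whatever unpack/install/restart steps the application needs,
    #    and usually removal of the old version.
```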
It’s a good idea to test your code
When you’re in college, you test the code you write because if it doesn’t work, you’ll get a bad grade. When you’re in a professional programming job, you test your code because you could cost the company money if it doesn’t work. However, testing code written for a company is much more involved. The reasons for this are varied, and we’ll cover them more later, but suffice it to say for now that enterprise code tends to require a lot of other “things” present in order to do its job. For instance, this could be databases, email servers, files, other programs, etc. It’s because of these dependencies, and the nature of enterprise code, that we introduce the concept of “environments”.
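As a tiny illustration of why those dependencies complicate testing, here’s a hedged sketch: a made-up function that writes an order to a database and sends a confirmation email, plus a test that swaps both dependencies out for fakes so it can run without touching anything real. All of the names are hypothetical:

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical production code: placing an order writes to a database and
# sends a confirmation email -- two things you don't want happening for real
# while you test.
def place_order(db, mailer, customer_email, item):
    order_id = db.insert_order(customer_email, item)
    mailer.send(customer_email, f"Order {order_id} confirmed: {item}")
    return order_id

class PlaceOrderTest(unittest.TestCase):
    def test_places_order_and_sends_confirmation(self):
        fake_db = MagicMock()
        fake_db.insert_order.return_value = 42
        fake_mailer = MagicMock()

        order_id = place_order(fake_db, fake_mailer, "me@example.com", "stuffed giraffe")

        self.assertEqual(order_id, 42)
        fake_mailer.send.assert_called_once_with(
            "me@example.com", "Order 42 confirmed: stuffed giraffe")

if __name__ == "__main__":
    unittest.main()
```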
What’s an environment, Bob?
Environments are, quite simply, the total of all the things your program needs to run. This would include your program, along with dependencies like the ones discussed above. The reason we care about having multiple environments is for testing purposes. Let’s take an example. Say you are a developer who works on a retail company’s website. You’ve just completed your feature, which is adding one of those annoying “please email me about new products and deals” checkboxes in the website’s “cart” function that people only ever leave checked by accident because the only thing your company sells is stuffed animals.
Now, following your company’s build and deployment process, you deploy your changes and start testing them. You’ve just placed a real-life order for 10 stuffed giraffes because you’re testing in production. They should be delivered in the next 5-10 business days.
Production is the environment that runs the business – at least, that’s what it’s called at most places. If you have a customer-facing product, like a website, production is the actual website that real users access, and the database where their real orders go. You don’t want to send your code here without testing for two reasons. The first, as illustrated above, is the possibility of initiating business processes that shouldn’t be initiated. You don’t actually need (or want) those stuffed giraffes; you just need to make sure your new checkbox works, so you don’t want to actually trigger the business process of fulfilling an order (and taking your money!) when you test. The second is that if your code doesn’t work right, you run the risk of an actual customer coming in, trying to order 324 plush elephants, and then leaving your website without completing their order because your checkbox can’t be unchecked!
Both of the possibilities mentioned above are reasons for having additional environments: places where the code you run can be built and deployed without initiating unwanted business processes (like order fulfillment) and without affecting real business (like real customers being unable to use your software).
In addition to production (or “prod”, as it’s affectionately known), most software teams have two other environments, usually referred to as dev or develop, and test/QA/pre-prod. Without diving too far into the SDLC (software development lifecycle) at this point, usually the build/deployment process for a given piece of software has “stages”, in which it will move the code into the dev environment first, then into test, and finally into prod. Depending on other factors (such as source control methodology, a topic for another time), the code may be rebuilt at each step of the way (i.e., built and deployed for dev, then built/deployed again for test, etc.), or it may be built once, and the same artifact may be “promoted” through the environments until it reaches production. In the second approach, “promotion” really just means moving the same code up another level without rebuilding it – so, only performing the deployment step for each level.
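Here’s a minimal sketch of the “build once, promote” idea, with every name invented for illustration: the same artifact is deployed to each environment in turn, and only the deployment step is repeated.

```python
# Everything here is a made-up illustration of "build once, promote":
# the same artifact moves through each environment with no rebuild.
ENVIRONMENTS = ["dev", "test", "prod"]   # names vary from shop to shop

def deploy(artifact: str, env: str) -> None:
    # Stand-in for the real deployment step (copy the artifact, restart the
    # app server, and so on).
    print(f"Deploying {artifact} to {env}")

def checks_pass(env: str) -> bool:
    # Stand-in for whatever gates promotion to the next environment
    # (automated smoke tests, QA sign-off, etc.).
    return True

def promote(artifact: str) -> None:
    for env in ENVIRONMENTS:
        deploy(artifact, env)
        if not checks_pass(env):
            raise RuntimeError(f"{artifact} failed verification in {env}; stopping promotion")

promote("my-app-1.4.2.zip")
```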
The lower environments (lower refers to anything that is not prod) are typically configured so that checking out on the website doesn’t actually take your money, or result in you actually receiving 115 polyester octopi in the mail. They are also not publicly visible (at least in most cases – there are exceptions to this) so that live customers/real business users can’t get to them, and they frequently include protections to help avoid doing things like sending emails to real customer email addresses and other nasty things that can cause you trouble.
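Those protections often come down to configuration that differs per environment. Here’s a hedged Python sketch of what that might look like – the settings, hostnames and the APP_ENV variable are all invented for illustration:

```python
import os

# Hypothetical per-environment settings. In lower environments, emails are
# trapped and payments hit a sandbox, so testing can't bother real customers
# or charge real money.
CONFIG = {
    "dev": {
        "database_url": "db-dev.internal/shop",
        "payment_gateway": "https://sandbox.payments.example.com",
        "send_real_emails": False,
    },
    "test": {
        "database_url": "db-test.internal/shop",
        "payment_gateway": "https://sandbox.payments.example.com",
        "send_real_emails": False,
    },
    "prod": {
        "database_url": "db-prod.internal/shop",
        "payment_gateway": "https://payments.example.com",
        "send_real_emails": True,
    },
}

# The running application picks its settings based on which environment it was
# deployed to -- often via an environment variable set during deployment.
settings = CONFIG[os.environ.get("APP_ENV", "dev")]

def send_email(to_address: str, body: str) -> None:
    if not settings["send_real_emails"]:
        # In lower environments, redirect mail to a safe inbox instead of real customers.
        to_address = "qa-inbox@example.com"
    print(f"Sending to {to_address}: {body}")  # stand-in for a real mail client
```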
Depending on the quality of your lower environments, you may experience issues that don’t occur in production. For instance, your company might host your dev and test environments on cheaper, slower servers. This can result in things running slower on the lower environments than they will in production, which means you have to account for this as normal behavior when you test.
Three’s a crowd...
Now you may be thinking: why have three environments? I mean, two makes sense: one is live code, the other is for testing. Well, this is where different kinds of testing come into play. Let’s start at the beginning: the “dev” environment is for developers. In a system with a lot of moving parts – programs that interact with other programs, which is true of almost any significant software system – you need to be able to test the way they interact with each other after you make your changes to ensure it all still works. This frequently isn’t possible (or is quite difficult) on your local development machine, because the other related software isn’t running on your machine (for resource reasons, or perhaps complexity reasons). So, you deploy your changes into the dev environment. The dev environment is typically for developers only – if you break something in dev, you may hear from another developer on your team if they run into it, but it’s not likely to be a big problem. You can run your tests over and over, and make sure everything is working smoothly.
Once that’s done, you’ll want to move your code on to the next “level” of environment. Regardless of the name (test, QA, pre-prod, staging, etc.), the general role of this environment is to provide a place where not just you as the developer, but also a dedicated quality assurance (QA) team, can test your code.
Most software teams will have at least a few team members whose full-time role is to test the software you develop. They don’t perform this testing in the dev environment, because the dev environment is generally less stable. They get involved once you have already done some preliminary testing of your code, so they don’t waste their time running tests when your code doesn’t even work at a basic level. Typically, they’ll check to make sure your new changes work correctly, and will check for regression (meaning, does everything that worked before your changes still work?). If you work on software for internal company users, such as an inventory management program, there may also be UAT (user acceptance testing) that will occur in the test environment. This is where someone (frequently the QA team) will coordinate with an actual user of the program, and have them run through their steps in the test environment to ensure that everything still functions the way they need it to, and that the new behavior you added or changed is correct.
There may be more or fewer environments at some software shops, and the purposes may differ at the different levels, but this is a rough set of concepts you’ll encounter with environments in a lot of shops.
Conclusion
There’s a lot involved in the whole process of getting code out of your IDE and into the hands of its final consumer, but that’s just one aspect of professional programming. In the next part of this series, we’ll move from here into the individual practices, tools and techniques I encountered as a developer that I was unaware of. Stay tuned, more to come!