As a developer, I’ve been around the block a few times. Danced with the devil and lived to tell the tale. I’ve seen things.
Among my favorite parts of the job is that indescribable joy when you’ve built something where every new feature request works perfectly with what you’ve already got in place. Where it’s just adding a field there, or dropping in a new component that just flows with everything you’ve already done.
I call this “getting the model right.”
It’s validation that the way you approached the problem was meaningfully correct, at least for now.
There are a few different components to accomplishing this:
- The decisions you make within your code.
- The technical details of the technologies you use to build your project.
- The culture you bring to your code.
In this essay I’m going to explore each of these in more detail.
Building Worlds
Being a programmer is a lot different from most other fields in that we get to (in many ways) create the world that we inhabit. Sure, there are rules (both those defined by the languages and platforms we use, and the organizations that pay us) but from day to day, we live within our own creation.
I’ll set the scene:
You’re a programmer working for United Widget, Inc. The company wants to build a new portal for their customers to log in and place new orders for widgets and see the status of existing orders. It’s a brand new project with no legacy cruft.
That’s a bingo!
Your first task is to decide which platform you want to use to build this portal. You have a lot of experience with PHP, but don’t like the language much and want to learn something new. So you decide to use a NodeJS backend with a frontend built in React. For a database you decide to use PostgreSQL since you know it so well already.
This is your first choice, the dirt, wood and stones from which you will build your palace. These early selections will give you the conventions you’ll be using from now on, so they’re very important. We’ll dig more into why later.
Let’s get to building. Being prudent, you decide to first put together a quick schema for the application.
People will need to log in, obviously, so you’ll make a users table. They’ll have email addresses (which they’ll use to log in), first names, last names, and passwords, which you’ll probably want to hash for security reasons. And a unique identifier field, so we don’t end up duplicating a piece of malleable user data across multiple tables. Oh, and they’ll need to belong to some sort of customer group, so that multiple people from the same company can log in. Let’s make a companies table. No, on second thought, let’s call it organizations, so that it is more generic and doesn’t imply a commercial company. Smart.
We’re only on the first step of this process, and you can already see this world taking shape. These concepts of users and organizations will become real, and will show up in reports for senior management for years to come. Let’s continue.
The organization also needs some fields. Let’s give it a unique ID, a name, a bunch of address fields, and a flag to indicate whether it’s active. Hmm. Maybe we should make addresses a separate table and associate it with the organization by address_id, so we can know when different organizations share the same address. But what if one of them changes it for both, or if one enters it all with caps lock on, and the other hates that? On second thought, never mind, we’ll just keep those fields on organizations.
Ok, now we have to figure out how we assign users to organizations. Maybe we should add an organization_id to the users table, so that each organization can own their users. It could make sense since we’re dealing with mostly company emails.
Ah, mostly. This is the first fork in the road, and we’re only two tables in.
It turns out that some customers have external contracting agencies order their widgets for them, and that some people from these agencies work for multiple customers. Since we decided to make email address part of the user’s login credentials, we need to let users choose the organization they are acting on behalf of after logging in, meaning we can’t have users belong to one and only one organization. From a database perspective, that typically means we’ll need a third table (perhaps organization_users) that associates users with multiple organizations by ID, and possibly contains additional data about the user, like their role within the specific organization. From an application standpoint, it means that we need to put limits on an organization’s ability to edit a user’s information, like their name and perhaps even their email address, because that user’s data has to appear for multiple customers and you don’t want them stomping over each other’s data and reports.
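The three-table shape described above can be sketched in TypeScript (the stack we picked earlier). The field names and the `organizationsForUser` helper are illustrative assumptions on my part, not a finished schema:

```typescript
// The three tables, sketched as TypeScript types. Field names here
// are illustrative, not a spec.
interface User {
  id: number;
  email: string;        // login credential, owned by the user
  firstName: string;
  lastName: string;
  passwordHash: string; // never store the raw password
}

interface Organization {
  id: number;
  name: string;
  active: boolean;
}

// The join table: one row per (user, organization) pair, plus any
// membership-specific data, like the user's role within that org.
interface OrganizationUser {
  userId: number;
  organizationId: number;
  role: "admin" | "member";
}

// After logging in, a user picks which organization to act on
// behalf of, from the memberships the join table gives them.
function organizationsForUser(
  userId: number,
  memberships: OrganizationUser[],
  orgs: Organization[]
): Organization[] {
  const orgIds = new Set(
    memberships
      .filter(m => m.userId === userId)
      .map(m => m.organizationId)
  );
  return orgs.filter(o => orgIds.has(o.id));
}
```

Note that nothing on the users table points at an organization; the membership lives entirely in the join table, which is what lets a contracting agent appear under several customers without duplicating their record.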
If we had chosen to put an organization_id on the users table, we would have gotten the model wrong. For this very specific example, anyway.
And this is only the very first step of the process. In our application, we need to choose everything from server-side routing frameworks to UI widget libraries, each one of which has their own assumptions, strengths, and weaknesses, and repercussions down the line. This is all before we have written a single line of code. Once that starts, a whole new set of decisions must be made:
- What do we want our backend’s API to look like? Is it a simple REST API, or do we want to do something like GraphQL? If it’s REST, what should the URL structure look like?
- How do we organize the code? Do we use raw database libraries or try to use an ORM? How much SQL in the code is too much SQL?
- On the frontend, how do we organize our components? How do we handle routing, and what does that structure look like?
- And so much more.
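To make the first of those questions concrete, here is one possible REST URL structure for the portal, with orders nested under organizations to reflect the multi-organization decision above. The route templates and the tiny matcher are illustrative assumptions, not a prescription:

```typescript
// One possible REST URL structure for the portal. The route
// templates and the matcher below are illustrative only.
const routes = [
  "GET /organizations/:orgId/orders",          // existing orders + status
  "POST /organizations/:orgId/orders",         // place a new order
  "GET /organizations/:orgId/orders/:orderId", // one order's details
];

// Match a concrete request against a template, pulling out the
// path parameters. A real framework does this for you; this just
// shows what the convention buys you.
function matchRoute(
  template: string,
  request: string
): Record<string, string> | null {
  const t = template.split(/[ /]+/);
  const r = request.split(/[ /]+/);
  if (t.length !== r.length) return null;
  const params: Record<string, string> = {};
  for (let i = 0; i < t.length; i++) {
    if (t[i].startsWith(":")) params[t[i].slice(1)] = r[i];
    else if (t[i] !== r[i]) return null;
  }
  return params;
}
```

Whatever structure you pick, picking it once and sticking to it is what matters; every later endpoint either flows with this convention or fights it.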
Each one of these decisions will ripple through to future developers, and perhaps even their children if the project lasts long enough and they happen to get a job at the same company doing the same thing as their parents. Hey, it could happen.
These decisions are all part of the model of your application, the part over which you have the most direct control. Software is a living thing, especially when it’s meant to last. The soundness of its design and the choices that went into it speak to how well it can continue to serve its purpose.
So how can we as programmers be safe in this dangerous pursuit?
Choosing the Right Abstractions
Programmers work with what are called abstractions: we create representations of real-world phenomena that hide a lot of the actual complexity of that phenomena, allowing us to narrow in on the details that are important to us. In many ways this is more an art form than a science, but it gets easier with practice as you come to develop a better sense of what is important and what you can let go.
One very popular paradigm that many programming languages implement is called object-oriented programming. It was especially popular starting in the mid-1990s with Java and C++, but it continues to have adherents today in those languages and others like Python and C#. For the uninitiated, object-oriented languages make this idea of “abstractions” very explicit. You could have code like this:
```java
public class User {
    public String emailAddress;
    public String firstName;
    public String lastName;

    public static User login(String emailAddress, String password) {
        // ...
    }
}

public class Manager extends User {
    public User[] employees;

    public void addEmployee(User employee) {
        // ...
    }

    public void removeEmployee(User employee) {
        // ...
    }
}
```
You can clearly see the real-world ideas of a user and a manager, and that the manager is a specialized type of user which has the ability to add and remove employee users from their employee list. But you can also see all of the stuff that we left out. Maybe those users have children themselves. They definitely have a hairColor, and a shoeSize, but we don’t really care about that for our purposes, so we kept it out of the model.
For these reasons (and others), these languages were often used for modeling business processes, as any programmer familiar with the conventions of the language can (probably) make sense of what we’re trying to represent. This discoverability is very important, as it allows people who didn’t originally write the code to get up to speed quickly.
In addition to object-oriented programming, there are other paradigms that are commonly used. Most recently, functional programming has been making lots of waves, with its use in languages like Erlang, Haskell, and (especially) Javascript/Typescript. (Side note: Most languages today aren’t strictly one paradigm or another, and often incorporate elements of other paradigms when it makes sense. The degree to which they do this can change over time, and the culture around the language can influence the relative popularity of different paradigms within a language over time.)
Functional programming lends itself very well to modeling event-driven processes, which happens to be very applicable to things like graphical user interfaces and protocols like HTTP that rely on requests and responses over the network.
In my opinion, functional programming requires a bit more work on the part of the programmer than object-oriented programming, in that you need to keep more context in your head as you go. With (well-designed) object-oriented classes, everything is simple and laid out in front of you: this class has these fields, and this method takes these arguments. If you want to call it, you just need to figure out where each of those arguments is coming from.
On the other hand, functional languages don’t always have a direct path through the code. In functional Javascript, you can easily find yourself in “callback hell”, where your functions call other functions, which may or may not return properly, and it’s difficult to know if your overall operation succeeded until you know the outcome of each of the nested functions. There’s even a website for it. Yes, you can mitigate it through proper error handling, through promises, or through a bunch of other strategies. The point is that you need to think through more edge cases, and that imposes a mental load.
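To make the contrast concrete, here’s a minimal sketch in TypeScript. `fetchUser` and `fetchOrders` are hypothetical stand-ins that call back immediately so the example is self-contained; the point is the shape of the code, not the fake data:

```typescript
// Hypothetical callback-style APIs. In real code these would hit a
// database or the network; here they call back synchronously.
type Callback<T> = (err: Error | null, result?: T) => void;

function fetchUser(email: string, cb: Callback<{ id: number }>): void {
  cb(null, { id: 42 });
}

function fetchOrders(userId: number, cb: Callback<string[]>): void {
  cb(null, ["widget-1", "widget-2"]);
}

// Callback style: each step nests inside the last, and every level
// must remember to check its own error or the failure is lost.
function ordersForUserCallbacks(email: string, done: Callback<string[]>): void {
  fetchUser(email, (err, user) => {
    if (err || !user) return done(err ?? new Error("no user"));
    fetchOrders(user.id, (err2, orders) => {
      if (err2 || !orders) return done(err2 ?? new Error("no orders"));
      done(null, orders);
    });
  });
}

// Promise style: wrap the callback APIs once, and the same flow
// reads top to bottom, with one rejection path covering every step.
function promisify<A, T>(fn: (arg: A, cb: Callback<T>) => void) {
  return (arg: A): Promise<T> =>
    new Promise((resolve, reject) =>
      fn(arg, (err, result) => {
        if (err || result === undefined) reject(err ?? new Error("no result"));
        else resolve(result);
      }));
}

async function ordersForUserAsync(email: string): Promise<string[]> {
  const user = await promisify(fetchUser)(email);
  return promisify(fetchOrders)(user.id);
}
```

The async version still makes you reason about failure, but it’s one catch for the whole chain instead of one check per level of nesting.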
This is the price to be paid for the efficiency the language offers for the task at hand, and it’s a totally acceptable price, but one that must be paid nonetheless.
If not managed correctly, this additional mental load can impact a program’s discoverability. The programmer that originally writes the module knows exactly what’s going on, and has all of the context for the operation in their head at the moment they’re writing the code. Five years later, the next guy to look at that code might have none of that. If it isn’t encapsulated well, the new guy will have to do a lot of learning and experimenting to understand what’s going on.
Any decently-sized application will need to choose the right paradigm for the right task, and will need to execute it well if the programmer hopes to get the model right.
What is this “culture” you speak of?
I think Steve Jobs summarizes it well in this clip from 1995:
Culture in this sense is the deep understanding of why something is good and necessary, how best to incorporate it into your product, and what the benefits will be once it is added. It’s the product of lots of experience and observation.
Some aspects of culture come from the technologies you use.
Programming has a vast number of subcultures, and this heavily influences the experience of anyone using a certain language. For example, PHP has historically been used for quick-and-dirty web applications. The culture around it tends to prioritize getting things done quickly and doesn’t give a damn about aesthetics. The primary documentation site, php.net, has a function reference section where all of the built-in functions are documented; each of these pages has a comment section where, for the last 30 years, people have been posting the esoteric problems they ran into and the solutions they came up with, and PHP developers think nothing of scouring this archive and copy-pasting that code directly into their new applications. (Side note: It is possible to write well-constructed PHP code. I think I saw some once.)
Java programmers, on the other hand, tend to be on the enterprise side. The language is often used at larger enterprises that adopted it back in 1999, when some salesperson convinced the CTO to buy IBM Websphere. These programmers have lots of meetings where they bring in entity diagrams and flowcharts and discuss multiyear timelines for when the software will be released. I am poking fun a little, but I don’t mean this in a pejorative way; the people who write Java applications tend to put the time into understanding what the application needs to do and work out the contingencies ahead of time, which makes sense when the software is powering a billion-dollar company and any downtime costs multiple people’s annual salaries.
More recently, we’ve seen a subculture coalesce around Meta’s ReactJS, a Javascript/Typescript library traditionally used for single-page applications. It seems like every week there is a new framework or component library released for React, trying to make the library easier to use, or filling in some new niche. From the outside, it seems like this subculture embraces rapid iteration, code elegance and aesthetics. Typically new entrants announce themselves with beautiful websites and concise tutorials.
The shared context people get from using a technology intensively helps reinforce “conventional wisdom” with that group, and can provide guardrails if applied carefully. But you can’t stop there.
Truly understanding culture involves critically looking at that conventional wisdom, and figuring out what parts work for you and which don’t.
There is no right and wrong in programming.
To be clear, there is shoddy work and quality work, but for any process you follow or ritual you do every single time, there is an argument to be made for not doing it that way in a particular circumstance. Except maybe using version control. Commit your code, everyone.
More specifically, the culture you bring to your code and your project informs how well it will get the model right. Your experience tells you when to be super uncompromising about a certain segment of the code, and which parts you can be quick-and-dirty with to save time, and your application of those standards becomes the culture you pass down to your project.
That was a lot of words.
Yes, it was. I’m working on some other essays and I wanted to get all of this out there first, as I’ll be referring back to it.
Think of this as me tapping the sign.
Thank you for making it all the way to the end! While you’re still here, I’d be remiss if I didn’t mention RestlessIDE, the (future) best web-based development environment in the world. You should try it out, I think you’d like it.
Til next time.
(Photo by Jaime Spaniol on Unsplash)