
Damien Cosset


The Programmer's Oath

It's everywhere!

Software is everywhere. In our cars, our planes, our computers. Our banks use software. We use it to communicate every single day, every single second. Our governments can't pass laws without relying on software at some point. Hell, even my grandmother uses software! She has a bank account, a car and some insurance! No one escapes it.

We, as programmers, have the potential to create great things and empower people with technology. We can make their lives better. Unfortunately, where you can do good, you can also do harm.

What would happen to our profession if a disaster happened because of us? A plane crashing into a football stadium because of defective software? I mean, we have already written cheat code to let cars pass pollution tests (Volkswagen). In our jobs, we have no regulation at all. Libraries, tools, languages... we can use whatever we want. If we show that we are not responsible with the code we write, and because our jobs are so important, the people in charge will most likely do the only thing they can do: legislate.

They will tell us what degree we must have to code, what languages to use, what tools to use...

With that in mind, you may have heard about Bob Martin's Programmer's Oath. Recently, FreeCodeCamp released a series of short videos in which Bob explains his idea of a Programmer's Oath. You can watch the first video here.

If you don't want to watch, here is a quick summary of the 9 short videos.

The Programmer's Oath

1) I will not produce harmful code.
2) The code that I produce will always be my best work. I will not knowingly allow code that is defective either in behavior or structure to accumulate.
3) I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should.
4) I will make frequent, small, releases so that I do not impede the progress of others.
5) I will fearlessly and relentlessly improve my creations at every opportunity. I will never degrade them.
6) I will do all that I can to keep the productivity of myself, and others, as high as possible. I will do nothing that decreases that productivity.
7) I will continuously ensure that others can cover for me, and that I can cover for them.
8) I will produce estimates that are honest both in magnitude and precision. I will not make promises without certainty.
9) I will never stop learning and improving my craft.

Only a small number of programmers write code that could kill people. I certainly don't. But software kills; programmers kill. Don't we owe it to ourselves, and to the people we are trying to empower, to make sure that we are doing the best we can?

Before something terrible happens, shouldn't we, as a community, independently of any government or organisation, manage ourselves and make sure that the people writing code are responsible, mature and respect an oath, just like doctors or lawyers might?

Should we have the right to tell a person violating our oath: you can't code anymore, because you repeatedly produce code that harms other people?

Shouldn't we have the power to tell our bosses and managers: NO, I will not do this, it violates my oath? And to say this without any fear of losing our jobs, or our ability to do great work?

As Bob Martin said in one of his lectures, software runs the world. We may not have grasped this concept yet, but programmers run the world.

The very nature of the job market makes it easy for people to start coding professionally. I know this: I'm self-taught, and I have no fucking clue what I'm doing 95% of the time. What if a person like me writes the code for your car? For your bank account? Would you trust me?

What do you think? Is this overkill? I believe the idea of ethics is a fascinating one, and sometimes I can't help thinking about the worst case scenario.

Latest comments (8)

Jason C. McDonald

Yes! I believe we absolutely need a "Programmer's Oath" in the field, although I might add one more item:

10) I will not use my knowledge or skill to denigrate or devalue others.

Too often, coders display an air of superiority over people who don't know what they do. Sometimes this is the result of imposter syndrome, sometimes not, but either way it's harmful. We should be using our skills and knowledge to build others up and empower them, not to tear them down.

Thomas J Owens

I need to dig deeper into this and watch the whole series of videos to come to any complete conclusions.

But on the subject of ethics in software engineering, I don't think we need more codes. The major professional organizations relevant to computing professionals (the IEEE, ACM, British Computer Society, Project Management Institute, and so on) each have a code of conduct for their members. On top of these, the IEEE Computer Society and ACM collaborated to create the Software Engineering Code of Ethics and Professional Practice - although it hasn't been revised since the late 1990s, it still tends to be very relevant. When I was in school, the Software Engineering Code of Ethics and Professional Practice was used as the centerpiece for ethical discussions.

Looking at this Oath, I'm not entirely convinced yet that it adds any value.

"I will not produce harmful code." - What does 'harmful code' mean? It's an incredibly ambiguous phrase. Is it code that harms people? Does that mean that the software that is part of a missile system harmful because a missile could be used to injure or kill people? On the other hand, a missile system can also be used to protect more people from harm by destroying other missiles or weapons. Any useful code of ethics needs to be more specific. Consider the SE Code, which talks about quality of life, privacy, environmental harm, public good and so on.

"The code that I produce will always be my best work. I will not knowingly allow code that is defective either in behavior or structure to accumulate." - This seems good at the surface. However, in real world projects, you need to balance fixing issues with delivering value. It's about trade-offs. Is the quality appropriate for the particular product? Is the impact of allowing defects or technical debt communicated and accepted by stakeholders? Communicating risk and understanding business needs is part of the domain of software engineering. This should not be a hard and fast rule to follow.

"I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should." - Again, this goes to the demands of the business. This is not always feasible from a business perspective. As engineers, we need to take into account economics (along with mathematics, science, technology, and social knowledge) when designing, building, and maintaining software systems.

"I will make frequent, small, releases so that I do not impede the progress of others." - I don't have a problem with this, for some definition of 'release'. I think it's important to recognize that 'release' doesn't necessarily mean to deliver to end users, but to have a product that we, as engineers, believe is ready to go through the process by which it is released to end users.

"I will fearlessly and relentlessly improve my creations at every opportunity. I will never degrade them." - Although I agree with the sentiments here, it's not always possible in a professional environment. The improvements need to be balanced with the business needs of the organization. If a component is being replaced with an alternative, you may need to degrade the system temporarily for the greater good - easier maintainability, greater security, or improving development speed for new features.

"I will do all that I can to keep the productivity of myself, and others, as high as possible. I will do nothing that decreases that productivity." - I don't know if I can really disagree from an individual contributor perspective. However, after working in regulated industries, there are things that are required. Customer requirements or legal regulations can require things that decrease productivity. Depending on the role in the organization (I'm looking at managers, leads, process improvement experts, agile coaches or Scrum Masters), the goal should be to maximize productivity or minimize decreases in productivity and not to "do nothing that decreases that productivity". Really only the second half of this statement is the problem.

"I will continuously ensure that others can cover for me, and that I can cover for them." - I don't even know what this means yet. Is this a promotion of shared ownership? If so, I'd agree with it. But what does 'cover for them' mean? It could be taken to mean anything from helping colleagues to learn, grow, and develop (good) to covering for unethical behavior (bad). This just needs to be made less ambiguous.

"I will produce estimates that are honest both in magnitude and precision. I will not make promises without certainty." - I prefer 'realistic' to 'honest'. But beyond making estimates, there is a demand on management and leadership to protect and defend those estimates, to protect the team to enable them to do the job that needs to be done.

"I will never stop learning and improving my craft." - I think this is the only thing that I really can't argue with.

In the end, I think that we should be turning to these more well-vetted codes of ethics that have been reviewed and discussed by broad communities across industries. We have the right foundations, but those foundations need more visibility and discussion.

Something that appears to be neglected is that there are two purposes for a code of ethics. The first is a code that can be enforced. Organizations have these codes and can remove members who violate the terms. That adds value to the organization (protection against malicious members) and to members (there's value in claiming to be a member in good standing of an organization that ensures good behavior of its members). The second is an aspirational code that gives us guidance on what we strive to be.

Often, in these discussions, people promoting various codes of ethics tout them as a solution to outside regulation. However, unless the code is enforceable, with some kind of consequence for violations, it's generally ineffective for this purpose. This code is not enforceable unless it's adopted by an organization, so until then it's aspirational. At the same time, I don't think it's suitable as an aspirational code due to its ambiguity and lack of clarity.

Damien Cosset

To clarify on point 7) "others can cover for me, and that I can cover for them". Bob Martin seems to talk about shared ownership. He warns about "silos of knowledge". If one teammate goes down, the others are able to take over. In the video, Bob talks about pair programming as one way to solve this problem.

I agree that without an entity powerful enough to enforce those rules, they are just concepts that won't change anything for malicious people.

To come back to your first purpose of a code of ethics: do you believe that such codes should, or could, be taken to a higher level? Some professions have entities that enforce their rules at a large scale.

With that being said, it seems to me that it would be incredibly complicated to achieve this. It would probably also make it harder for people without a computer science background to break into the field.

Thomas J Owens

As far as point 7 goes, if you need to watch a series of videos (even if each is only a couple of minutes long) to understand the text, it seems like the text needs to be improved. People shouldn't walk away with multiple, vastly different interpretations after reading it.

As far as codes of ethics go, yes, it's complicated. But I think there are a few approaches and facets.

The first facet is professional organizations. The leading global organizations are the ACM and IEEE (specifically the Computer Society). More local organizations also have a role. All of these organizations should have a code of ethics that is clear and enforceable, with membership contingent on following it. This code should be an absolute baseline for things that we agree on, as a global community of software engineers. Local organizations may have other local considerations. In addition to ethics, I think these organizations also have a role in taking policy positions, connecting professionals and experts in various topics to governments, and perhaps even lobbying.

The second facet is one or more aspirational codes of ethics. The Software Engineering Code of Ethics and Professional Practice is one example. Robert Martin's is another, although it would be better if it were less ambiguous. These are the types of things that should be taught in schools, from universities to bootcamps. Ideally, professional organizations would normalize on these aspirations for the profession, but these codes would capture the things that we strive for, or give us a framework for making decisions, without anyone being bound to them. I can see some things that start in an aspirational code eventually being fleshed out, discussed, and rolled into an enforceable code for members of a professional organization.

The final facet is licensure. In the US, NCEES does have a Principles and Practice of Engineering (PE) exam for Software. However, there are problems. It's hard to qualify for: there's no good FE exam for recent graduates of many computer science or software engineering programs, it's hard to find PEs to work under, and in many areas there's no real motivation to even consider the exam. I'm opposed to blanket licensure of software engineers, but I can see the need for software engineers working in at least some industries (I'm thinking mostly of life-critical systems - aerospace, automotive, medical devices, etc.) to be licensed, or at least for a licensed software engineer to be required to oversee the work of non-licensed software engineers.

I do think that we need to be aware of people who come from non-traditional educational backgrounds - university education in non-technical fields, graduate certificate programs, self-taught individuals, people who complete bootcamps, etc. I've had the opportunity to work with people who fall into these categories and many are great engineers. They also need to be taught these basic ethical concepts and frameworks for understanding and dealing with the implications of the software they make. Even outside of critical systems, you can point to ethical challenges around software companies and the products they make - Facebook, Twitter, Snapchat, Uber, and so on.

However, for people who come from these non-traditional educational backgrounds, there may be a set of industries or environments where they cannot work, or can only work in a limited capacity, and where formal education in software engineering, computer science, or computer engineering is required to reach higher career levels. Again, I'm mainly looking at areas where life-critical systems are being developed.

edA‑qa mort‑ora‑y

I want to agree with such an Oath, but some of the points just bother me.

2 & 3) The code I produce is constrained by the resources provided to me in the project. Blindly seeking perfection is a sure way to sink a project. Everything is a compromise, and this includes code quality. While we shouldn't write crap code with no tests nor should we attempt to write ideal code with perfect tests.

8) I kind of have this feeling that if a business depends on accurate programming estimates they're already off to a bad start. I agree we should try to make reasonable estimates, but we should also have a process that can deal with inaccuracies.

1) This is a moral statement that would obviously conflict with anybody working on software in the defense industry.

Damien Cosset

Thanks for your input! You're actually one of the people who inspired me to write this, after I read 'I'm proud to be a programmer'.

I think I agree on the 'harmful code' point. This is where making a summary of the videos is actually harmful to the point :D. I apologise for that.

I guess it comes down to each person's judgement. One could focus on not producing code that harms one's customers, fellow developers, or organisation. Your example about the defense industry is a good one. We could also add programmers working for a government who are asked to write code enforcing a law that could be considered 'harmful' to certain citizens. I'm sure we could find many examples like yours where the 'harmful' part is only a matter of perception.

Bob Martin makes a distinction between code that is defective in behavior and code that is defective in structure. I honestly don't fully understand the difference between the two. I'm sure someone more experienced than me could explain it.

As you said, it really depends on the environment in which you work. And I believe that we could create many different statements and oaths to make ourselves feel better. But in the end, if the entire community of programmers (or at least a big chunk of it) doesn't believe in such an oath, why would management? Nothing will be done.

edA‑qa mort‑ora‑y

Maybe we can agree that "harmful" means something counter to the interests of the primary stakeholders. I think that's the primary intent. Of course it doesn't prevent virus writers, since their primary stakeholders are the scum who hire them. It's a hard point to make. It has the same problem as the Hippocratic oath's "do no harm": well meant, but fuzzy.

Defective in behaviour likely means the code doesn't fulfill its requirements. Defective in structure probably means it fulfills its requirements but has problems at the code level, possibly even failing certain non-functional requirements.
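A tiny hypothetical sketch of the distinction (my own example, not from the videos), using a price total that is supposed to add a 20% tax:

```python
def total_with_tax_wrong_behavior(prices):
    # Defective in behavior: the requirement says 20% tax, but this adds 2%.
    return sum(prices) * 1.02

def total_with_tax_poor_structure(prices):
    # Behaviorally correct (it does add the 20% tax), but defective in structure:
    # a magic number, duplicated arithmetic, and nothing reusable.
    total = 0
    for price in prices:
        total = total + price + price * 0.20
    return total

TAX_RATE = 0.20

def total_with_tax(prices):
    # Correct in behavior and cleaner in structure: the rate is named and applied once.
    return sum(prices) * (1 + TAX_RATE)
```

The first function fails its requirement outright; the second passes its tests but is the kind of code that quietly accumulates as structural debt.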

Note, I agree with the other points. I wish everybody cared a lot about the work they do.

Ben Halpern

In programming, there is always a strong back-and-forth between what people strive for and what they end up with because of reasonable tradeoffs. I find folks like Bob Martin usually concede that this is clearly the case, but they don't always do a good job of acknowledging that reality in the initial statement.

The open/closed principle is an example of a concept I find to be a nice ideal, but none of my work has ever been able to get anywhere near complying with it, because predicting the future is hard. I've found it a bit unhelpful in that it is rarely described as a good idea that might not come to pass in practice.
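For anyone who hasn't run into it, here is a minimal, hypothetical sketch of what complying with open/closed is supposed to look like: the dispatch code is closed to modification but open to extension, so a new discount rule is a new class rather than an edit to existing code. It only pays off if the requirements really do change along the axis you predicted.

```python
from abc import ABC, abstractmethod

class DiscountRule(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountRule):
    def apply(self, price: float) -> float:
        return price

class PercentageDiscount(DiscountRule):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

def checkout(price: float, rule: DiscountRule) -> float:
    # This function never changes when a new rule is added, as long as the new
    # requirement really is "a different discount", which is exactly the
    # prediction that is hard to get right.
    return rule.apply(price)

print(checkout(100.0, PercentageDiscount(10)))  # 90.0
```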

Perhaps some of these concepts would be best understood if the room for error was more baked into the initial wording rather than something tacked on when discussing the matter in practical terms.