I'm a software engineer with experience across the software development lifecycle. My primary interest is software development methodologies and software process improvement.
I need to dig deeper into this and watch the whole series of videos to come to any complete conclusions.
But on the subject of ethics in software engineering, I don't think we need more codes. The major professional organizations relevant to computing professionals (the IEEE, ACM, British Computer Society, Project Management Institute, and so on) each have a code of conduct for their members. On top of these, the IEEE Computer Society and ACM collaborated to create the Software Engineering Code of Ethics and Professional Practice - although it hasn't been revised since the late 1990s, it still tends to be very relevant. When I was in school, the Software Engineering Code of Ethics and Professional Practice was the centerpiece for ethical discussions.
Looking at this Oath, I'm not entirely convinced yet that it adds any value.
"I will not produce harmful code." - What does 'harmful code' mean? It's an incredibly ambiguous phrase. Is it code that harms people? Does that mean that the software that is part of a missile system harmful because a missile could be used to injure or kill people? On the other hand, a missile system can also be used to protect more people from harm by destroying other missiles or weapons. Any useful code of ethics needs to be more specific. Consider the SE Code, which talks about quality of life, privacy, environmental harm, public good and so on.
"The code that I produce will always be my best work. I will not knowingly allow code that is defective either in behavior or structure to accumulate." - This seems good at the surface. However, in real world projects, you need to balance fixing issues with delivering value. It's about trade-offs. Is the quality appropriate for the particular product? Is the impact of allowing defects or technical debt communicated and accepted by stakeholders? Communicating risk and understanding business needs is part of the domain of software engineering. This should not be a hard and fast rule to follow.
"I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should." - Again, this goes to the demands of the business. This is not always feasible from a business perspective. As engineers, we need to take into account economics (along with mathematics, science, technology, and social knowledge) when designing, building, and maintaining software systems.
"I will make frequent, small, releases so that I do not impede the progress of others." - I don't have a problem with this, for some definition of 'release'. I think it's important to recognize that 'release' doesn't necessarily mean to deliver to end users, but to have a product that we, as engineers, believe is ready to go through the process by which it is released to end users.
"I will fearlessly and relentlessly improve my creations at every opportunity. I will never degrade them." - Although I agree with the sentiments here, it's not always possible in a professional environment. The improvements need to be balanced with the business needs of the organization. If a component is being replaced with an alternative, you may need to degrade the system temporarily for the greater good - easier maintainability, greater security, or improving development speed for new features.
"I will do all that I can to keep the productivity of myself, and others, as high as possible. I will do nothing that decreases that productivity." - I don't know if I can really disagree from an individual contributor perspective. However, after working in regulated industries, there are things that are required. Customer requirements or legal regulations can require things that decrease productivity. Depending on the role in the organization (I'm looking at managers, leads, process improvement experts, agile coaches or Scrum Masters), the goal should be to maximize productivity or minimize decreases in productivity and not to "do nothing that decreases that productivity". Really only the second half of this statement is the problem.
"I will continuously ensure that others can cover for me, and that I can cover for them." - I don't even know what this means yet. Is this a promotion of shared ownership? If so, I'd agree with it. But what does 'cover for them' mean? It could be taken to mean anything from helping colleagues to learn, grow, and develop (good) to covering for unethical behavior (bad). This just needs to be made less ambiguous.
"I will produce estimates that are honest both in magnitude and precision. I will not make promises without certainty." - I prefer 'realistic' to 'honest'. But beyond making estimates, there is a demand on management and leadership to protect and defend those estimates, to protect the team to enable them to do the job that needs to be done.
"I will never stop learning and improving my craft." - I think this is the only thing that I really can't argue with.
In the end, I think we should turn to these well-vetted codes of ethics that have been reviewed and discussed by broad communities across industries. We have the right foundations; those foundations just need more visibility and discussion.
Something that appears to be neglected is that a code of ethics can serve two purposes. The first is enforcement: organizations adopt these codes and can remove members who violate them. That adds value to the organization (protection against malicious members) and to members (there's value in claiming to be a member in good standing of an organization that ensures the good behavior of its members). The second is aspiration: a code that gives us guidance on what we strive to be.
In these discussions, people promoting various codes of ethics often tout them as an alternative to outside regulation. However, unless the code is enforceable, with some kind of consequence for violations, it's generally ineffective for that purpose. This code is not enforceable unless it's adopted by an organization, so until then it's aspirational. At the same time, I don't think it's suitable as an aspirational code due to its ambiguity and lack of clarity.
To clarify point 7, "others can cover for me, and that I can cover for them": Bob Martin seems to be talking about shared ownership. He warns about "silos of knowledge" - if one teammate goes down, the others are able to take over. In the video, Bob mentions pair programming as one way to solve this problem.
I agree that without an entity powerful enough to enforce those rules, they are just concepts that won't change anything for malicious people.
To come back to your first purpose of a code of ethics: do you believe that such codes should or could be taken to a higher level? Some professions have entities that enforce their rules at a larger scope.
That being said, it seems to me that it would be incredibly complicated to achieve. It could probably also make it harder for people without a computer science background to break into the field.
As far as point 7 goes, if you need to watch a series of videos (even if each is only a couple of minutes long) to understand it, the text needs to be improved. People shouldn't walk away with multiple, vastly different interpretations after reading it.
As far as codes of ethics go, yes, it's complicated. But I think there are a few approaches and facets.
The first facet is professional organizations. The leading global organizations are the ACM and IEEE (specifically the Computer Society). More local organizations also have a role. All of these organizations should have a code of ethics that is clear and enforceable, with membership contingent on following it. This code should be an absolute baseline for things that we agree on, as a global community of software engineers. Local organizations may have other local considerations. In addition to ethics, I think these organizations also have a role in taking policy positions, connecting professionals and experts in various topics to governments, and perhaps even lobbying.
The second aspect is one or more aspirational codes of ethics. The Software Engineering Code of Ethics and Professional Practice is one example; Robert Martin's is another, although it would be better if it were less ambiguous. These are the types of things that should be taught in schools, from universities to bootcamps. Ideally, professional organizations could normalize on these aspirations for the profession. They would capture the things we strive for and give us a framework for making decisions, but no one would be bound to them. I can see some things that start in an aspirational code eventually being fleshed out, discussed, and rolled up into an enforceable code for members of a professional organization.
The final aspect is licensure. In the US, NCEES does have a Principles and Practice of Engineering (PE) exam for Software. However, there are problems: it's hard to qualify for (there's no good FE exam for recent graduates of many computer science or software engineering programs, and it's hard to find PEs to work under), and in many areas there's no real motivation to even consider the exam. I'm opposed to blanket licensure of software engineers, but I can see the need for software engineers working in at least some industries (I'm thinking mostly of life-critical systems - aerospace, automotive, medical devices, etc.) to be licensed, or at least for a licensed software engineer to be required to oversee the work of non-licensed software engineers.
I do think we need to be aware of people who come from non-traditional educational backgrounds - university education in non-technical fields, graduate certificate programs, self-taught individuals, people who complete bootcamps, etc. I've had the opportunity to work with people who fall into these categories, and many are great engineers. They also need to be taught these basic ethical concepts and frameworks for understanding and dealing with the implications of the software they make. Even outside of critical systems, you can point to ethical challenges around software companies and the products they make - Facebook, Twitter, Snapchat, Uber, and so on.
However, for people who come from these non-traditional educational backgrounds, there may be a set of industries or environments where they cannot work, or can only work in a limited capacity, and where formal education in software engineering, computer science, or computer engineering is required to reach higher career levels. Again, I'm mainly looking at areas where life-critical systems are being developed.