Vaishnavi Gudur

Balancing Bytes and Ethics: A Software Engineer's Journey to Integrating Ethical Considerations into AI/ML Infrastructure

Abstract

Rapid advances in AI have created a pressing need to integrate ethical considerations into AI/ML systems, especially in high-stakes domains such as cybersecurity. This commentary explores a Microsoft senior software engineer's experience embedding ethical frameworks into AI and machine learning infrastructure. The narrative examines practical barriers to deploying explainable AI tools such as LIME, the perception that ethical safeguards obstruct innovation, and cross-disciplinary lessons from biotechnology that can inform AI ethics. The article also considers technical challenges, differential privacy in AI systems among them, and the need to prepare for coming AI ethics regulation. The objective is a more balanced relationship between innovation and responsibility, so that AI systems are transparent, accountable, and equitable.

Introduction

The realization hit me like a bolt of lightning. I was at a conference, happily nerding out on AI progress, when a speaker asked a question that rocked me awake: "What happens when your AI makes a life-altering decision based on biased data?" This wasn't just a question. It was a rude nudge into a space I had been ignoring. As a Senior Software Engineer at Microsoft working on AI-powered security systems, I learned that adding an ethical dimension isn't just a box to check. It's a duty. When I work with AI and machine learning (ML) systems, especially in the fast-moving world of cybersecurity for Microsoft Teams, accountability and transparency are not nice-to-haves; they are necessities. But putting ethical considerations at the center of our AI/ML infrastructure is no picnic. It's a dance between being responsible and being creative, and I learned a few steps along the way.

Outline

  • An Introduction To Ethics in AI: A Personal Awakening.

  • Setting the Stage: Beginning with AI Accountability Frameworks.

  • The Ethics Problem: Bottleneck or Roadmap?

  • Cross-Disciplinary Lessons From Biotechnology.

  • The Technical Deep-Dive: How to Use Differential Privacy.

  • The Way Forward: Preparing for Ethical AI Rules.

  • Conclusion: the intersection of responsibility and innovation.


An Introduction To Ethics in AI: A Personal Awakening.

That conference question stayed with me long after the talk ended. I had treated ethics as something adjacent to engineering, a concern for policy teams rather than for the people writing the code. Watching AI-driven security systems make decisions that affect real users convinced me otherwise: ethical awareness has to live inside the engineering process itself, not be bolted on at the end. The sections that follow trace how that awakening played out in practice, from accountability frameworks to differential privacy and preparing for regulation.

Setting the Stage: Beginning with AI Accountability Frameworks.

The need for accountability in AI systems mirrors the need to keep code bug-free: both require systematic attention and proactive solutions. Microsoft is building proactive AI accountability frameworks into its operational pipelines to help mitigate the potential harms of AI. Among these are explainable AI (XAI) technologies, which in my experience build enormous user trust in an application. When developing threat detection systems, the team uses LIME (Local Interpretable Model-agnostic Explanations) to show people how complex AI decisions are made; we adapted the approach to explain security-threat classifications while keeping the explanations faithful. It is like holding a magnifying glass up to parts of the AI system: choices that once seemed opaque become legible. LIME is not a silver bullet; it does not scale well to very large applications and can oversimplify a model's behavior. But it is a welcome step toward openness, and a minimal sketch of what a LIME call looks like follows below.
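
For readers who have not used LIME before, here is a minimal sketch of what an explanation call looks like for a tabular classifier. Everything in it, the synthetic data, the feature names, and the random-forest stand-in model, is a hypothetical illustration rather than our production threat-detection system:

```python
# Minimal LIME sketch for a tabular classifier. The data, feature names,
# and model are hypothetical stand-ins, not a production system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                        # synthetic telemetry features
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)  # synthetic "threat" label

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["login_rate", "geo_variance", "failed_auths", "payload_size"],
    class_names=["benign", "threat"],
    discretize_continuous=True,
)

# LIME perturbs the instance and fits a local linear model, so each
# (feature, weight) pair shows how that feature pushed this one decision.
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The magnifying-glass metaphor shows up in the output: each prediction comes with a ranked list of the features that drove it, which is exactly what an analyst needs to sanity-check an alert.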

The Ethics Problem: Bottleneck or Roadmap?

A common belief is that ethics prevents new ideas from being born. At first it seemed true: when we applied differential privacy techniques to protect privacy in our AI systems, our deployment process took longer. Honestly, some of my coworkers were frustrated by the extra review steps and the more frequent iterations. But I would argue that ethics isn't a bottleneck; it's a roadmap. The real question is how to build in ethical checkpoints without stalling development. In practice, that means setting clear stage gates for ethical review in the development lifecycle, much as we set milestones in code review to ensure quality; a minimal sketch of such a gate follows below. It takes building a team that sees ethics as a way to get things done, not as an obstacle. That mindset shift has transformed how my teams work and freed us to keep creating.
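
To make the stage-gate idea concrete, here is one way such a checkpoint could be automated as a CI step. The checklist items, the epsilon threshold, and the function names are illustrative assumptions of mine, not Microsoft's actual review process:

```python
# Illustrative ethics stage gate for a CI pipeline. The checks and the
# epsilon threshold are hypothetical, not an actual Microsoft process.
import sys

def run_ethics_gate(release_metadata: dict) -> list[str]:
    """Return the list of failed checks; an empty list means the gate passes."""
    failures = []
    if not release_metadata.get("bias_eval_completed"):
        failures.append("bias evaluation not run on the candidate model")
    if not release_metadata.get("explainability_report"):
        failures.append("no explainability report (e.g., LIME summaries) attached")
    if release_metadata.get("privacy_epsilon", float("inf")) > 8.0:
        failures.append("differential-privacy budget exceeds the agreed limit")
    return failures

if __name__ == "__main__":
    candidate = {
        "bias_eval_completed": True,
        "explainability_report": "reports/lime_summary.html",
        "privacy_epsilon": 4.0,
    }
    failed = run_ethics_gate(candidate)
    if failed:
        print("Ethics gate FAILED:\n- " + "\n- ".join(failed))
        sys.exit(1)  # fail the build, just like a broken unit test would
    print("Ethics gate passed; deployment may proceed.")
```

Treating the gate like any other failing test keeps the review inside the same pipeline engineers already watch, which is the spirit of the code-review analogy above.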

Cross-Disciplinary Lessons From Biotechnology.

For inspiration, I look beyond software to biotechnology. The parallels in handling highly sensitive information are striking: biotechnology, like artificial intelligence, has faced intense ethical scrutiny, especially around genetic data. Earlier in my career at Cerner Corporation, I built patient registration and other healthcare systems that were heavily regulated, with data-retention requirements under rules as stringent as HIPAA. Biotechnology's clarity around data management and broad consent policies shifted my thinking about AI ethics. We have since changed how we obtain user consent for data collection in AI systems, ensuring users know how their data will be used and have actually agreed to it; a sketch of purpose-bound consent checking appears below. Transparency matters most in distributed, multi-tenant cloud environments, where protecting user data is essential. This approach made compliance easier for engineers and made users more trusting of AI.
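
As a small illustration of what purpose-bound consent can look like in code, here is a sketch of a consent record checked before any data use. The field names and purposes are hypothetical, not Cerner's or Microsoft's actual schema:

```python
# Sketch of purpose-bound, tenant-scoped consent checking. Field names
# and purposes are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    tenant_id: str                        # multi-tenant clouds: consent is scoped per tenant
    allowed_purposes: set[str] = field(default_factory=set)

def may_use(record: ConsentRecord, tenant_id: str, purpose: str) -> bool:
    """Allow data use only for purposes the user consented to, and only
    within the tenant where that consent was granted."""
    return record.tenant_id == tenant_id and purpose in record.allowed_purposes

consent = ConsentRecord("user-42", "tenant-a", {"threat_detection"})
print(may_use(consent, "tenant-a", "threat_detection"))  # True
print(may_use(consent, "tenant-a", "model_training"))    # False: never consented
print(may_use(consent, "tenant-b", "threat_detection"))  # False: wrong tenant
```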

The Technical Deep-Dive: How to Use Differential Privacy.

Differential privacy is often described as the gold standard for protecting user data in AI systems, but putting it into practice is another matter entirely. In my work, the hardest part was grasping the trade-offs between privacy guarantees and model performance, but the effort was worth it. Applying differential privacy to AI systems means calibrating noise to balance privacy against usefulness. For instance, too much noise in a threat detection model makes it insensitive to genuine anomalies. To manage this, my team ran extensive tests and tuned epsilon (the parameter governing privacy loss) until the models performed well without compromising users' privacy. Starting early with small experiments reveals how differential privacy affects the model; the toy sweep below illustrates the idea. Using this iterative approach, we designed a framework that preserves privacy without sacrificing too much accuracy. I hope other engineers can learn this lesson without the trouble we had at first.
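
The epsilon tuning described above is easiest to see on a toy example. The sketch below uses the textbook Laplace mechanism on a synthetic counting query; this is my simplification for illustration, not the production framework. Smaller epsilon means stronger privacy and noisier answers, which is exactly the trade-off the team had to test its way through:

```python
# Toy sweep of the Laplace mechanism showing the privacy/utility trade-off:
# smaller epsilon = stronger privacy = noisier released counts.
# (Textbook mechanism on synthetic data, not a production system.)
import numpy as np

rng = np.random.default_rng(0)
flags = rng.integers(0, 2, size=10_000)   # synthetic per-user "anomaly" flags
true_count = int(flags.sum())
sensitivity = 1.0                          # one user changes the count by at most 1

def dp_count(count: int, epsilon: float) -> float:
    """Release the count with Laplace noise scaled to sensitivity / epsilon."""
    return count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

for epsilon in (0.1, 0.5, 1.0, 5.0):
    errors = [abs(dp_count(true_count, epsilon) - true_count) for _ in range(1000)]
    print(f"epsilon={epsilon:>4}: mean abs error over 1000 releases = {np.mean(errors):7.2f}")
```

Running a sweep like this before committing to an epsilon is the "start early with small tests" advice in practice: you see the utility cost of each privacy budget before it ever touches a real model.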

The Way Forward: Preparing for Ethical AI Rules.

The next wave of AI ethics regulation isn't far down the road, and it's coming quickly. Having served on advisory committees for AI ethics standards, I have seen firsthand how the rules are evolving. The companies treating compliance as a first-class requirement of their AI systems today will be the ones best positioned to compete. At Microsoft, we have been experimenting with compliance-first AI development, incorporating ethical guidelines early in the design process. This proactive approach not only prepares us for changes in the law; it positions us as leaders in using AI ethically. The advice is simple: engineers should build regulatory considerations into the development process from the start. Doing so protects your system from potential violations and makes your brand more credible.

Conclusion: the intersection of responsibility and innovation.

Adding ethics to AI/ML infrastructure is a delicate balance, requiring continuous adjustment between innovation and accountability. Along the way I have hit many bumps in the road, found new possibilities, and, most importantly, learned good lessons. In sharing them, I hope other engineers come to see ethical AI as an integral part of how we build technology rather than just another hurdle. By adopting accountability frameworks, examining how other industries handle similar problems, and preparing for changes in the law, we can innovate with confidence that we remain responsible. Embrace it; this is the first step toward creating AI you can take real pride in.
