I do not claim to be an expert on these topics; rather, I just wish to share my perspective in hopes of spurring interesting dialogue around the topics discussed.
Over the past few years, the tech landscape has seen a significant shift with an intense focus on generative artificial intelligence (Gen AI). There has been quite a lot of excitement surrounding AI's potential use cases, but that excitement has been consistently intertwined with apprehension about its unknown implications.
While the capabilities of AI are undoubtedly powerful, rather than debate the promises or perils of this specific technology, I aim to explore some of the critical sociotechnical themes that I think warrant consideration in the context of AI. I do not intend to focus on technical details or succumb to the common narrative that AI will unequivocally "transform the world." Instead, I would like to examine how this technology has opened "new" cultural doors that require us to think more carefully about how we approach it and the implications of its use.
There are three crucial themes that I think deserve focus as we move into the future with AI: the accessibility of advanced problem-solving, the irreplaceable nature of human judgment, and the imperative of responsible innovation.
While AI has been a presence in the business world for a decade (subject to interpretation regarding what formally counts as AI versus merely a sophisticated algorithm requiring significant computational power), it was primarily accessible only to the largest organizations. Thanks to advances in general computing power and research, AI is now more accessible than ever before. New AI models, software, and tools capable of making complex predictions are no longer confined to large enterprises. Small organizations and individuals can now harness the power of AI to solve problems that previously demanded extensive technical expertise and resources. However, I would like to shift the focus away from AI a bit and concentrate on the broader theme of accessibility in technology, specifically how complex problem-solving and automation are becoming more accessible.
For the longest time, the ability to solve complex problems and automate tasks was restricted to those with extensive technical expertise. Speak to any engineer, especially in software, and they likely have a story about automating some random task to save time and effort, or building a tool to address issues they've encountered. Software engineers are a remarkably resourceful group, and I say that with the utmost respect (and as one). They exhibit a type of "laziness" in the sense that they are always seeking ways to automate tasks to save time and effort, allowing them to focus on more "fun" and "interesting" problems. What we observe with AI, though, is that it is no longer only software engineers who use software to solve problems; AI has made complex problem-solving accessible to a far wider audience.
This shift in accessibility parallels the historical trajectory of technology, where tools like graphical user interfaces made computing more accessible than the command line. AI, in essence, emerges as another tool enhancing sociotechnical accessibility across diverse user backgrounds. For instance, if you were a photographer, you may have had to learn advanced photo editing techniques to remove unwanted objects from photos, such as power lines or people. Now, with AI, a photographer can quickly remove unwanted objects from photos with a few clicks and instead focus on the more creative aspects of their work.
On the other hand, the misuse of AI can lead to unintended consequences and create inaccessibility in the resulting solutions. In her 2018 book "Algorithms of Oppression," Safiya Umoja Noble explores how search engines and other algorithmic systems perpetuate racial bias, with countless examples spanning everything from housing applications to crime prediction.
If you have not yet checked out this book, I would highly encourage giving it a read.
While AI has become accessible to the masses and allows people to solve complex problems, it also has the potential to make some of those problems worse. Say you are a hiring manager who has to quickly review hundreds of resumes, and you choose to have an AI tool summarize key traits of candidates; the tool might unintentionally associate negative keywords with negatively stereotyped demographics. While this is a simple example, it shows how AI can have extreme real-world consequences if misused.
With that said, the misuse of AI is not a reason to avoid using it altogether but rather a call for responsible innovation. As AI becomes more accessible, it is crucial that it is used responsibly and ethically, especially by those creating solutions to problems that affect others. This is not a tech problem but rather a human problem that requires a human solution. If you are building a solution that affects others, it is your responsibility to ensure it does not cause harm. AI is a powerful tool for solving complex problems, but it is not a replacement for human judgment (more on this next).
Artificial intelligence, despite its name, is not a replacement for human intelligence. Instead, it serves as a powerful tool to assist in solving complex problems. From aiding writers in drafting content to assisting software engineers in coding, AI streamlines mundane tasks, allowing professionals to focus on more intricate and innovative aspects of their work. The collaboration between AI and professionals enhances efficiency and productivity without replacing the unique skills and insights that humans bring to the table.
Unfortunately, an alarming trend that has arisen with the intense focus on AI is the idea that technological automation can replace professional decision-makers. For instance, the Pew Research Center found that 52% of Americans are more concerned than excited about AI (reference). That is a terrifying statistic in the wake of the breakneck speed at which AI is being adopted. In the healthcare industry, the same Pew research found that 60% of people are uncomfortable with the prospect of a healthcare provider relying on AI technology. While the business landscape may be enticed by the prospect of efficiency gains and cost savings through the integration of AI, the human element in decision-making cannot be overstated. The complexities of human emotions, ethical considerations, and the ability to navigate ambiguous situations are integral aspects of decision-making that technology, at its current stage, cannot fully grasp.
There is a quote on this topic that I particularly like. Dr. Werner Vogels, Chief Technology Officer of AWS, hammered the point home most notably in his re:Invent 2023 keynote:
"AI makes predictions, professionals decide. They are assistants they don't make the decisions for you. We as humans are the ones who make the decisions." ~ Dr. Werner Vogels, re:Invent 2023 Keynote
AWS is one of the leaders in cloud computing and demands some of the highest levels of technical expertise. For Dr. Vogels to make this statement is a testament to the importance of human judgment in decision-making. The resistance to AI replacing professionals is not merely a reactionary stance; it is a call for a collaborative model where the strengths of technology augment rather than overshadow human judgment. Today, we have more data than at any point in human history, and AI can serve as a valuable tool for finding patterns that humans cannot realistically find in finite time. Using such technology, professionals can surface key information to drive decisions they would otherwise be unable to make. Dr. Vogels likened this collaborative approach to a magnet in the face of a mountain of unstructured data:
"[...] We have a mountain of unstructured data where you think this may be a haystack with a need for it. So, how do you find a needle in a haystack? You use a magnet--and this magnet is machine learning."
~ Dr. Werner Vogels, re:Invent 2023 Keynote
We are in the early stages of AI, and it is not a replacement for human judgment; it is a tool that augments human judgment and decision-making. As AI continues to evolve, it will become more powerful, but that role will not change. We need to be smart about how we use AI and ensure that it is used responsibly and ethically. For the tough decisions that require human judgment, we need to ensure that AI augments that judgment rather than replaces it. Otherwise, we risk driving decisions based purely on metrics rather than empathy and impact. We also risk something more economically alarming: replacing professionals with subpar technology that is not ready to replace them.
In the rush of technological advancements, particularly with AI, it should be obvious to say this, but acting with intention and empathy is essential. Still, there is a reason there is a strong tug-of-war between AI enthusiasts and skeptics: people are afraid of negative ramifications. A big fear driving this is the potential for AI to replace people. While a lot of businesses are saying "AI will not replace people," the reality is that it is already happening. People are being replaced by AI--but not really; there is a nuance to this. People are being replaced by people... who are using AI. There are a lot of perspectives you can take on this, but I think an interesting one is to compare it to the Industrial Revolution--albeit a bit cliché. The Industrial Revolution brought about a lot of positive change, but the relationship between people and technology initially resulted in a wall of negative consequences--namely labor exploitation. Fun fact: there were 37,000 strikes in America from 1881-1905 (reference). In many cases, the goal was to improve working conditions or to protest the firing of a fellow employee. It is ramifications like these that drive the fear of AI. People are not afraid of being replaced by AI; they are afraid of being exploited under the guise of "embracing AI" and "innovation."
So, how do we use AI ethically and responsibly? Do not give in to apathy, and always have a human-first mindset. AI, with its inherent complexities and uncertainties, requires a proactive approach to understanding and mitigating possible consequences. Responsible innovation demands a commitment to building technology that leaves a positive impact and safeguards against unintended consequences. Time and time again, the conversation surrounding the misuse of AI is rarely about AI. It is about layoffs; it is about economic hardship; it is about people. AI, at the end of the day, is just another technology. But to implement it effectively, people need to move forward with a human-first mindset: people need to be at the center of the conversation and the decision-making process.
As industries navigate the landscape of AI, fostering an empathetic mindset is not only a strategic imperative but also a way to ensure that the integration of technology aligns with human values. Otherwise, we might be in for a bleak future shaped by cost-cutting and decisions driven by metrics rather than empathy and impact. The future lies in a coexistence where AI empowers people, allowing them to unlock new possibilities and address challenges with a blend of technology and human insight. The future should not be one in which we risk people's well-being for corporate profit.
While I don't claim expertise on these topics, the tech landscape's recent shift toward AI has been a fascinating (and at times terrifying) one to observe. The tug-of-war between AI enthusiasts and skeptics highlights that the shift is not merely a technological one but rather a sociotechnical one.
AI is just another technology at the end of the day. It is wonderful that it has opened new doors that allow us to rethink not only how we approach technology but also established notions of work and societal roles. While it makes technology-related problem-solving more accessible, it also risks great harm if implemented without a human-oriented approach.
In the rush of technological advancements, building with intention is crucial. Responsible innovation requires a commitment to building technology that leaves a positive impact, safeguards against unintended consequences, and aligns with human values. The future lies in a coexistence where AI empowers people, not replaces them. To unlock new possibilities and address the sociotechnical challenges introduced by AI, we must always be human first.