This post is about the dangers of AI. While there is plenty of fear mongering and celebration surrounding this subject, it's important to hear from those integrating, or planning to integrate, these technologies. Much of this dialog comes from journalists; as builders, we have a relevant perspective worth checking in on.
We are at an inflection point in computing. Generative AI is not "all-powerful", but it is a powerful force in the future of information and truth. These tools are both used and abused in modern society, and this applies to OpenAI, Microsoft AI, Meta's LLaMa, and everything new in this space. With the significant price reduction of OpenAI's primary API, tomorrow's utilities will be more powerful than today's.
While you shouldn't feel ashamed of your excitement for this technology, it's essential to note that the downsides don't come tomorrow; they come today, often before the upsides arrive. Tools, infrastructure, and government policy are necessary to ensure the safe, fair, and equitable use of our most powerful technologies, but the best parts often don't surface in time, or ever.
I don't want to naively equate these technologies, but I think it's worth mentioning for illustration: We are still waiting on the promise of crypto, and we have seen a remarkable amount of downside. Pump and dump schemes, crime circles, environmental devastation: the downsides so far have been so much more tangible than the upsides.
I am excited about AI, but as fast as the upsides come, we risk the downsides hitting us sooner. Pick your poison: Disinformation, workforce displacement, fraud, etc. Let's talk about it in terms of chaos.
Chaos
Creating chaos can be easier than preventing it because it typically requires fewer resources and less effort in the short term. Chaos can arise from misinformation, lack of communication, or insufficient planning, and these factors can be easier to cultivate than to address. Preventing chaos requires more resources, planning, and coordination. It involves identifying potential problems and taking proactive steps to mitigate them before they spiral out of control. This can be challenging because it often requires a deep understanding of the underlying issues and the ability to take decisive action to address them.
Moreover, chaos can have a snowball effect, where small issues escalate quickly into larger ones, making it increasingly difficult to control the situation. In contrast, preventing chaos requires a sustained effort over time, which can be challenging to maintain.
Overall, preventing chaos requires more proactive effort and resources in the short term, but it can help avoid much greater costs and negative consequences in the long term.
So what to do about all of this?
This post isn't just a nebulous warning. Let's talk about a couple of themes to move forward with.
Scrutiny
Let's be excited about the innovations, but let's also scrutinize these releases consistently. We will see uneven care and thoughtfulness in the rollout of these technologies. We need to laud the good stuff and scrutinize the bad. For a tangible decision you can make to promote scrutiny, seek out the open source tool. This is often the best thing for your development in the long run, and, all else being equal, it is good for security and safety.
Safety and security
The world wide web was not initially built to be secure, and the web security industry was virtually non-existent early on. Only after decades of scrutiny and focus did modern security normalize. Adoption of the web, and society's willingness to place trust in it, was a slow enough process for security to catch up, perhaps still not enough. We have an OWASP top ten, and we have dedicated conferences.
Safety and security in the era of AI will take a different form, but it needs an industrial complex akin to other cybersecurity fields. Investment in mitigation in every type of corporation and government needs to be normalized and prioritized.
Safety and security outcomes in AI are going to be inherently less deterministic than what we are used to, and allaying chaos is an uphill climb, but it is a climb that has to start today.
Thanks for reading, happy coding!
Top comments (5)
Technology has always been about one thing: augmenting human capabilities.
In the early stages of any new technology we have higher levels of risk.
As we learn to mitigate those risks we move to another stage, increasing our dependency on the technology.
To begin with, humans have to be in the loop. Then we advance the technology to only need humans on the loop, until we only need to be notified of the state.
I don't see AI getting there too soon. But the risks are going to start showing up very soon, if not already present.
Good one ... I think it can all be summed up as "no pain no gain".
You can't predict the outcome, nobody has a "recipe" or a crystal ball, and that's what's unsettling about it. Take the industrial revolution: people might have had lofty expectations about the upsides, and those did materialize, but almost nobody thought (or even dared to think) about the downsides, and those came thick and fast as well.
Of course the point is, developments like these CANNOT be stopped, even if you wanted to, this is human "evolution" - then the question is, what are you gonna do about the downsides.
I personally think that every type of innovation, especially one with the ability to disrupt modern day-to-day life as we know it, comes with a good amount of skepticism and downsides in its early stages.
I believe it is inevitable nevertheless, since humanity is progressing forward and there is constantly a need for new tools and technologies that make people more productive and make our lives easier.
I can see how and why a lot of people are skeptical specifically about AI, but just like with any other groundbreaking innovation, I think people need to learn to adapt and embrace it and find ways it can improve things for them as individuals.
Quote of the day:
We are still waiting on the promise of crypto, and we have seen a remarkable amount of downside. Pump and dump schemes, crime circles, environmental devastation: the downsides so far have been so much more tangible than the upsides.
Yes I've always been skeptical about the usefulness of blockchain, can anyone give me an example of a TRULY useful application? 90% of it feels like vaporware to me, a brilliant solution in search of a problem.
With this new generation of AI tools the issue is almost the opposite - nobody's questioning how powerful and "useful" this stuff is, but you don't need a lot of imagination to predict that these tools could be extremely disruptive.
(well not "could" - I've already heard well-documented examples of various kinds of disruption being caused RIGHT NOW)
Good post. Found this linked in your comment on a recent post about the "Moratorium" story.
I'm curious what you think about the current allocation of priorities in the project of "AI safety." It seems like the category of "trying to prevent jerks from convincing LLMs to say disgusting things that offend people" is taking up spaces 1-99 on the priority list, while somewhere much lower down is "planning how people will be able to eat when 75% of the things we call 'work' today are being performed by AIs."
I really hate for people to have their feelings hurt, but I think it's odd that most of what appears (at least from the outside) to be discussed centers on the first goal. Because jerks have always been perfectly capable of creating their own disgusting content. I don't think "not yet having LLMs to generate content" was ever the limiting factor in the spread of hateful written materials or imagery. So this seems like a goal that, even if successful, won't make a big difference in the world either way.