DEV Community

Anuroop

Why is everyone panicking about AI?

Most of what we see online about AI sits at one of two extremes: either complete apocalypse or curing cancer. But, as with most things, the truth seems to lie somewhere in the middle.

What is an LLM?

LLMs are machine learning models trained on petabytes of data to predict patterns in language, and current models have become remarkably capable at it. That capability has people worried about the emergence of sentient behaviour or self-preservation tendencies in LLMs. Today, thanks to libraries like LiveKit, it's easier than ever to build a multimodal AI system that produces coherent responses from not just text, but also image and voice input.
Although this might seem groundbreaking, underneath, the model is still processing text tokens; most systems use a separate vision or speech-to-text model to handle the other modalities.
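To make "predicting patterns in language" concrete, here is a deliberately tiny sketch: a bigram frequency model that predicts the most likely next word from counts alone. Real LLMs use transformer networks over subword tokens rather than whole-word counts, so this is an illustration of the task, not of how production models work internally.

```python
# Toy next-word predictor (illustrative only; NOT how real LLMs work
# internally). The core task is the same, though: given some context,
# predict the most likely next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice here)
```

An LLM does essentially this at vastly greater scale, replacing raw counts with learned weights so it can generalise to contexts it has never seen verbatim.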

Does AI have self-preservation tendencies?

Current models do show behaviour that appears to prioritise self-preservation over saving humans in certain situations, as pointed out in papers published by Anthropic, OpenAI, and others.
People often remark that these models can't really think or feel emotions. However, a system doesn't need emotions or human-like cognition to cause immense changes to our lifestyles, or even harm. The 2010 Flash Crash is a good example: automated trading algorithms briefly wiped out roughly a trillion dollars of US market value in a matter of minutes.

One of the concerns that bothers me most is being unable to distinguish which aspects of current AI progress are hype meant to pull in investment and which are grounded in real development. Companies are pouring time, money, and talent into implementing AI, even though recent reports show the ROI is often poor. But is AI really going to work in the long term? Is it really going to improve at the pace we are told? Will it really be good enough to replace human engineers?
Would it help us formulate our thoughts better or would it dilute our reasoning?

These are HARD questions to answer.
The problem is that we are trying to predict the future based on little to no data.

Is AI progress rapid?

When people say that AI progresses at a rapid pace, they often discard the decades of research prior to the "AI Boom" (typically marked by the release of ChatGPT, built on GPT-3.5). What about the backpropagation research published from the 1960s onward? The RNN research papers of the 1990s? Or the original LSTM paper from 1997? These were fundamental to the development of AI and ML, and ignoring them is simply ignorant and naive.
