Kevin Naidoo

Originally published at kevincoder.co.za

Why ChatGPT and other LLMs are overrated and won't take your job

Since the end of 2022 and through most of 2023, ChatGPT, machine learning, and AI have become hot topics. Content creators, programmers, and just about everyone else on the internet have been incorporating this technology into their workflow in some shape or form (myself included).

So, why are LLMs overrated?

My title is a bit misleading. Yes, of course LLMs will continue to be popular and grow, now and into the future, and yes, they may replace a certain group of jobs. However, the technology behind these LLMs is nothing new and has been around for years.

LLMs simply digest large amounts of data and then use prediction algorithms to figure out what the next word or sentence should be.

There is no intelligent thinking behind the response you get from an LLM; it's just statistically generated content, and thus there is a ceiling it will reach in the next few years.
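To make that concrete, here's a minimal sketch of what "predicting the next word" looks like in practice. It assumes the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint (not ChatGPT itself, whose weights aren't public): the model scores every token in its vocabulary, and generation is just repeatedly picking from the most likely ones.

```python
# Minimal sketch: next-token prediction with GPT-2 (assumes `pip install torch transformers`).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The quickest way to learn Python is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # one score per vocabulary token, per position

# Probabilities for the token that comes right after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most likely continuations - no reasoning, just statistics learned from text.
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Chaining that single step over and over is, at its core, all these models do; the fluency comes from the scale of the training data, not from any understanding of it.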

Don't get me wrong, this tech is useful for generative and analytical tasks such as drafting a blueprint for a blog article, writing some code, scripting a YouTube video, generating images, and so on.

However, based on the current technology, it's probably going to be integrated into chat clients like WhatsApp and into mobile devices as a personal assistant, and will therefore essentially become Google++.

Instead of searching for content or reading through hundreds of pages of documentation, you will have a personal assistant on hand to quickly find and summarize data, help you compile presentations, generate images, and more.

Merely another hammer

WhatsApp is a great example: it's used by billions of people worldwide and is a handy tool in your pocket for staying in touch with friends and family, but it also helps you with everyday business or work-related tasks.

The same applies to LLMs: eventually the hype will die down and they'll become just another tool in our pocket.

It's cool because it's "new" to most people, just like when the first iPhones were released and people braved crazy queues just to get their hands on one.

Several years later, there's nothing special about the new iPhones - no new innovation, just incremental updates.

This is the fate of ChatGPT and LLMs too.

Regulation

Lawmakers, as usual, are slow to react; it will take several years, maybe even decades, to properly regulate AI. However, copyright claims, lawsuits, and heavy regulation will stunt the growth of AI, and this is inevitable.

Ultimately, besides the high infrastructure costs associated with running these models, managing the security and regulatory requirements around such technologies will become too much of a burden for small players, and thus big gatekeepers will dominate.

Look at the AI landscape already: OpenAI, Microsoft, Google, and Facebook seem to be prime candidates to dominate this space and become the gatekeepers.

Once the technology is sufficiently commercialized and profitable, its growth will stabilize, and the gatekeepers are not going to want to tinker too much because they will be at risk of losing money, unless of course such tinkering results in further market enthusiasm and excitement.

A good example of this is Google: they've been way ahead of the AI game for years and could have trained a model similar to ChatGPT much sooner. Yet they did not bother with this market because it threatened their cash-cow advertising business. It's only now, because of competition, that they were forced to release Bard and play in this space.

What exactly am I saying?

Well, to summarize: LLMs will continue to grow over the next 2-3 years, but they'll plateau and the rate of innovation will slow down. The technology will reach its peak and become like any other tool we use on a daily basis.

This will not end skilled careers such as graphic design, programming, content writing, and so on - in fact, it's going to make these professionals even more productive.

However, the downside is that there will be some damage. Instead of three or four engineers, you could hire one or two and use an LLM to automate all the mundane stuff.

The same applies to graphic designers: instead of hiring a designer for basic tasks, you could use something like Canva + AI to achieve the same result.

Conclusion

LLMs take the work of creators (the stuff that exists on GitHub and other open platforms), digest that information, and then re-use it to generate content. These models do not create their own content; they just re-arrange pre-existing material and regurgitate it in a format we find useful, using mathematical algorithms.

The human brain, on the other hand, is remarkable: we can be put in situations we've never experienced before and thrive like no other creature on this planet can.

Besides being able to absorb, digest, and make predictions on data, we can create things from nothing and invent new tools and new ideas.

A set of identical twins, with the same parents, same childhood, same education, and even the same diet, can experience an event or ingest data from the same source, yet create totally different experiences and outcomes.

This is the power of the human brain, we can adapt and innovate like no other species.

LLMs at this point are nowhere close to this ability, and most likely never will be.

You are unique and will always be valuable no matter how good ChatGPT or any other LLM gets. However, don't become complacent - keep learning and keep evolving.

Top comments (2)

Greg Ewin

Great analysis, but don't you think the blend of AI like ChatGPT and human ingenuity could lead to unforeseen innovations, beyond just being another tool?

Kevin Naidoo

Thank you for the feedback. The transformer architecture used under the hood is essentially just a mathematical algorithm. It's not AGI and cannot think for itself, so it cannot reason or collaborate with us.

Instead, we give it clear instructions about what we need, and it draws on its vast store of learned knowledge to find information that satisfies our prompt. This information is knowledge that already exists in the world and ideas that have already been created by humans.

It's possible, by mixing various existing ideas into a collage of sorts, to create something completely new, but there still needs to be a human prompting it to do so - it's not going to get there on its own.

It's just like how laptops assist us in writing code, connecting via the internet, and even designing the next big thing. At the end of the day, it's just a tool because it needs us to be in the "driving seat".