
Amana Samsudeen for Mozilla Club of UCSC


Will GPT-3 Revolutionize AI?


There has been a great deal of hype and excitement in the Artificial Intelligence (AI) world around a newly developed technology known as GPT-3. It's an AI that is better at creating content with a language structure, whether human or machine language, than anything that has come before it.

What is GPT-3?

Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3. It's the third version of the tool to be released.
In short, this means that it generates text using algorithms that are pre-trained. They have already been fed all of the data they need to carry out their task. Specifically, they have been fed around 570GB of text information gathered by crawling the internet along with other texts selected by OpenAI, including the text of Wikipedia.

What can GPT-3 do?

GPT-3 can create anything that has a language structure, which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.
This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long term, it could have huge implications for the way software and apps are developed in the future.
The code itself isn't available to the public yet; access is limited to selected developers through an API maintained by OpenAI, which has been open since June this year.
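For those who have been granted access, the workflow is simple: send a text prompt to the API and get a generated continuation back. Below is a minimal sketch using the OpenAI Python client as it looked when the API launched; the engine name, parameter values, and placeholder key are illustrative, not prescriptive.

```python
# A minimal sketch of calling the GPT-3 API, assuming you have been
# approved for access and issued an API key by OpenAI.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - issued to approved developers

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine available at launch
    prompt="Summarize in one sentence: GPT-3 is a language model that",
    max_tokens=50,      # cap the length of the generated continuation
    temperature=0.7,    # higher values make the output more varied
)

print(response.choices[0].text)
```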

How does GPT-3 work?

GPT-3 is a language prediction model. This means that it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user.
It can do this thanks to the analysis it carried out on the vast body of text used to "pre-train" it. Unlike algorithms that come untrained in their raw state, GPT-3 arrives ready-made: OpenAI has already expended the huge amount of compute resources necessary for it to learn how languages work and are structured.
It also uses a form of machine learning termed unsupervised learning, because the training data does not include any information on what is a "right" or "wrong" response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.
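To make the "probabilities from raw text" idea concrete, here is a toy sketch in Python. It is nothing like GPT-3's actual transformer architecture, just a word-pair counter, but it shows the same unsupervised principle: the model learns what tends to follow what purely by reading unlabeled text, with no right-or-wrong answers supplied.

```python
# Toy illustration of unsupervised language prediction: probabilities come
# purely from counting patterns in raw text, with no labeled answers.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" 2 times out of 4
```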

What are some problems with GPT-3?

It is hugely expensive to use right now, because of the enormous amount of compute power needed to carry out its function. This puts the cost of using it beyond the budget of smaller organizations.
It is a closed or black-box system. OpenAI has not revealed the full details of how its algorithms work, so anyone relying on it to answer questions or create products would not, as things stand, be entirely sure how those outputs were produced.
The output of the system is still not perfect. While it can handle tasks such as creating short texts or basic applications, its output becomes less useful when it is asked to produce something longer or more complex.
Still, it's fair to conclude that GPT-3 produces results that are leaps and bounds ahead of what we have seen previously. Anyone who has seen the output of earlier AI language models knows how variable the results can be, and GPT-3's output undeniably seems like a step forward. When it is properly in the hands of the public and available to everyone, its performance should become even more impressive.
