
13tinydots


AI won't destroy us all... but it might automate your day.

AI (artificial intelligence) is becoming increasingly important in use cases such as analytics, copywriting, decision making, and more as the technology advances. This post offers a short introduction to these cases as food for thought.

Using language models, AI takes the contextual information provided by the user and infers the requested output. It does not have a store of facts and does not crawl the web like a search engine. It simply produces an answer to the request based on the patterns in its model. As a result, it generates text that is relevant to the language it sees in the input, as if continuing familiar patterns were the expected output. (For our purposes, numbers count as part of language.)

For example, if one asks an AI (such as GPT-3):
“What is the number 70 as binary code?”

Response:
“The number 70 in binary code is 1000110.”

As such, it is capable of inferring a number system and performing conversions, and it can attempt other mathematical tasks such as integration and predictive analytics.
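Because the model's answer is a prediction rather than a calculation, it is worth sanity-checking this kind of output deterministically. A minimal check in plain Python:

```python
# Deterministic check of the claim that 70 in binary is 1000110.
print(format(70, "b"))    # -> 1000110
print(int("1000110", 2))  # -> 70, converting back confirms the answer
```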

One of the things it does NOT do is synthesize new ideas: if the context is not sufficient to build from, it generally returns what it has seen before, i.e. output similar to the input. It is as if a newsroom editor (the user) requests an article and the copywriter (the AI) hands back something that looks like other articles on the topic. In copywriting this is a particular problem, because AI is very good at the task, yet its output may not be factual in any way. When using these models, then, the reader cannot expect a novel solution that departs from what the model has previously "seen" in its training data.

Again, these are concepts the AI has parsed before. In essence, the GIGO (Garbage In, Garbage Out) principle applies to the behavior of an AI program.

In its current form, AI is used through interfaces provided by the model vendors. In some cases this is an Application Programming Interface (API); in others, the model is accessed from a programming language such as Python. These are not the only possible implementations.

For clarification, an API means requests are sent over the internet to remote servers that host the AI program. Even a modest local computer can therefore take advantage of these models without investing the resources to build and run one on site. Another implementation uses Python, a commonly used programming language. Tabular data (files whose values are separated so a computer can read them, such as CSV files) can be loaded and then analyzed according to natural-language requests such as "What was the fastest runner's bib number, and what was the time?" or "Which woman placed second?" and so on.
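As a rough sketch of both ideas combined, the snippet below loads a hypothetical race_results.csv with pandas and sends its contents, along with a natural-language question, to a hosted model over HTTP. The endpoint, model name, and column names are assumptions for illustration; they vary by provider and change over time.

```python
import os

import pandas as pd
import requests

# Load tabular data from a hypothetical results file with columns
# such as "bib", "name", "gender", and "time".
df = pd.read_csv("race_results.csv")

# Embed the table and a natural-language question in the prompt.
question = "What was the fastest runner's bib number, and what was the time?"
prompt = f"{df.to_string(index=False)}\n\nQuestion: {question}\nAnswer:"

# Send the request to the remote server hosting the model (here, the
# legacy GPT-3 completions endpoint; newer APIs use different routes).
response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 64},
    timeout=30,
)
print(response.json()["choices"][0]["text"].strip())
```

Note that the local machine only builds the prompt and parses the reply; the heavy model runs on the remote server, which is the point of the API approach.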

So, just because you can train a computer to interpret data doesn't mean you can train it to justify harming something. Don't worry about the apocalypse.

Yet.
