Recently, I conducted an experiment where I let ChatGPT, a large language model trained by OpenAI, write articles for me. The results were quite interesting and raised some important questions about the current state of AI.
The articles generated by ChatGPT were read by a lot of users and even gained attention on Twitter. This showed that AI is capable of generating content that is popular and attracts readers. However, despite their popularity, the articles did not receive many likes or positive reactions. This made me question the quality of the articles and whether they were truly valuable to readers.
One example is an article about Git that had over 2600 reads but only 18 reactions, while my own article about "how (not) to store passwords" had fewer than 1000 reads but 44 reactions and 13 times the comments. This disparity in reactions highlights the difference in quality between the articles generated by AI and those written by humans.
This raises the question: is current AI capable of coming up with original thought and producing high-quality content? The answer, unfortunately, is no. AI is still far from being able to create anything truly meaningful beyond human capabilities. It is only able to generate content based on what it has been trained on, and this means that the content it produces will always be limited and lacking in originality.
In conclusion, while AI may be able to generate content that is popular, it is still far from being able to create anything truly meaningful or original. Until AI reaches a higher level of intelligence and creativity, we should not rely on it for producing high-quality content.
Happy hacking :)
The actual conclusion
The article you just read was also written by ChatGPT. I gave it the following prompt:
Write an article about my experiment, where I let ChatGPT write articles for me. Mention how the articles were read by a lot of users and were even tweeted about on twitter. Then talk about the current situation with AI and how it cant come up with original thought. Also tell the reader how the article, even tho having a lot of reads, didnt have a lot of likes, which makes me think that the quality is rather low. For example: The article about git had around 2600 reads but only 18 reactions, while my own article about "how (not) to store passwords" only had less than 1000 reads but 44 reactions and 13 times the comments. Critizise the current state of AI and how it is very far away from writing anything meaningful beyond human capabilities. End the article with "Happy hacking :)"
That prompt didn't actually produce this article in one go. What it did was write a really short piece on the subject; I specifically had to tell it to "make this article longer" for it to create what you just read.
The claim that the AI isn't able to come up with original thought is also only half true. I started off by giving it specific subjects to write about, like "write about how to use the Twitter API in Ruby", which earned a tweet from dev.to. After that I became lazy, so I simply asked it to suggest good topics for a dev.to article. What came out was quite the average dev.to topic: something that sounds interesting, focuses on modern technologies, and can be read at every coding skill level. That's when it wrote my most-read article to date: Avoiding (5) common git mistakes in a team environment. In the time it took me to write this part, around 50 more people read it, which just proves how good the AI is at writing flashy titles.
I was so cocky that I didn't even read the articles before publishing, and I didn't test the code either. Judging by the fact that dev.to and other accounts tweeted about the articles, I assume at least the code worked.
Now what do I think about this? The language model may not yet be able to write articles on a nice and thorough level, but that will most likely change soon. It also isn't really able to come up with original ideas; instead it picked the most generic dev.to topics it could. And because that is the expected behavior of an AI, I believe we are still not at the point where it can fully replace humans.
If you are interested in reading the AI-written articles, I marked them all with an [AI] in their title, so you know which ones to look out for.
And as always, happy hacking :)
Top comments (4)
Out of curiosity, I just looked at the one you said is your most read, the one on avoiding common git mistakes in teams. This is not the type of post I would normally read based on its title. However, it is not bad for the task you gave it. The advice in the post is fine, just not particularly deep or insightful.
Keep in mind, though, that it will now likely get read more often not on its own merit but because you used it as an example in this post. For example, I would not have read it if this post didn't exist.
Well, even before this post existed it had 2700 reads, which is a lot more than most of my other posts get. While yes, there will probably be more readers on that one because of this one, I believe it will be even more the case the other way around.
Where I think the AI performs well is in making something catchy, judging by the sheer number of people who looked at that title and thought: "Hey, this seems interesting". When I read the article (which was after release), I also realized that it isn't really a great article, but if reads are your goal, AI is the way.
I didn't mean before this post existed. I just meant that it will likely get a further bump in views from people looking at it as an example of what AI can produce. As far as your experiment goes, the views it had at the time you wrote this post are all that you can attribute to the AI's post itself.
Anyway, in addition to the catchy title, it actually did a good job of creating something consistent with many DEV posts, which is what you asked of it. "Top 5 or 10 or X of something" posts are quite common on DEV. Some of those are quite detailed, with valuable information if the topic interests you. But the vast majority are actually along the lines of the one the AI produced for you, sometimes lower quality than that one, and sometimes sounding nearly identical to one posted the day before by someone else. So the AI didn't create an example of one of the rarer, higher-quality Top X lists, but it also didn't create one of the especially low-quality ones either.
Yes, I think the AI did a quite alright job following my instructions. If I had given it better instructions, I could maybe have gotten better articles, too.