So, I have just finished watching Tom Scott's latest video, on how new AI language models are a bit scary, because we really don't know where we're heading from here. I won't paraphrase him here or anything, but, in a nutshell, he says that new technologies follow a typical sigmoid curve: a slow start as they are being developed, a steep increase as new ideas reshape the technology, and a flattening of the curve as the technology saturates. And he really doesn't know where on that curve technologies like ChatGPT are right now. Meaning: we could be in the middle, where new, exciting ways to use AI will be developed; at the end, where we've reached the peak of AI assistance as we know it; or at the very beginning, where AI language models are thunder in the distance, a sign of everything that is about to change forever.
Unless you're living in a cave, you yourself have probably used some form of AI in the past months: maybe you played a little with generating crazy images from prompts with DALL-E, or discussed philosophy with ChatGPT for fun. But the reality is that AI-powered programs are already a tool in a variety of industries and fields, from helping people write documents and essays to assisting researchers in reading and cataloging information from papers. There have been instances of AI models mimicking established artists' styles to generate new images or text. I have even seen a lawyer on Twitter saying that ChatGPT wrote a document for him from scratch, with only minor changes needed, and that even if AI won't replace lawyers anytime soon, it might have just replaced interns.
But it seems to me that it is in computer science -- and in software development in general -- that these technologies show their most prominent use. Not only ChatGPT, but tools like GitHub's Copilot and Replit's Ghostwriter can turn hours and hours of googling and stack-overflowing into solid minutes of straight-up problem solving. In the time I've been using these technologies, the process of discussing coding with ChatGPT has felt like a junior-senior mentorship: there is back and forth, there is encouragement, there is learning and productive idea shaping. The AI makes mistakes that we sometimes have to point out to it. It corrects our misspelled code or catches that one missed ';' that has been driving us crazy for days. We tell it what we want to achieve and it helps us develop the steps we need to follow to write an algorithm. It writes our documentation for us. It explains back to us that one colleague's code that simply does not make sense. There's a plethora of ways you can use these tools to make coding more -- and I really mean to use this word here -- fun.
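To give a picture of what I mean by the documentation bit, here's a toy sketch: the method below is made up for illustration, and the doc comment is the kind of thing you might get back after pasting a bare method into ChatGPT and asking it to document it:

```ruby
# A made-up example: paste a bare method like this into ChatGPT, ask
# "write documentation for this method", and it drafts something like
# the comment below, which you then just review and tweak.

# Returns the n-th Fibonacci number, computed iteratively.
#
# @param n [Integer] the (zero-based) position in the sequence
# @return [Integer] the Fibonacci number at that position
def fibonacci(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a
end
```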
When I prompted ChatGPT asking if it "think[s] [it is] a game changer in computer science and software engineering", it answered me that "AI language models have certainly been a game changer in the fields of computer science and software engineering, but it remains to be seen what the full extent of their impact will be". And this is genuinely difficult to measure -- as Tom said, we still don't know where on the curve we are right now. As time rolls by, all will become clear, but as of now, we are in the dark. How can this technology improve in the near future? What challenges will it overcome -- and what others will it generate? How are we going to deal with the fact that machine learning systems require gargantuan amounts of data to be fed into them? Are we going to hand that data over freely to reap the benefits of using this tech? And what about the data itself? Will it make AI systems perpetuate existing biases and discrimination?
I feel like all of these questions highlight the need for continued reflection on the impact of AI, and the broader considerations involved in its development and use. While the benefits of using AI language models are clear, we have to consider that, well, maybe they need to be weighed against the risks.
I started this post intending it to be an essay, but I realized halfway through that I don't really know what I want to say about these technologies. I don't really understand them, not as well as I understand what goes on inside Ruby's kernel when I type and run some code. That is to say: if it, as a tool, works for me, then it works. It does its job, making my life a little bit easier. And as a tool, it is a wonder.
But, more often than not, a tool might not be just a tool. It may come with strings attached.
And if Tom had a minor existential crisis while trying to tidy up his mailbox, I might just have had mine while watching his video.
Nonetheless, we are now, again, in uncharted territory. What comes from it might be gold.
Or dragons.
By the way: while this post was completely written by me, I have used ChatGPT to brainstorm ideas and to fine-tune my writing. It even chose the four tags for me. I guess I'll have to thank it for that. So, thanks, ChatGPT!