My original title was: 'I have +25 Years of Programming and Yet, ChatGPT is Teaching me New Things', but ChatGPT changed it.
TL;DR: No matter your experience, new tools will empower you even more.
The Past
I've been working as a software developer for 26 years, and I've been teaching at the university for much of that time.
I've always been interested in Artificial Intelligence.
I founded a chatbot start-up many years ago, and I've been consistently publishing about Artificial Intelligence trends and tools.
I've tried GitHub Copilot, CodeWhisperer, AlphaCode, ChatGPT, and many more.
I wrote about how A.I. would replace bad programmers almost 6 years ago.
I strongly believe this is now the case.
I also published a paper on how to educate students to use these tools.
And I talk to non-technical people daily, urging them to be resilient and fast learners.
The Present
Now, I am a bit more frightened that it will also replace software designers who don't use A.I. tools.
I am revisiting my whole collection of 195 code smell articles with ChatGPT and still learning new things.
I try to keep my design rules and tips language-agnostic.
Since I cannot master 30 programming languages (the ones I use in my articles), I've been heavily using code translation tools (even before ChatGPT).
Now I've gone one step further. I use ChatGPT to teach me about those languages' quirks.
It is an amazing learning journey.
The Future
A word of caution: science is based on evidence, citations, and quotes. You cannot build a thesis or an explanation without citing your sources. And ChatGPT does not cite them.
My technological centaur approach is to use ChatGPT for research and then manually check whether it is right.
This is not how science works, and it is not ChatGPT's best use.
Twitter is full of hateful quotes showing ChatGPT being wrong.
The example is a bit sexist, but it's a meme that illustrates the point.
To be a decent software engineer, we must be experts at learning, and ChatGPT is an amazing teacher. Not just for juniors.
Top comments (10)
If ChatGPT were transparent about the sources for its statements, then THAT would be another real game changer.
I'm not even sure that would be possible. It's a language model built from the sum total of all the information fed into it - it doesn't 'know' or 'understand' anything in a real sense, although it does give the illusion that it does.
Having said that, I do think we are getting ever closer to questioning what 'knowing' and 'understanding' actually mean!
Thanks for bringing that up. From my very limited understanding of ML, I always have the urge to ask the system how or why it knows all this. Maybe that would quickly reveal how much of it is assumed or faked, but to me, getting access to the machine's knowledge matters far more than being impressed by its ability to generate a realistic, more human-like response.
I've already heard the argument that I should look at expert systems instead, but I still don't understand why it can't be both: able to determine the factual basis of its knowledge, able to derive conclusions from it, and transparent about which of the two a given response relies on.
As a disclaimer, I'm not an ML pro or anything like that, but here's what I understand:
You can build a model on your own in most languages (I saw JS things in your profile, so you can check Brain.js and/or TensorFlow.js); then you'll understand the basics of it.
There are different ways of training models; one is giving feedback to the AI, saying whether an answer to a given "question" is good or bad.
Keep in mind that a "question" in this context can be anything: showing the AI a picture of cats and asking it to determine whether there's a dog in it, a proper text question (which involves natural language processing), and so on.
The answer to "how did you know that?", if you trained the AI with the reward/punishment method, would be "because someone else told me", and from that base knowledge it infers answers to new "questions" related to the topic.
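A toy sketch of that feedback loop in plain JavaScript (no real library, and the data is made up): a single perceptron learns the AND function. The "reward/punishment" is the error signal — whenever the answer is wrong, the weights get nudged toward the right one.

```javascript
// A single perceptron learning AND by trial and feedback.
let w = [0, 0], b = 0;        // weights and bias, start knowing nothing
const lr = 0.1;               // learning rate: how hard each "punishment" nudges

// The "someone else" who tells it good from bad: labeled examples of AND
const data = [
  { x: [0, 0], y: 0 },
  { x: [0, 1], y: 0 },
  { x: [1, 0], y: 0 },
  { x: [1, 1], y: 1 },
];

// The model's current answer: fire (1) if the weighted sum crosses zero
const predict = (x) => (w[0] * x[0] + w[1] * x[1] + b > 0 ? 1 : 0);

for (let epoch = 0; epoch < 20; epoch++) {
  for (const { x, y } of data) {
    const err = y - predict(x);   // the feedback: 0 = good answer, ±1 = bad
    w[0] += lr * err * x[0];      // nudge weights only when the answer was wrong
    w[1] += lr * err * x[1];
    b += lr * err;
  }
}
// After training, predict([1, 1]) → 1 and the other three inputs → 0
```

The model never "understood" AND; it just adjusted numbers until the feedback stopped punishing it — which is the point the comment above is making.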
The topic gets harder when you get to Neural Networks, where each "neuron" has been trained its own way, on its own specific datasets.
On top of that, an AI doesn't "know" things. It determines the probability of something being "truthy" or "falsy", so behind the scenes its output is more like "I'm 86% confident that's the correct answer to the problem you threw at me", not something deterministic like "that's how it is, no doubt" that you could replicate with a pure algorithm (in those situations we would use pure algorithms, not AIs).
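To make that "86% confident" concrete, here is a tiny sketch: classifiers produce raw scores, and the softmax function turns those scores into probabilities. The scores below are invented purely for illustration.

```javascript
// Softmax: turn a network's raw scores (logits) into probabilities that sum to 1.
function softmax(logits) {
  const max = Math.max(...logits);                 // subtract max for numerical stability
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Hypothetical raw scores for the labels ["dog in picture", "no dog"]
const probs = softmax([2.5, 0.7]);

// The "answer" is just the most probable label, with a confidence attached
console.log(`I'm ${(Math.max(...probs) * 100).toFixed(0)}% confident`);
// prints: I'm 86% confident
```

So even a confident-sounding answer is only the top entry of a probability distribution — never the "no doubts" certainty of a pure algorithm.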
If I misunderstood something and anyone wants to correct me, it will be much appreciated.
Cheers!
Yup, that's pretty much how I understand it too... essentially a very large mathematical model that attempts to output a 'correct' answer based on a myriad of parameters inferred from a lexical analysis of the user's input with additional context from previous submissions and responses in the current chat. Ultimately a purely mechanical exercise with no actual 'thought'.
That's probably wildly over-simplified though :)
Why is ChatGPT's ability to give you information you could easily find via a web search, or by reading basic documentation, going to put programmers and designers at risk?
Have you tried asking ChatGPT to generate code? It's a lot easier than googling it or searching on Stack Overflow.
What makes you think that ChatGPT being easier than a web or Stack Overflow search is going to equate to putting programmers' and designers' jobs in peril? Isn't it programmers and designers who are making those searches? Isn't it programmers and designers who have the expertise to make sense of the results of those searches?
I understand your concern about the potential impact of AI-powered tools on the job market for programmers and designers. It's true that AI has the potential to automate certain tasks, but it's also important to note that it can also create new opportunities and improve the efficiency of existing jobs. It's also important to remember that the expertise of programmers and designers will still be critical in creating, training, and maintaining these AI systems. Thank you for raising this important point.
I don't have a concern about the impact of AI-powered tools on the job market for programmers and designers. I have curiosity as to why the author has such a concern.
Your reply sounds like a reasonable response to the author, as you're stating explicitly what I'm implying, but I wanted the author to be more explicit about his reasoning, which is why I've phrased things a certain way.
I'm not interested in debating him, just in understanding his underlying logic, as perhaps there's more to it than what I'm seeing in this article (which seems insufficient in and of itself to support this claim).