Yes, and sometimes I'll ask Bard a question and then google to see if Bard is telling the truth. I still have "trust issues" with AI when it comes to professional work :).
Yes, the public chat versions are not to be trusted completely, especially for questions where the model has to do some guessing. I think it's because they set the "temperature" parameter to a value above 0. That's what gives generative AI its imagination (or what we call hallucinations when it's supposed to be producing non-fiction).
If you send a request to the API endpoint with the temperature set to 0, the model won't try to be so creative. It's pretty decent at code generation as long as the task is broken down into smaller steps and you ask it to solve each step individually.
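As a rough sketch of what that looks like (assuming OpenAI's chat completions endpoint and an API key in the `OPENAI_API_KEY` environment variable; the model name here is just an example), the key part is passing `"temperature": 0` in the request body:

```python
import json
import os
import urllib.request

# Request body for the /v1/chat/completions endpoint.
# temperature=0 makes sampling effectively greedy, so the model
# stops "imagining" and gives its most likely answer every time.
payload = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
}

def build_request(api_key: str) -> urllib.request.Request:
    """Wrap the payload in an HTTP POST; actually sending it needs a valid key."""
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# To send it for real:
# req = build_request(os.environ["OPENAI_API_KEY"])
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Note that temperature 0 makes the output more consistent, not more truthful — the model can still confidently repeat a wrong "most likely" answer.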
That's part of the problem with ChatGPT. If you ask it a question you know the answer to, like "Who was the 16th President of the United States?", and it gives you a wrong answer like "Andrew Jackson", it's easy to tell that ChatGPT got it wrong. But what about a question you don't know the answer to? How do you tell whether ChatGPT is correct or not?
Have you seen this news story? "A Man Sued Avianca Airline. His Lawyer Used ChatGPT." This was a legal professional who should have known better, but he trusted ChatGPT when he shouldn't have. ChatGPT can simply make up stuff that isn't true, and if you can't tell that it's not true, then you can be held responsible for peddling falsehoods.
I know the pride of accomplishment that comes from building something and having people use it, and I'm sure not trying to rain on your parade. But you need to be aware that ChatGPT has some very real limitations when it comes to telling reality from fiction, and if you can't sort out what's true and what isn't, ChatGPT can't do that for you.
That is smart. I did not have any issues, though, as I don't fully know what I'm doing :) So ChatGPT was all I had :)