I read that Bard will probably include sources in its results, unlike ChatGPT.
Despite the disclaimers saying AI tools are not reliable for now, many people will take the answers at face value. Even AI-generated code that appears to work can be full of glitches, and sometimes the whole approach is simply wrong.
If you don't craft your prompts carefully, the AI may, for example, suggest features or packages that simply aren't available on your system, even when you mention your setup.
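A minimal sketch of what "mentioning your setup" can look like in practice (the helper name and prompt wording here are just an illustration, not any particular tool's API): state your versions and installed packages explicitly instead of letting the model guess.

```python
import sys
import importlib.util

def build_prompt(question: str, packages: list[str]) -> str:
    """Prepend explicit environment constraints to a question,
    so the model is less likely to suggest unavailable tools."""
    # Keep only the packages that are actually importable here.
    available = [p for p in packages if importlib.util.find_spec(p)]
    return (
        f"My environment: Python {sys.version_info.major}.{sys.version_info.minor}, "
        f"with only these third-party packages installed: "
        f"{', '.join(available) or 'none'}. "
        "Do not use anything outside this list.\n\n"
        f"Question: {question}"
    )

# Example: constrain the answer to the standard library plus requests, if present.
print(build_prompt("How do I download a file over HTTPS?", ["requests"]))
```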
The other aspect is plagiarism and licensing: the model ingests articles, books, and other content without providing useful references to what "inspired" it, the way we do as authors.
We see the issue of trust across many forms of information sharing. Wikipedia is a prime example: a site that has drifted toward a more ideological standpoint, to the point where one of its founders has distanced himself from the project because of the bias that has crept into its articles.
Unfortunately it's a very hard problem to solve, but source transparency is at least a step in the right direction.
Yes. The glut of information can generate lots of biased articles, and we know what happens with a copy of a copy of a copy: ultimately, it's a dead end.
It would be nice to have at least the references, because the algorithm or the model in itself doesn't really tell you anything; it can even be open source.
What matters most is the training data and the filters applied to it.