Tim Reber

Can AI Detectors Save Us?

AI-generated content can, in principle, be detected by other AI systems designed to identify and track patterns, behaviors, or characteristics associated with machine-generated text. However, models like ChatGPT are not specifically designed to evade such detectors either.

It's worth mentioning that the most common AI detectors are those used in security and surveillance applications, such as intrusion detection systems and surveillance cameras; they are not focused on detecting output from specific models like ChatGPT.

It's also important to note that AI detectors are not foolproof and can have false positives or negatives. The accuracy of an AI detector will depend on the specific algorithm and parameters used, as well as the data it is trained on.
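Many such detectors appear to rely on statistical signals, for example how predictable the text looks to a reference language model. Here is a minimal sketch of that idea using GPT-2 perplexity; it assumes the Hugging Face transformers library, and the threshold is an illustrative guess rather than what any real detector uses:

```python
# Minimal sketch: flag text as "possibly AI-generated" when its perplexity
# under GPT-2 is low (i.e. the text is very predictable). Real detectors
# combine more signals; the threshold here is only illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_generated(text: str, threshold: float = 40.0) -> bool:
    # Very predictable text is weak evidence of machine generation.
    return perplexity(text) < threshold

print(looks_generated("The quick brown fox jumps over the lazy dog."))
```

A single scalar like perplexity is easy to fool with light editing, which is part of why false positives and false negatives are unavoidable.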

I tried it myself: I generated a thread in ChatGPT and ran it through many different AI detectors. Here is a list:

The funny thing is: every site told me that the content was 100% written by a human. And I think this is a big problem. How do you know that a thesis was written by a student and not generated by AI? We have to find a solution to this!

Top comments (5)

Andre Du Plessis

This is a very interesting and valid question, Tim, and the same questions came to mind when I started asking "Assistant" some questions and tested it on the AI blog generator. Then, tongue in cheek, I went to PapeRater and Grammarly to have a look at what their AIs had to say about the "authenticity of my work" :-). Both responded with praise and applause.

I then read an article on this exact issue on Scott Aaronson's blog (he is working with OpenAI), stating that they will start "watermarking" AI-delivered content, which makes perfect sense.
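As I understand the watermarking idea, the generator secretly biases its word choices using a keyed pseudorandom function, so whoever holds the key can later test for that bias. Here is a toy sketch of the principle; the vocabulary, key and "prefer green words" rule are my own simplifications, definitely not OpenAI's actual scheme:

```python
# Toy watermark: a keyed hash of (previous word, candidate word) marks roughly
# half of all choices as "green". Generation prefers green words; a verifier
# holding the key counts how often that preference shows up.
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "home"]
SECRET_KEY = "demo-key"  # only the key holder can verify the watermark

def is_green(prev_word: str, word: str) -> bool:
    digest = hashlib.sha256(f"{SECRET_KEY}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly a 50/50 split

def generate(n_words: int = 30, seed: int = 0) -> list:
    rng = random.Random(seed)
    words = ["the"]
    for _ in range(n_words):
        green = [w for w in VOCAB if is_green(words[-1], w)]
        words.append(rng.choice(green or VOCAB))  # pick a green word whenever possible
    return words

def green_fraction(words) -> float:
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

watermarked = generate()

rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(30)]  # no green preference

print(green_fraction(watermarked))  # close to 1.0 -> watermark detected
print(green_fraction(unmarked))     # close to 0.5 -> looks like ordinary text
```

Obviously paraphrasing the output would weaken such a signal, so I doubt it will be a silver bullet.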

It is clear that AI used on various fronts should not become a tool that makes us even lazier than we already are, but one that supplements the work we do and makes it more reliable, under a "copyleft of the human collective".

Another use, which I find quite pressing as well, comes into the spotlight with the new CRA (Cyber Resilience Act) legislation under review in the EU.

It is about getting your software audited and certified by qualified, certified third-party auditors before it is allowed into the EU market. Reading the article is quite an eye-opener, and when/if it comes to pass it will be a major disruptor in the software development life cycle. And no, it's not only intended for adoption by the large corporates. As with all laws there are tricky interpretations; one being "should the software development be done commercially, even via OSS sponsorship (GitHub, et al.), it will need to conform".

Again, this makes a lot of sense, no fighting that. We need it!

However, getting this done by humans will be next to impossible, so AI will have to stand in and get this sorted out too. So the same "authenticity" issues will come into play here. Do I have a valid and up-to-date CE-Soft or ISO/SOC certification? We did an update on the codebase last week. Are we still covered?

There is no way that it's only the EU considering steps like those proposed in the CRA. I'm 100% sure similar regulations are under consideration in every sane-thinking government out there. It's the logical next step. Companies in the US like HyperProof come to mind, but at what cost?

The list is long and gets longer as time passes, and the problems of safe and secure code on an international scale become more and more pronounced.

The solution clearly is to use the tech we are all in awe of, but still have many questions about, no doubt, and to treat it like humans.

I agree AI needs ID cards just like us, as "indelible and unchangeable" as our IDs are. And we need to be made aware of what work was done by them in an efficient, easy and transparent way.

It might be a good thing to spread the word. Soon lazy students will get nailed the same way when they commit "traditional plagiarism".

Getting software audited and certified does not fall within the realm of most developers; that's management's problem! Yet when you are a start-up of one or a few, you are "management", and you need to be aware of the waves coming at you. For they are there; ignoring them won't make them wash past you or your team.

Tim Reber

Hi Andre, Thank you for sharing your thoughts on the topic of AI authenticity and the potential for watermarking and certifications. I agree that it's important for AI to be held accountable for its actions and for users to be able to easily identify when AI is being utilized. The idea of auditing and certifying software before it enters the market is definitely something that should be considered, especially as technology becomes more integrated in our daily lives. I appreciate your insights on the potential for similar regulations in other countries and the impact it could have on the software development industry. It's definitely a topic worth spreading awareness about, especially for smaller companies and start-ups who may not have dedicated management teams.

Andre Du Plessis

Hi Tim, no problem at all. You provided the "coffee" for a very real, very active buzz going on in the AI world. I'm surprised to read that Google Search Central went on "Code Red" about getting their "stuff on" to catch up with MS. One would think that they of all players (being a significant leader in AI research) would simply be able to flick a switch and give OpenAI a run for their money regarding using NLP for search functionality.

Another issue crawling out of this is on the image generation side of "regulating" things. I just read in an MIT Tech Review spin-off called "The Algorithm", by Melissa Heikkilä, that Stable Diffusion is causing issues with artists' "styles" being used to create artwork based on their particular niches without their consent or any attribution given. Picasso can't be helped, the poor guy is no longer with us, but living artists are in serious trouble here, especially digital artists.

On the other side of the "freedom/moderation" coin: in China, Baidu's platform is already censored to suit the status quo. "The Tiananmen Square massacre never happened" kind of censorship. So there goes that. Next, these platforms will be fed what is considered "good for the people" based on the views of "the powers that be, striving to always be".

And with OSS models freely available to the public, the dark side is surely focusing its sights on rolling out automated anarchy.

Probably one of the main reasons Google has held back, some might argue, is not their lack of knowledge or skill, but a combination of getting nailed over privacy issues all over the EU, and a deeper insight into the risks of just letting rip, perhaps?

GitHub has released Copilot for Business, and the "battle dust" is rising higher with the lawsuits being nailed to more and more lampposts against them, MS, and OpenAI. One could make a sure bet that ChatGPT is going to get drawn into this battle sooner rather than later.

Again we are unleashing tech, wonderful tech I need to add, into the wild without thinking things through properly. When will humanity learn not to behave like a four-year-old playing with a box of matches in the middle of a dry grassland?

Time, and not a lot of it, will tell, I'm sure.

Jon Randy 🎖️ • Edited

There are quite a few GPT-specific detectors around, and they work pretty well. Just search for 'GPT detector'.

Tim Reber • Edited

So I tried openai-openai-detector.hf.space/ and got the same result: it says the text is real, because that detector was made for the GPT-2 model.
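For reference, the model behind that Space appears to be OpenAI's RoBERTa-based GPT-2 output detector, so (assuming the roberta-base-openai-detector checkpoint and the transformers library) you can reproduce the same verdict locally:

```python
# Sketch: run the GPT-2 output detector locally. It was trained on GPT-2
# samples, which is likely why ChatGPT text sails through as "Real".
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

text = "Paste the ChatGPT-generated thread here."
print(detector(text))  # e.g. [{'label': 'Real', 'score': 0.98}]
```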