Jason St-Cyr

Originally published at jasonstcyr.com

The Trust Factor: The Ethics of Crediting AI in Content Creation

With an unbelievable 30% of people now using generative AI tools daily for professional work, content is being generated at an astonishing rate. How can we trust the information being put in front of us?

Disclosing AI content might just be a first step. Readers may have noticed that I use a Credits section in many posts to provide proper attribution. Recently, however, I have also been using these credits for generated content, such as the following from my article on AI governance:

  • Content: Written and edited by Jacqueline Baxter and Jason St-Cyr using good ol' fashioned human writing.
  • Cover image: Generated by Jason St-Cyr using NightCafe
  • Title: Generated by Jason St-Cyr using ChatGPT

Why call out AI-generated content?

It is becoming harder to distinguish some generated content from fully authored and researched content. One might look at the structure of an article, with its distinct sub-headings and short paragraphs, and believe it must have been written by an AI. Arguably, if the words are exactly the same whether a human or an AI produced them, there is no loss in value to the reader. So why call it out?

I believe this is about building trust. According to Dentsu's Consumer Navigator Generative AI 2024 report, 81% of those surveyed want brands to "disclose to consumers if branded content was created with gen AI". By calling out when AI is used, we also call out when it is not used. Beyond trust, there are also possible implications to copyright. Christopher S. Penn stated "If you clearly disclose what content is AI-generated versus human-generated on your site, you reiterate your copyright over the human parts." Indicating an image is generated also indicates to the reader that the non-image content is not generated. This act of transparency builds up trust in your content. Right now, while people are learning to use AI for all sorts of summarizing and generating needs, there is still a natural distrust of information that is purely generated.

For this reason, I think we need to put specific notes on generated content so that the audience can distinguish what is a human-created artistic expression. Industry efforts are also introducing AI watermarking for automatic detection:

"For example, the system might be more likely to choose certain rare words or sequences of tokens that a human would be unlikely to replicate. The presence of these rare words and phrases would then function as a watermark. To an end user, the text output by the model would still appear randomly generated. However, someone with the cryptographic key could analyze the text to reveal the hidden watermark based on how often the encoded biases occur."

"AI Watermarking", Lev Craig, TechTarget

Unfortunately, the current AI watermarking technology has low accuracy rates and cannot yet be relied on.
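To make the quoted idea concrete, here is a deliberately simplified Python sketch of a keyed-bias watermark, loosely in the spirit of published "green list" schemes. Everything here is hypothetical: the function names, the 50-word toy vocabulary, and the stand-in "model" (which always picks a biased token, so detection is far cleaner than it would be for a real language model).

```python
import hashlib

def green_set(key: str, prev_token: str, vocab: list[str]) -> set[str]:
    # A keyed hash decides, per previous token, which half of the
    # vocabulary counts as "green" (watermark-favoured) tokens.
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((key + prev_token + t).encode()).hexdigest(),
    )
    return set(ranked[: len(vocab) // 2])

def green_fraction(key: str, tokens: list[str], vocab: list[str]) -> float:
    # Detector: with the key, count how often each token falls in the
    # green set seeded by its predecessor. Text written without the key
    # should hover near 0.5; watermarked text scores much higher.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_set(key, prev, vocab))
    return hits / len(pairs)

# "Generation": a toy model that always continues with a green token.
vocab = [f"w{i}" for i in range(50)]
key = "secret-watermark-key"
tokens = ["w0"]
for _ in range(30):
    tokens.append(sorted(green_set(key, tokens[-1], vocab))[0])

score = green_fraction(key, tokens, vocab)  # 1.0 for this fully biased toy model
```

Without the key, the output looks like ordinary text; with it, the statistical bias is unmistakable. Real systems bias the model only slightly, which is part of why detection accuracy is still poor.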

Some groups are coming at this from the other direction: marking that content is real, instead of looking for generated signals. The Content Authenticity Initiative is working on embedding information in digital content so that you can trace content back to the original authors and see historical information about changes to it. It's an interesting idea, but it falls over immediately for written content in plain-text formats: there is nowhere to embed the provenance metadata, and copying the text strips anything that travels alongside it.
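The provenance idea can be sketched in a few lines. This is not the actual C2PA/CAI format (which uses certificate-based signatures and manifests embedded in the media file); it is a minimal, hypothetical detached manifest that records an author and a content hash, then signs the record. It also illustrates the plain-text problem: the manifest has to travel separately from the text, and changing a single byte breaks the link.

```python
import hashlib
import hmac
import json

def make_manifest(content: bytes, author: str, signing_key: bytes) -> dict:
    # Record who authored the content plus a hash of its exact bytes,
    # then sign the record so later tampering is detectable.
    record = {"author": author, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content: bytes, manifest: dict, signing_key: bytes) -> bool:
    # Re-derive the signature and re-hash the content; both must match.
    record = {"author": manifest["author"], "sha256": manifest["sha256"]}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["sig"])
        and hashlib.sha256(content).hexdigest() == manifest["sha256"]
    )

article = b"The Trust Factor: The Ethics of Crediting AI in Content Creation"
manifest = make_manifest(article, "Jason St-Cyr", b"publisher-key")
```

Verification succeeds only for the exact original bytes and the right key, which is the whole point: authenticity is provable, but only if readers actually receive the manifest along with the text.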

For now, we likely have to trust authors to do this. That means establishing it as an accepted practice amongst creators and brand teams.

What about AI-assisted content?

There are extremely valid use cases for using generative AI tools and other AI assistants to help in the writing process:

  • Ideation to get the creative ideas flowing
  • Translating from a foreign language
  • Copy-editor analysis of content for grammar and spelling
  • Suggestions for alternative phrasings or synonyms
  • Structural composition to provide frameworks/templates to the author for a type of content
  • Language assistance for writers who are writing in a language they are less comfortable in
  • (more of course... but that's enough bullets🤣)

In these cases, a human is still involved in the writing process. The tools here are just that: tools. The same as reading a book, or searching the web for relevant articles, or finding an example template from a colleague. In this scenario, do we need to notify folks that we have used a tool to come to our final product?

I think this is where we enter a really gray area. There is no attempt on the part of the author to be misleading or create disinformation. They are involved in the process and approving what is going out. They are manipulating the tools to achieve the result they want. How much manipulation is too much? At what point do we claim that the human was no longer really involved here?

I don't have an answer for this, but I know that for my own use case I'm using the following questions as my guide:

  1. Would it have required a prohibitive amount of additional effort to do this myself?
  2. Is the output distinctly different from what I would have wanted to create in my natural writing style?
  3. Did AI suggestions influence more than 10% of the final output?

If I answer yes to any of these, then I place this in the realm of AI-assisted content that I need to credit. For example, for this article here is how those questions went:

  1. Prohibitive amount of effort: No. I used Copilot and the WordPress Jetpack assistant to analyze the post after I had written it, looking for suggested improvements. This was work I could have done myself, but I used the tools for a quick first pass and then reviewed the results myself.
  2. Distinct output: Mostly. As I wrote all of the content and implemented the suggested changes, the output of the main content is completely in my style. The title, however, was arranged by me based on suggestions from ChatGPT. I purposefully tried to put it into my own style, but it feels a little less like a title I would write.
  3. Content influenced greater than 10%: No and yes. For the main content, this was clearly my writing. However, the title was greatly influenced by ChatGPT options.

As a result, I decided to attribute the Title as an AI-assisted piece of content in the credits.
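The rubric above boils down to an any-yes rule, which can be stated as a one-line predicate. This is just my personal heuristic expressed as code, not a standard; the function name and arguments are my own invention.

```python
def needs_ai_credit(
    prohibitive_effort: bool,       # Q1: would doing it myself have been prohibitive?
    distinct_output: bool,          # Q2: is the output distinct from my natural style?
    influenced_over_10_percent: bool,  # Q3: did AI influence >10% of the final output?
) -> bool:
    # Credit the piece as AI-assisted if ANY of the three answers is yes.
    return prohibitive_effort or distinct_output or influenced_over_10_percent

# This article's title: distinct output and >10% influence were both (mostly) yes.
title_needs_credit = needs_ai_credit(False, True, True)
# This article's main content: all three answers were no.
body_needs_credit = needs_ai_credit(False, False, False)
```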

Does it even matter?

Okay, so after all that, the real question: if people can get the information they need from the content, does it really matter if it was generated or created by a person?

If the value of the content is measured by how much it helps the reader accomplish their goals, then either way of getting to that content is equally valuable, assuming they both are able to help the same way. That doesn't mean it is trusted.

Ultimately, I think this is where I come down on the need for transparency around AI usage. We need to establish trust in our content, our brands, and ourselves as people. Disinformation is so prevalent that there is a huge need for trusted creators. At this moment, generated content is eroding trust.

"The line between human-generated and AI-generated content is blurring, making it harder to distinguish between the two. This lack of transparency around the syntheticity threatens trust and the integrity of information ecosystems, emphasizing the need for clear disclosure and mitigation strategies for synthetic content becomes more paramount."

"In Transparency We Trust?", Ramak Molavi Vasse'i and Gabriel Udoh, Mozilla research

While the Mozilla research has concluded that user-facing declarations aren't effective enough on their own, I believe it's a first step that we can take while the technology to back it up with digital authenticity mechanisms matures. Those of us trying to create content need to establish the trust that we will disclose when something is authentic and when it is generated.

Credits

  • Cover image: Generated by Jason St-Cyr using NightCafe
  • Title: Assembled by Jason St-Cyr, AI-assisted with options from ChatGPT.
