AI tools now allow users to quickly generate images and written content, revolutionizing the creative process. However, the rapid pace of innovation in generative AI also brings risks, such as fake news and deep fakes. Recognizing these threats, organizations can use AI itself to identify and mitigate misinformation, while human review remains essential for ensuring accuracy and improving generative AI systems.

It was the year of generative AI. 2022 gave us DALL-E, Midjourney, and ChatGPT: powerful tools that put the combined power of a search engine, Wikipedia, and a top-notch content generator at our fingertips.

Tools like Bard, Adobe Firefly, and Bing AI quickly followed, rapidly expanding the abilities of your average internet user beyond anything we could’ve imagined just a few years ago. With a couple of simple keystrokes, we can now generate captivating images or pages of written content that, this time last year, would’ve taken hours, days, or weeks to produce—even for illustrators or writers with years of training.

Indeed, generative AI is changing the landscape beneath our feet—while we’re standing on it. But this pace of innovation comes with risks; namely, of losing our footing and letting algorithms override human discernment. As a recent article in the Harvard Business Review highlighted, the creation of fake news and so-called deep fakes poses a major challenge for businesses—and even entire countries—in 2023 and beyond.

Fortunately, innovation in AI is not just producing results for content generation. It’s also a tool that, when coupled with good, old-fashioned human instinct, can be used to resolve problems in the systems themselves. But before examining these strategies in more detail, it’s important we understand the real-world threats posed by AI-generated misinformation.


Recognizing the threats

The potential threats of AI-generated content are many, from reputational damage to political manipulation. I recently read in The Guardian that the paper's editors received inquiries from readers about articles that were not showing up in its online archives. These were articles that reporters themselves couldn't even recall writing. It turns out, they were never written at all. ChatGPT, when prompted by users for information on particular topics, referenced Guardian articles in its output that were completely made up.

If errors or oversights baked into AI models themselves weren’t concerning enough, there’s also the possibility of intentional misuse to contend with. A recent Associated Press report identified several risk factors of generative AI use by humans ahead of the 2024 US presidential election.

The report raised the specter of convincing yet illegitimate campaign emails, texts, or videos, all generated by AI, which could in turn mislead voters or sow political conflict.

But the threats posed by generative AI aren't only big-picture. Potential problems could spring up right on your doorstep. Organizations that rely too heavily and uncritically on generative AI to meet content production needs could unwittingly spread misinformation and damage their reputations.

Generative AI models are trained on vast amounts of data, and data can be outdated. Data can be incomplete. Data can even be flat-out wrong: generative AI models have shown a marked tendency to “hallucinate” in these scenarios—that is, confidently assert a falsehood as true.

Since the data and information that AI models train on are typically created by humans, who have their own limitations and biases, AI output can be correspondingly limited and biased. In this sense, AI trained on outdated attitudes and perceptions could perpetuate certain harmful stereotypes, especially when presented as objective fact—as AI-generated content so often is.

AI vs. AI

Fortunately, organizations that use generative AI are not prisoners to these risks. There are a number of tools at their disposal to identify and mitigate issues of bad information in AI-generated content. And one of the best tools for this is AI itself.

These processes can even be fun. One method in particular, known as “adversarial training,” essentially gamifies fact-checking by pitting two AI models against each other in a contest of wits.

During this process, one model is trained to generate content, while the second model is trained to analyze that content for accuracy, flagging anything erroneous. The second model’s fact-checking reports are then fed back into the first, which corrects its output based on those findings.

We can even juice the power of these fact-checker models by integrating them with third-party sources of knowledge: the Oxford English Dictionary, Encyclopedia Britannica, newspapers of record, or university libraries. These adversarial training systems have developed palates sophisticated enough to differentiate between fact, fiction, and hyperbole.

Here’s where it gets interesting: The first model, or the “generative” model, learns to outsmart the fact-checker, or “discriminative” model, by producing content that is increasingly difficult for the discriminative model to flag as wrong. The result? Steadily more accurate and reliable generative AI outputs over time.
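As a highly simplified, illustrative sketch of that feedback cycle (the class names, the toy "knowledge base," and the learning rule are all invented for this example, not a real training pipeline), the generative model produces claims, the fact-checker flags the ones it can't verify, and the flag rate is fed back to nudge the generator toward accuracy:

```python
import random

random.seed(0)  # make the toy run reproducible

class Generator:
    """Toy 'generative' model: emits a claim, sometimes a false one."""
    def __init__(self):
        self.error_rate = 0.5  # starts out wrong half the time

    def produce(self, fact, falsehood):
        return falsehood if random.random() < self.error_rate else fact

    def learn_from(self, flagged_fraction):
        # Feedback loop: the more its output gets flagged,
        # the further the generator shifts toward accurate claims.
        self.error_rate = max(0.0, self.error_rate - 0.1 * flagged_fraction)

class FactChecker:
    """Toy 'discriminative' model: checks claims against known facts."""
    def __init__(self, knowledge_base):
        self.knowledge_base = knowledge_base

    def flag(self, claim):
        return claim not in self.knowledge_base  # True = flagged as wrong

knowledge_base = {"Paris is the capital of France"}
gen = Generator()
checker = FactChecker(knowledge_base)

for _ in range(20):  # twenty rounds of adversarial feedback
    claims = [gen.produce("Paris is the capital of France",
                          "Lyon is the capital of France")
              for _ in range(100)]
    flagged_fraction = sum(checker.flag(c) for c in claims) / len(claims)
    gen.learn_from(flagged_fraction)

print(f"final error rate: {gen.error_rate:.2f}")
```

In a real system, both sides would be trained neural models and the "knowledge base" would be the external reference sources described above; the point of the sketch is only the shape of the loop, in which the fact-checker's reports steadily drive the generator's error rate down.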

Adding a human element

Although AI can be used to fact-check itself, this doesn’t make the process hands-off for all humans involved. Far from it. A layer of human review not only ensures the delivery of accurate, complete and up-to-date information, it can actually make generative AI systems better at what they do. Just as it tries to outsmart its discriminative nemesis, a generative model can learn from human corrections to improve future results.
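As a minimal sketch of that human-in-the-loop step (the record format, field names, and sample data here are all hypothetical), reviewer corrections can be folded back in as preferred training targets for the next fine-tuning pass:

```python
# Outputs queued for human review (hypothetical format).
review_queue = [
    {"prompt": "Capital of Australia?",
     "model_output": "Sydney"},
    {"prompt": "Boiling point of water at sea level?",
     "model_output": "100 degrees Celsius"},
]

# Corrections supplied by human reviewers, keyed by prompt.
human_corrections = {"Capital of Australia?": "Canberra"}

fine_tune_examples = []
for item in review_queue:
    # Prefer the human-approved answer when one exists.
    corrected = human_corrections.get(item["prompt"], item["model_output"])
    fine_tune_examples.append({
        "prompt": item["prompt"],
        "target": corrected,
        "was_corrected": corrected != item["model_output"],
    })

print(fine_tune_examples)
```

Each corrected pair becomes a training example in which the human answer, not the model's original output, is the target, which is how the model "learns from human corrections to improve future results."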

What’s more, internal strategies like this can then be shared between organizations to establish industry-wide standards and even a set of ethics for generative AI use. Organizations should further collaborate with other stakeholders, too—including researchers, industry experts and policymakers—to share insights, research findings and best practices.

One such best practice involves data collection efforts that prioritize quality and diversity. This means careful selection and verification of data sources by human experts before they're fed into models, taking into consideration not just real-time accuracy but representativeness, historical context, and relevance.
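As one illustration of such vetting (the source records, fields, and acceptance criteria are invented for this example), the checks can be expressed as a simple filter run before any source enters the training pipeline:

```python
from datetime import date

# Hypothetical source records awaiting human verification.
sources = [
    {"name": "peer_reviewed_journal", "verified": True,
     "last_updated": date(2023, 4, 1), "region": "EU"},
    {"name": "anonymous_forum", "verified": False,
     "last_updated": date(2023, 5, 2), "region": "US"},
    {"name": "national_archive", "verified": True,
     "last_updated": date(2015, 1, 1), "region": "APAC"},
]

RECENCY_CUTOFF = date(2020, 1, 1)

def accept(src):
    # Keep only sources a human expert has verified and that are
    # recent enough to reflect current facts.
    return src["verified"] and src["last_updated"] >= RECENCY_CUTOFF

accepted = [s for s in sources if accept(s)]
regions = {s["region"] for s in accepted}

# Representativeness check: warn if the surviving data skews narrow.
if len(regions) < 2:
    print("warning: accepted sources lack regional diversity")

print([s["name"] for s in accepted], regions)
```

A production pipeline would check far more than two fields, but the shape is the same: accuracy and recency gate what gets in, while a separate diversity check guards against a dataset that is accurate yet unrepresentative.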

All of us with stakes in making better generative AI products should likewise commit to promoting transparency industry-wide.

AI systems are increasingly used in critical fields, like health care, finance and even the justice system. When AI models are involved in decisions that impact peoples’ real lives, it’s essential that all stakeholders understand how such a decision was made and how to spot inconsistencies or inaccuracies that could have major consequences.

There could be consequences of misuse or ethical breaches for the AI user, too. A New York lawyer landed himself in hot water earlier this year after filing a ChatGPT-generated brief in court that reportedly cited no fewer than six entirely made-up cases. He now faces possible sanctions and could lose his law license altogether.


