The Eiffel Tower is on fire? At least that is what a picture circulating on social media in January showed.
But wait. Before sharing that post, see if any major news outlets have reported it. The first step in verification is “click restraint.” Stop, and do what journalists do: verify.
As it turned out, no major newspaper or radio station reported that fire. The photo was created by generative AI (artificial intelligence). In past columns, I have covered how images can be manipulated or used out of context to create misinformation. However, with the rapid development of generative AI tools, it is now quite easy to create very realistic images with a simple prompt.
So how can you keep from getting fooled by AI-generated images and videos? As shown above, the first step is to pause and consider that an image or video (including audio, such as voices) may be fabricated by AI. If you see a video of a famous person saying or doing something very unlikely, you may be looking at a “deepfake” video. If you see an image that shocks or confuses you, it’s time to take a breath and do some “lateral reading.” Search news sources to see if you can find the same image, video or related news.
With all that we encounter online, we can sometimes begin to mistrust all news sources. However, reputable news organizations (including our beloved Moscow-Pullman Daily News) follow journalistic standards. These include procedures like verifying with multiple credible sources and publishing corrections. Search major or established news sources that follow these standards for your fact check.
Current AI tools have some telltale limitations that can help you question an image or video’s authenticity. AI tools still may have problems generating hands, fingers, mouths and lips. Does a person featured have too many or too few fingers? Do the shadows, light or the background seem odd or out of place? In videos, pay attention to the movement, size or color of the mouth. Is there anything odd about the blinking, eyes, glare of glasses, eyebrows or facial hair?
However, these tools are improving, so you should also fact check images by doing a reverse image search. Use your browser’s image search to search just the image, looking for other occurrences or fact-checking sites. There may be clues about whether it was AI-generated in the metadata (e.g., title, description, tags, creation date, technical details), or it may have a watermark or logo from an AI generation site. Some Google Images results include an “About this image” link to help you learn the origin.
Other common generative AI tools are large language models (LLMs), like ChatGPT or Gemini. You can ask an LLM a question (called a prompt) and it will scour much, though not all, of the internet and give you a nicely written answer, with citations if requested. While these tools seem very smart, they are only using probability to create their answers. They are predicting, and sometimes they get it wrong, though they may be convincing. At libraries, we now receive more questions asking for help finding articles from citations produced by an LLM. Quite a few of the references do not exist. Generative AI tools may fill a vacuum of information with something that they predict could fit the need but may be fictitious.

In addition, like much of the information we encounter on the internet, LLMs are created by humans, and they scour information on the internet made by humans, so they may pass along social biases and injustices while making the information look objective. They may leave out less prevalent perspectives. So again, when you get an answer or citations from an AI tool, stop and double check by searching elsewhere.
Full disclosure: I used Google’s Gemini AI tool to research parts of this article. Gemini was happy to list how to identify AI created images. However, I cross-checked the information with other sources.
And, of course, your local librarians are the original fact checkers. We are glad to help you sort fact from fiction, and real intelligence from artificial intelligence.
Prorak is the reference and instruction librarian at the University of Idaho.