From the horse’s mouth
Increasingly, images and video are seen as integral to news coverage, with most of us treating them, especially photographs, as important corroboration for a fact or news story.
But this material is being weaponised to tell a different truth from what the camera actually captured. One approach is to reuse the same or a similar image to corroborate a different fact; another is to doctor the image with image-editing tools so it tells a different story.
I have covered this previously when talking about the use of reverse-image-search tools like TinEye or Google Image Search to verify the authenticity of an image. The new feature is the same kind Google has enabled in its search interface when you "google" for something, and in its news-aggregation platforms.
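As a rough illustration of how reverse-image-search tools can match the same or a similar image, here is a minimal sketch of an "average hash": the picture is reduced to a short bit fingerprint, and near-identical pictures yield fingerprints that differ in only a few bits. The images here are modelled as 8x8 grids of grayscale values so the example runs without any imaging library; real services use far more sophisticated fingerprints.

```python
# Sketch of perceptual hashing, the idea behind matching "the same or a
# similar image". Images are toy 8x8 grayscale grids (values 0-255), not
# real photo files, so this runs with the standard library alone.

def average_hash(pixels):
    """Build a 64-bit fingerprint: 1 where a pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy "photo", a lightly re-compressed copy (values nudged up slightly),
# and an unrelated image (the tonal gradient reversed).
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
recompressed = [[min(255, v + 3) for v in row] for row in original]
unrelated = [[(255 - (r * 8 + c) * 4) % 256 for c in range(8)] for r in range(8)]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # the nudged copy stays far closer than the unrelated image
```

Because the hash only keeps which pixels sit above the average brightness, small recompression artefacts leave the fingerprint almost untouched, which is why these tools can find a reused image even after it has been resized or resaved.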
Google is taking this further for people searching for images with its search tools. It is adding images to its fact-check processes so it is easy to see whether an image has been used to corroborate questionable information. A "fact check" indicator appears near the image thumbnail, and when you click or tap through to the larger view or more details, you will see details about the image's veracity.
A similar feature appears on YouTube, showing details about the veracity of video content posted there. At the moment, though, it is only available to users based in Brazil, India and the USA, and I am not sure whether it will come to all YouTube user interfaces, especially the native clients for mobile and set-top platforms.
This is in addition to Alphabet, Google's parent company, offering a free tool to check whether an image has been doctored, since meddling with an image in software like Adobe Photoshop or GIMP is increasingly used to convey a message that isn't true. The tool, called Assembler, uses artificial intelligence and detectors tuned to particular forms of image manipulation to assess an image's veracity.
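Assembler's actual detectors aren't something I can show here, but one textbook manipulation check that tools in this space draw on is copy-move detection: cloning a region of a photo to hide or duplicate something leaves identical pixel blocks in two places, which can be found by fingerprinting every block and looking for repeats. The sketch below demonstrates the idea on a toy pixel grid; it is an illustration of the general technique, not Assembler's algorithm.

```python
# Hedged sketch of copy-move forgery detection: fingerprint every small
# block of the image and flag blocks that appear in more than one place.
# The "image" is a toy 6x6 grid so the example needs no imaging library.
from collections import defaultdict

def find_duplicate_blocks(pixels, block=2):
    """Return groups of (row, col) positions whose pixel blocks are identical."""
    seen = defaultdict(list)
    rows, cols = len(pixels), len(pixels[0])
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            # Fingerprint the block by its raw contents (a tuple is hashable).
            key = tuple(tuple(pixels[r + i][c + j] for j in range(block))
                        for i in range(block))
            seen[key].append((r, c))
    return [locs for locs in seen.values() if len(locs) > 1]

# A toy image with all-distinct pixels, then the 2x2 block at (0, 0)
# "cloned" onto (4, 4), as a forger might clone a patch of sky or grass.
img = [[r * 10 + c for c in range(6)] for r in range(6)]
img[4][4], img[4][5] = img[0][0], img[0][1]
img[5][4], img[5][5] = img[1][0], img[1][1]

print(find_duplicate_blocks(img))  # → [[(0, 0), (4, 4)]]
```

Real detectors work on compressed, noisy photos, so they match blocks approximately (and also look for compression and lighting inconsistencies) rather than demanding exact equality, but the underlying idea is the same.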
But I also expect the rise of tools that analyse audio and video material to identify deepfake activity, along with video sites, podcast directories and the like using a range of tools to establish the authenticity of the content they host. This may include "fact check" labels with facts verified by multiple newsrooms and universities, or content checked for out-of-the-ordinary editing techniques. It can also include these sites and directories implementing a feedback loop so users can have questionable content verified.