As Wired points out, the initiative could one day help social media networks like Twitter and Facebook bolster the automated systems they already use to flag misleading images. Say a tragedy happens and people start sharing photos from the scene; the technology could help those systems stop the spread of images falsely claimed to be from that event.
However, the system will only be as effective as the number of companies and organizations that adopt it. To make a dent in the flood of misleading images shared online, camera manufacturers, software developers, social media networks and media outlets will all need to embrace the standard. At the moment, it's hard to say whether that will happen.
Limiting Photoshop’s ability to spread misinformation is something Adobe has been thinking about for a while. In 2019, the company worked with researchers from UC Berkeley to train a machine learning algorithm to spot images edited with the software’s Face Aware Liquify feature, a tool you can use to change and exaggerate a person’s facial features. The difference here is that publishers could use the company’s tagging system to spot a wide variety of fake images, not just those created with a single tool.