Google Photos to Introduce AI Image Attribution Feature to Combat Deepfakes
Future updates could also contain special tags, like "ai_info" and "digital_source_type"
Google Photos is working on a new feature aimed at helping users identify whether an image has been created or altered using artificial intelligence (AI). The update comes in response to growing concern over deepfakes: images, videos, or audio clips manipulated by AI to deceive viewers or spread false information.
The feature, discovered in version 7.3 of the Google Photos app, appears in hidden code indicating that future updates could contain special tags, such as "ai_info" and "digital_source_type." These tags would reveal whether an image was generated or edited by AI, along with details about the specific AI model used. For example, they could show whether an image was altered with Google's own AI tools or with third-party ones such as Midjourney.
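It is worth noting that "digital_source_type" echoes the existing IPTC Digital Source Type property, whose value "trainedAlgorithmicMedia" is already used across the industry to mark AI-generated media. As a purely illustrative sketch, the Python snippet below shows how an app might scan an image's embedded XMP metadata for such a marker; the file name, the helper functions, and the assumption that Google would reuse the IPTC vocabulary are all hypothetical, since Google has not documented how the tags will be stored.

```python
# Hypothetical sketch: inspect an image's embedded XMP packet for an
# AI-attribution marker. The tag names "ai_info" and "digital_source_type"
# come from strings found in the Google Photos app; how Google will
# actually store their values is not yet known, so this is illustrative only.
import re

# IPTC's existing vocabulary term for AI-generated media; Google's
# values may differ (assumption).
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def read_xmp_packet(path: str) -> str | None:
    """Return the raw XMP packet embedded in the file, if any."""
    with open(path, "rb") as f:
        data = f.read()
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    return match.group(0).decode("utf-8", errors="replace") if match else None

def looks_ai_generated(path: str) -> bool:
    """Heuristic: does the XMP metadata carry the AI digital-source-type?"""
    xmp = read_xmp_packet(path)
    return xmp is not None and TRAINED_ALGORITHMIC_MEDIA in xmp

if __name__ == "__main__":
    print(looks_ai_generated("photo.jpg"))  # "photo.jpg" is a placeholder
```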
Deepfakes have become a significant issue, with increasingly sophisticated AI tools making it easier to create manipulated media. These deepfakes can spread misinformation, damage reputations, and mislead the public.
While it is still unclear exactly how Google will present this information to users, there are a couple of possibilities. The AI attribution details could be embedded in the image's metadata using EXIF tags, a standard already supported by most common image formats. That approach would keep the attribution data bundled with the image file itself, though EXIF data can be stripped or edited by other software. Another possibility is on-image badges or labels, similar to Instagram's AI labels, that instantly tell users whether an image has been altered by AI.
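To illustrate the EXIF route, the short Python sketch below uses the Pillow library to read an image's EXIF block. Standard EXIF defines no AI-specific field today, so checking the generic "Software" tag here is a stand-in assumption, not Google's documented mechanism.

```python
# Minimal sketch: dump EXIF tags with Pillow and look for AI-related
# hints. Standard EXIF has no AI field yet, so inspecting the generic
# "Software" tag below is a placeholder assumption.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return the image's EXIF data as a {tag_name: value} dict."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = dump_exif("photo.jpg")  # "photo.jpg" is a placeholder path
# A generator might write its name into "Software" (assumption).
print(tags.get("Software"))
```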
Though Google hasn't yet announced when this feature will go live, its presence in the app's internal code suggests it could be released soon.