Google is introducing “AI info” labels for AI-edited images in Google Photos. Starting next week, Google Photos will clearly indicate when an image has been edited using generative AI tools like Magic Editor, Magic Eraser, and Zoom Enhance.
This information will be visible in the image details section of the Google Photos app, providing users with clearer insight into how their photos have been edited.
We’ll see AI info labels on Google Photos

The new labeling feature will display an “AI info” section in the image details view, both in the app and on the web. This will sit alongside existing information like file name, location, and backup status. Until now, the metadata indicating AI editing was largely invisible to users, but Google is making it accessible to provide more clarity. The decision to make this information available is part of a broader effort to ensure that users understand when and how AI has been used in their photos.
The metadata will specify which tools were used to edit the image. For example, if Magic Eraser was used to remove an object from the background, or if Magic Editor was used to enhance certain elements, these details will be included in the “AI info” section. This helps users understand the extent of AI involvement in modifying the photo, which can be particularly relevant when sharing images with others or for professional purposes.
In addition to generative AI edits, Google Photos will also label images that include elements from multiple photos, such as those made using the Pixel’s Best Take or Add Me features. These features allow users to create composite images by selecting the best expressions or poses from several shots.
Best Take, for example, lets users choose the most flattering expressions from a series of group photos, while Add Me allows users to insert themselves into a photo where they were initially absent.
Google acknowledges that the system isn’t foolproof. Users with technical knowledge can still remove or alter this metadata if they choose. Metadata can be edited or stripped from an image using various software tools, meaning that those who intend to conceal AI edits may still find ways to do so.
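To see why the labels aren’t tamper-proof, consider how this kind of metadata is stored: in a JPEG file, Exif and XMP metadata typically live in APP1 marker segments, and anyone who walks the file’s marker structure can simply drop those segments. The sketch below illustrates the idea with a synthetic byte stream; the function name and the fake “AI edit” payload are illustrative inventions, not Google’s actual tagging scheme.

```python
import struct

def strip_metadata_segments(jpeg: bytes) -> bytes:
    """Remove APP1 segments (where Exif/XMP metadata lives) from a JPEG stream."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Entropy-coded image data; copy the rest verbatim and stop parsing.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xD9:  # EOI: end of image
            out += jpeg[i:i + 2]
            break
        # Each remaining segment carries a big-endian length (includes the 2 length bytes).
        length = struct.unpack(">H", jpeg[i + 2:i + 4])[0]
        segment = jpeg[i:i + 2 + length]
        if marker != 0xE1:  # keep every segment except APP1 (Exif/XMP metadata)
            out += segment
        i += 2 + length
    return bytes(out)

# Synthetic "JPEG": SOI + one APP1 segment carrying a fake Exif payload + EOI.
exif_payload = b"Exif\x00\x00fake-ai-edit-tag"
app1 = b"\xff\xe1" + struct.pack(">H", len(exif_payload) + 2) + exif_payload
sample = b"\xff\xd8" + app1 + b"\xff\xd9"

stripped = strip_metadata_segments(sample)
```

After stripping, the file still opens as an image, but the editing provenance is gone, which is exactly the loophole Google concedes. More durable approaches (discussed below) move the signal into the pixels themselves or into cryptographically signed manifests.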
John Fisher, engineering director at Google Photos, mentioned in a blog post that the company is still working on enhancing these transparency features and plans to gather feedback to improve them over time.
Google recognizes that transparency around AI-generated content is an evolving issue, and it is committed to exploring additional measures to help users identify when AI has been used in photo editing. This could include more robust metadata standards, watermarks, or other forms of labeling that are harder to remove.
The growing prevalence of AI-edited photos has led to broader discussions about the role of technology in shaping what we see. Other companies have approached this issue in different ways. For instance, Apple has taken a more cautious stance regarding generative AI in photo editing. With its upcoming iOS 18.2 release, Apple plans to avoid adding realistic AI-generated elements to images, aiming to prevent potential confusion about the accuracy of what people see.
Apple’s senior vice president Craig Federighi has expressed concern about AI-generated content blurring the line between what is real and what is artificially created.
Featured image credit: Google