In a series of Threads posts this afternoon, Instagram head Adam Mosseri said users shouldn't take images they see online at face value, because AI "clearly generates" content that can easily be mistaken for reality. Users should therefore consider the source, he said, and social platforms should help them do so.
"Our role as internet platforms is to do our best to label the content generated as AI," Mosseri wrote. But he acknowledged that those labels will miss "some content," which is why platforms "must also provide context about who is sharing" so that users can decide for themselves how much to trust it.
Just as it helps to remember that chatbots will confidently lie to you before you trust an AI-powered search engine, checking whether a claim or image comes from a reputable account can help you weigh its authenticity. At the moment, Meta's platforms don't offer much of the context Mosseri described today, though the company recently hinted that big changes to its content rules are coming.
What Mosseri describes sounds closer to user-led moderation, such as community notes on X and YouTube or Bluesky's custom moderation filters. It's unclear whether Meta plans to roll out something similar, but then again, it has been known to take a page from Bluesky's book.