Meta will label photographs on Facebook, Instagram, and Threads when it detects industry-standard signals that they are AI-generated. The company is collaborating with industry partners on common techniques for recognising AI material, including audio and video. It already applies the label “Imagined with AI” to photorealistic images created with Meta AI. Users value transparency and want to know when content is AI-generated, especially as the line between human and synthetic content blurs.
Industry partners are working on common technical standards for identifying AI-generated content so that it can be labelled on platforms such as Facebook, Instagram, and Threads. The aim is to learn how people create and share AI content, and what transparency they expect, in order to guide future approaches and industry best practice. The labels will be applied in every language that each app supports.
A New Method for Recognising and Categorising Content Created by AI
Images created with Meta AI carry visible markers and invisible watermarks, which makes the marks more robust and helps other platforms identify them. The company is building tools to detect such invisible markers at scale, so that AI-generated content from companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock can be labelled as well. Meta is also adding a feature that lets people disclose when they share AI-generated video or audio, and it may apply penalties to those who fail to do so. In parallel, it is working on classifiers that automatically detect AI-generated content and on an invisible watermarking technology called Stable Signature. As AI-generated content becomes more common, debate over how to distinguish synthetic from non-synthetic content will only intensify.
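The labelling logic described above combines several independent signals: an invisible watermark, industry-standard metadata, and a user's own disclosure. A minimal sketch of that decision, with hypothetical signal names (`ContentSignals`, `choose_label` are illustrative, not Meta's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentSignals:
    """Hypothetical provenance signals a platform might read from uploaded media."""
    has_invisible_watermark: bool  # e.g. a Stable-Signature-style mark was detected
    metadata_marks_ai: bool        # industry-standard metadata flags AI generation
    user_disclosed_ai: bool        # uploader ticked an "AI-generated" disclosure

def choose_label(signals: ContentSignals) -> Optional[str]:
    """Return an informational label, or None if no AI signal is present.

    Any single signal is enough: because the signals are independent,
    a watermark can still trigger the label after metadata is stripped.
    """
    if (signals.has_invisible_watermark
            or signals.metadata_marks_ai
            or signals.user_disclosed_ai):
        return "Imagined with AI"
    return None

# Watermark detected even though metadata was stripped: still labelled.
print(choose_label(ContentSignals(True, False, False)))   # Imagined with AI
print(choose_label(ContentSignals(False, False, False)))  # None
```

Layering the signals this way is what makes the scheme robust: cropping or re-encoding may destroy metadata, but an invisible watermark embedded in the pixels can survive.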
AI Can Be Used as a Shield or a Sword
By using AI to identify and remove harmful content, Facebook has driven the prevalence of hate speech down to 0.01–0.02% of viewed content (as of Q3 2023). Generative AI techniques, so far only lightly used for enforcement, are expected to help remove violating content faster and more accurately. Meta is testing large language models (LLMs) to detect policy violations and to remove content from review queues when the models are highly confident it does not violate policy. AI-generated content also remains subject to fact-checking by independent parties, who can attach accurate information to it.
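The review-queue idea above amounts to confidence-gated triage: act automatically only at the extremes and send everything uncertain to a human. A minimal sketch, with an assumed violation score in place of a real LLM call and thresholds chosen purely for illustration:

```python
from typing import Dict, List, Tuple

REMOVE_THRESHOLD = 0.95  # assumed: near-certain violation -> auto-remove
CLEAR_THRESHOLD = 0.05   # assumed: near-certain benign -> drop from queue

def triage(items: List[Tuple[str, float]]) -> Dict[str, List[str]]:
    """Route (content_id, violation_score) pairs into three buckets.

    The score would come from an LLM or classifier; here it is an input.
    """
    buckets: Dict[str, List[str]] = {"remove": [], "human_review": [], "clear": []}
    for content_id, score in items:
        if score >= REMOVE_THRESHOLD:
            buckets["remove"].append(content_id)        # high-confidence violation
        elif score <= CLEAR_THRESHOLD:
            buckets["clear"].append(content_id)         # high-confidence benign
        else:
            buckets["human_review"].append(content_id)  # uncertain -> reviewer
    return buckets

print(triage([("a", 0.99), ("b", 0.50), ("c", 0.01)]))
```

The wide middle band is the point of the design: only content the model is nearly certain about is handled automatically, which shrinks reviewer queues without ceding borderline calls to the machine.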
Meta, an AI pioneer, believes that advancement and accountability are not mutually exclusive. With an open mind about the boundaries of what is feasible, it wants to help people recognise when photorealistic images are produced with AI. It will cooperate with others to create shared guidelines and standards, and keep learning from how users engage with these tools.