27/06/2023
As AI tools continue to proliferate, more questions are being raised about the risks they pose, and about what regulatory measures can be implemented to protect people from copyright violation, misinformation, and defamation.
The ideal first step would be government regulation, but that requires global cooperation which, as we've seen with past digital media applications, is difficult to establish given the varying approaches to, and opinions on, the responsibilities and actions required. More likely, it will come down to smaller industry groups, and individual companies, to implement control measures and rules to mitigate the risks associated with generative AI tools.
Two of the world’s largest technology companies, Meta and Microsoft, have joined a new framework for responsible AI use. The Partnership on AI (PAI) Responsible Practices for Synthetic Media initiative aims to establish industry agreement on responsible practices in the development, creation, and sharing of media created via generative AI.
Generative AI is a type of artificial intelligence that can create new content, such as images, text, and audio, from scratch. This technology has the potential to be used for a variety of positive purposes, such as creating new forms of art and entertainment or generating realistic training data for AI models.
However, it also has the potential to be used for malicious purposes, such as creating deepfakes or other forms of disinformation. The PAI Responsible Practices for Synthetic Media framework aims to address these risks by setting out principles for the responsible development and use of synthetic media.
These principles include:
- Transparency: Developers and users of synthetic media should be transparent about how it is created and used.
- Accuracy: Synthetic media should be accurate and not misleading.
- Non-discrimination: Synthetic media should not be used to discriminate against individuals or groups.
- Privacy: Developers and users of synthetic media should respect the privacy of individuals.
- Accountability: Developers and users of synthetic media should be accountable for the impact of their work.
According to PAI:
“The first-of-its-kind Framework was launched in February by PAI and backed by an inaugural cohort of launch partners including Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and synthetic media startups Synthesia, D-ID, and Respeecher. Framework partners will gather later this month at PAI’s 2023 Partner Forum to discuss the implementation of the Framework through case studies and to create additional practical recommendations for the field of AI and Media Integrity.”
Recently, Republican Senator Josh Hawley and Democrat Senator Richard Blumenthal introduced new legislation that would remove Section 230 protections from social media companies that facilitate the sharing of AI-generated content, meaning the platforms themselves could be held liable for spreading harmful material created via AI tools.
“Meta and Microsoft reach billions of people daily with creative content that is rapidly evolving,” said Claire Leibowicz, Head of AI and Media Integrity at PAI. “These companies have both the expertise and the access needed to reach users all around the world and help them learn to discern AI-generated images, video, and other media as synthetic media’s prevalence grows. Their support of the Framework underscores a continued interest in designing interventions to minimize misinformation, ensure that users are informed about the content they’re seeing, and allow for creative expression to flourish.”
“Meta is excited to join the cohort of supporters of Partnership on AI’s Responsible Practices for Synthetic Media and to work with PAI on developing this into a nuanced approach to educating people about generated media,” said Nick Clegg, President, Global Affairs at Meta. “We’re optimistic about the developments in this space and about using this technology to bring more tools for creative expression to our community.”
“Microsoft endorses the Partnership on AI’s framework for collective action and responsible practices for uses of generative AI to create synthetic media,” said Eric Horvitz, Chief Scientific Officer at Microsoft. “We applaud and support PAI’s initiative to build a strong, collaborative community dedicated to protecting the public from malicious actors who aim to manipulate, sow discord, and erode trust in the digital information we consume.”
PAI’s Responsible Practices for Synthetic Media emerged from a year-long process that incorporated input from more than 100 contributors. The Framework offers guiding recommendations for those involved in creating, sharing, and distributing synthetic media. It was sparked by a consensus among industry experts that the rising prominence of synthetic media opens a new realm for creativity, but that, left unchecked, it also harbors the potential for misinformation and manipulation.
Over the past four years, PAI has been assessing the challenges and opportunities associated with synthetic and manipulated media. With the support of over 50 organizations, PAI has honed the Framework, creating a solid foundation for responsible practices in this rapidly evolving field.
You can learn more here: https://partnershiponai.org/
Source: Social Media Today & Partnership on AI (PAI)
For more information contact us at: [email protected]