The study, led by Princeton professor Jacob Shapiro and updated this month, catalogues 96 foreign influence campaigns that targeted 30 countries between 2013 and 2019. All 26 of the campaigns aimed at the US originated abroad, and their social media operations sought to defame prominent people, persuade the public, and polarize debate. The breadth of the disinformation problem has been underscored by researchers at Princeton University, the University of California, Berkeley, and the Center for Strategic and International Studies (CSIS).
Ninety-three percent of the campaigns produced original content, 86% amplified pre-existing content, and 74% distorted objectively verifiable facts.
Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. Recent reports have also shown that disinformation has spread in the wake of the COVID-19 pandemic, in one case leading people to a supposed cure that is actually dangerous. Microsoft has announced new technologies and partnerships intended to educate the public about the problem and to accelerate efforts to counter it.
A central concern is synthetic media: photos, videos and audio files manipulated by artificial intelligence (AI) in ways that are difficult to detect. Such media can make people appear to say things they never said, in places they have never been. And because deepfakes are generated by AI that can continue to learn, it is likely they will eventually beat conventional detection technology. In the short term, however, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
Microsoft has announced Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media has been artificially manipulated. In the case of a video, it can provide this percentage in real time on each frame as the video plays.
The technology was originally developed by Microsoft Research in coordination with an internal advisory board, to help ensure the new technology is developed and used responsibly. Video Authenticator was created using a public dataset from FaceForensics++ and was tested against leading detection models. It works by detecting the blending boundary of a deepfake and subtle fading or greyscale elements that the human eye cannot detect.
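The per-frame confidence output described above can be illustrated with a minimal sketch. The heuristic below, which just measures abrupt greyscale intensity jumps between neighbouring pixels, is a toy stand-in for a trained detector; it is not Microsoft's model, and the frame format is an assumption for the example.

```python
# Hedged sketch of a per-frame manipulation-confidence pipeline, loosely
# modelled on the behaviour described for Video Authenticator. The
# detection rule is a toy heuristic, NOT the real detector, which learns
# blending artefacts from data.

def frame_confidence(frame):
    """Return a 0-100 'chance of manipulation' score for one frame.

    Toy heuristic: harsh intensity jumps between neighbouring pixels
    stand in for the blending-boundary artefacts a real model detects.
    """
    diffs = []
    for row in frame:
        for a, b in zip(row, row[1:]):
            diffs.append(abs(a - b))
    if not diffs:
        return 0.0
    avg_jump = sum(diffs) / len(diffs)
    # Map the average greyscale jump (0-255) onto a 0-100 score.
    return min(100.0, avg_jump / 255 * 100)

def score_video(frames):
    """Score every frame, mirroring the 'percentage per frame' output."""
    return [frame_confidence(f) for f in frames]

smooth_frame = [[100, 101, 102, 103]] * 4   # gentle gradient, looks natural
spliced_frame = [[0, 255, 0, 255]] * 4      # harsh blending boundary
scores = score_video([smooth_frame, spliced_frame])
print(scores)  # the spliced frame scores far higher
```

A real system would replace `frame_confidence` with a learned classifier, but the shape of the output, one score per frame as the video plays, is the same.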
Microsoft states: “We expect that methods for generating synthetic media will continue to grow in sophistication. In the longer term, therefore, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. And because all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods.”
There are few tools today to help assure readers that the media they see online came from a trusted source and was not altered.
Microsoft also announced new technology that can both detect manipulated content and assure people that the media they are viewing is authentic.
The first is a tool built into Microsoft Azure that enables content producers to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online. The second is a reader, which can exist as a browser extension or in other forms, that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and has not been changed, as well as providing details about who produced it.
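The producer/reader flow just described can be sketched in a few lines. This is a simplified illustration, not Microsoft's implementation: real provenance systems use public-key certificates, whereas the example below substitutes an HMAC with a shared key so it stays within the standard library, and the key and field names are invented for the demo.

```python
# Hedged sketch of the hash-and-certificate flow: the producer attaches a
# hash and a signature as metadata; the reader recomputes the hash and
# checks the signature. An HMAC stands in for a certificate signature.
import hashlib
import hmac

SIGNING_KEY = b"producer-demo-key"  # stand-in for a certificate's private key

def publish(content: bytes) -> dict:
    """Producer side: attach a hash and a signature as metadata."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "hash": digest, "signature": signature}

def verify(package: dict) -> bool:
    """Reader side: recompute the hash and check the signature."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    if digest != package["hash"]:
        return False  # content was altered after signing
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["signature"])

article = publish(b"Breaking: verified newsroom footage")
ok = verify(article)

# Any alteration to the content breaks verification for the reader.
tampered = dict(article, content=b"Breaking: doctored footage")
still_ok = verify(tampered)
print(ok, still_ok)
```

Because the metadata travels with the content, any reader holding the matching verification material can repeat this check wherever the content is embedded.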
The technology is being developed by Microsoft Research and Microsoft Azure in partnership with Microsoft's Defending Democracy Program. A recently announced BBC-led initiative called Project Origin will promote the use of the hash-and-certificate technology in its own content.
The nature of this challenge requires a range of technologies deployed widely, education efforts that reach consumers everywhere, and sustained engagement. “No single organisation will be able to have a complete solution to combat disinformation and harmful deepfakes, but we will do everything in our power to help,” says Microsoft.
First, Microsoft is working with the AI Foundation, a dual commercial and nonprofit enterprise based in San Francisco whose mission is to bring the power and protection of AI to people around the world. Through the foundation's Reality Defender 2020 (RD2020) initiative, Video Authenticator will be made available to organizations involved in the democratic process, including news outlets and political campaigns. Through RD2020, the tool may also reach social media platforms such as Facebook and Twitter.
Campaigns and journalists interested in learning more can contact RD2020 here; more information about the partnership is available here and here.
The Trusted News Initiative, which includes a number of publishers and social media companies, has also agreed to engage with this technology. Second, Microsoft is partnering with a number of media companies, including Google, Facebook, Twitter, YouTube, BuzzFeed and others, to test the authenticity technology and help advance it as a broadly adopted standard. Microsoft is also working on media literacy and hopes to expand this work in the coming months to include the use of the technology in media-literacy initiatives across the UK and beyond.
Improving media literacy will help people manage the risks posed by deepfakes and “cheap fakes”, and it will also help them understand the risks of misinformation more broadly.
Although not all synthetic media is bad, media literacy has been shown to help people identify it and treat it more cautiously. Practical media knowledge enables all of us to think critically about the context of media, become more engaged citizens, and still appreciate satire and parody.
Microsoft has launched an interactive experience to help voters across the United States learn about synthetic media, develop critical media-literacy skills, and gain awareness of the impact synthetic media has on society. The Spot the Deepfake Quiz is the latest in a series of interactive experiences that Microsoft has developed in partnership with the University of Washington's Center for an Informed Public, Sensity and USA Today.
The PSA campaign will help people better understand the harm that misinformation and disinformation do to our democracy, and the importance of taking time to identify, share and consume reliable information. The quiz will be distributed through a series of ads that will air in the United States in the run-up to the November 2020 election. Microsoft is supporting this public service announcement campaign to encourage people to pause and check that information comes from a reputable news organization before sharing or promoting it on social media ahead of the upcoming US election.
Finally, NewsGuard, which enables people to learn more about an online news source before consuming its content, has grown significantly in recent months. NewsGuard operates a team of experienced journalists who rate online news sites against nine criteria of journalistic integrity, helping readers judge the accuracy and reliability of news sources.
Users can access NewsGuard by downloading a simple browser extension, available for all major browsers. It is important to note that Microsoft has no editorial control over NewsGuard's ratings and that the NewsGuard extension does not restrict access to any news site. Instead, Microsoft says it wants to increase transparency and promote media literacy by providing information on the quality and reliability of online news sources. The NewsGuard extension is free for Microsoft Edge browser users and is also available in the Google Play Store.
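A criteria-based rating of the kind described can be sketched as follows. The criterion names, weights and thresholds below are illustrative placeholders invented for this example; they are not NewsGuard's actual rubric, which is maintained by its own journalists.

```python
# Hedged sketch of a nine-criteria site rating, in the spirit of
# NewsGuard's journalistic-integrity checks. Criteria names and the
# 60-point threshold are illustrative assumptions, not the real rubric.

CRITERIA = [
    "avoids_false_content", "corrects_errors", "labels_opinion",
    "discloses_ownership", "labels_ads", "responsible_headlines",
    "names_authors", "reveals_funding", "gathers_info_responsibly",
]

def rate_site(checklist: dict) -> dict:
    """Turn per-criterion pass/fail answers into a simple 0-100 rating."""
    passed = sum(1 for c in CRITERIA if checklist.get(c, False))
    score = round(100 * passed / len(CRITERIA))
    label = "trustworthy" if score >= 60 else "caution"
    return {"score": score, "label": label}

# A hypothetical site that meets every criterion except ad labelling.
example = {c: True for c in CRITERIA}
example["labels_ads"] = False
result = rate_site(example)
print(result)
```

The point of the sketch is the shape of the system, transparent per-criterion judgments aggregated into one visible rating, rather than the specific scoring rule.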
Governments, businesses, nonprofits, and others around the world must play a critical role in combating disinformation and electoral interference in general.
In 2018, the Paris Call for Trust and Security in Cyberspace brought together world leaders committed to nine principles that help ensure peace and security online. One of the most important of these principles is the defence of electoral processes.
Securing democracy is an effort to steer global operations in this direction. We hope that any organisation wishing to join the Paris Call will contact us, so we can share their work in this area with our audience.