
Short-term Outcomes
#1: Consumers are increasingly willing and able to choose products critically based on information regarding AI trustworthiness (e.g. trust labels)
Milestones: High-Level Activities
- Educate and empower consumers with digestible evidence about trustworthy AI products and services
- Translate trustworthy AI principles into a 'trustmark' that can be used to communicate what 'good' looks like to consumers
- Create and promote tools for consumers to test AI trustworthiness in order to enable or change their decision-making
Potential Threats
- Products that are deemed trustworthy are unreliable, unavailable, or inaccessible (e.g. too expensive) for mainstream consumers. Alternatively, untrustworthy products stay competitive by lowering prices, creating a "pay to play" environment for trustworthy products.
2020 Partner Initiatives
Anti-Defamation League: Supporting Cyberhate Targets and Vulnerable Populations Online: The creation of new resources, training, or tools for marginalized populations that are most vulnerable to online hate and harassment, including guides, training materials, frameworks, or apps; and media projects, from traditional (film, live performance, digital media) to interactive (games, VR experiences, chatbots), focused on the experience of targets of hate and harassment in online communities.
Fellow, Emmi Bevensee: Developing technical and political tools to combat the next wave of fascism.
Consumers International: Understanding the impact of AI on consumers and ensuring the consumer perspective is effectively represented in an area where consumers have limited representation.
Fellow, Harriet Kingaby:
- Study the unintended consequences of AI-enhanced advertising.
- Identify issues with targeting, personalisation, and other uses of these technologies, and
- Build tools to mitigate their effects.
MIT Co-Creation Studio:
- Develop a MOOC (massive open online course) entitled AI, Media, and the Message to help the public better understand the workings of AI and its social, cultural and political implications.
- Develop workshops, research programs, and production partnerships to pursue a public-facing series that addresses deepfakes and synthetic media.
Derechos Digitales, Fellow, Narrira Lemos de Souza: Examine misinformation’s impact on public opinion and social practices, and the role automated decision making plays.
#2: A growing number of civil society actors are promoting trustworthy AI as a key part of their work
Milestones: High-Level Activities
- Extend understanding and adoption of the Trustworthy AI cause across wider civil society, encouraging groups to share knowledge.
- Run collaborative public education and corporate pressure campaigns with civil society organizations from a variety of sectors
Potential Threats
- Funding for AI-focused work dries up due to some external exigent circumstance, and civil society organizations turn their attention elsewhere.
2020 Partner Initiatives
Access Now, Fellow, Daniel Leufer
- Work with civil society organisations (CSOs) to identify the most harmful and pervasive AI myths and inaccuracies, and
- Develop resources to help CSOs tackle these myths effectively and in a coordinated manner.
Digital Freedom Fund, Fellows, Jonathan McCully & Aurum Linh:
Create two guides:
- The first will be aimed at digital rights activists, technologists, and data scientists to demystify litigation.
- The second will be aimed at lawyers with clients whose rights have been violated by the development and use of AI.
European Digital Rights, Fellow, Petra Molnar: Engage with NGOs to help build EDRi’s network and broaden the scope of action to non-digital groups beyond the EU, translating these efforts into a global strategy for the governance of migration management technologies.
MIT Co-Creation Studio: Joining forces with Mozilla and Witness to incubate, research, and support the creation of a media series that explores the role of artists, journalists, and documentarians in the growing debates around human rights and artificial intelligence.
Data Ethics, Fellow, Francesco Lapenta: Develop realistic criteria for defining responsible and data-ethical use of AI/ML. These criteria, and the experience gained through the process, will also provide a basis for a best-practice guide that will be made openly accessible and spread through various networks, including our own and that of our partner, the MyData network.
#3: Transparency is included as a feature in more AI-enabled products, services, and technologies
Milestones: High-Level Activities
- Demonstrate and communicate what best practice in transparency looks like, in order to help guide others in building it
- Develop and roll out a large scale method to ensure that consumers know when there is AI inside a product or service they are using
Potential Threats
- Industry convinces policymakers that pro-transparency initiatives threaten their IP.
- Consumers are apathetic towards transparency; the "transparency paradox" continues.
2020 Partner Initiatives
Algorithm Watch:
- Campaign for a register of ADM systems that are used in the public sector
- Report on cases of government, public administration, and the private sector developing and using systems that violate, or risk violating, the rights of individuals or negatively impact democratic societies as a whole;
- Explain the characteristics and effects of complex algorithmic decision-making processes to the general public;
- Help citizens better understand automated decision-making processes;
- Develop ideas and strategies to achieve intelligibility of ADM processes (with a mix of technologies, regulation, and suitable oversight institutions) in order to maximize the benefits of this kind of automation for society.
Fellow, Anouk Ruhaak: Address the problem of the lack of options to assess complex ADM systems from the outside.
Witness: Focus on existing and emerging threats to freedom of expression, including overzealous algorithmic content moderation, attempts to address ‘fake news’ that silence alternative voices, and growing anxiety over manipulated media such as “deepfakes" and synthetic media.
#4: Citizens are increasingly willing and able to pressure and hold companies accountable for the trustworthiness of their AI
Milestones: High-Level Activities
- Work with consumer groups to mobilize the public on transparency issues, focusing on both education and data donations.
Potential Threats
- Consumers deprioritize trustworthy AI in the face of other issues like climate change, political polarization, and economic challenges.
2020 Partner Initiatives
Panoptykon: Developing a "labeling" system for algorithms/automated decision making systems, which will cater to the needs of different target groups, from non-expert users to regulators.
#5: Trustworthy AI products and services emerge that serve the needs of people and markets previously ignored
Milestones: High-Level Activities
- Identify, fund and promote early-stage innovations that show promise, including serving those historically ignored
- Identify and enable expansion into underserved market segments with a high likelihood of benefiting from AI
- Encourage the expansion of 'good' products into a wider range of markets (e.g. EU products and services that comply with GDPR expanding to the US)
Potential Threats
- Nation-states with competitive technologies entice underserved markets to adopt their technologies through other incentives.
2020 Partner Initiatives
Algorithm Watch: Serve as a platform linking experts from different cultures and disciplines focused on the study of algorithmic decision-making processes and their social impact.