
Short-term Outcomes
#1: Clear "Trustworthy AI" guidelines emerge, leading to new and widely accepted industry norms
Milestones: High-Level Activities
- Identify and summarize the most important guidelines that currently exist, in order to encourage consensus.
- Work with industry to test the guidelines, documenting what works
- Develop and promote a collection of tools that developers and implementors can use in applying guidelines
Potential Threats
- Industry actors don't collaborate, or only collaborate as much as they feel doesn't threaten their bottom line, thereby undermining the process.
2020 Partner Initiatives
- Access Now (2019): Contributed to the Ethics Guidelines currently being developed by the European Commission's High-Level Expert Group on Artificial Intelligence.
- Data Ethics: Personal data control in AI: Creating guidelines and standards for engineers on personal data control and ethics in AI through the IEEE P7006 and MyData.org.
#2: Engineers, product managers and designers with trustworthy AI training and experience are in high demand across industry
Milestones: High-Level Activities
- Integrate trustworthy AI guidelines and tools into existing large scale professional development programs for developers, engineers and product managers
- Grow the number of universities integrating ethics and responsible design as part of undergraduate computer science training
- Encourage companies to offer higher pay to those with ethics and AI credentials
Potential Threats
- Existing professional development programs refuse to integrate the new guidelines and tools.
- Universities change the scope or focus of ethical computer science curricula, prioritizing industry needs over society's.
- Credentialing system does not come into existence.
- Companies don't recognize ethics/AI credentials.
2020 Partner Initiatives
- Anti-Defamation League: Prevent the Radicalization of Individuals Online: Research, education, or policy-focused projects to better understand, detect, and prevent the use of technology platforms to move users toward more biased behavior. Much has been suggested about excessive bias or excessive variance in machine learning, but little rigorous research in this field has been combined with social science to examine when or how such models negatively affect people, much less what alterations can be made to avoid these unfortunate outcomes.
#3: Diverse stakeholders -- including communities and people historically shut out of tech -- are involved in the design of AI
Milestones: High-Level Activities
- Establish guidelines for inclusive and diverse stakeholder engagement in AI design
- Create spaces for end users to co-design products, services and technologies that benefit them
- Establish greater regional diversity of AI development, testing and process participation, to extend service and product reach to previously missed communities
Potential Threats
- Consensus on guidelines not achieved.
- Spaces are taken over by one or two types of end users, and true diversity is still not achieved.
- Other regions that promote AI development pursue their own internal agendas, which are not focused on building trustworthy AI.
2020 Partner Initiatives
- Derechos Digitales: Research project on automated decision-making and participatory processes for more inclusive technologies. This research aims to understand trends and patterns in algorithmic discrimination in Latin America arising from the lack of cultural diversity in the shaping of automated decision-making processes.
- Fellow, Narrira Lemos de Souza: Research, collect, and systematize information on the tools, strategies and infrastructures used to execute and propagate fake news in Latin America.
- EDRi Fellow, Petra Molnar: Research and publish the human rights impacts of migration management technologies.
- MIT Co-Creation Studio Fellow, Amelia Winger-Bearskin: Working with Native American communities to build frameworks for human-centered technology for the next 7 generations.
- Witness Fellow, Leil Zahra Mortada:
* Research content take-downs in the North Africa and West Asia region.
* Examine the different (if not double) standards in so-called "moderation."
* Probe the existing agreements between some governments in the region and social media platforms.
* Work to bring more voices from the region to the table, joining efforts to challenge Euro- and North American-centrism in the debate around tech.
#4: There is increased investment in and procurement of trustworthy AI products, services and technologies
Milestones: High-Level Activities
- Gather an initial set of impact investors to invest in and procure trustworthy AI products, services and technologies.
- Demonstrate the success of, and the returns earned by, investors backing people who are building trustworthy AI.
- Educate investors on the risks of funding “non trustworthy” AI.
Potential Threats
- Trustworthy AI products, services, and technologies struggle to perform at the same level as mainstream technologies.
2020 Partner Initiatives
- Access Now (2019): Contributed to the Ethics Guidelines currently being developed by the European Commission's High-Level Expert Group on Artificial Intelligence.
- Data Ethics: Personal data control in AI: Creating guidelines and standards for engineers on personal data control and ethics in AI through the IEEE P7006 and MyData.org.
#5: More foundational trustworthy AI technologies emerge as building blocks for developers (e.g. data trusts, offline data, data commons)
Milestones: High-Level Activities
- Consult technologists to identify the missing building blocks for Trustworthy AI development, in order to establish a research agenda
- Develop projects that demonstrate the technical and legal feasibility of consumer-scale data stewardship models (e.g. trusts, co-ops, fiduciaries)
- Establish large-scale academic funding pools for research on and development of foundational Trustworthy AI building blocks
Potential Threats
- Major industry actors buy up innovative technological advances in AI, and don't implement the trustworthy AI element.
2020 Partner Initiatives
- Algorithm Watch: Support systems developers in designing automated decision-making systems with the public good in mind + building data trusts.
- Digital Freedom Fund, Fellows, Jonathan McCully and Aurum Linh
* Research the use of machine learning algorithms by oppressive power structures, in order to identify where human rights are being violated.
* Use this research to create prototypes and explore how litigation can be drafted to shape the space in which existing technologies are growing.
- Data Ethics: Developing tools to facilitate the efforts of companies, institutions and investors to integrate responsible AI applications and data ethics-informed strategies (based on DataEthics.eu principles) into their practices.