2020 Collaborative Roadmap to Trustworthy AI

Policy

New and existing laws are used to make the AI ecosystem more trustworthy.

Short-term Outcomes

#1: Citizens are increasingly willing and able to pressure companies and hold them accountable for the trustworthiness of their AI

Milestones: High-Level Activities

  • Work with consumer groups to mobilize the public on transparency issues -- focus on both education and data donations.

Potential Threats

  • Consumers de-prioritize trustworthy AI in the face of other issues like climate change, political polarization, and economic challenges.

2020 Partner Initiatives

  • Panoptykon: Developing a "labeling" system for algorithms/automated decision making systems, which will cater to the needs of different target groups, from non-expert users to regulators.
  • Consumers International: Developing an AI accountability framework.

#2: Governments develop the vision, skills and capacities needed to effectively regulate AI, relying on both new and existing laws.

Milestones: High-Level Activities

  • Translate and adapt Trustworthy AI guidelines into 'model policy'
  • Test out 'model policy' in regions likely to make early progress
  • Help government regulators increase trustworthy AI expertise, especially among policy makers and people who set procurement policy
  • Work closely with policy makers to move the ball on one big new idea to 'educate by doing' -- focus on interoperability and data trusts

Potential Threats

  • Internal political and social unrest distracts policymakers from trustworthy AI issues
  • Civil society struggles to be heard by policymakers, who instead turn to industry.

2020 Partner Initiatives

Access Now: The main focus is within the European Union due to its existing legal frameworks, which may serve to set standards and norms in Silicon Valley and beyond. Chief concerns include:

  • AI’s dependence on vast data-sets,
  • the potential for algorithmic flaws and bias, and
  • the protection of users’ data protection rights, including the right to understand how their data is used and the right to understand and contest decisions informed by AI and automated systems.

Algorithm Watch

  • Support decision makers in government, parliament, public administration and the private sector in developing mechanisms to govern automated decision-making (ADM) systems so that they strengthen the public good, e.g. by creating standardized design processes, oversight and audit mechanisms and, if necessary, new laws.
  • Develop and execute campaigns targeting government, public administration and the private sector when they implement systems that violate, or risk violating, the rights of individuals or have a negative impact on society.

Data Ethics: Creating awareness about the European, human-centric approach to Trustworthy AI, including an AI forum on the European AI Agenda in Rome on 28-29 April.

European Digital Rights: Contributing to the Council of Europe guidelines on artificial intelligence and data protection, focusing specifically on facial recognition technology.


#3: Progress towards trustworthy AI is made through wider enforcement of existing rules like GDPR's 'purpose limitation' and 'right to know'

Milestones: High-Level Activities

  • Establish a body of evidence in Europe to fuel litigation around the 'right to know' and 'purpose limitation' clauses of GDPR
  • Roll out a litigation plan based on evidence gathered by DFF and others. Look for other areas to litigate that will advance trustworthy AI.
  • Enforce consumer safety regulations by identifying breaches of consumer data and holding governments accountable to their regulatory responsibilities.

Potential Threats

  • GDPR is rolled back; alternatively, legislators slow-roll GDPR enforcement and industry remains unaffected or treats enforcement rulings as the cost of doing business.

2020 Partner Initiatives

Digital Freedom Fund: Mapping not only the most urgent and significant threats of AI to human rights, but also the most viable entry points for concrete action, including litigation. This will be accompanied by a number of learning sessions and possibly dedicated strategic litigation retreats to develop concrete strategies in this area.


#4: Regulators have access to the data they need to scrutinize the trustworthiness of AI in consumer products and services

Milestones: High-Level Activities

  • Create a policy agenda to drive regulatory change regarding data disclosure. Include in overall model AI policy efforts.
  • Develop a 'platform transparency' project that allows users to donate data and researchers to scrutinize platforms, with or without cooperation from platforms.

Potential Threats

  • Consensus on the policy agenda is weak; regulators are not aligned with the movement for trustworthy AI.

2020 Partner Initiatives

Algorithm Watch (Fellow: Anouk Ruhaak)

  • Create the Data Donation Platform (DDP).
  • Establish a governance structure for the DDP to guarantee data safety and to secure quality standards.

Consumers International: Engaging partners from business, government and civil society to tackle specific consumer challenges and opportunities.


#5: Governments develop programs to invest in and incentivize trustworthy AI

Milestones: High-Level Activities

  • Create a policy agenda regarding industrial and innovation policy to invest in trustworthy AI companies. Include in overall model AI policy efforts.
  • Encourage government to build a fund of capital for companies and people developing trustworthy AI.
  • Develop and test procurement standards based around trustworthy AI guidelines.
  • Pressure more governments to adopt a set of procurement rules and guidelines around trustworthy AI.

Potential Threats

  • Government programs developed fall short of what civil society envisioned.
  • Industry buys up innovative technologies from government-funded prototypes and eliminates the trustworthy component.

2020 Partner Initiatives

Data Ethics

  • AI in public procurement: Producing a white paper on data ethics in public procurement of AI.
  • AI governance research: Publication of social science/humanistic research on the power dynamics and interests of the European AI Agenda.