2020 Collaborative Roadmap to Trustworthy AI

Innovators

Trustworthy AI products and services are increasingly embraced by early adopters.

Short-term Outcomes

#1: More foundational trustworthy AI technologies emerge as building blocks for developers (e.g. data trusts, offline data, data commons)

Milestones: High-Level Activities

  • Consult technologists to identify the missing building blocks for Trustworthy AI development, in order to establish a research agenda
  • Develop projects that demonstrate the technical and legal feasibility of consumer-scale data stewardship models (e.g. trusts, co-ops, fiduciaries)
  • Establish large-scale academic funding pools for research on and development of foundational Trustworthy AI building blocks

Potential Threats

  • Major industry actors buy up innovative technological advances in AI, and don't implement the trustworthy AI element.

2020 Partner Initiatives

  • Algorithm Watch: Support systems developers in designing automated decision-making systems with the public good in mind, and in building data trusts.
  • Digital Freedom Fund, Fellows, Jonathan McCully and Aurum Linh
    * Research the use of machine learning algorithms by oppressive power structures, in order to identify where human rights are being violated.
    * Use this research to create prototypes and explore how litigation can be drafted to shape the space in which existing technologies are growing.
  • Data Ethics: Developing tools that help companies, institutions and investors integrate responsible AI applications and data-ethics-informed strategies (based on DataEthics.eus principles) into their practices.

#2: Entrepreneurs develop -- and investors support -- alternative business models for consumer tech that are less focused on data exploitation

Milestones: High-Level Activities

  • Build financial incentive structures to encourage new business model development and investment
  • Engage business schools and tech incubators to support alternative business models.
  • Galvanize and work with large consumer companies looking to differentiate themselves through privacy friendly business models
  • Raise awareness around negative effects of existing business models (including ad driven revenue models).

Potential Threats

  • Alternative business models take too long to reach a critical audience.
  • Industry co-opts alternative business models for their own non-trustworthy uses.

2020 Partner Initiatives

Panoptykon, Fellow, Karolina Iwanska: Create a set of criteria for advertising which respects human rights and data protection principles in order to address threats posed by the dominant surveillance-based advertising model. Investigate and expose the negative effects of targeted advertising, and map existing alternatives.


#3: Transparency is included as a feature in more AI-enabled products, services and technologies

Milestones: High-Level Activities

  • Demonstrate and communicate what best practice in transparency looks like, in order to help guide others to build it
  • Develop and roll out a large scale method to ensure that consumers know when there is AI inside a product or service they are using

Potential Threats

  • Industry convinces policymakers that pro-transparency initiatives threaten their IP.
  • Consumers are apathetic towards transparency; the "transparency paradox" continues.

2020 Partner Initiatives

Algorithm Watch:

  • Campaign for a public register of ADM systems used in the public sector.
  • Report on cases where government, public administration or the private sector develop and use systems that violate, or risk violating, the rights of individuals or negatively impact democratic societies as a whole.
  • Explain the characteristics and effects of complex algorithmic decision-making processes to a general public.
  • Help citizens better understand automated decision-making processes.
  • Develop ideas and strategies for achieving intelligibility of ADM processes (through a mix of technologies, regulation, and suitable oversight institutions), in order to maximize the benefits of this kind of automation for society.

Fellow, Anouk Ruhaak:

  • Address the problem of the lack of options to assess complex ADM systems from the outside.

#4: There is increased investment in and procurement of trustworthy AI products, services and technologies

Milestones: High-Level Activities

  • Gather an initial set of impact investors to invest in and procure trustworthy AI products, services and technologies
  • Demonstrate the success of, and returns to, investors who back people building trustworthy AI
  • Educate investors on the risks of funding “non-trustworthy” AI

Potential Threats

  • Trustworthy AI products, services, and technologies struggle to perform at the same level as mainstream technologies.

2020 Partner Initiatives

  • Panoptykon: Exposing (political or business) choices behind the use of AI/algorithms in decision-making processes that affect humans.
  • Data Ethics, Fellow, Francesco Lapenta: AI benchmarking: map products and services using AI and ML against a data ethics impact assessment scheme.

#5: Trustworthy AI products and services emerge that serve the needs of people and markets previously ignored

Milestones: High-Level Activities

  • Identify, fund and promote early-stage innovations that show promise, including serving those historically ignored
  • Identify and enable expansion into underserved market segments with a high likelihood of benefiting from AI
  • Encourage 'good' products to expand into a wider range of markets (e.g. EU products and services that comply with GDPR expanding to the US)

Potential Threats

  • Major industry actors buy up innovative technological advances in AI, and don't implement the trustworthy AI element.

2020 Partner Initiatives

Algorithm Watch: Support systems developers in designing automated decision-making systems with the public good in mind, and in building data trusts.

Digital Freedom Fund, Fellows, Jonathan McCully and Aurum Linh
  * Research the use of machine learning algorithms by oppressive power structures, in order to identify where human rights are being violated.
  * Use this research to create prototypes and explore how litigation can be drafted to shape the space in which existing technologies are growing.