Dr. Ángeles Manjarrés Riesco, Universidad Nacional de Educación a Distancia (UNED)(Spain)


The United Nations (UN) 2030 Agenda for Sustainable Development, adopted by the UN General Assembly in 2015, commits all member states to make concerted efforts towards building an inclusive, sustainable, prosperous and resilient future for people and planet, and to reach its universally applicable goals by 2030. Artificial Intelligence (AI) has the potential to contribute to solving some of the world’s most pressing problems, such as climate change, lack of basic services, poverty, exploitation and violations of human rights, and thereby to the achievement of the UN Sustainable Development Goals (SDGs), bringing positive socio-economic outcomes in both High-Income Countries (HIC) and Low- and Middle-Income Countries (LMIC).

The UN Global Pulse Initiative (see https://www.unglobalpulse.org/about-new) aims to accelerate the discovery, development and scaled adoption of big-data innovation for sustainable development and humanitarian action. Notably, experience with projects such as those of Global Pulse has raised important ethical concerns, for example regarding the collection and use of data during humanitarian emergencies.

Initiatives such as the "IEEE Global Initiative on Ethics of Autonomous/Intelligent Systems" and the European Commission's High-Level Expert Group on AI, with its "Ethics Guidelines for Trustworthy Artificial Intelligence", highlight the increasing challenges posed by AI in the ethical, moral, legal, humanitarian and sociopolitical domains.

A wide view of ethics focuses on potentialities, not only on risk mitigation, and from such a view arises the ethical imperative to harness AI technologies for the benefit of humanity in order to improve quality of life for all. To this end, more R&D into the potential of AI to contribute to the SDGs is urgently needed.

Firstly, there is a need to study the current panorama of AI applications in sectors crucial to the UN SDGs, to share the lessons learned in applying them in order to identify strengths and weaknesses, and to document and disseminate the development and deployment of the most significant innovative applications. Attention should be drawn to the idiosyncrasies of LMICs and the particular impact AI can have in those contexts. Secondly, progress in standards, research methodologies and development methodologies that guide the development of ethical AI, respectful of fundamental human rights (dignity, freedoms, equality, solidarity, justice) and of the particular values of the culture in which it is deployed, is also essential.

Main topics include (but are not restricted to):

  • AI technologies and applications that can make a significant contribution to achieving the UN SDGs. This covers fields such as:
    • Big data for development (agriculture, medical tele-diagnosis,...); geographic information systems (public service planning, disaster prevention, emergency planning, disease monitoring,...); control systems (naturalizing intelligent cities through energy and traffic control, management of urban agriculture,...); etc.
    • Proposals that include a reflection on strengths and weaknesses (ethical problems arising from use of the technology, possible acceptance problems in a specific context or culture,...) are particularly welcome, especially if the argumentation is based on impact measurement using quantifiable metrics associated with compliance with the SDGs.
    • Reviews and analyses of the state of the art in relevant application areas are also welcome.
  • Methodological and technical tools at all levels of AI development processes (analysis, design, implementation, validation, deployment and evaluation), focused on guaranteeing the properties of ethical AI, and examples of their application:
    • Examples of these properties are: explicability, accountability, data governance, design for all, non-discrimination, respect for human autonomy, respect for privacy, robustness, safety, transparency and traceability, broad-spectrum impact forecasting/monitoring/measurement,... Some of them will be particularly relevant in the case of LMICs: adaptation to the available resources (hardware, software, connectivity,...), impact on the receiving communities, suitability and sustainability,...
    • Some examples of research areas arising in the study of the aforementioned tools are the following:
      • Impact measurement by design
      • Equity-by-design
      • Ethics & rule-of-law by design
      • Privacy-by-design
      • Security-by-design
      • Standardization/harmonization
      • Low-cost AI (mobile lightweight applications, FOSS solutions,...)
      • The "Open AI" paradigm, where this refers not only to FOSS (Free / Open-Source Software) but also to applying FOSS principles to algorithms, scientific insights or other AI artifacts.
      • Privacy-protection frameworks
      • Culture-aware techniques
      • Algorithmic repeatability
      • Robustness to bias and corruption
      • Architectures for trustworthy AI
      • Machine and robot ethics

Please contact the chair by email if you need an extension of the February 28th deadline.