
NoBIAS

Team Leader: Prof. Salvatore Ruggieri, Dipartimento di Informatica

Fair algorithms for artificial intelligence

Systems based on artificial intelligence (AI) are increasingly used in applications that automatically issue decisions or assessments. They can affect individuals or groups of people on important matters such as payments or medical treatment, but AI bias can be an issue. Biases in AI decisions can originate from the data that is automatically collected, from the algorithms that process the data, or from the use of the resulting applications. To eliminate AI biases at all three stages, the EU-funded NoBIAS project will develop fairness-aware algorithms. They will be based on ethical and legal principles and designed as technical solutions in a multidisciplinary effort of 15 researchers trained in computer science, data science, machine learning, law, social science, and other fields.

Objective

Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impacts on individuals and society. Their decisions might affect everyone, everywhere, at any time, entailing risks such as being denied credit, a job, a medical treatment, or access to specific news. Businesses might miss opportunities because biases make AI-driven decisions underperform; much worse, they may contravene human rights when treating people unfairly.

Bias may arise at all stages of AI-based decision-making processes: (i) when data is collected, (ii) when algorithms turn data into decision-making capacity, or (iii) when the results of decision making are used in applications. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in the training, design, and deployment of AI algorithms to ensure social good while still benefiting from the potential of AI.
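As a minimal illustration of stage (i), bias in collected data can be quantified with a simple group-fairness measure such as the demographic parity difference, i.e. the gap in positive-outcome rates between two groups. The data, group names, and the loan-approval framing below are purely hypothetical.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval labels (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A gap of zero would mean both groups receive positive outcomes at the same rate; larger gaps flag a potential disparity worth investigating before the data is used for training.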

NoBIAS will develop novel methods for AI-based decision making without bias by taking ethical and legal considerations into account in the design of technical solutions. The core objectives of NoBIAS are to understand the legal, social, and technical challenges of bias in AI-based decision making, to counter them by developing fairness-aware algorithms, to automatically explain AI results, and to document the overall process for data provenance and transparency.
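One well-known family of fairness-aware techniques of the kind the project targets is pre-processing by reweighing (in the style of Kamiran and Calders): each (group, label) combination is assigned the weight P(group) × P(label) / P(group, label), so that group membership and outcome become statistically independent in the weighted data. This is a generic sketch with invented data, not the project's own method.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Return a dict mapping (group, label) -> instance weight
    P(group) * P(label) / P(group, label)."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n)
                / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

# Hypothetical data: group "a" gets the positive label more often.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]

weights = reweighing_weights(groups, labels)
# Under-represented combinations such as ("a", 0) and ("b", 1)
# receive weights above 1; over-represented ones fall below 1.
```

Such weights can then be passed to any learner that accepts per-instance weights, leaving the training data itself unchanged.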

We will train a cohort of 15 ESRs (Early-Stage Researchers) to address problems of bias through multidisciplinary training and research in computer science, data science, machine learning, law, and social science. ESRs will acquire practical expertise in a variety of sectors, from telecommunications, finance, marketing, media, and software to legal consultancy, to broadly foster legal compliance and innovation. Technical, interdisciplinary, and soft skills will give ESRs a head start towards future leadership in industry, academia, or government.

Coordinator
GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER, Germany

Participants

  • GESIS-LEIBNIZ-INSTITUT FUR SOZIALWISSENSCHAFTEN EV, Germany
  • SCHUFA HOLDING AG, Germany
  • ETHNIKO KENTRO EREVNAS KAI TECHNOLOGIKIS ANAPTYXIS, Greece
  • UNIVERSITA DI PISA, Italy
  • THE OPEN UNIVERSITY, United Kingdom
  • UNIVERSITY OF SOUTHAMPTON, United Kingdom
  • KATHOLIEKE UNIVERSITEIT LEUVEN, Belgium

Start date: 1 January 2020
End date: 31 December 2023
Project cost: € 3 994 775,28
Project funding: € 3 994 775,28
Unipi quota: € 522 999,36
Call title: H2020-MSCA-ITN-2019
Unipi role: Participant

Last modified: Thu 04 May 2023 - 07:43
