Joint Programming Initiatives (JPI)

Launched by the European Commission in July 2008, JPIs are one of the five initiatives implementing the European Research Area (ERA).

JPIs aim to increase the value of national and European funding through joint planning, implementation and evaluation of national research programmes.

In joint programming, Member States coordinate their national research activities, pool resources and benefit from complementarities, enabling the development of a joint research agenda to address major societal challenges. JPIs target challenges that cannot be tackled at national level alone, and allow Member States to participate in the initiatives that are useful to them.

Joint Programming Initiatives (JPI) Projects

ANTIDOTE: ArgumeNtaTIon-Driven explainable artificial intelligence fOr digiTal mEdicine.

Specific programme: Joint Programming Initiative in “Explainable Machine Learning-based Artificial Intelligence (XAI) and Novel Computational Approaches for Environmental Sustainability (CES)”, funded through CHIST-ERA IV and the AEI “Proyectos de Cooperación Internacional” call (project PCI2020-120717-2, funded by MCIN/AEI/10.13039/501100011033 and by the European Union “NextGenerationEU”/PRTR).

APCIN code: PCI2020-120717-2

UPV/EHU Partner Status: Beneficiary

UPV/EHU PI: Rodrigo Agerri

Project start: 01/04/2021
Project end: 31/07/2024

Brief description: Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well it requires, among other factors: selecting a proper level of generality/specificity of the explanation; considering assumptions about the familiarity of the explanation beneficiary with the AI task under consideration; referring to specific elements that have contributed to the decision; making use of additional knowledge (e.g. metadata) which might not be part of the prediction process; selecting appropriate examples; and providing evidence supporting negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way.

Given these considerations, ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher-level schemas proper of the human argumentation capacity. The ANTIDOTE integrated vision is supported by three considerations: (i) in neural architectures, the correlation between internal states of the network (e.g., weights assumed by single nodes) and the justification of the network's classification outcome is not well studied; (ii) high-quality explanations are crucially based on argumentation mechanisms (e.g., providing supporting examples and rejected alternatives) that are, to a large extent, task independent; (iii) in real settings, providing explanations is inherently an interactive process, where an explanatory dialogue takes place between the system and the user. Accordingly, ANTIDOTE will exploit cross-disciplinary competences in three areas, i.e., deep learning, argumentation and interactivity, to support a broader and innovative view of explainable AI. Although we envision a general integrated approach to explainable AI, we will focus on a number of deep learning tasks in the medical domain, where the need for high-quality explanations for clinical case deliberation is critical.
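
As a concrete illustration of consideration (ii), here is a minimal, hypothetical sketch in Python (not ANTIDOTE's actual system; the feature names, labels and weights are invented for illustration) of how a toy model's raw feature attributions could be wrapped in an argument-style explanation that names the main supporting evidence and a rejected alternative:

    import numpy as np

    # Toy linear-scoring "diagnosis" model; features, labels and weights
    # are invented for illustration only.
    FEATURES = ["fever", "cough", "fatigue"]
    WEIGHTS = {"flu": np.array([1.8, 1.2, 0.4]),
               "cold": np.array([0.3, 1.0, 0.2])}

    def predict_with_explanation(x: np.ndarray) -> str:
        scores = {label: float(w @ x) for label, w in WEIGHTS.items()}
        best, alt = sorted(scores, key=scores.get, reverse=True)[:2]
        # Per-feature contributions (weight * input) serve as the
        # low-level evidence behind the prediction.
        contrib = WEIGHTS[best] * x
        top = FEATURES[int(np.argmax(contrib))]
        return (f"Predicted '{best}' (score {scores[best]:.2f}). "
                f"Main supporting evidence: '{top}'. "
                f"Rejected alternative: '{alt}' (score {scores[alt]:.2f}).")

    print(predict_with_explanation(np.array([1.0, 0.5, 0.2])))

In ANTIDOTE's vision, such an argumentative layer would be largely task independent, while the underlying attributions would come from the internal states of a deep network rather than from a toy linear model.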

Contact information:

International R&D Office UPV/EHU
Email: proyectoseuropeos@ehu.es