Artificial intelligence is widely adopted in bioinformatics and bioengineering. However, when diagnosis or therapy selection is no longer performed exclusively by the physician, but to a significant extent by artificial intelligence, decisions easily become nontransparent. The most common application of machine learning algorithms in the bioinformatics and bioengineering context is automatic clinical decision-making. For these tasks, there are several well-known algorithms (artificial neural networks, classifiers, etc.), which are tuned on (labeled) samples to optimize the classification of unseen instances. A deep understanding of the mathematical details of the decision behind an artificial intelligence algorithm may be possible for statistics or computer science domain experts. Clearly, when it comes to the fate of human beings, this "developer's explanation" is not sufficient. The shift from therapy-relevant decisions based on human knowledge to black-box-like computer algorithms makes decision-making increasingly incomprehensible to medical staff and patients. This has been recognized in guidelines issued, e.g., by the European Union and DARPA (USA), which emphasize the need for computer-based decisions to be transparent and communicable in an understandable way to medical personnel and patients. To address this problem, the concept of explainable artificial intelligence (XAI) is attracting growing scientific interest. XAI uses a representation of human knowledge, usually (a subset of) predicate logic, for its reasoning, deduction, and classification (diagnosis). The aim of this workshop is to engage the research and industrial communities in proposing and developing methodologies that clearly explain the clinical decision process to non-domain experts.
Topics of interest include, but are not limited to:
- Explainable artificial intelligence
- Biomedical data mining
- Formal methods in medicine
- Model checking in clinical contexts
- Interpretable data mining
- Biomedical knowledge representation
- Biomedical knowledge discovery
COMMITTEES
Organizers
- Francesco Mercaldo, Researcher at the University of Molise, Italy
- Antonella Santone, Associate Professor at the University of Molise, Italy
- Pan Huang, Chongqing University-Nanyang Technological University, China-Singapore
Program Committee
- Marcello Di Giammarco, IIT-CNR, Pisa, Italy
- Fabio Di Troia, San José State University, USA
- Fiammetta Marulli, University of Campania, Caserta, Italy
- Paul Tavolato, University of Vienna, Vienna, Austria
- Giovanni Ciaramella, Scuola IMT Alti Studi Lucca and IIT-CNR, Italy
- Luca Petrillo, Scuola IMT Alti Studi Lucca and IIT-CNR, Italy
- Lucia Lombardi, University of Molise, Italy
Author Submission Guidelines:
Submission Site:
https://easychair.org/conferences/?conf=idsta2024
For complete author guidelines, refer to https://idsta-conference.org/2024/authors.php
Publication:
All accepted papers in IDSTA and its co-located workshops will be submitted to IEEE Xplore for possible publication.
Program
The program will be announced with the IDSTA2024 program at https://idsta-conference.org/2024/program.php
Venue
For venue and accommodation information, please visit https://idsta-conference.org/2024/venue.php
Registration
For registration information, please visit https://idsta-conference.org/2024/registration.php
Camera Ready
For camera-ready submission information, please visit https://idsta-conference.org/2024/cameraready.php