Thesis etd-01302025-115608
Thesis type
PhD thesis
Author
ALABBASI, WESAM NITHAM
URN
etd-01302025-115608
Title
Trustworthy AI in Practice: Modeling, Trade-offs, and Applications
Scientific disciplinary sector
INF/01 - INFORMATICA
Course of study
INFORMATICA
Supervisors
Tutor: Prof. Bacciu, Davide
Supervisor: Prof. Saracino, Andrea
Keywords
- Data Privacy
- Explainable AI
- Privacy-preserving Data Analysis
- Trade-off
- Trustworthy AI
Defense date
17/02/2025
Availability
Full
Abstract
Humans increasingly rely on the assistance of intelligent systems. These systems automatically analyze and correlate vast amounts of data, including highly sensitive data, to produce accurate results that drive even critical or strategic decisions. Such analysis systems, however, raise trust concerns, such as respecting users’ privacy and ensuring transparency in data processing and decision-making.
To build trust in AI systems, we develop a methodology for better modeling and respecting ethical, social, and legal aspects from both the AI system’s and the human’s perspective. A deeper understanding of how AI systems affect trust, of the guidelines and regulations for trustworthy AI, and of its requirements and recent implementation mechanisms will support a systematic approach to enhancing AI systems’ trustworthiness.
Recently, organizations and governmental entities such as the European Commission and the Organisation for Economic Co-operation and Development (OECD) have launched numerous initiatives to analyze the impact of AI systems and to provide design principles and guidelines for modeling trustworthy AI systems. While the reasons for prioritizing trustworthy AI requirements such as data privacy and model robustness may vary, these requirements pose a common challenge: they can conflict with one another; for instance, strengthening privacy may degrade the model’s robustness. Still, they are of crucial importance, as they form the basis of the Trustworthy AI paradigm. To that end, we propose to analyze Trustworthy AI principles and requirements, investigate the applicability of trust-related mechanisms within the phases of the AI life-cycle, study the relationships among trustworthy AI requirements, and develop methodologies that trade off these requirements to find an optimal balance among all of them and their implementation configurations.
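To make the idea of an optimal trade-off among requirement configurations concrete, here is a minimal Python sketch, not taken from the thesis: it ranks hypothetical mechanism configurations by a weighted aggregate of normalized per-requirement scores. All names, weights, and score values are illustrative assumptions; the thesis may use a different criterion.

```python
# Minimal sketch (illustrative, not from the thesis): rank candidate
# mechanism configurations by a weighted aggregate of normalized
# per-requirement scores. Names, weights, and scores are assumptions.
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    privacy: float          # normalized score in [0, 1], higher is better
    robustness: float
    explainability: float

def trade_off_score(cfg: Configuration, weights: dict[str, float]) -> float:
    """One simple trade-off criterion: a weighted sum of requirement scores."""
    return (weights["privacy"] * cfg.privacy
            + weights["robustness"] * cfg.robustness
            + weights["explainability"] * cfg.explainability)

candidates = [
    Configuration("strong-DP", privacy=0.9, robustness=0.5, explainability=0.6),
    Configuration("weak-DP",   privacy=0.4, robustness=0.8, explainability=0.7),
]
weights = {"privacy": 0.4, "robustness": 0.4, "explainability": 0.2}

best = max(candidates, key=lambda c: trade_off_score(c, weights))
print(f"Best configuration under these weights: {best.name}")
```

A weighted sum is only the simplest possible aggregation; Pareto-front analysis or constrained optimization would expose the conflicts between requirements more faithfully.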
This study addresses multiple aspects of the trust modeling problem in AI systems. First, we take the Trustworthy AI requirements defined by the "Ethics Guidelines for Trustworthy AI" as the key aspects of the analysis and apply the related mechanisms across all phases of the AI lifecycle. Since we expect the mechanisms applied to achieve the AI trust principles to conflict with one another, we propose trade-off criteria among their trust measures, regulating their input parameters and analyzing the effect to optimize the trade-off.
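As a hedged illustration of regulating a mechanism’s input parameter and analyzing its effect, the sketch below (assumed, not the thesis’s implementation) sweeps the privacy budget epsilon of a standard Laplace mechanism applied to a mean query: smaller epsilon gives a stronger privacy guarantee but larger expected error, exactly the kind of contrast the proposed trade-off criteria would optimize over.

```python
# Illustrative sweep (assumed, not the thesis's code): vary the privacy
# budget epsilon of a Laplace mechanism and measure the error it induces
# on a mean query over synthetic data bounded in [0, 1].
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample a Laplace(0, scale) variate by inverse-CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(data: list[float], epsilon: float, bound: float = 1.0) -> float:
    """DP mean: the mean of n records in [0, bound] has sensitivity bound/n,
    so the Laplace noise scale is bound / (n * epsilon)."""
    n = len(data)
    return sum(data) / n + laplace_noise(bound / (n * epsilon))

data = [random.random() for _ in range(1000)]
true_mean = sum(data) / len(data)

# Smaller epsilon -> stronger privacy guarantee -> larger expected error.
for eps in (0.01, 0.1, 1.0, 10.0):
    err = abs(private_mean(data, eps) - true_mean)
    print(f"epsilon={eps:5.2f}  |error|={err:.4f}")
```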
File
| File name | Size |
|---|---|
| Trustwor...hesis.pdf | 15.77 MB |