Responsible AI
Responsible AI is the practice of developing and deploying artificial intelligence systems that adhere to principles such as ethics, fairness, reproducibility, data privacy and transparency.
- Data ethics refers to the moral obligations and responsibilities of data collectors, processors and users.
- Fairness means ensuring that AI systems do not discriminate against or harm any group of people based on their characteristics or preferences.
- Reproducibility means ensuring that AI systems can be verified, validated and replicated by independent parties.
- Data privacy means ensuring that AI systems respect the rights and preferences of data subjects and protect their personal information from unauthorized access or misuse.
- Transparency, or 'explainability', refers to making the decisions and actions of AI systems understandable to humans, especially non-experts such as clinicians, patients and the wider public.
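To make the fairness principle above concrete, one narrow, commonly used check is demographic parity: comparing the rate of positive predictions across groups. The function and data below are a hypothetical sketch for illustration only; real fairness auditing involves many complementary metrics and domain judgment.

```python
def demographic_parity_difference(predictions, groups, group_a, group_b, positive=1):
    """Absolute difference in positive-prediction rates between two groups.

    A value of 0 means both groups receive positive predictions at the
    same rate; larger values indicate a disparity under this one metric.
    """
    def positive_rate(group):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        return sum(1 for p in selected if p == positive) / len(selected)

    return abs(positive_rate(group_a) - positive_rate(group_b))


# Toy data (hypothetical): group A gets positives 3/4 of the time, group B 1/4.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps, "A", "B"))  # 0.5
```

Demographic parity is only one notion of fairness; it can conflict with others (such as equalized odds), which is partly why fairness is treated as a design principle rather than a single computable score.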
Possible topics
- Adversarial data analysis
- Data preparation multiverse analysis
- Named entity recognition for data science pipelines
- Forensic archaeology for scientific replicability
- Federated learning for heterogeneous data quality
Further reading
- Arrieta, Alejandro Barredo, et al. “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.” Information Fusion 58 (2020): 82-115. https://doi.org/10.1016/j.inffus.2019.12.012
- Wang, Y., Xiong, M., and Olya, H. "Toward an understanding of responsible artificial intelligence practices." Proceedings of the 53rd Hawaii International Conference on System Sciences (HICSS) (2020): 4962-4971. https://doi.org/10.24251/hicss.2020.610