
Visualizing the Gap: Erosion of Responsibility in AI Systems

CC BY-SA 4.0

Paper written for the Philosophical Issues of Computer Science class at Politecnico di Milano a.a. 2022-2023 under the supervision of Professor Schiaffonati.

Abstract

What is commonly referred to as the “responsibility gap”, in the context of the philosophical debate regarding Artificial Intelligence (AI), is the impossibility of completely ascribing the responsibility for the actions of an autonomous learning system to its designer. This issue is particularly relevant in modern society, where such systems are becoming increasingly pervasive. Furthermore, most of the examples presented in this paper concern autonomous learning systems applied to the military field, as it is a critical area where the employment of these systems may be a matter of life and death. In this paper we will focus on the responsibility gap and the erosion of responsibility by analyzing relevant literature, comparing ideas and, where possible, formulating new perspectives. Distinct emphasis will be put on the inadequacy of classical responsibility frameworks when applied to modern AI systems, as they typically adopt a traditional functional perspective on engineering artifacts; this perspective contrasts with the opacity and behavioral unpredictability of learning automata. In the paper we will also dwell on the potential consequences of the proposed problem and explore possible solutions, paying particular attention, and formulating responses, to some plausible opposing viewpoints.

Paper

Visualizing the Gap: Erosion of Responsibility in AI Systems

Author

Davide Rigamonti

References

Bibliography

Citation

@unpublished{VTGEORIAS,
  title = {Visualizing the Gap: Erosion of Responsibility in AI Systems},
  author = {Davide Rigamonti},
  year = {2023},
  url = {https://github.com/daviderigamonti/Visualizing-the-Gap-Erosion-of-Responsibility-in-AI-Systems}
}

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

