Paper
Algorithmic Opacity: Making Algorithmic Processes Transparent through Abstraction Hierarchy
Published Sep 1, 2018 · Pragya Paudyal, B. L. William Wong
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
12 Citations · 1 Influential Citation
Abstract
In this paper we introduce the problem of algorithmic opacity and the challenges it presents to ethical decision-making in criminal intelligence analysis. Machine learning algorithms have played an important role in decision-making over the past decades, and intelligence analysts are increasingly presented with smart black-box automation that uses machine learning to find patterns or interesting and unusual occurrences in big data sets. Algorithmic opacity is the lack of visibility into computational processes, such that humans cannot inspect their inner workings to ascertain for themselves how results and conclusions were computed. This opacity gives rise to several ethical issues. In the VALCRI project, we developed an abstraction hierarchy and abstraction decomposition space to identify important functional relationships and system invariants in relation to ethical goals. Such explanatory relationships can be valuable for making algorithmic processes transparent during criminal intelligence analysis.
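To illustrate the kind of structure the abstract refers to, here is a minimal sketch of an abstraction decomposition space as a data structure, using the five classic abstraction-hierarchy levels and means-ends links between them. The specific nodes (ethical goal, traceability invariant, clustering algorithm) are hypothetical examples, not taken from the paper.

```python
# Minimal sketch (not from the paper) of an abstraction decomposition space:
# nodes sit at one of the five classic abstraction-hierarchy levels, and
# means-ends links record which lower-level functions realise higher-level goals.

from dataclasses import dataclass, field

LEVELS = [
    "functional_purpose",    # e.g. support ethical, lawful intelligence analysis
    "abstract_function",     # e.g. accountability, traceability of inferences
    "generalised_function",  # e.g. pattern detection over crime data
    "physical_function",     # e.g. a specific clustering algorithm
    "physical_form",         # e.g. the deployed software component
]

@dataclass
class Node:
    name: str
    level: str
    supports: list = field(default_factory=list)  # higher-level ends this node serves

def link(lower: Node, higher: Node) -> None:
    """Record a means-ends relation: `lower` is a means to the end `higher`."""
    lower.supports.append(higher)

# Hypothetical example: trace an algorithm up to the ethical goal it serves.
goal = Node("Support ethical decision-making", "functional_purpose")
invariant = Node("Every conclusion is traceable to its inputs", "abstract_function")
function = Node("Find unusual co-occurrences in crime reports", "generalised_function")
algorithm = Node("Density-based clustering", "physical_function")

link(invariant, goal)
link(function, invariant)
link(algorithm, function)

def trace_up(node: Node) -> list:
    """Walk means-ends links upward, showing which goals an algorithm ultimately serves."""
    chain = [node.name]
    while node.supports:
        node = node.supports[0]
        chain.append(node.name)
    return chain

print(" -> ".join(trace_up(algorithm)))
```

Tracing such means-ends links upward is one way the functional relationships described in the abstract could be surfaced to an analyst as an explanation of what an opaque algorithm is for.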