AI Ethics
Frameworks
Aggregated Ethical Principles (Correa et al., 2023)
- Accountability / Liability
- Beneficence / Non-Maleficence
- Children and Adolescents' Rights
- Dignity / Human Rights
- Diversity / Inclusion / Pluralism / Accessibility
- Freedom / Autonomy / Democratic Values / Technological Sovereignty
- Human Formation / Education
- Human-Centredness / Alignment
- Intellectual Property
- Justice / Equity / Fairness / Non-Discrimination
- Labor Rights
- Cooperation / Fair Competition / Open Source
- Privacy
- Reliability / Safety / Security / Trustworthiness
- Sustainability
- Transparency / Explainability / Auditability
- Truthfulness
Reflections on Aggregated Ethical Principles
I have a major criticism of the research undertaken by Correa et al. (2023) regarding the plausibility of its expected outcome. They state that the purpose of their research is: "to determine whether a global consensus exists regarding the ethical principles that should govern AI applications and to contribute to the formation of future regulation." The limitations of the study meant that global consensus had to be extrapolated from a sample of representatives of the global population. Consequently, the problem of induction (Hume, 1739; Henderson, 2022) was simply disregarded, even though such extrapolation often leads to false beliefs. This practice is prevalent in psychology, and it is one reason psychology is labelled a 'soft science'. From this perspective, I believe that the purpose of Correa et al.'s research should be rephrased into a more accurate statement of its epistemological position, namely: "to approximate a consensus of ethical principles that should govern AI applications and would contribute to the formation of future regulation, by aggregating actual collections of those principles".
The second part of my critique concerns the method Correa et al. used to group principles into aggregated principles: they do not actually specify one. The compositional form of many of the aggregated principles, such as "Reliability / Safety / Security / Trustworthiness", has no practical advantage over separating these principles, as each term still needs to be parsed. As an alternative, Correa et al. give the definition: "this set of principles upholds the idea that AI technologies should be reliable, in the sense that their use can be truly attested as safe and robust, promoting user trust and better acceptance of AI technologies." I claim that a more succinct form of the principle is "safety through reliability", that many of Correa et al.'s principles can be made similarly succinct, and that this would be beneficial, as succinct principles are simpler to interpret. Such succinct statements of principle take the form of aphorisms.

To further simplify the interpretation of principles, I would encourage research into general semantic structures in which principles can be expressed. To illustrate the benefit of explicitly specifying the semantic structure of a principle, consider principles that all share the form '[DESIRABLE STATE] through [TECHNOLOGICAL CAPABILITY]'. This structure has the benefit of naming both the outcome of the principle and a capability whose failure would prevent that outcome; in practice, this is advantageous for specifying a capability to be tested. Since Correa et al.'s aggregated principles were not methodically structured, I would encourage an analysis that restates them in general semantic structures in order to assess their practicality (a minimal sketch of such a structure follows below). Criticisms of principlism, such as the existence of conflicting principles (Clouser & Gert, 1990), might also apply to Correa et al.'s aggregated principles and should be checked for.
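To make this concrete, here is a minimal sketch, in Python, of how a principle expressed in the '[DESIRABLE STATE] through [TECHNOLOGICAL CAPABILITY]' structure could be represented, restated, and checked for conflicting pairs. All identifiers here (Principle, capability_under_test, find_conflicts) and the incompatibility relation are my own hypothetical illustration, not anything specified by Correa et al.

```python
from dataclasses import dataclass

# Hypothetical sketch: a principle in the semantic structure
# '[DESIRABLE STATE] through [TECHNOLOGICAL CAPABILITY]'.
# None of these names come from Correa et al.; they are illustrative only.

@dataclass(frozen=True)
class Principle:
    desirable_state: str  # the outcome the principle aims at
    capability: str       # the capability whose failure would prevent the outcome

    def __str__(self) -> str:
        return f"{self.desirable_state} through {self.capability}"

# Restating the aggregated principle "Reliability / Safety / Security /
# Trustworthiness" in the proposed structure:
safety = Principle(desirable_state="safety", capability="reliability")
print(safety)  # -> safety through reliability

# Because the capability is an explicit field, it directly identifies
# what a test suite should exercise:
def capability_under_test(p: Principle) -> str:
    return p.capability

# A naive check for Clouser-style conflicts: flag pairs of principles
# whose desirable states are declared incompatible. The incompatibility
# relation itself would have to come from domain analysis; here it is
# simply an input.
def find_conflicts(
    principles: list[Principle],
    incompatible: set[frozenset[str]],
) -> list[tuple[Principle, Principle]]:
    return [
        (a, b)
        for i, a in enumerate(principles)
        for b in principles[i + 1:]
        if frozenset({a.desirable_state, b.desirable_state}) in incompatible
    ]
```

A restatement exercise along these lines would force each aggregated principle to declare a testable capability, which is exactly the practical property argued for above.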
Finally, I would like to praise Correa et al. for producing this aggregated list of principles, which can now be further analysed.