What rights should we have in a society increasingly being scrutinized, monitored, and controlled via the use of Artificial Intelligence (AI)?
That’s a good question.
To address this thorny and unresolved legal issue, the US White House released on October 4, 2022, a white paper informally referred to as an AI Bill of Rights, which more officially is entitled Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The document is the work of the Office of Science and Technology Policy (OSTP), a federal entity that was established in the mid-1970s and serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance.
Let’s unpack the AI Bill of Rights and examine the pros and cons of this latest pronouncement pertaining to AI and the law.
Rightfully Thinking About AI And Human Rights
The naming of this as an AI Bill of Rights is a bit askew since it might inadvertently suggest that these are rights associated with AI systems that have reached sentience or are otherwise nearing legal personhood. Not so. To clarify, this 73-page document is about human rights amid the ongoing onslaught of AI systems that are being deployed without sufficient attention to humankind’s safety and well-being.
You might be aware that AI has been put into use by numerous private and public organizations and has ended up acting in a variety of discriminatory ways. Our civil rights and civil liberties are under attack by how AI is crafted and utilized. AI at times is ruinously undercutting data privacy. AI permeates all manner of social media and can wrongfully suppress the speech of those criticizing hate speech, ironically so. AI can be used to stalk someone across both electronic and physical worlds, endangering their personal safety.
On and on, the litany of AI endangerment goes.
A technical companion portion within the AI Bill of Rights describes dozens of real-world examples showcasing how AI is being improperly devised and fostering potential harm. The examples suffice to make the hair stand up on the back of your neck. As an additional harbinger of concern, keep in mind that AI is expansively being rolled out and will ultimately be ubiquitous. You can anticipate a non-stop barrage of AI amidst nearly all of our daily apps on our smartphones, along with AI-powered applications used by major companies and by governmental agencies.
If we are inexorably going to be immersed in an AI-permeated way of existence, the logical response is to stand up for the rights of humankind. Thus, the reasoned basis to forge an AI Bill of Rights that can valiantly protect people.
The US Constitution famously has a historic Bill of Rights that includes vital guarantees of personal freedoms and mindfully addresses the codification of legally stipulated rights. The first ten amendments of the Constitution are breathtaking in their scope and significance. This AI Bill of Rights attempts to leverage the revered nature of the Bill of Rights to draw public attention to what needs to be considered in an AI era (some might readily criticize the naming as somewhat “exploiting” the famed Bill of Rights and overstepping a proper sense of decorum, though that could be a small price to pay for engaging society in the looming AI legal morass).
The AI Bill of Rights posits five keystones (excerpts quoted from the official white paper as cited earlier):
- Safe and Effective Systems: “You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.”
- Algorithmic Discrimination Protections: “You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”
- Data Privacy: “You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.”
- Notice and Explanation: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”
- Human Alternatives, Consideration, And Fallback: “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate.”
AI that is programmed by humans can contain a plethora of hidden risks.
I am not alluding to existential risks such as AI that rises up and takes over humanity (we aren’t yet in that ballpark). The kind of AI being confronted consists of non-sentient algorithmic AI. Efforts to legislatively contend with algorithmic AI include the ongoing U.S. Congressional efforts toward crafting the Algorithmic Accountability Act, while in Europe the EU Artificial Intelligence Act (AIA) is currently under review.
An Appetizer But Not A Meal
You would be hard-pressed to argue against the proposed precepts of the newly unveiled AI Bill of Rights. The five keystones are indubitably sensible. It is possible to quibble with some of the wording here or there, but overall, the indicated protections are what we need to be diligently considering.
That being said, the AI Bill of Rights has perhaps only whetted our appetite. Envision that this is the precursor or appetizer leading up to a fuller meal.
We have already seen this appetizer in other guises, such as the US Department of Defense (DoD) officially stated Ethical Principles of AI and the somewhat comparable directives by the Vatican in its Rome Call For AI Ethics. A much more extensive elucidation of these types of AI-relevant human rights was documented in the Recommendation on the Ethics of Artificial Intelligence released last year by UNESCO (United Nations Educational, Scientific, and Cultural Organization), which was adopted by its 193 member countries.
In that sense, the AI Bill of Rights has a lot to draw upon and yet also measure up to.
The AI Bill of Rights can be said to be insufficient in many ways, including but not limited to:
- Not legally enforceable and completely non-binding
- Advisory only and not considered governmental policy
- Less comprehensive in comparison to other published works
- Primarily consists of broad concepts and lacks implementation details
- Going to be challenging to turn into viable, practical laws
- Seemingly silent on the looming issue of possibly banning AI in some contexts
- Marginally acknowledges advantages of using AI that is well-devised
- Doesn’t appear to acknowledge arduous tradeoffs between AI benefits vs AI downsides
Despite those insufficiencies, there is certainly something to be said for putting a stake in the ground and getting the ball rolling on the regulatory governance of AI. Apparently, selected areas of the U.S. federal government will try out the five keystones of the AI Bill of Rights (as suggested in the white paper as part of “leading by example”). The belief seems to be that this will illuminate the efficacy of the keystones and reveal ways to bolster and sharpen them.
Lawmakers are ultimately going to be in the driver’s seat on all of this.
Those tasked with making our laws are going to be immensely challenged with the complicated chore of bringing together a veritable smorgasbord of recommended soft-law AI ethical practices and patchwork hard-law AI laws that are springing up throughout the states. Furthermore, our lawmakers should be wisely eyeing the globally emerging AI soft-laws and AI hard-laws that are available for the world to see and reuse.
Make no mistake, all of this is a burgeoning part of the law and growth is abundant.
Attorneys and law students will soon see that AI & Law is bubbling up to the surface. As more AI is devised and unleashed, companies and governments will need to seek out savvy AI-aware legal advisors. Meanwhile, the coming glut of new or imagined AI laws will require legal minds who can ensure that the laws as codified are sensible and practical. And the potential harms produced by AI will require lawyers who are willing to fight for humankind’s rights against the blitz of dour AI systems.
Per the wisdom of Louis Brandeis, former Associate Justice of the U.S. Supreme Court: “If we desire respect for the law, we must first make the law respectable.”
Let’s all get into the action and make humankind’s rights associated with the advent of AI a top priority. It assuredly seems like a respectable thing to do.
About The Author
Dr. Lance Eliot is a global expert on AI & Law and serves as a Stanford Fellow affiliated with the Stanford Law School (SLS) and the Stanford Computer Science Department via the Center for Legal Informatics. His popular books on AI & Law are highly rated and he has been an invited keynote speaker at major law industry conferences. His articles have appeared in numerous legal publications including MIT Computational Law Journal, Robotics Law Journal, The AI Journal, Computers & Law Journal, Oxford University Business Law (OBLB), New Law Journal, The Global Legal Post, Lawyer Monthly, Legal Business World, LexQuiry, The Legal Daily Journal, Swiss Chinese Law Review Journal, The Legal Technologist, Law360, Attorney At Law Magazine, Law Society Gazette, and others. Dr. Eliot serves on AI & Law committees for the World Economic Forum (WEF), United Nations ITU, IEEE, NIST, and other standards boards, and has testified for Congress on emerging AI high-tech aspects. He has been a professor at the University of Southern California (USC) and served as the Executive Director of a pioneering AI research lab at USC. He has been a top executive at a major Venture Capital (VC) firm, served as a corporate officer in several large firms, and been a highly successful entrepreneur.