White House Lays Down AI “Bill of Rights” Blueprint
The White House has announced a blueprint for what it calls an AI Bill of Rights – a set of principles intended to ensure that US citizens’ individual privacy and rights are not breached by the ongoing evolution of AI technology.
The blueprint is entirely non-binding, but the government hopes it will be taken up and applied by AI tech firms – almost like Asimov’s Three Laws of Robotics – as guiding principles that help technological evolution and human liberty exist side by side in the face of future AI development.
Principles of the Blueprint
The headline principles of the blueprint are:
- Safe and Effective Systems: People should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections: People should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- Data Privacy: People should be protected from abusive data practices via built-in protections and should have agency over how data about them is used.
- Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Explaining the thinking behind the blueprint, Dr. Alondra Nelson, Deputy Director for Science and Society at the Office of Science and Technology Policy, said the blueprint had to be more than a dream if humans and AI technologies were to exist harmoniously in the future.
“Automated technologies are driving remarkable innovations and shaping important decisions that impact people’s rights, opportunities, and access. The Blueprint for an AI Bill of Rights is for everyone who interacts daily with these powerful technologies — and every person whose life has been altered by unaccountable algorithms. The practices laid out in the Blueprint for an AI Bill of Rights aren’t just aspirational, they are achievable and urgently necessary to build technologies and a society that works for all of us.”
The Devil in the Details
It’s fair to say that legislation has not kept up with the development of AI technologies in recent years, but the devil, from AI companies’ perspective, will be in the details and in the definition of some of these phrases – when does a system cease to be safe, for instance, and where exactly is the line on effectiveness?
While there’s a fairly clear definition of what an equitable algorithm should look like, the use of AI in recruitment sifting has proven that such a system can be harder to design than we might think, because it will inevitably carry forward the often-unconscious biases and privileges of both its programmers and the datasets it is trained on.
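To make that mechanism concrete, the sketch below is a purely hypothetical toy in Python – synthetic data and invented feature names, not any vendor’s actual system – showing how skewed historical hiring decisions leave a proxy feature (here, a résumé keyword correlated with gender) carrying the bias, which any model fitted to those labels would then reproduce even though gender is never an explicit input.

```python
# Hypothetical illustration only: synthetic data, invented feature names.
# Shows how a proxy feature inherits bias from skewed historical decisions.
import random

random.seed(0)

def make_historical_record(n=10_000):
    """Synthetic past hiring data in which women were rarely hired."""
    rows = []
    for _ in range(n):
        is_female = random.random() < 0.5
        # Résumé keyword that appears mostly on women's résumés (the proxy).
        proxy = 1 if (is_female and random.random() < 0.6) else 0
        skill = random.gauss(0, 1)
        # Historical decision: skill mattered, but women were seldom hired.
        hired = skill > 0.5 and (not is_female or random.random() < 0.1)
        rows.append((proxy, skill, int(hired)))
    return rows

def hire_rate_by_proxy(rows):
    """P(hired | proxy) – the pattern a model fitted to these labels would learn."""
    for value in (0, 1):
        group = [hired for proxy, _, hired in rows if proxy == value]
        print(f"proxy={value}: hire rate {sum(group) / len(group):.2%}")

data = make_historical_record()
hire_rate_by_proxy(data)
# The hire rate for proxy=1 is far lower, so a model trained to reproduce
# these labels will down-rank résumés containing the keyword, even though
# gender itself was never a feature.
```

The point of the toy is that the bias lives in the labels, not in any explicit gender field, which is why simply removing protected attributes from the inputs does not make a screening system equitable.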
Perhaps unsurprisingly, given the legislative void in which AI has developed in recent years, there are no set laws in place to govern its use in cases like recruitment sifting algorithms that exclude black candidates because previous holders of high-paying jobs have been white, or (as in Amazon’s case) exclude women because the dataset of high-paid executive roles used to train the company’s AI consisted almost entirely of men. Similarly, there is no concrete regulation around law enforcement agencies’ use of AI-powered facial recognition software, despite its marked tendency to misidentify black men as criminal suspects – a flaw that has already led to several wrongful arrests.
Privacy advocates have cautiously welcomed the blueprint, while pointing out that, as long as it remains only a blueprint, it is ultimately a toothless wish list of behaviors to which tech companies developing AI products – and the companies using them – are expected to adhere out of some vague notion of digital decency. It would be significantly more effective, and therefore a significantly more meaningful achievement for the administration, were the blueprint to be written into the chaotic framework of federal privacy laws.
Can a Toothless Dragon Become an Industry Standard?
Nevertheless, there is broad approval of the fact that the blueprint has been published at all. While privacy advocates are keen that there be no workarounds, exceptions, carve-outs, or loopholes, the unenforceable blueprint gives the tech industry as a whole the chance to act as good data citizens and to embed the blueprint’s principles into what the American people come to expect when interacting with AI systems.
That means there is time – before enforcement of tougher, blueprint-based federal laws becomes necessary to protect human privacy from the onslaught of technological innovation – for the tech industry to fix the problems AI technology has already been shown to have, such as its tendency to mimic prejudiced datasets, excluding people from opportunities and including them in suspicion.
Whether the Biden-Harris administration has the time, the support, or the vigor to turn the blueprint into anything with more bite remains to be seen – November’s midterms will be key to answering those questions. But the opportunity to act now, and potentially be lauded and supported as early adopters of the blueprint, is not one the AI industry should cast aside lightly.