
Last week, Stanford Medical Center revealed the algorithm it used to prioritize its staff for receiving vaccines. Of the 5,000 slots in the first wave, the algorithm allocated only 7 to resident physicians (frontline workers), while the rest went to hospital administrators, senior doctors, and other employees working from home with no in-person patient responsibilities. Stanford soon acknowledged that the “… algorithm, that the ethicists, infectious disease experts worked on for weeks … clearly didn’t work right”.
Interestingly, there was no complex ML model involved, only a rule-based algorithm to determine the order in which the thousands of medical workers at Stanford should be vaccinated. A lot has been written about this story, including takeaways such as the need to review algorithms for structural biases, to validate them before and after deployment, and to include all stakeholders in decision making. But what is most intriguing is how the initial blame in such cases almost always falls on the (usually black-box) algorithm. With users expecting greater transparency (read Explainable AI), an “error in the algorithm” claim may need to be substantiated sooner or later.

The algorithm used by Stanford Medical Center. Credits: MIT Technology Review
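To see how a transparent, rule-based allocator can still produce skewed results, here is a minimal sketch of a score-and-sort scheme. The factor names and weights below (`age`, `unit_positivity`, the 0.5 and 100 multipliers) are hypothetical stand-ins, not Stanford's actual rules; the real algorithm reportedly combined age with unit-level exposure data in a broadly similar way.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    name: str
    age: int
    # Fraction of positive COVID tests in the employee's assigned
    # unit, if they have one (hypothetical factor for illustration).
    unit_positivity: Optional[float]

def priority_score(e: Employee) -> float:
    """Score-and-sort allocation: a higher score means earlier vaccination."""
    score = 0.0
    # Age-based points: older staff assumed to be at higher personal risk.
    score += e.age * 0.5
    # Exposure proxy: test positivity in the employee's assigned unit.
    # Residents who rotate across units may have no fixed assignment,
    # so this term silently contributes nothing for them.
    if e.unit_positivity is not None:
        score += e.unit_positivity * 100
    return score

staff = [
    Employee("senior physician (62, fixed clinic)", 62, 0.04),
    Employee("administrator (55, working from home)", 55, 0.0),
    Employee("resident (29, rotating, no fixed unit)", 29, None),
]
for e in sorted(staff, key=priority_score, reverse=True):
    print(f"{priority_score(e):5.1f}  {e.name}")
```

On this toy roster, the 29-year-old rotating resident lands last, below an administrator working from home. Every rule fired exactly as written; no bug, no malice, just rules whose assumptions quietly fail for one group.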
Competency a curse?
Thanks to sci-fi movies, we credit rogue robots with Doomsday AI: a day when we are in trouble because their goals aren’t aligned with ours. Agreed, there is a real risk with robots for which we cannot assume best intent.

But as Stephen Hawking said, “The real risk with AI isn’t malice but competence.” I would add compliance. When an algorithm does exactly what it’s told to do, to a T, is it alone to blame? An algorithm reflects not only its creators’ rationale but also their assumptions and biases. It is an expression and execution of its creators’ preferences.

Taking constructive feedback from AI systems
In most cases, the perceived lack of priority for a certain section of people may not be intentional at all. But despite the best intent to build equitable systems, unfairness and bias have always been a cause of worry for users of AI systems. AI practitioners who assume best intent will treat AI as an ally: AI can reveal unconscious biases that are otherwise difficult for us to see within ourselves, and it does a great job of making them plainly visible. Our openness and willingness to accept this constructive feedback from the AI system, as painful as it may be, is a step toward successfully auditing and course-correcting where required.
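Treating the AI system as an ally in practice means measuring its output. Here is a minimal sketch of such an audit, assuming all we have are group labels and selection outcomes; the counts below are illustrative, loosely shaped like the reported numbers, not real data.

```python
from collections import Counter

def selection_rates(records):
    """Selection rate per group (selected / total), a first-pass audit signal."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical outcomes shaped like the reported story: 7 of roughly
# 1,300 residents selected in a first wave of 5,000 slots. The
# non-resident totals are made up for illustration.
records = (
    [("resident", True)] * 7 + [("resident", False)] * 1293
    + [("non-resident", True)] * 4993 + [("non-resident", False)] * 5000
)
rates = selection_rates(records)
print(rates)

# Disparate-impact ratio: the disadvantaged group's rate over the
# advantaged group's. Values well below 0.8 fail the common
# "four-fifths" rule of thumb and flag the system for review.
print(rates["resident"] / rates["non-resident"])
```

A ratio this small (around 0.01 here) is exactly the kind of plainly visible, uncomfortable feedback the system hands back to its creators; the audit does not say why the disparity exists, only that it demands a look at the rules and assumptions behind it.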