“Hundreds of AI tools have been built to catch Covid. None of them helped.” - MIT Technology Review
When the pandemic hit, a bunch of us rallied together to build a chatbot to address the massive infodemic. The problem statement: an overabundance of reported information, some of it inaccurate, made it hard for people to find trustworthy sources and reliable guidance when they needed them most. If there was something more disturbing than the virus itself, it was the perception of the virus.
It was a problem worth solving, and the technology was at our disposal. We sourced content from the WHO and the CDC. Only from them. The bot could answer basic FAQs and debunk myths. Had we tapped into the “WhatsApp university” sources, our success rates would have been higher, the responses more attractive. Should we have?
Like practitioners across the globe, I thought, “If there’s any time that AI could prove its usefulness, it’s now.” I had my hopes up. But the MIT Technology Review headline (link in comments) says it all: hundreds of AI tools have been built to catch Covid, and none of them helped.
I still believe AI has the potential. So why did it fail the pandemic’s test? The reasons are manifold, but it starts (no surprises here) with poor-quality data. A pandemic is not the best time to collect data. And so publicly available datasets riddled with mislabeled records, along with “Frankenstein” sets poorly spliced together from multiple sources, fed tools that were effectively trained and tested on the same data and then prematurely rushed to the frontline.
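That “trained and tested on the same data” failure is worth making concrete. Here is a minimal, hypothetical sketch (synthetic data, scikit-learn, illustrative numbers only, none of it from the article) of how duplicate records in a dataset spliced from two sources leak across a naive train/test split and inflate the reported accuracy:

```python
# Hypothetical sketch (synthetic data, illustrative only): how duplicates in a
# "Frankenstein" dataset spliced from two sources leak across a naive
# train/test split and inflate the reported score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Source A: 500 samples with a weak, noisy signal in the first feature.
X_a = rng.normal(size=(500, 20))
y_a = (X_a[:, 0] + rng.normal(scale=2.0, size=500) > 0).astype(int)

# Source B republishes the same records with tiny measurement differences,
# so the merged dataset is full of near-exact duplicates.
X = np.vstack([X_a, X_a + rng.normal(scale=0.01, size=X_a.shape)])
y = np.concatenate([y_a, y_a])

# Leaky evaluation: a random split scatters each record's near-duplicate
# across train and test, so the model is graded on rows it has memorised.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
leaky = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("leaky estimate: ", accuracy_score(y_te, leaky.predict(X_te)))

# Honest evaluation: deduplicate first (keep source A only), then split.
# This typically prints a much lower, more realistic score.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.3, random_state=0)
honest = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("honest estimate:", accuracy_score(y_te, honest.predict(X_te)))
```

An inflated benchmark like the first one is exactly what made many of those Covid tools look frontline-ready when they weren’t.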
To be data-ready for future health crises, scientific groups are calling for better mechanisms for data access and sharing across borders. Perhaps AI teams that were called out for their secrecy and incorporation bias could collaborate more closely with clinicians and explain their algorithms.
So maybe AI couldn’t help us much during the pandemic. Perhaps the pandemic could help make AI better?
Lessons from the pandemic for better AI?
