Google AI Transparency Tool: TCAV
Despite a thorough understanding of neural networks for image classification, the models remain black boxes to machine learning engineers: we don't know which layer learns which set of features. A deeper understanding of what is learnt at each level can help correct biases or spurious features by changing the dataset, and can dramatically speed up the analysis phase of model improvement. Currently, however, most AI tools focus on ease of implementation and reduced deployment time instead.
Google's research labs have come up with TCAV, or Testing with Concept Activation Vectors, which recently made its debut at Google I/O 2019. Though still in its infancy, it is a great step forward in the automated testing of neural networks.
In an example that Google offered, TCAV could tell you that the model identified an image's subject as a doctor partly because of the white coat and stethoscope, but also partly because the person appeared to be male, presumably because the model was trained on a dataset in which men were more likely than women to be doctors. Just identifying that bias doesn't fix the problem, of course, but it is a necessary first step toward confronting and correcting such biases.
In another example offered by Google, you can see that the concepts Horse and Stripes are important for identifying a zebra; however, the importance attributed to Savanna shows that the dataset needs to be diversified further to improve classification in non-grassland backgrounds.
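To make the zebra example concrete, here is a minimal sketch of the idea behind TCAV, written in plain NumPy rather than the official `tcav` library. The activations and gradients below are synthetic stand-ins (in a real run they would come from a trained network's hidden layer): a linear classifier is fit to separate concept examples (e.g. "stripes") from random images in activation space, its normal vector is taken as the Concept Activation Vector (CAV), and the TCAV score is the fraction of class inputs whose class-logit gradient points in the CAV direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for one hidden layer's 64-d activations over:
#   - images exemplifying a concept such as "stripes" (shifted cluster)
#   - random counterexample images
concept_acts = rng.normal(loc=1.0, size=(50, 64))
random_acts = rng.normal(loc=0.0, size=(50, 64))

# 1. Fit a linear classifier (tiny logistic regression via gradient
#    descent) to separate concept vs. random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1.0] * 50 + [0.0] * 50)
w = np.zeros(64)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)       # logistic-loss gradient step

# 2. The CAV is the unit normal to the decision boundary, pointing
#    toward the concept examples.
cav = w / np.linalg.norm(w)

# 3. For each input of the target class (e.g. "zebra"), take the gradient
#    of the class logit w.r.t. this layer's activations. Here we fake
#    gradients that are mildly aligned with the concept direction.
grads = rng.normal(size=(100, 64)) + 0.5 * cav

# TCAV score: fraction of inputs whose directional derivative along the
# CAV is positive, i.e. for which the concept pushes the prediction up.
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score for the concept: {tcav_score:.2f}")
```

With real activations and gradients, a high score for Stripes and a surprisingly high score for Savanna would surface exactly the dataset bias described above.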
TCAV is a big leap forward in automated testing for AI and will likely push Google's competitors to redirect focus and resources toward this novel avenue. It may also lay the foundation for similar tools covering a broader range of neural networks, including speech recognition.
It will also undoubtedly increase enthusiasm for AI among budding developers, thanks to the improved testing and understanding of neural networks it enables.