“Are easy-to-interpret neurons actually necessary? It might be like studying automobile exhaust to understand automobile propulsion.” Understanding the underlying mechanisms of deep neural networks (DNNs) typically relies on building intuition by emphasising the sensory or semantic features of individual examples. Knowing what a model understands and why it does so is crucial for reproducing and improving…
