<p>The word “learning” often conjures images of school or other human endeavors. Neuroscientists use the word for a wide range of phenomena across the animal kingdom. For those immersed in Python code, learning may instead evoke gradient descent or other technical terms from computer science. What do we gain from using the same name for all these cases?<br>In this talk, I will argue that systems that learn can be useful models of one another. This is because of general principles that seem to transcend specific instances, such as the multiplicity of solutions, low-rank perturbations, and more. I will demonstrate these properties with several examples, including representational drift, the connection (or lack thereof) between neural activity and behavior, and more.<br>Throughout the talk, I will try to highlight the benefits, dangers, and challenges of this approach.</p>