Tuesday, September 15, 2020

The Origin of Artificial Olfaction


Apr 2020, phys.org

MIT researchers have found a new and better way to compress neural network models.

It's so simple that they unveiled it in a tweet last month: train the model, prune its weakest connections, retrain the model at its fast, early learning rate, and repeat until the model is as tiny as you want.
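For the curious, a minimal sketch of that loop in PyTorch might look like the following. The toy model, stand-in data, 20% pruning fraction, and learning-rate schedule are all illustrative assumptions here, not the paper's actual setup.

```python
# Sketch: iterative magnitude pruning with learning-rate rewinding.
# Everything below (model, data, schedule, pruning fraction) is a toy
# assumption for illustration, not the authors' actual configuration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
X = torch.randn(512, 784)                # stand-in data
y = torch.randint(0, 10, (512,))
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
loss_fn = nn.CrossEntropyLoss()

def train(model, lr_schedule):
    """A short training run; the schedule starts fast and decays."""
    opt = torch.optim.SGD(model.parameters(), lr=lr_schedule[0])
    for lr in lr_schedule:
        for group in opt.param_groups:
            group["lr"] = lr
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

schedule = [0.1, 0.1, 0.01, 0.001]       # fast early rate, then decay

train(model, schedule)                   # 1. train the model
for _ in range(5):                       # 4. ...and repeat until tiny
    for layer in model:                  # 2. prune the weakest 20% of each
        if isinstance(layer, nn.Linear): #    layer's remaining weights
            prune.l1_unstructured(layer, name="weight", amount=0.2)
    train(model, schedule)               # 3. retrain, with the learning rate
                                         #    rewound to its fast, early value
```

The key contrast with standard fine-tuning is step 3: instead of continuing at the final, small learning rate, the schedule is rewound to its fast beginning.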



In other words, to make a more efficient artificial brain, you grow it from scratch, the way a person's brain grows.

This is a welcome development for artificial olfaction enthusiasts, because we won't see fully functioning synthetic olfactory systems until we can first get a "lifetime" worth of autobiographical data for such a system.

That feeling you get when "the smell of grandma's attic" hits you won't happen if you never had a grandma. The data used by an olfactory system, artificial or otherwise, will come not only from a practically infinite variety of odorous molecules and their physicochemical properties, but also from the limbic system. And not just any limbic system: it has to be one preloaded with physiological datapoints as they relate to different combinations of odorous molecules. That requires a lifetime of matching bodily experiences, social experiences, and ultimately autobiographical moments to odors.
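To make that concrete, here is a purely hypothetical sketch of what a single "datapoint" for such a system might look like; every field name below is an illustrative assumption, not an existing schema.

```python
# Hypothetical: one odor "encounter" pairing an odorant mixture and its
# physicochemical properties with the bodily, social, and autobiographical
# context it was experienced in. All fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OdorEncounter:
    molecules: dict[str, float]        # odorant mixture: SMILES -> concentration
    physicochemical: dict[str, float]  # e.g. molecular weight, vapor pressure
    physiology: dict[str, float]       # limbic-side signals, e.g. heart rate
    social_context: list[str]          # who was there, what was happening
    autobiographical_note: str         # the episodic memory the odor attaches to

# A made-up example; the molecule string is illustrative only.
grandmas_attic = OdorEncounter(
    molecules={"CC(=O)OC1=CC=CC=C1C(=O)O": 0.01},
    physicochemical={"molecular_weight": 180.16},
    physiology={"heart_rate_bpm": 72.0},
    social_context=["grandmother", "summer visit"],
    autobiographical_note="dust, old books, a cedar chest",
)
```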

To decode olfaction is not so much a phylogenetic (species-level) problem as an ontogenetic (individual-level) problem. There is so much variety in the way we perceive smells that a phylogenetic approach would exclude most of what makes smell such a powerful experience. Smell needs meaning; it is by nature subjective, not objective. In other words, it needs a subject, and in ways that other senses can do without (see object recognition, for example).

Image source: Hiroto Ikeuchi, cyberpunk

Notes
Renda, A., Frankle, J., and Carbin, M., "Comparing Rewinding and Fine-tuning in Neural Network Pruning," arXiv:2003.02389 [cs.LG], arxiv.org/abs/2003.02389

Post Script
June 2020, phys.org

"We study spiking neural networks, which are systems that learn much as living brains do," said Los Alamos National Laboratory computer scientist Yijing Watkins. "We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development."

Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. "It was as though we were giving the neural networks the equivalent of a good night's rest," said Watkins.
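As a toy illustration of the idea (and emphatically not the Los Alamos implementation), the sketch below alternates unsupervised Hebbian-style "wake" learning on input data with "sleep" phases in which the network is driven by Gaussian noise and over-active weights are damped. The network, the update rules, and all constants are assumptions.

```python
# Toy sketch: alternating wake (unsupervised learning on data) and sleep
# (Gaussian-noise drive that damps runaway weights). Not the actual
# spiking-network code; all rules and constants here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(16, 64))           # 64 inputs -> 16 units

def spike_step(x, W, threshold=1.0):
    """Simple thresholded spiking: a unit fires if its drive crosses threshold."""
    return (W @ x > threshold).astype(float)

for epoch in range(100):
    # --- wake: Hebbian-style unsupervised learning on (toy) sensory input ---
    for _ in range(50):
        x = (rng.random(64) < 0.2).astype(float)  # toy binary input pattern
        spikes = spike_step(x, W)
        W += 0.01 * np.outer(spikes, x)           # potentiate coactive pairs
    # --- sleep: drive the network with Gaussian noise instead of data ---
    for _ in range(10):
        noise = rng.normal(0.0, 0.5, size=64)
        spikes = spike_step(noise, W)
        W -= 0.005 * np.outer(spikes, np.abs(noise))  # damp over-active units
    W = np.clip(W, -1.0, 1.0)                     # keep weights bounded
```

In this toy version, the wake rule only ever potentiates, so without the sleep phase the weights grow without bound; the noise-driven depression during "sleep" is what restores stability.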
