aka Inceptionism vs. Trypophobia
this is what happens when a computer is asked to dream about the above images...
[I make no
apologies for reposting this year-old stuff from my previous blog; this is
still the coolest thing to happen to the art world since Malevich’s Black
Square.]
Investigating the artificial unconscious is a primary
objective for Hidden Scents, and was instigated by the popularization
(or neuro-popping) of deep learning neural networks. This “newfound” form of
computation is contributing loads of media-worthy content to the datasphere,
but it’s also starting to make an impact on the culture-at-large in a more
visceral way, via the activation of Google’s Deep Dream and its subsequent geek
porn for art nuts – Inceptionism.
Let’s begin with a description from the Google engineers themselves:
“Each layer of the network deals with features at a
different level of abstraction, so the complexity of features we generate
depends on which layer we choose to enhance. For example, lower layers tend to
produce strokes or simple ornament-like patterns, because those layers are
sensitive to basic features such as edges and their orientations.”
"...overinterpret [...] oversaturated with snippets
of other images."
And a deeper analysis by the masters of MindHacks:
“…by using the neural networks “in reverse” they could
elicit visualisations of the representations that the networks had developed
over training.
…pictures are freaky because they look sort of like the
things the network had been trained to classify, but without the coherence of
real-world scenes.
The obvious parallel is to images from dreams or other
altered states – situations where ‘low level’ constraints in our vision are
obviously still operating, but the high-level constraints – the kind of thing
that tries to impose an abstract and unitary coherence on what we see – are
loosened. In these situations we get to observe something that reflects our own
processes as much as what is out there in the world.”
***
Deep learning neural networks are a kind of reverse
algorithm. Using a very broad definition, an algorithm is a set of
instructions written by a programmer. The program, or algorithm,
instructs the computer in solving a problem. As it relates to artificial
intelligence and visual object recognition, a plain-old algorithm starts with a
database of objects and features: red things, round things, fuzzy things, and
flat things. Higher features, like “automobile” or “person,” won’t be recognized
until after the lower features. How does the system decide whether the
automobile is a firetruck or an ambulance? More features in the database means
more specific recognition. This mass of features is thus organized in a
hierarchy, as determined by the algorithm.
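To make the hand-built version concrete, here is a minimal sketch in Python; the cue lists, part names, and object hierarchy are all hypothetical illustrations, not any real recognition software.

```python
# A hand-built feature database and hierarchy, written in advance by the
# programmer. Every feature, part, and object name here is a hypothetical
# illustration.

FEATURE_DATABASE = {
    "wheel":  {"round", "rubber"},
    "ladder": {"long", "metal", "rungs"},
    "siren":  {"loud", "flashing"},
}

# Higher-level objects are defined on top of lower-level parts, so
# "firetruck" cannot be recognized before "wheel", "ladder", and "siren".
OBJECT_HIERARCHY = {
    "automobile": {"wheel"},
    "ambulance":  {"wheel", "siren"},
    "firetruck":  {"wheel", "ladder", "siren"},
}

def recognize(observed: set) -> str:
    """Return the most specific object whose required parts are all present."""
    parts = {name for name, cues in FEATURE_DATABASE.items() if cues <= observed}
    matches = [(len(req), obj) for obj, req in OBJECT_HIERARCHY.items() if req <= parts]
    # More features in the database means more specific recognition:
    # prefer the object that demanded the most parts.
    return max(matches, default=(0, "unknown"))[1]

print(recognize({"round", "rubber", "long", "metal", "rungs", "loud", "flashing"}))
# firetruck
```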
In a “neural net,” this (relatively) new kind of
algorithm, the hierarchy is not organized up front; the organization of
the feature-layers emerges as the network learns to recognize things. Because
it is not written in advance, in the way of a typical algorithm, and because it
actually works backwards relative to the typical mode of
operation, neural nets are seen as a novel, and potentially disruptive,
approach to artificial intelligence. In the midst of the stern warnings from
on-high against total AI takeover, a sensory recognition system that
teaches itself sounds especially portentous.
(Let the reader note that 1. Neural nets have been around
since the beginning of AI research, and 2. They do still require training by
humans in order to work; they need to be encoded with their own database of
pictures and descriptions, one of the most widely used being
SentiBank.)
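For contrast, here is a rough sketch of the neural-net version, using PyTorch purely as an assumed stand-in (the post names no framework). The stack of layers is fixed in code, but what each layer responds to is learned from the kind of labeled picture database mentioned in the parenthetical above.

```python
# A minimal neural-net sketch: the layers are stacked in advance, but their
# contents -- which edges, patterns, and parts they respond to -- are learned
# from labeled pictures rather than written by the programmer.

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # lower layer: tends to learn edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # next layer: simple patterns and parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                     # top layer: firetruck vs. ambulance
)

# One step of the human-supplied training: show labeled pictures,
# then nudge the weights toward the right answers.
images = torch.randn(4, 3, 32, 32)       # stand-in for a database of photos
labels = torch.tensor([0, 1, 0, 1])      # 0 = firetruck, 1 = ambulance (hypothetical)
loss = nn.CrossEntropyLoss()(net(images), labels)
loss.backward()                          # from here the hierarchy organizes itself
```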
This deep learning approach, as artificial as it is, is
more akin to the process of our unconscious mind, as opposed to the more
rational, conscious mind. We cannot tell our unconscious mind what to do; it
works the other way around. This is what makes it such a fitting subject in a
discussion about olfaction. To smell is the closest thing we have to glimpsing
the unconscious mind at work, and it brings us to the next subject of interest,
that of hallucinations.
***
Inceptionism is a hallucinating computer. It uses the
'reverse algorithm' approach of deep learning, and turns it back on itself,
favoring pure visual sensation over a “unitary coherence”. Without the
organizing principles of abstract, conceptual models, the world is a
phantasmagoria, a synesthetic mess of confusion.
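As a sketch of what “turning it back on itself” means in practice: instead of adjusting the network’s weights to classify an image, the weights are frozen and the image itself is adjusted to excite a chosen layer, i.e., gradient ascent on activations. The snippet below uses torchvision’s pretrained GoogLeNet and its inception4c layer as stand-ins; the layer choice, step size, and iteration count are illustrative assumptions, not Google’s original recipe (which was written for Caffe).

```python
# Gradient ascent on a chosen layer's activations: freeze the weights,
# run an image forward, then adjust the image so that whatever the layer
# already "sees" in it gets amplified.

import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture activations from one mid-level layer; lower layers yield strokes
# and ornament-like patterns, higher layers yield eyes, animals, snippets.
captured = {}
model.inception4c.register_forward_hook(lambda m, i, o: captured.update(act=o))

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise, or start from a photo

for _ in range(20):
    model(image)
    loss = captured["act"].norm()      # "whatever you see there, show me more of it"
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
```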
Oliver Sacks does a wonderful job of describing this
world in his 2012 book Hallucinations. In his pages, we see in working detail
how the brain makes sense of the world, and how that process can go awry. He
even has a chapter on the osmic family – anosmia, dysosmia, phantosmia, etc.
In closing, deep learning neural networks are a lot like
the olfactory-perception system. And olfaction is unavoidably hallucinatory.
Far afield, when a computer can dream about smells and not just visual imagery,
I doubt it will be as interesting as this here Inceptionism. To smell is
already a kaleidoscope of sensation, undulating, refracting, and
redintegrating*. We experience Olfactive Inceptionism on a daily basis.
*Redintegration is the restoration of the whole of
something from a part of it. If you typically associate citrus scents with
cannabis, you may eventually smell (hallucinate) cannabis when only citrus is
present.
Post Post Script:
Speaking of dreams, smell tends not to
make an impact in that arena.