Jul 2017, phys.org
"Deep learning" and "neural networks"
are terms that have become firmly planted in our popular lexicon. They all
refer to the same thing, which is an artificial brain-like thing that teaches
itself via feedback loops. I talked about this in Hidden Scents because the
way our brain decodes olfactory information is a lot like the way these deep
learning networks process their own big data.
These networks are way more effective than traditionally programmed computers
at problems like facial recognition and natural language translation. They're
also really good at handling Big Data, you know, like all that stuff
cybercriminals keep stealing for ransom? Thing is, once these networks "figure
out" how to do whatever it is they do, we have no idea how they did it.
Usually, with traditional programming, we write the code,
so we know what it does and how it does it. With deep learning, the network
essentially writes its own program, and since it seems to know what it's
doing, we don't ask how. We just take the results.
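Here's a toy sketch of that difference, in Python with made-up data (nothing from the actual paper): the first function is a rule we wrote ourselves, so the logic is right there to read; the second is a tiny model that learns its own rule from examples, and all it hands back is a couple of numbers.

```python
import numpy as np

# Traditional programming: we write the rule, so we can read exactly what it does.
def is_hot(temperature_f):
    return temperature_f > 80

# Learning approach: a one-weight "network" derives its own rule from examples.
rng = np.random.default_rng(0)
temps = rng.uniform(40, 120, size=200)      # made-up temperatures
labels = (temps > 80).astype(float)         # the answers we want it to learn
x = (temps - temps.mean()) / temps.std()    # scale inputs so training behaves

w, b = 0.0, 0.0
for _ in range(5000):                       # plain gradient descent
    pred = 1 / (1 + np.exp(-(w * x + b)))   # sigmoid output
    w -= 0.1 * np.mean((pred - labels) * x)
    b -= 0.1 * np.mean(pred - labels)

# All we get back are numbers; the rule it learned is buried inside them.
print("learned weight and bias:", w, b)
```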
Until now. Here's one of the researchers, quoted in
phys.org:
"We catalogued 1,100 visual concepts—things like the
color green, or a swirly texture, or wood material, or a human face, or a
bicycle wheel, or a snowy mountaintop," says David Bau, an MIT graduate
student in electrical engineering and computer science and one of the paper's
two first authors. "We drew on several data sets that other people had
developed, and merged them into a broadly and densely labeled data set of
visual concepts. It's got many, many labels, and for each label we know which
pixels in which image correspond to that label."
Here is the link to
their work.
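To give a feel for what that pixel-level labeling buys you, here's a rough Python sketch of the kind of overlap check it makes possible: take the pixels a single unit inside the network responds to, and see how much they coincide with the pixels labeled with some concept. The arrays, the overlap_score function, and the threshold here are all invented for illustration; the paper's actual procedure is more careful than this.

```python
import numpy as np

def overlap_score(activation_map, concept_mask, threshold=0.5):
    """Intersection-over-union between the pixels a network unit responds to
    and the pixels labeled with a visual concept."""
    unit_pixels = activation_map > threshold            # where the unit "fires"
    both = np.logical_and(unit_pixels, concept_mask).sum()
    either = np.logical_or(unit_pixels, concept_mask).sum()
    return both / either if either else 0.0

# Toy stand-ins: an 8x8 "activation map" and a hand-made mask for one concept.
rng = np.random.default_rng(1)
activation_map = rng.random((8, 8))                     # pretend unit activations
concept_mask = np.zeros((8, 8), dtype=bool)
concept_mask[2:6, 2:6] = True                           # pretend "wood" pixels

print(overlap_score(activation_map, concept_mask))
# A high score would suggest the unit has latched onto that concept.
```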