SS2010 Highlights: Day 1

Day 1: The Future of Human Evolution
Michael Vassar: The Darwinian Method
A solid talk about the scientific method and rationality. People can be rational without being scientific; good organizational structures can protect against bias (though the internet, by conjoining universities, may be eroding them); there are different types of scientific method (“scholarly science,” i.e. scholarly consensus, vs. “Enlightenment science,” i.e. testing); and the Scientific Method is amazing because non-geniuses can still contribute to scientific progress. No big surprises, but a good kickoff. Vassar seems pretty familiar with philosophy.
Gregory Stock: Evolution of Post-Human Intelligence
A light talk about the future, progress, and evolution. Interesting points were
1. when Stock veered off and talked about his company, Signum Biosciences. They have an Alzheimer’s drug, based on compounds found in coffee, just entering human trials.
2. Stock posed the question, “Why would love or human values survive?” — presumably in AIs or post-human intelligences these things would be competitive handicaps and the ones burdened by human values would die out. I think it’s a good point. Perhaps the point could be extended to any intelligence driven by emotion. Or perhaps even consciousness itself, given that something could somehow be intelligent yet nonconscious. Is the future owned by anhedonic, zombie AI?
Ray Kurzweil: The Mind and How to Build One
He teleconferenced in from vacation (boo). Nothing really new, but he has clearly spent a lot of time thinking about reverse engineering the principles the brain uses. According to Kurzweil, the spatial resolution of our brain imaging doubles every year. He talked some about simulation progress projections (Markram of the Blue Brain Project says 2018; Kurzweil says late 2020s to fully simulate a human brain). Interesting points included that we’ve essentially reverse engineered the complete wiring of the cerebellum (essentially the same neuron structure is repeated 10 billion times); we’re working on the cerebral cortex, and though it’s a lot more complex, we’re learning about its data structures (they function much like LISP’s linked lists). Likewise, we’ve deciphered that vision is essentially an amalgamation of 7 different low-resolution information streams. Progress.
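For readers who haven’t run into the comparison: a linked list is just a chain of nodes, each holding some data plus a pointer to the next node, so sequences are cheap to walk and to extend. A toy version of the data structure (my illustration, in Python rather than LISP):

    # Toy linked list: each node holds a pattern and a pointer to its
    # successor, loosely analogous to the sequential pattern chains
    # Kurzweil describes in the cortex.
    class Node:
        def __init__(self, pattern, next_node=None):
            self.pattern = pattern
            self.next = next_node

    # Build a chain: "A" -> "B" -> "C"
    chain = Node("A", Node("B", Node("C")))

    # Recognizing a sequence means walking the chain, one pointer hop
    # at a time; splicing in a new element is cheap.
    node = chain
    while node is not None:
        print(node.pattern)
        node = node.next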
A big problem in brain simulation, which Kurzweil mentioned and Goertzel brought up when I spoke with him, is training brain simulations. Training will not only be difficult; simulations will need to be trained before we can evaluate how good they are. Even if we can raise them at 20x speed, twenty subjective years of development still takes a calendar year, so it’ll be a year before we know enough to tell much.
Ben Goertzel: AI Against Aging
Goertzel’s dream is a computer that can do biology better than people can. We’re a long way off. He’s using ‘narrow’ AI programs to winnow promising drug targets from thousands down to dozens, specifically in the context of longevity compounds. Smart data mining.
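I don’t know the internals of Goertzel’s pipeline, but the shape of this kind of screening funnel is easy to sketch: score every candidate with some trained model, then keep only the top few dozen for human follow-up. A toy version, with a random number standing in for the learned score (every name here is hypothetical):

    import random

    random.seed(0)

    # Toy screening funnel: rank thousands of candidate drug targets by a
    # model-assigned relevance score and keep only the top few dozen.
    def score(target):
        # Stand-in for a classifier trained on longevity/expression data.
        return random.random()

    candidates = ["gene_%d" % i for i in range(5000)]
    ranked = sorted(candidates, key=score, reverse=True)
    shortlist = ranked[:30]  # thousands in, dozens out
    print(shortlist[:5])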
His view on the etiology of aging (contra de Grey):

Cross-species data analysis strongly suggests that most age-associated disease and death is due to “antagonistic pleiotropy” — destructive interference between adaptations specialized for different age ranges. The result is that death rate increases through old age, and then stabilizes at a high constant rate in late life.

Steve Mann: Humanistic Intelligence Augmentation and Mediation
Mann is known as the “first cyborg”. Very into wearable computing. He wears a camera on his head and records everything, with virtual-reality overlay capability (he calls it “mediated reality”). Interesting in the context of technologies like Layar for the iPhone. He has also designed a water-based instrument (to fill in the orchestral gap between “solid” instruments like drums and strings and “air” instruments like woodwinds and brass).
Mandayam Srinivasan: Enhancing our bodies and evolving our brains
The father of haptic (touch feedback) technology. He talked about different ‘levels’ of haptic technologies, everything from using haptic tech to interact with digital objects, to brain-computer interfaces through which the brain might extend our sense of self to encompass an artificial prosthesis (a third arm, say). Bottom line: the brain and our sense of self are very plastic, particularly given a feedback mechanism.
Brian Litt: The past, present and future of brain machine interfaces
Probably my favorite talk. Very grounded and accessible, but with speculative undertones. Talked about the neuroscience and engineering difficulties of BCIs. I’m posting some excerpts, because his talk was very content-rich:
different types of BCIs-
– one way vs two way (open or closed loop)
– invasiveness (non, partial, very) (influences bandwidth)
– spatial scale (topology, degrees of freedom)
– temporal scale (precision)
levels of organization- where to interact with the brain?
– neuron
– cortical column
– nuclei
– functional networks
– cortical regions
afferent BCIs (inject a signal)
– map the network
– choose ‘connection’ site
– inject a signal (MUST contain information)
– “neuroplasticity” helps interpretation over time
– performance = f(information quality, accessibility, bandwidth, …)
efferent BCIs (find a signal, take it out; toy decoding sketch after this list)
– map the network
– find a recording site
– transduce a signal
– algorithms ‘interpret’
– “neuroplasticity” (but you get less help from the brain going out than going in)
– performance = f(resolution, signal quality, algorithms, information)
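To make the efferent pipeline concrete, here is a minimal toy sketch of the transduce-and-interpret steps. It is my illustration, not anything Litt showed: the recording is synthetic, spike detection is a bare amplitude threshold, and a made-up linear read-out stands in for real decoding algorithms.

    import numpy as np

    rng = np.random.default_rng(0)

    # Transduce: a synthetic extracellular recording (1 s at 10 kHz).
    fs = 10_000
    signal = rng.normal(0, 1, fs)
    spike_times = rng.choice(fs, size=40, replace=False)
    signal[spike_times] += 8.0  # spikes poke above the noise floor

    # Detect: simple amplitude threshold (real systems do far better).
    threshold = 4 * signal.std()
    detected = np.flatnonzero(signal > threshold)

    # Interpret: map firing rate to an output, here a 1-D cursor velocity
    # via a toy linear decoder with a made-up gain and baseline.
    rate = len(detected) / 1.0  # spikes per second
    velocity = 0.1 * (rate - 30.0)
    print("%.0f Hz -> cursor velocity %+.2f" % (rate, velocity))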
major challenges in BCIs:
data dimensionality
data rates- up to 25 bits/min in 2000 (almost double that today)
biocompatibility
tissue/electrode interface
mapping circuits for meaningful injection/extraction points
state of the art for electrodes is bad…
12 million neurons get represented by 1 electrode. Worse, electrodes don’t measure the same neurons across different experiments.
Litt also talked about the technology behind cochlear implants and a bit about vision implants. The state of the art in cochlear implants is 22 one-dimensional channels, and a lot of useful information can be packed into this data stream if some audio filters and harmonic extractions are performed on the original sound.
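To give a flavor of the filtering involved, here is a rough sketch of a filterbank-plus-envelope front end, the general idea behind implant processing strategies such as continuous interleaved sampling. It is only an illustration: the band edges, filter order, and test signal are all invented, and clinical processors do considerably more.

    import numpy as np
    from scipy.signal import butter, hilbert, lfilter

    fs = 16_000
    t = np.arange(fs) / fs
    sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)

    # Split the sound into logarithmically spaced bands (22 in real
    # implants; 6 here to keep the example small) and keep each band's
    # slow-moving amplitude envelope.
    edges = np.logspace(np.log10(200), np.log10(7000), 7)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = lfilter(b, a, sound)
        env = np.abs(hilbert(band))   # envelope via the analytic signal
        envelopes.append(env.mean())  # one number per channel

    # Each channel's envelope would modulate the pulses on one electrode.
    print(["%.3f" % e for e in envelopes])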
I was curious how plastic Litt thought brain structure was – e.g., if you hooked up a cochlear implant system to the visual nerves, would you get sonar? He seemed sympathetic to this idea in correspondence. More speculatively, I found myself wondering whether there’s any reason to believe we could coax the brain into productively utilizing a digital “third hemisphere” type prosthesis, relocating function into it.
Demis Hassabis: Combining systems neuroscience and machine learning: a new approach to AGI
Hassabis essentially made the argument that there are three main ways to approach modeling intelligence, and that only two of these niches are being filled. He calls his third approach “systems neuroscience”.
Marr, whom Hassabis refers to as the “father of computational neuroscience,” identifies three levels of analyzing complex biological systems:
– computational – defining the goals of the system (e.g., OpenCog)
– algorithmic – how the brain does things, i.e. the representations and algorithms (Hassabis’s approach)
– implementation – the medium, i.e. the physical realization of the system (e.g., Blue Brain, SyNAPSE)
So there are productive opportunities for people to try to reverse-engineer then formalize and/or reuse the brain’s algorithms.
Hassabis also broke knowledge up into three categories that may(?) roughly correspond to these three levels: perceptual, conceptual, and symbolic. Analysis of perceptual knowledge is served by tools such as deep belief networks (DBNs), HMAX, and hierarchical temporal memory (HTM). Symbolic knowledge is served by logic networks. Conceptual knowledge is, according to Hassabis, not well served.
It was an interesting talk, and I may need to watch it a second time to organize it into a coherent narrative. It felt content-rich and smart but somewhat conceptually disjoint.
Terry Sejnowski: Reverse-engineering brains is within reach
A smart but content-lite talk about extracting computational principles from the brain. More of a set-up to the debate than anything. Noted that our models of the brain come in many levels of abstraction, and progress in reverse-engineering brains will involve connecting these different maps (CNS, Systems, Maps, Networks, Neurons, Synapses, Molecules).
Dennis Bray: What Cells Can Do That Robots Can’t
A smart but impenetrably dense survey of some complexities of cellular operation. Bray was brought in as a skeptical voice of the Old Guard Biological Establishment.
“As a card-carrying cell biologist, my loyalty lies with the carbon-based systems.”
His focus seemed to be that cells are incredibly complex (there are 10^12 protein molecules in each of our cells) and impressively adaptable. It’d be very difficult to model the complexity or replace the adaptability, and the two are tightly linked.
Sejnowski/Bray debate: Will we soon realistically emulate biological systems?
An extremely polite and tame cage match between Sejnowski and Bray. They seemed to converge on the idea that in principle we could emulate biological systems, but our current models are Very Far from being realistic simulations. Sejnowski exhibited some tools (MCell, a Monte Carlo simulator of microphysiology) which apparently do a good job at modeling certain aspects of cell biology. Bray held out for full molecular-level simulation, relating that Francis Crick once told him, “explanations at the molecular level have a unique power, because they can be verified in so many ways.”
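MCell itself tracks individual diffusing molecules through realistic 3-D geometry, which is beyond a blog-post sketch, but the Monte Carlo flavor of such simulators is easy to show. Below is a minimal Gillespie-style stochastic simulation of a reversible binding reaction; this is a stand-in for illustration, not MCell’s actual algorithm, and the rates and counts are made up.

    import math
    import random

    random.seed(1)

    # Gillespie-style stochastic simulation of A + B <-> AB.
    kf, kr = 0.001, 0.1     # made-up forward / reverse rate constants
    A, B, AB = 500, 500, 0  # made-up molecule counts
    t, t_end = 0.0, 10.0

    while t < t_end:
        rf = kf * A * B     # propensity of binding
        rb = kr * AB        # propensity of unbinding
        total = rf + rb
        if total == 0:
            break
        # Exponentially distributed waiting time until the next reaction.
        t += -math.log(1.0 - random.random()) / total
        if random.random() < rf / total:
            A, B, AB = A - 1, B - 1, AB + 1
        else:
            A, B, AB = A + 1, B + 1, AB - 1

    print("t=%.2f: A=%d, B=%d, AB=%d" % (t, A, B, AB))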
One important point that emerged was that cells are not stateless: certain kinds of memory are stored epigenetically, and epigenetic mechanisms have been shown to help restore memories in a damaged brain. In general, cells carry a significant amount of memory/learning that helps them predict future conditions and primes them for future behavior. To get realistic neural behavior, presumably this memory will need to be modeled.
The short version: nobody knows how simple a model of a neuron we can get away with to realistically emulate a brain. However, it’s safe to say we’re not there yet.
All in all, a very interesting day. Not as many superstars as last year, and not a ton of diversity in topic (no 3d printing, no economics, etc), but a lot of good things were thought and said.