Cross-species data analysis strongly suggests that most age-associated disease and death is due to “antagonistic pleiotropy” — destructive interference between adaptations specialized for different age ranges. The result is that death rate increases through old age, and then stabilizes at a high constant rate in late life.
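To make that mortality trajectory concrete, here is a minimal sketch; it assumes a standard Gompertz hazard capped by a late-life plateau, with illustrative parameters rather than values fitted to any dataset:

```python
import math

# Illustrative (not fitted) parameters for a Gompertz hazard with a plateau:
A = 1e-4        # baseline annual death rate at age 0
G = 0.085       # Gompertz slope: how fast mortality rises with age
PLATEAU = 0.45  # hypothetical ceiling on the annual death rate in late life

def hazard(age):
    """Annual death rate: exponential Gompertz growth, then a flat plateau."""
    return min(A * math.exp(G * age), PLATEAU)

for age in range(40, 111, 10):
    print(f"age {age:3d}: annual death rate ~ {hazard(age):.3f}")
```

With these toy numbers the death rate roughly doubles every eight years through old age, then flattens at the plateau around age 100.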
different types of BCIs
– one-way vs. two-way (open or closed loop)
– invasiveness (non, partial, very) (influences bandwidth)
– spatial scale (topology, degrees of freedom)
– temporal scale (precision)

levels of organization – where to interact with the brain?
– neuron
– cortical column
– nuclei
– functional networks
– cortical regions

afferent BCIs (inject a signal)
– map the network
– choose ‘connection’ site
– inject a signal (MUST contain information)
– “neuroplasticity” helps interpret over time
– performance = f(information quality, accessibility, bandwidth…)

efferent BCIs (find a signal, take it out)
– map the network
– find a recording site
– transduce a signal
– algorithms ‘interpret’
– ‘neuroplasticity’ (but you get less help from the brain going out than going in)
– performance = f(resolution, signal quality, algorithms, information)

major challenges in BCIs:
– data dimensionality
– data rates – up to 25 bits/min in 2000, almost double now (see the sketch after these notes)
– biocompatibility
– tissue/electrode interface
– mapping circuits for meaningful injection/extraction points

state of the art for electrodes is bad… 12 million neurons get represented by 1 electrode. Likewise, electrodes don’t measure the same neurons during different experiments.
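To ground the bits-per-minute figure above: BCI studies commonly estimate information transfer rate with the formula popularized by Wolpaw and colleagues, which combines the number of targets, selection accuracy, and selection speed. A minimal sketch; the target count, accuracy, and speed below are hypothetical numbers, not from any particular system:

```python
import math

def bits_per_selection(n_targets, accuracy):
    """Information per selection (the Wolpaw ITR formula)."""
    bits = math.log2(n_targets)
    if 0 < accuracy < 1:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return bits

def bits_per_minute(n_targets, accuracy, selections_per_minute):
    """Scale per-selection information by how fast the user can select."""
    return bits_per_selection(n_targets, accuracy) * selections_per_minute

# Hypothetical system: 4 targets, 90% accuracy, 12 selections per minute.
print(bits_per_minute(4, 0.90, 12))  # ~16.5 bits/min, in the ballpark above
```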
– computational – defining the goals of the system (e.g., OpenCog)
– algorithmic – how the brain does things – the representations and algorithms (this guy)
– implementation – the medium – the physical realization of the system (e.g., Blue Brain, SyNAPSE)
I’ll be at the Singularity Summit this weekend in San Francisco. Look for M. Edward Johnson. I’ll also have rocking sideburns.
The brain is extraordinarily complex. We are in desperate need of models that decode this complexity and allow us to speak about the brain’s fundamental dynamics simply, comprehensively, and predictively. I believe I have one, and it revolves around resonance.
I’m pretty sure I’ve found the future of medical diagnosis– it’s elegant, accurate, immediate, mostly doctor-less, comprehensive, and very computationally intensive. I don’t know when it’ll arrive, but it’s racing toward us and when it hits, it’ll change everything.
In short: the future of medical diagnosis is to use a gene expression panel, along with functional and correlative connections between gene expression and pathology, to perform thousands of parallel tests for every single human illness we know of– acute, chronic, pathogenic, mental, or lifestyle. One simple test that’ll uncover all health problems.
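As a toy illustration of how those thousands of parallel tests might work computationally, here’s a minimal sketch that ranks candidate conditions by correlating a patient’s expression profile against known disease signatures. Every gene name, signature, and patient value below is invented for illustration; a real panel would span thousands of genes, validated signatures, and far more careful statistics:

```python
import math

# Hypothetical disease "signatures": typical expression levels per gene.
SIGNATURES = {
    "healthy":        {"APP": 1.0, "IL6": 1.0, "INS": 1.1, "TNF": 0.9},
    "inflammation":   {"APP": 1.0, "IL6": 4.0, "INS": 1.1, "TNF": 3.5},
    "type2_diabetes": {"APP": 1.0, "IL6": 1.5, "INS": 0.3, "TNF": 1.2},
}

def correlation(a, b):
    """Pearson correlation between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rank_diagnoses(patient):
    """Rank conditions by similarity to the patient's expression profile.

    Assumes every signature covers every gene measured in the patient.
    """
    genes = sorted(patient)
    pvec = [patient[g] for g in genes]
    scores = {
        name: correlation(pvec, [sig[g] for g in genes])
        for name, sig in SIGNATURES.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

patient = {"APP": 1.1, "IL6": 3.8, "INS": 1.0, "TNF": 3.2}
print(rank_diagnoses(patient))  # "inflammation" should rank first
```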
I’m flying out to Salt Lake City tomorrow for a month of thinking about neuroscience; I process ideas by writing, so I’m kicking off an open-ended series of pieces dealing with the stuff I’m thinking about.
Part 1: Neurobiology, psychology, and the missing link(s)
Part 2: Gene Expression as a comprehensive diagnostic platform
Part 3: Neural resonance + neuroacoustics
The central problem of neuroscience is that despite all the advances in medical science, we have embarrassingly few ways to quantify, or talk quantitatively about, mid-level functional differences between people’s brains.
It’s not that we have no tools at all for quantifying function and individual differences: we can draw correlations between specific genes and certain behavioral traits or neurophysiological features. We have the DSM IV (and soon, DSM V) as a sort of handbook on the symptoms of common brain-related problems. We have the Myers-Briggs and related personality-typing tests, we have psychometric tests, we have various scans that pick up gross neuroanatomy (and we can sometimes correlate this with behavioral deficits), and we have the fMRI, which can measure raw neural activity through the proxy of where blood flows in the brain.
The problem is that these methods of understanding brains are heavily clustered in two opposite areas: the reductionist neuroanatomical approach, which is great as far as it goes, but doesn’t go far enough up the ladder of abstraction to explain much about everyday behavior, and the symptom-centric psychological approach, which may be a great description of how various people behave, or some common neural equilibria, but really explains very little.[1][2] There’s a great deal of room in neuroscience for an ontology with which to talk about, and mid-level tools which attempt to measure and correlate things with, this underserved middle-level of brain function.[3]
Of course, the natural question regarding these mid-level approaches to understanding the brain is whether we can find ontologies and tools that “carve reality at its joints”– that aren’t built on a terribly leaky level of abstraction (as the DSM IV is), yet have direct relevance to psychological events as we experience them in ourselves and in others (as the DSM IV does). I don’t have any answers! But I do have ideas.
[1] To paraphrase Sir Karl Popper, implicit in any true explanation of a phenomenon is a prediction, and implicit in any prediction about a phenomenon is an explanation. So a good way to figure out how much of a field is true scientific explanation vs. ‘mere stamp-collecting’ is to check how much it deals with predictions, whether explicit or implicit. Psychology seems to be a primarily descriptive field that’s attempting to translate its rich (yet predictively shallow) descriptive ontology into a more prediction-based science.
[2] This point on the fuzziness of psychiatry was made rather eloquently in an Op-Ed by Simon Baron-Cohen (the famous autism researcher, and the first cousin of British comedian Sacha Baron Cohen) in this week’s New York Times:
This history reminds us that psychiatric diagnoses are not set in stone. They are “manmade,” and different generations of doctors sit around the committee table and change how we think about “mental disorders.”
This in turn reminds us to set aside any assumption that the diagnostic manual is a taxonomic system. Maybe one day it will achieve this scientific value, but a classification system that can be changed so freely and so frequently can’t be close to following Plato’s recommendation of “carving nature at its joints.”
Part of the reason the diagnostic manual can move the boundaries and add or remove “mental disorders” so easily is that it focuses on surface appearances or behavior (symptoms) and is silent about causes. Symptoms can be arranged into groups in many ways, and there is no single right way to cluster them. Psychiatry is not at the stage of other branches of medicine, where a diagnostic category depends on a known biological mechanism. An example of where this does occur is Down syndrome, where surface appearances are irrelevant. Instead the cause — an extra copy of Chromosome 21 — is the sole determinant to obtain a diagnosis. Psychiatry, in contrast, does not yet have any diagnostic blood tests with which to reveal a biological mechanism.
[3] I realize this is somewhat vague. I plan to expand this description of what I think of as “mid-level functional attributes” and the sorts of concepts and tools I think may be useful for dealing with them. One example of a mid-level measurement that struck me as promising was a study correlating reduced microstructural integrity in the uncinate fasciculus with psychopathy.
The new site redesign is now live! Thanks to some beautiful artwork by my friend Corby, and some ugly html hacks by me, Modern Dragons now features a dragon. Speaking of which, it’s high time to answer the question:
What’s a Modern Dragon anyway?
Back in the Middle Ages, cartographers used to (anecdotally, at least) mark unknown or dangerous territories on their maps with the Latin phrase, HIC SVNT DRACONES– literally, “Here be Dragons”. By metaphor, then, the purpose of this blog is to locate, explore, and perhaps take a swing at the analogous dragons in our modern age– the puzzles, frontiers, and dangerous elements within science, culture, and this terribly uncertain future of ours.
Here, I am reminded not of the recent past but of a huge change that occurred in the Middle Ages, when humans transformed their cognitive lives by learning to read silently. Originally, people could only read books by reading each page out loud. Monks would whisper, of course, but the dedicated reading by so many in an enclosed space must have been a highly distracting affair. It was St Aquinas who amazed his fellow believers by demonstrating that without pronouncing words he could retain the information he found on the page. At the time, his skill was seen as a miracle, but gradually human readers learned to read by keeping things inside and not saying the words they were reading out loud. From this simple adjustment, seemingly miraculous at the time, a great transformation of the human mind took place, and so began the age of intense private study so familiar to us now, in universities where ideas could turn silently in large minds.
Dr. Barry Smith, University of London, while discussing Edge Magazine’s 2009 question, What will change everything?
Edit: a commenter has suggested it was actually St. Ambrose, not St. Aquinas, who first broke this ground.
Earlier this summer a pediatrician friend of mine was asking about ideas for health care reform since Olympia Snowe was going to stop by her hospital and talk with the doctors there. Unfortunately Snowe cut her visit short, but this is what I came up with:
A simple and cheap proposal for improving American health:
In short, I’d like to see a federally-funded, state-by-state, performance-based incentive program to improve public health. Specifically, the federal government sets aside a decent chunk of money and establishes targets for curbing health problems: e.g., “Reduce the growth of childhood diabetes in your state by 50% by 2012” or “Reduce the growth of cardiovascular disease in your state by 40% by 2013.” If state A meets the target, they get generous federal funds for doing so. If state B fails to meet the target, they don’t. Ideally, this would generate a lot of creativity in actually solving the targeted problems (since real money for the state would be on the line), and states would also have an incentive to copy what works.
This program might cost some money– but we’d be paying for results: if it flopped and nobody hit these targets, well, it’d have cost nothing. On the other hand, if this program got results, even if we consider the money going to states to be ‘wasted’, the program would still be a net financial gain from the perspective of decreased strain on our health systems. In other words, with a results-based incentive system, we have nothing to lose if it flops and plenty to gain if it works.
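To make the mechanism concrete, here’s a minimal sketch of the pay-for-results rule: a state that hits a target collects the attached payout, and a state that misses collects nothing. The conditions, reduction thresholds, and dollar figures are invented for illustration:

```python
# Hypothetical targets: condition -> (required reduction, payout in $ millions)
TARGETS = {
    "childhood_diabetes":     (0.50, 200),
    "cardiovascular_disease": (0.40, 300),
}

def payout(state_results):
    """Pay only for targets met; a flop costs the program nothing."""
    total = 0
    for condition, achieved_reduction in state_results.items():
        required, reward = TARGETS[condition]
        if achieved_reduction >= required:
            total += reward
    return total

# State A hits both targets; State B hits neither.
print(payout({"childhood_diabetes": 0.55, "cardiovascular_disease": 0.42}))  # 500
print(payout({"childhood_diabetes": 0.20, "cardiovascular_disease": 0.10}))  # 0
```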
Now, I’m sure the devil would be in the details. We’d need to pick targets that are easy to measure representatively and hard to game. It also seems like we could have a yearly governors’ conference revolving around this incentive program, for states to share tips on which strategies are working and which aren’t. Make this conference (and the incentive program in general) a big deal, and make it competitive– make states proud of their successes and ashamed of their failures.
In general, it seems to me that this sort of grand state-by-state competition for funds could be extended to a lot of social problems. Since it incentivizes results instead of naive/bureaucratic thinking, it might encourage some smart, actionable analysis about the roots of various social problems. But that’s something to explore another time. My point is, I think this would work really well for improving public health, and we should do it.
p.s. Anyone have a good way of getting this idea into the hands of some congressperson?
Via a NY Times article on the US-China financial relationship:
Deng Xiaoping, the Chinese leader who ushered in its market reforms starting in the late 1970s, famously gave his country the following advice: “Observe calmly; secure our position; cope with affairs calmly; hide our capacities and bide our time; be good at maintaining a low profile; and never claim leadership.”
This seems to be the general trend in Chinese foreign policy; if the Chinese leadership decide this is no longer necessary or desirable, we could suddenly live in a very different world.
I think a particularly interesting and volatile element to this is that the Chinese Government has a relatively solid hold on power, but this hold is largely tied to the year-over-year economic growth China has been experiencing for decades. The Chinese are content to tolerate their government because life is getting better, and looks to get better still. Should this growth dry up, there’s no telling what may happen domestically, or what nationalistic conflicts the Chinese Government may enter into as a ploy to unify their people.
If I had to put together a list of the 7 Wonders of the Internet, archive.org would most certainly be on it. It’s the website of a non-profit organization which runs a huge server farm that tirelessly crawls the internet and saves what it finds. On the website, you can use their Wayback Machine to essentially turn back the clock and experience the internet frozen at a particular instant. The NY Times’ website as of December, 1998? Check. Yahoo.com as of January, 2001? Check. That Geocities blog you started as an angsty teenager and later deleted in shame? Yes, probably that too.
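For the programmatically inclined, the Wayback Machine also exposes a simple availability API that returns the capture closest to a given date. A minimal sketch, assuming the publicly documented archive.org/wayback/available endpoint:

```python
import json
import urllib.request

def closest_snapshot(url, timestamp):
    """Ask the Wayback Machine for the capture nearest a YYYYMMDD timestamp."""
    api = f"https://archive.org/wayback/available?url={url}&timestamp={timestamp}"
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# The NY Times as of December 1998, per the example above:
print(closest_snapshot("nytimes.com", "19981201"))
```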
My latest archive.org-assisted rediscovery is a wonderful little essay by Paul Graham on the difficulty of writing vs. programming. Archive.org isn’t Google-searchable, so when Graham deleted his infogami blog, this gem vanished down the memory hole. I’m re-hosting it here with Paul’s permission.
Paul Graham
Why Writing is Harder than Programming
3 Oct 06
I spent most of this summer hacking a new version of Arc. That’s why I haven’t written any new essays lately. But I had to start writing again a few days ago because I have to give a talk at MIT tomorrow.
Switching back to writing has confirmed something I’ve always suspected: writing is harder than hacking. They’re both hard to do well, but writing has an additional element of panic that isn’t there in hacking.
With hacking, you never have to worry how something is going to come out. Software doesn’t “come out.” If there’s something you don’t like, you change it. So programming has the same relaxing quality as building stuff out of Lego. You know you’re going to win in the end. Succeeding is simply a matter of defining what winning is, and possibly spending a lot of time getting there. Those can be hard, but not frightening.
Whereas writing is like painting. You don’t have the same total control over the medium. In fact, you probably wouldn’t want it. When it’s going well, painting from life is something you do in hardware. There are stretches where perception flows in through your eye and out through your hand, with no conscious intervention. If you tried to think consciously about each motion your hand was making, you’d just produce something stilted.
The result is that writing and painting have an ingredient that’s missing in hacking and Lego: suspense. An essay can come out badly. Or at least, you worry it can.
I think good writers can push writing in the direction of Lego. As you get more willing to discard and rewrite stuff, you approach that feeling of total control you get with Lego and hacking. If there’s something you don’t like, you change it. At least, as I’ve gotten better at writing that’s what’s happened to me. I’ve become much more willing to throw stuff away.
But though you get closer to the calmness of hacking, you never get there. What a difference it is walking into the Square to get a cup of tea with a half-finished essay in my head, rather than a half-finished program. A half-finished program is a pleasing diversion– a puzzle. A half-finished essay is mostly just a worry. What if you don’t think of the other half?
It’s possible that hacking is only easy because we have poor tools (and low expectations to match). Maybe if you had really powerful tools you’d tell a computer what to do in a way that was more like writing or painting. Lego, pleasing as it is, can’t do what oil paint can. That would be an alarming variant of the hundred-year language: one that was as powerful and as frightening as prose. But that’s exactly the sort of trick the future tends to play on you.