The question of whether the official government inflation numbers are right (currently 2–3%, depending on whether food and energy are included) is really important. It would be hard to overstate how important this is. But it’s considered a fringe topic, a settled issue. Here’s Krugman scoffing at the doubters.
We measure inflation with the Consumer Price Index, which is basically an aggregate, pared-down cost-of-living metric: how much it costs, month by month, to buy life’s essentials for the average consumer. If the price of this ‘basket of representative goods and services’ goes up, we call that inflation. As a cost-of-living metric it’s pretty good, and as Krugman notes, alternative approaches turn up about the same numbers.
But ‘how far does a dollar go for the average consumer?’ and ‘how much a dollar is worth compared to everything else?’ are very different questions. Here’s my take:
When we get down to it, everyone has their own inflation rate, based on what they want and need to buy. Averages here miss a lot of trends. One trend that comes to mind is the current economic polarization and concentration of wealth. Let’s keep in mind, dollars are a commodity like any other, and inflation is just supply-and-demand. It happens when too many dollars are chasing too few goods and services. In 2012, there’s a huge oversupply of dollars held by rich people and investors; stuff that these people want to buy (investments, high-end and luxury goods and services) is getting bid way up. On the other hand, in the lower tiers of society, there isn’t an oversupply of dollars. Official inflation rates are calculated based on cost of living for the majority of people— NOT, e.g., the cost of what people who hold most of the dollars want to buy. It’s an important distinction: in short, we look at inflation from the average person’s perspective, whereas we should look at it from the average dollar’s perspective.
I’m sure this alternatively weighted, dollar-centric rate of inflation would be much higher than the official CPI. How could we calculate it reliably? I’m not sure. But you could make a lot of money if you figured out how.
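To make the person-vs-dollar distinction concrete, here’s a toy calculation (every number here is made up purely for illustration): the same two price trends produce very different headline inflation rates depending on whether you weight by what the average person buys or by where the average dollar sits.

```python
# Toy illustration of "person-weighted" vs "dollar-weighted" inflation.
# Two baskets: everyday goods (what most people buy) and investment/luxury
# goods (where most dollars are concentrated). All numbers are hypothetical.

rates = {
    "everyday": 0.02,  # 2% price rise on the typical consumer basket
    "luxury":   0.08,  # 8% price rise on assets and high-end goods
}

# Person-weighted (CPI-style): most people spend mostly on everyday goods
person_weights = {"everyday": 0.9, "luxury": 0.1}

# Dollar-weighted: most dollars are held by the wealthy and chase assets
dollar_weights = {"everyday": 0.3, "luxury": 0.7}

person_weighted = sum(person_weights[k] * rates[k] for k in rates)  # 2.6%
dollar_weighted = sum(dollar_weights[k] * rates[k] for k in rates)  # 6.2%
```

Same economy, same prices; the dollar-weighted rate comes out more than twice the person-weighted one, which is the whole argument in miniature.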
– Walmart-style globalization, Procter & Gamble-style manufacturing efficiency gains, and Moore’s Law-style exponential improvement should all be broadly deflationary forces– that is, they should make people’s dollars go further. That the deflation we actually see is narrow (only a few things get cheaper, while most keep inflating) suggests to me these factors are effectively subsidizing inflation elsewhere.
– Some of this inflation in investment commodities is driven by the current extreme levels of uncertainty, but some isn’t. One could presumably quantify the uncertainty component by looking at option premiums. Market analysis of dollar-denominated commodities gets really complex when there may be hidden inflation, however, and tools derived from the government’s numbers, like TIPS, are pretty worthless for the purpose.
– The money supply may be said to be one of many tools the powerful use to extract wealth from the less powerful. If there is indeed a currency crisis ahead, involving inflationary and deflationary shocks, a reasonable guide for their timing would be to look at what would benefit the central bankers’ balance sheets the most.
– Is this driven by an oversupply of currency or of credit? Probably both, e.g., “Some insights from my visit to the ECB”. And due to many people being desperate to hold anything but currency (ABC).
– The US is creating a lot of currency, but this is definitely not limited to the US dollar. Every other government in the world has two incentives to print money: more competitive exports, and free money. Trends like this persist until they can’t.
– People think of inflation and deflation as opposites; I would say they’re cousins, in that they’re both products of and drivers of volatility. Both erode the leverage of the tools we use to manage our fiscal affairs, and I suspect both could happen in short succession, particularly with a whipsawing money supply– or even at the same time, in different sectors of our economy (just like different inflation rates).
– Why does this matter? Aside from skewing all economic statistics, this adds a great deal of volatility to anything connected with currency. Back in 2005 people scoffed at the possibility of a housing bust, pointing at a variety of statistics (all of which looked very solid and reasonable at the time). Now, people are scoffing at the possibility of a currency crisis, pointing mainly at the stability of the CPI. I don’t know what the future holds, but I know that’s not a good argument.
I camped over at the Occupy LA protest a few weeks ago. It was fun– most of the people seemed thoughtful and genuine and I sympathize with a lot of their concerns. I have some friends who are “occupying” as I write this. People are mad, and I get it.
I also have friends and family who work in the greater investment community. Their opinions of OWS are all over the map: some welcome the protests if they can bring about more market transparency and accountability; others are quite frustrated by the protesters’ general lack of understanding, sophistication, or solutions. To put (my) words to their feelings: things aren’t perfect, but many of the protesters’ demands betray a striking lack of comprehension about how the market works.
I’m not here to pick a winner. I am here to say that this dispute hides a much bigger problem, one independent from any issue of corruption and one that will unravel the fabric of society if we sleepwalk into it. Settle in, get a cup of coffee, and I’ll explain why.
I’m often pleasantly surprised by how interested smart laypeople are in physics. Regardless of their educational background, people’s ears perk up when the discussion turns to how weird quantum mechanics is, to open issues in contemporary physics, or even to odd physics thought experiments. I’d go so far as to say that, once we cut through the jargon, physics is one of the most inherently interesting fields, because
(1) physics is ultimately the foundation for basically everything,
(2) when we get down to details it’s pretty darn weird, and
(3) while most fields have moved away from metaphysical questions and toward inaccessible problems of complex emergence, there are still cool, unsolved, fundamental mysteries in physics.
People seem really engaged by the weirdness and mysteries in theoretical physics, even to the point of feeling an ownership interest in them, and I think that’s awesome.
And so with this year’s Nobel Prize in Physics announced, I wanted to give readers a quick rundown on current Big Mysteries in Physics. It’s not a comprehensive list, but I argue that most other questions will ultimately trace back to these three.
1. How do we combine General Relativity and Quantum Mechanics?
Right now physics rests uneasily on two fundamental theories. General Relativity deals with the relationships between spacetime, velocity, and gravity (generally speaking, properties associated with large objects) and is amazingly predictive at what it does. Quantum mechanics deals with subatomic particles, the quantized nature of the strong, weak, and electromagnetic forces, and the weird statistical rules these things obey (generally speaking, properties associated with very small things), and is amazingly predictive on quantum scales. We have one theory for big things like planets and spaceships, and another for small things like electrons and quarks.
The trouble is, the math– and the metaphysical assumptions about reality– of these two theories are very different, and we don’t have a good way to fuse them together to talk about things like black holes or the big bang, which straddle both the quantum and relativistic regimes. Most physicists find the situation very troubling, not to mention deeply ugly, since it feels like the universe should have a single set of rules, not two. Presumably, if we found a more general model which explained each theory as a special case of a more general system, all sorts of little mysteries in physics might solve themselves (just as the discovery of DNA’s structure solved lots of mysteries in biology). String theory, loop quantum gravity, and other, even more esoteric field theories are attempts at unification, but to date no attempt has made a successful prediction that departs from what each separate theory suggests.
2. What is Dark Matter?
There are two huge fudge-factors in physics. One is Dark Matter– a hypothetical sort of matter that interacts with other matter only via gravity (“dark” means “we can’t see it”). It was introduced in the 1930s to explain why galaxies move so fast: according to our equations, without this fudge factor, many galaxies rotate fast enough that they should simply fly apart. However, instead of disappearing quietly like fudge factors often do, we still need it today to explain galactic dynamics and certain other observations. Most cosmologists agree that it’s very probably not just an artifact of some mistake in our calculations, but some very real and very mysterious type of matter.
What we know: based on our calculations ~83% of all matter is “dark”. We think this dark matter is found in most or all galaxies, and there’s a good chance some passed through you as you read this. There are conflicting theories about where it’s most heavily concentrated– some models have it primarily concentrated in the dense center of galaxies, some have it more spread out, some in a halo. We’re pretty sure, whatever it is, that dark matter is “cold” — i.e., not moving at a significant fraction of the speed of light. There are a lot of experiments trying to conclusively detect dark matter, either (1) from its gravitational effects or (2) directly, if dark matter happens to occasionally interact (‘weakly interact’, in the lingo) with normal matter. (A shoutout here to the Sanford Underground Laboratory, which is in the running, and within spitting distance from my folks’ house.)
3. What is Dark Energy?
The 2011 Nobel Prize in Physics was awarded for discovering a mystery: that our universe’s expansion hasn’t slowed down since the Big Bang. In fact, it’s actually sped up. And we have no idea why.
The standard assumption prior to 1998 was that our universe was either going to contract due to gravity (the “big crunch”), or was somehow exactly balanced (Einstein’s static universe hypothesis), or that the initial energy from the Big Bang would keep the universe expanding, albeit ever more slowly as gravity tried to pull everything together.
An examination of a specific type of stellar explosion– Type Ia supernovae, which due to various dynamics all explode with roughly equal energy and brightness– provided the basis for a historical record of the universe’s expansion. Since we know how much energy these explosions release, we can calculate how far away one is (which, because its light takes time to reach us, is another way of saying how long ago it happened) from how bright it appears to us. Likewise, we can tell if it’s moving toward us, since its light will be “blueshifted”, or away from us, since it’ll be “redshifted” (think of how a siren’s pitch changes depending on whether it’s moving toward or away from you).
What we found when we put these measurements together was that basically everything is moving away from us, but– here’s the kicker– the nearer, more recent supernovae are receding proportionally faster than the distant, older ones. The universe is not only expanding, but the expansion is accelerating.
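The standard-candle logic above fits in a few lines. The distance modulus relates apparent brightness to distance once the absolute magnitude is known; the peak magnitude used here is the commonly quoted approximate value for Type Ia supernovae, and the example observation is invented.

```python
import math

# Standard-candle distance from brightness: Type Ia supernovae all peak near
# the same absolute magnitude, so a dimmer observed (apparent) magnitude
# means the explosion is farther away -- and hence happened longer ago.

M_TYPE_IA = -19.3  # approximate peak absolute magnitude of a Type Ia supernova

def distance_parsecs(apparent_magnitude):
    """Distance modulus: m - M = 5*log10(d) - 5, solved for d (in parsecs)."""
    return 10 ** ((apparent_magnitude - M_TYPE_IA + 5) / 5)

# A (hypothetical) supernova observed at apparent magnitude +5.7 would sit
# about a million parsecs away -- roughly the distance to Andromeda.
d = distance_parsecs(5.7)
```

Comparing many such distances against each supernova’s redshift is what builds the expansion-history record described above.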
Cosmologists don’t know what’s causing this. The convention has been to refer to it as “dark energy”, since the cause of the expansion is generally fudged in as an energy term in our equations, but we don’t really know if it’s a hidden form of energy, an emergent property of space, or something even more esoteric. There are theories, but they tend to be mathematically inelegant– and given our lack of a high-resolution expansion timeline, they remain little more than untested guesses. If it is actually energy, there’s a lot of it: by current estimates, dark energy makes up roughly 68% of the universe’s total energy content.
A shameless plug for a pet theory:
I don’t know what Dark Energy is, but I do actually have a guess. If you’re in the mood for some cosmological speculation, and particularly if you’re in a position to give feedback on such, I encourage you to check it out. Like any new theory, it’s probably wrong– but based on my reading of the field, I don’t think it’s more likely to be wrong than other theories on the topic, and throwing one’s hat in the ring is how science progresses.
Another major mystery is why there’s way more matter than antimatter in the universe. “Antimatter” sounds so weird and esoteric, but it’s actually rather common– there’s probably lots of antimatter popping in and out of existence in the room you’re in now. We commonly create antimatter in labs, and it actually forms the basis for tech like PET scans. It’s just that matter is WAY more common, and there’s no a priori reason we can see that this should be the case. I talked a bit about this in my 2008 obituary of John Wheeler.
John Hawks, on the mathematics of family trees and recombinant DNA:
In practice, even though we have billions of nucleotides, our DNA cannot follow billions of genealogical lines. Recombination over 30 — 40 generations does not divide chromosomes down to individual nucleotides. In the medium term, most human DNA is separated by recombination hotspots into lengths of around 50 kilobases. Across very short spans of 30 generations, DNA is for the most part inherited in chunks of hundreds of kilobases or longer. So dividing six billion nucleotides by 50 kilobases yields a number of around 120,000 ancestral lines at most from which any individual inherits his or her DNA. Recombination will increase this number somewhat further and further back in time, but not nearly so fast as the doubling of possible ancestral lines in every generation. This means that the vast majority of your ancestral lines more than around 17 generations ago have left no DNA to you whatsoever.
Granted, this is relative to the massive redundancy in our family trees– humankind is one huge, partially inbred extended family. That is, if you go back 40 generations, you have over a trillion great-great-great-(etc.) grandparent slots. There weren’t a trillion people alive in AD 1000, so a lot of those slots were filled by the same people.
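The arithmetic in the quote is easy to check directly:

```python
# The arithmetic behind Hawks's point: possible ancestral lines double each
# generation, but DNA is inherited in large chunks, which caps the number of
# *genetic* ancestors far below the number of genealogical slots.

GENOME_BP = 6_000_000_000   # ~6 billion nucleotides (diploid genome)
CHUNK_BP = 50_000           # ~50-kilobase recombination blocks (medium term)

max_genetic_lines = GENOME_BP // CHUNK_BP   # ~120,000 ancestral lines, tops
genealogical_slots_40_gen = 2 ** 40         # over a trillion ancestor slots

# Fraction of 40-generations-back slots that could possibly carry your DNA
surviving_fraction = max_genetic_lines / genealogical_slots_40_gen
```

The surviving fraction comes out around one in ten million, which is why almost all of your distant genealogical ancestors left you no DNA at all.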
In a previous post I sketched out the importance of frequency normalization in studying the brain, and a possible way to approach the problem. I don’t know if mine is a workable approach– frequency normalization in the brain is a hard problem, due to complex topology and variable state. But comparative frequency analysis within and across brain regions, however we accomplish it, will be incredibly important for understanding what’s going on in the brain, how brains can differ, and maybe even how emotions work. I have a pet theory as to what we’ll find when we’re able to do this sort of frequency analysis in the brain. As with any new theory it’s most likely wrong, but since everybody’s theories on this are similarly disadvantaged (what few big-picture theories are out there), and it’s a topic worth figuring out, I have no qualms about throwing my hat in the ring.
Most importantly, I want to get people thinking about what emotion is, without cop-out references to ‘happiness neurochemicals’ or ‘regions of the brain which control emotion’. When it gets down to it, those are just ways of saying, “we don’t know what emotion is.” For instance, using these poor, correlative explanations of emotion, it would seem we could build a computer that could feel happiness by dumping some dopamine extract on its processor, or make it feel pain by fusing some human nociceptor nerves onto the motherboard. Clearly this is not the case. If we want to deal with emotion at anything except a trivial level, we need to dispense with correlative explanations and move toward an information-theoretic approach, to be able to explain affect in our brains as a special case of more general equations.
So what is emotion? I suggest we look to the mathematics of music theory for a possible answer.
(This is really technical and hypothetical; if you don’t enjoy mathematics and speculative neuroscience and would prefer alternative entertainment, why not check out these captioned pictures of cats instead?)
Lots of very intelligent people are putting lots of effort into mapping the brain’s networks. People are calling these sorts of maps of which-neuron-is-connected-to-which-neuron ‘connectomes’, and if you’re working on this stuff, you’re doing ‘connectomics’. (Academics love coining new fields of study! Seems like there’s a new type of ‘omics’ every month. Here’s a cheatsheet courtesy of Wikipedia– though I can’t vouch for the last on the list.)
Mapping the connectome is a great step toward understanding the brain. The problem is, what do we do with a connectome once it’s built? There’s a lot of important information about the brain’s connectivity packed into a connectome, but how do we extract it? Read on for an approach to broad-stroke, comparative brain region analysis based on frequency normalization. (Fairly technical and not recommended for a general audience.)
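To give a flavor of what “comparative frequency analysis across brain regions” could mean in practice, here’s a minimal toy sketch. Everything in it– the simulated signals, the choice of bands, and the normalize-by-total-power scheme– is my illustrative assumption, not anything from the original writeup: compute each region’s power in a few standard bands, then normalize so regions with different overall activity levels become comparable.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Naive DFT power in [f_lo, f_hi) Hz; fine for short toy signals."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def profile(signal, fs):
    """Band powers normalized by total power, so regions are comparable."""
    bands = {"alpha": (8, 13), "gamma": (30, 80)}
    total = band_power(signal, fs, 1, 100)
    return {name: band_power(signal, fs, lo, hi) / total
            for name, (lo, hi) in bands.items()}

# Two hypothetical "regions": one dominated by a 10 Hz rhythm, one by 40 Hz
fs = 200  # samples per second
t = [i / fs for i in range(fs)]
region_a = [math.sin(2 * math.pi * 10 * x) for x in t]
region_b = [math.sin(2 * math.pi * 40 * x) for x in t]
```

With this normalization, `profile(region_a, fs)` is dominated by the alpha band and `profile(region_b, fs)` by the gamma band, even though the raw signals have identical amplitudes; the real problem, as noted above, is doing this over a messy topology with variable state.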
Ken Jennings, on his match with IBM’s Watson supercomputer:
Indeed, playing against Watson turned out to be a lot like any other Jeopardy! game, though out of the corner of my eye I could see that the middle player had a plasma screen for a face. Watson has lots in common with a top-ranked human Jeopardy! player: It’s very smart, very fast, speaks in an uneven monotone, and has never known the touch of a woman. But unlike us, Watson cannot be intimidated. It never gets cocky or discouraged. It plays its game coldly, implacably, always offering a perfectly timed buzz when it’s confident about an answer.
Perhaps somewhat less funny from Ken’s perspective, a question asked during his reddit interview:
How’s it feel to be owned by something that asked “What is leg” ?
Richard Hamming used to go around annoying his colleagues at Bell Labs by asking them what were the important problems in their field, and then, after they answered, he would ask why they weren’t working on them. Now, everyone wants to work on “important problems”, so why are so few people working on important problems? And the obvious answer is that working on the important problems doesn’t get you an 80% probability of getting one more publication in the next three months. And most decision algorithms will eliminate options like that before they’re even considered. The question will just be phrased as, “Of the things that will reliably keep me on my career track and not embarrass me, which is most important?”
And to be fair, the system is not at all set up to support people who want to work on high-risk problems. It’s not even set up to socially support people who want to work on high-risk problems. In Silicon Valley a failed entrepreneur still gets plenty of respect, which Paul Graham thinks is one of the primary reasons why Silicon Valley produces a lot of entrepreneurs and other places don’t. Robin Hanson is a truly excellent cynical economist and one of his more cynical suggestions is that the function of academia is best regarded as the production of prestige, with the production of knowledge being something of a byproduct. I can’t do justice to his development of that thesis in a few words (keywords: hanson academia prestige) but the key point I want to take away is that if you work on a famous problem that lots of other people are working on, your marginal contribution to human knowledge may be small, but you’ll get to affiliate with all the other prestigious people working on it.
And these are all factors which contribute to academia, metaphorically speaking, looking for its keys under the lamppost where the light is better, rather than near the car where it lost them. Because on a sheer gut level, the really important problems are often scary. There’s a sense of confusion and despair, and if you affiliate yourself with the field, that scent will rub off on you.
Academia does plenty of good things– but the opportunity cost of our systemic incentives toward ‘safe’ research (I include both the derivative and the esoteric) is rather staggering.
Edit, 8-13-11: A friend blogs,
The answer comes down to ethics. Service as an ethic is alien to so many academics. “I serve.” They don’t get it. Some do. A few. But a number of my friends have gone into the academy for longer or shorter periods of time, and the observations have always been similar – it’s not a place of scholarship and diligent service, but rather of all sorts of politics and backbiting where you desperately try to carve out your own private sphere in a confusing bureaucratic jungle.
I think academia used to have a strong shared sense of duty (Sebastian uses the term ‘warrior ethic,’ where service to a noble cause is its own reward), but for several reasons this has largely eroded or isn’t sustainable in today’s academy. It’s still present, but it’s much weaker. We could point to institutional factors, a changing demographic of who goes into academia, a crossover from our increasingly mercenary private-sector culture, or getting more of what we pay for, but at the end of the day– it seems like many people in academia think of it primarily as a career, not as service. It’s a big loss.
TMS ‘Sonar’ for mapping brain region activity coupling
Modern neuroscience is increasingly suggesting that a great deal of a person’s personality, pathology, and cognitive approach is encoded into which of their brain regions are activity-coupled together. That is to say, which of someone’s brain regions are more vs. less wired together, compared to some baseline, determines much about that person.
Right now such coupling is largely invisible and unquantifiable. If we are to move toward a clearer understanding of individual differences, not to mention psychiatric conditions, it would be invaluable to have a test for this activity coupling. A combination TMS+fMRI alternated-pulse device– since it could stimulate a specific brain region or network and measure how doing so affects activity in other regions– may well provide an objective basis for psychiatric diagnosis and treatment recommendations, and perhaps even a firmer foundation for psychology as a whole.
The following is a somewhat technical writeup of the idea. Not into detailed neuroscience stuff? Click here.
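As a toy illustration of the “sonar” idea (the linear-response model and all numbers here are hypothetical assumptions, not a real protocol): pulse one region at a time, record the induced activity changes everywhere else, and assemble the results into a region-by-region coupling matrix.

```python
# Toy sketch of the proposed TMS "sonar": stimulate one region, record how
# strongly the perturbation shows up elsewhere, and assemble a coupling
# matrix. The instantaneous linear-response model is purely illustrative.

def probe_coupling(true_coupling, n_regions):
    """Estimate region-to-region coupling by 'pulsing' each region in turn."""
    estimated = [[0.0] * n_regions for _ in range(n_regions)]
    for src in range(n_regions):
        stimulus = [1.0 if r == src else 0.0 for r in range(n_regions)]
        # Assumed response model: activity change = coupling * stimulus
        response = [sum(true_coupling[dst][r] * stimulus[r]
                        for r in range(n_regions))
                    for dst in range(n_regions)]
        for dst in range(n_regions):
            estimated[dst][src] = response[dst]
    return estimated

# Hypothetical 3-region brain: region 0 drives region 1 strongly, region 2
# weakly; region 1 drives region 2 moderately.
coupling = [[0.0, 0.0, 0.0],
            [0.8, 0.0, 0.0],
            [0.1, 0.3, 0.0]]
recovered = probe_coupling(coupling, 3)
```

In this idealized linear setting the probing procedure recovers the coupling matrix exactly; the interesting engineering is in how far that survives noise, nonlinearity, and the brain’s variable state.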
A friend of mine on difficult video games and accomplishment:
Have you heard of the game NetHack?
It’s been in continuous development for the last 20 years or so. It’s all text-based graphics, very spartan in that sense, but those limited graphics make for extremely rich and deep gameplay and interaction with the world.
Oh, and it’s really, really fucking hard.
When you die, you’re dead forever. And it’s really easy to die.
Basically, every time you touch a key on your keyboard, a turn passes.
If you hold a direction key on your keyboard, 20 turns will pass and you’ll have moved 20 squares.
It’s very, very possible to have a threat emerge and kill you in 2-3 turns. The game requires extreme patience, caution, and planning to get through. Even that might not be enough, but it’s definitely required.
Beating NetHack is called “ascending” – I finally did it after a few years of playing.
And afterwards, I thought to myself – you know, I bet it’s easier to start a bank in the real world than it is to ascend in NetHack.
… Anyways. I haven’t started a bank yet. But I really seriously suspect it’s easier than beating NetHack. If you took 200,000 perfectly normal people and split them into groups of 100,000 – and half of them were instructed to beat NetHack and the other half were instructed to start a bank, and it was a really big deal if you succeeded or failed… I bet you’d get more new banks than NetHack ascensions.
1. If you can win a hellishly difficult video game, you should be able to do almost anything.
2. If you can structure your life like a video game– e.g., forgiving learning curves, point-based progression systems, rewards for difficult accomplishments, carefully selected addictions– it can really help you better yourself.
I basically agree. Regarding #1, I think video games are somewhat akin to very broad IQ tests (probably a much better IQ test than the “standard” psychometric suite!), and as such don’t test for everything. There’s more to achievement than IQ. But if you can beat, say, any of the Civilization games on the hardest difficulty, it’s good evidence that you can handle complexity much, much better than most people. (Maybe you should start a bank!)