Category Archive for: Science

Monday, March 12, 2007

The Dark Side

Off and on, we've been discussing the progress economics has made as a discipline. In those discussions, we often hear what a sorry state economics is in, that the theoretical models don't predict well, that economics isn't a real science, and so on. A comparison to physics often comes up.

One response to this onslaught against economics is to point out that physicists also have a theory that doesn't predict well, in particular the motion of galaxies and the matter within them is at odds with theoretical predictions. Visible matter cannot account for the gravitational "stickiness" of clusters of matter.

The solution physicists have adopted is not to question the theory (at least not for the most part; see below), but instead to make stuff up - dark matter, dark energy, and I expect a "dark light" soon as well - whatever is needed to make the equations work must exist.

When I respond to attacks on economics by pointing out that these theories seem every bit as controversial and uncertain as ours - perhaps more so, given this "making stuff up" step that assumes the theory is correct and then posits invisible matter in just the right amounts to bring observed motions into agreement with it - I get told it's because I don't understand. (And let's not get into the vacuousness of string theory - when I say the absence of testable predictions makes string theory philosophy, not science, I guess I don't understand either.)

I'm going to get myself in trouble again by not understanding one more time. This is from an astrophysicist writing at the NY Times blog Across the Universe:

Searching for the Dark Side, by Chris Lintott, Commentary, NY Times: ...The ... kind of matter we – and everything we can see directly – are made of accounts for only a sixth of the total mass in the universe. In the absence of knowledge, we label the rest as “dark matter” and look to detect it via its effects on what we can see.

We have long known that there is more to our universe than we can see. The great eccentric of American astronomy, Fritz Zwicky, realised in 1933 that the galaxies within galaxy clusters were moving more rapidly than they should be. The only explanation was to assume that there was more matter present than scientists had thought... A similar problem exists on the scale of individual galaxies; if only the visible disk of a galaxy such as the Milky Way existed, the galaxy would fly apart in just a few million years. The solution is the same; if the Milky Way is embedded in a halo of dark matter, then all is well.

What is this dark matter? We know very little about it... [M]ost cosmologists now believe that dark matter is composed of slow-moving exotic particles that have yet to be identified.

Such a situation is unsatisfactory to say the least. Although particle physicists are working on the problem, ... it is embarrassing not to know what the main component of the universe is. The alternative is to assume that there is something wrong with our knowledge of gravity, and that the work of Einstein needs revising. Theories that attempt to do just that have been somewhat successful. ...

[It will be] a stringent test for theorists attempting to dispose of dark matter, but I for one hope they succeed. It would be tidier, somehow, to lose the enigmatic dark matter, and exciting to discover a successor to Einstein’s relativity. As George Bernard Shaw said in 1930, “Ptolemy invented a universe and it lasted 2000 years. Newton invented a universe and it lasted 200 years. Now Dr. Einstein has invented a new universe, and no one knows how long this one is going to last.”

"[I]t is embarrassing not to know what the main component of the universe is." That's why economists should hang their heads in shame when real scientists enter the room.
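For what it's worth, the mass shortfall driving the dark matter hypothesis is easy to reproduce with rough numbers. The sketch below uses round, commonly quoted figures for the Milky Way (an orbital speed of about 220 km/s out to 50 kpc, and a visible mass near 60 billion suns) - these are illustrative assumptions, not values from Lintott's article:

```python
# If stars orbit at speed v at radius r, Newtonian gravity implies an
# enclosed mass M = v^2 * r / G. Comparing that to the visible
# (stellar + gas) mass shows the shortfall dark matter is invoked to fill.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec, m

v = 220e3              # orbital speed of outer stars, m/s (roughly flat out to large r)
r = 50 * KPC           # radius at which that speed is still observed

M_implied = v**2 * r / G        # mass needed to hold the orbit together
M_visible = 6e10 * M_SUN        # rough visible mass of the Milky Way

ratio = M_implied / M_visible
print(f"implied mass: {M_implied / M_SUN:.2e} solar masses")
print(f"visible mass falls short by a factor of about {ratio:.1f}")
```

With these inputs the implied mass is roughly ten times the visible mass, which is the discrepancy the "halo of dark matter" is meant to explain.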

Tuesday, March 06, 2007

Using Self-Interest to Explore Consciousness

What is consciousness? Neuroscientists are borrowing from economists to find out. Here's the introduction:

Introduction by David Dobbs, Editor, Mind Matters:  Consciousness -- the awareness of perception, sensation or thought -- presents one of neuroscience's most slippery problems. How do we become aware of something? When can we be said to be aware of something? Being tired and being conscious of being tired are two different things; how does one become the other?

Philosophers, having debated these heady questions for centuries, have recently been joined by neuroscientists determined to test them with replicable measurements of awareness. In the paper described here -- "Post-decision wagering objectively measures awareness," published in Nature Neuroscience on 21 January 2007 -- the authors, Navindra Persaud of the University of Toronto and Peter McLeod and Alan Cowey of Oxford University, made a splash... In a set of experiments drawing equally on Oliver Sacksian neurological anomalies and Friday night poker, Persaud and colleagues manage to test awareness of knowledge that the subjects don't even know they have. ...[A]s our experts Christof Koch and Kerstin Preuschoff write, applying this tactic to awareness appears to give consciousness researchers a much-needed but elusive tool -- a way to assess consciousness without disturbing it.

Here are a few passages relating to economics, but the core of the argument and discussion is in the linked article:

Measuring Consciousness: A Neuroeconomic Gamble Pays Off, by Christof Koch and Kerstin Preuschoff, SciAm: Much of what we do goes on outside the pale of consciousness -- whether we adjust our body posture or decide to marry someone, we often have no idea why or how we do the things we do. The Freudian notion that most of our mental life is unconscious is difficult to establish rigorously. While it seems easy to answer the question "Did you (consciously) see the light turn on?", more than a hundred years of research has shown otherwise. The key problem is defining consciousness such that it can be measured independent of the internal state of an individual's brain while still capturing its subjective character. ...

In "Post-decision wagering objectively measures awareness," Persaud, McLeod and Cowey introduce ...[an] objective measure of consciousness that exploits people's desire to make money. This method is adapted from economics, where it is used to probe a subject's belief about an event's likely outcome: people who know that they have information are willing to bet on it, that is, they are willing to put their money where their mouth is. ...

Persaud and colleagues use this sort of wagering to reveal consciousness or lack thereof. ... The wagering techniques used by Persaud, McLeod and Cowey rely on people's instinct for reaping a profit. Compared to forcing subjects to become aware of their own consciousness -- and in the process perturbing the very phenomenon one wishes to measure -- wagering provides a more subtle way to assess awareness. This is an exciting and revealing new way to study awareness and consciousness. From such small steps comes progress in answering the age-old question of how consciousness arises from experience.
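The logic of the wagering measure can be made concrete with a toy simulation. The numbers below are invented for illustration (they are not from the Persaud et al. data): a "blindsight-like" subject discriminates a stimulus above chance, but, lacking awareness of that knowledge, wagers high only at chance rates. Above-chance accuracy paired with chance-level wagering is the paper's signature of knowledge without awareness:

```python
import random

random.seed(42)
N = 10_000
# Forced-choice discrimination: correct 70% of the time (above chance).
correct = [random.random() < 0.70 for _ in range(N)]
# Wagers are placed independently of accuracy -- the subject has no
# conscious access to the knowledge driving the correct choices.
wager_high = [random.random() < 0.50 for _ in range(N)]

accuracy = sum(correct) / N
# How often the subject bets high after a correct vs. an incorrect choice:
p_high_given_correct = sum(w for c, w in zip(correct, wager_high) if c) / sum(correct)
p_high_given_wrong = sum(w for c, w in zip(correct, wager_high) if not c) / (N - sum(correct))

print(f"accuracy: {accuracy:.3f}")                          # well above 0.5
print(f"high wager | correct: {p_high_given_correct:.3f}")
print(f"high wager | wrong:   {p_high_given_wrong:.3f}")    # about the same
```

An aware subject would bet high mostly after correct choices; here the two wagering rates are statistically indistinguishable, which is how the method flags performance without awareness.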

Saturday, March 03, 2007

The War on Science

[Tom Tomorrow comic - click on image for the full comic]

[via C&L]

Tuesday, February 13, 2007

Risk Taking, Capitalist Attitudes, and Genetics

There seems to be quite a bit of discussion today about how genetics affects risk-taking and capitalist behavior. First, from comments:

Wow, Phelps is really taking a beating here!

I heard him interviewed once ... and I was actually amazed at how reasonable he sounded. In particular, he voiced interest in the phenomenon of entrepreneurship, which he said was complete terra incognita for economics, yet forms the basis of progress and growth for society.

As a tech entrepreneur I completely agree with him on this point: it's vital to economic growth, it's not captured well by economic models, and Americans are culturally much better disposed to entrepreneurial risk-taking than their European counterparts.

Aside from cultural factors, some of it may even be genetic: risk seeking behavior has been shown to have a genetic component, and most entrepreneurs must be very risk tolerant. It seems quite likely to me that the descendants of immigrant risk takers (which most Americans are) might have a higher representation on the far tail of risk seeking. (Even if average differences are small the population fractions on the tail of a distribution can be wildly different -- the US might have several times as many people per capita who are wild-eyed-optimistic enough to start their own company.)

A psychologist at Johns Hopkins has even claimed that American entrepreneurs are largely hypomanic. In my experience with lots of Silicon Valley types, he is completely correct. [More here and here].
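The commenter's point about tails is a standard property of normal distributions and is easy to check numerically. The half-standard-deviation shift and the cutoff below are arbitrary choices for illustration, not estimates of any real trait:

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Suppose population A's mean risk tolerance sits 0.5 SD above B's.
shift = 0.5
cutoff = 3.0  # "wild-eyed-optimistic" threshold, in SD units above B's mean

frac_B = upper_tail(cutoff)
frac_A = upper_tail(cutoff - shift)  # the same cutoff, seen from A's mean

print(f"fraction of B past the cutoff: {frac_B:.5f}")
print(f"fraction of A past the cutoff: {frac_A:.5f}")
print(f"A has {frac_A / frac_B:.1f}x as many people per capita in the tail")
```

A modest half-SD difference in means multiplies the far-tail fraction by more than four, which is the "wildly different" effect the comment describes.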

There is more discussion in comments. Tyler Cowen at Marginal Revolution asks:

Are the British genetically capitalist?, by Tyler Cowen: Greg Clark is one of my favorite economists, but I am not convinced by his latest paper.  Here is the abstract:

Before 1800 all societies, including England, were Malthusian. The average man or woman had 2 surviving children.  Such societies were also Darwinian. Some reproductively successful groups produced more than 2 surviving children, increasing their share of the population, while other groups produced less, so that their share declined. But unusually in England, this selection for men was based on economic success from at least 1250, not success in violence as in some other pre-industrial societies. The richest male testators left twice as many children as the poorest. Consequently the modern population of the English is largely descended from the economic upper classes of the middle ages. At the same time, from 1150 to 1800 in England there are clear signs of changes in average economic preferences towards more “capitalist” attitudes. The highly capitalistic nature of English society by 1800 – individualism, low time preference rates, long work hours, high levels of human capital – may thus stem from the nature of the Darwinian struggle in a very stable agrarian society in the long run up to the Industrial Revolution. The triumph of capitalism in the modern world thus may lie as much in our genes as in ideology or rationality.

There is considerable evidence that commercially successful Englishmen had more kids than average, starting in medieval times.  There is much less comparative evidence about other societies; do see pp.31-2, 55-7, but his best example concerns one Amazon tribe, where the warlike reproduced with greater frequency.

Of course the commercial revolution and then the so-called industrial revolution came out of England, not Germany or Italy. If it could be shown that the English family pattern stood out with regard to the rest of Europe, I would see greater heft in the idea. ... I'm back to thinking it is institutions (most of all for science) and peer effects, not genetics, at the relevant margin of take-off.  Did commercially active Germans and Italians, during the Renaissance, really fail to propagate their seed?

And Felix Salmon follows up with:

Are we genetically capitalist?, by Felix Salmon: Tyler Cowen takes issue with Greg Clark, who has an interesting thesis – that world economic history can be explained by a move from a Malthusian world where the most successfully violent were the most reproductively successful, to a capitalist world where the richest were the most reproductively successful. Here's the chart:

[Chart: history.jpg]

Cowen's problem is that Clark hasn't explained a particular advantage for England, where the industrial revolution was born, over the rest of Europe. But I don't see this as a problem. I see Clark's thesis as explaining the economic history of all of Europe, not as trying to explain why England's growth rate took off marginally earlier than other countries' growth rates.

I'm quite convinced by Clark, actually, because his thesis fits neatly into that of Dan Gilbert, of Stumbling on Happiness fame. We're genetically bound to strive to make money, and to believe that making more money will make us happier. And maybe the real hope for a country like Bhutan, which seeks to maximize "gross national happiness" rather than GDP, is that its population doesn't have the same genetic makeup and therefore doesn't have the same urge to destroy the commons in the search for wealth.

I could be convinced, I suppose, but I'm not convinced yet.
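Whatever one makes of Clark's thesis, the compounding mechanism behind it is simple arithmetic. If one group's members leave twice as many surviving children per generation (his "richest testators" figure), its population share grows fast; the starting share and horizon below are arbitrary illustrative choices:

```python
def share_after(generations, start_share=0.10, fitness_ratio=2.0):
    """Population share of the high-fitness group after n generations,
    assuming its numbers grow by fitness_ratio each generation while
    the rest of the population just replaces itself."""
    w = start_share * fitness_ratio ** generations
    return w / (w + (1 - start_share))

for g in (0, 5, 10, 20):
    print(f"after {g:2d} generations: {share_after(g):.3f}")
```

Starting from a 10% share, a twofold reproductive advantage makes the group over 99% of the population within ten generations, i.e. a few centuries - which is why Clark can claim the modern English descend largely from the medieval economic upper classes.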

Sunday, January 28, 2007

Larry Summers Issues an SOS

As in "Save Our Sciences":

America must not surrender its lead in life sciences, by Lawrence Summers, Commentary, Financial Times [free here]: The 20th century was shaped by developments in the physical sciences. ...[S]olid state physics ... allowed mankind to take flight and split the atom. Advances in … physics also led to the development of the transistor, the semiconductor and ultimately to the information technology explosion that transformed economic life. The 20th century was an American century in no small part because of American leadership in the application of the physical sciences...

[T]he 21st century will be defined by developments in the life sciences. Lifespans will rise sharply as cures are found for chronic diseases and healthcare will come to be a larger share of the economy than manufacturing. Life science approaches will lead to everything from further agricultural revolutions to profound changes in energy technology and the development of new materials. ...

It is natural to ask whether the US will lead in the life sciences ... as it did in the physical sciences... It is a profoundly important economic question, but one whose implications go far beyond... At present, ... the US is clearly leading in the life sciences. But past performance is no guarantee of future success. ... If America is to maintain its leadership in life sciences..., important steps must be taken.

Most abstract but most important, there needs to be respect for the scientific method and its results. In sharp distinction to … other industrial countries, there is an increasing move away from respecting the scientific method in US schools. Polls demonstrate that up to one-third of high school biology teachers have as much faith in intelligent design as in evolution …[and] that as many as 70 per cent of the American people agree with them. Matters are not helped when the president advocates the teaching of intelligent design alongside evolution as a “different school of thought”.

Second, funding has to be a priority. During the past three years, when there has been more possible in the life sciences than there has ever been, when we are on the cusp of achieving important breakthroughs in everything from stem cells to the treatment of cancer, government funding for science research has been cut in real terms. This has been particularly hard on young researchers...

Funding, however, is ... also a matter of … compensation levels… In today’s economy a … graduate of a leading business school earns a substantially higher salary than a ... graduating ... PhD in biology. Several years after graduation the differences are even more pronounced. It should not be a surprise that ... more of our talented young people are not headed towards careers in … the life sciences.

Third, we need to control the role of politics in allocating science dollars, which are currently tossed around like so many political footballs. The fact that diseases that afflict the relatives of key congressional appropriators receive a disproportionate share of research dollars is not a step towards scientific progress. And it is not a step towards a healthier 21st century to allow the views of a vocal minority in effect to cut off funding for embryonic stem cell research – which is likely to lead to revolutions in the treatment of Parkinson’s disease, diabetes and cancer within the next generation.

Finally, we need to support clusters of extraordinary performance. If competition is individualistic, the US is going to have a very difficult time because salary levels … are going to be much lower in other parts of the world. Rather than focus on each individual …, the US needs to focus on fostering clusters of innovation – such as Silicon Valley in information technology, Boston in the life sciences, New York in finance – where each talented individual derives his or her strength from all that is around. Competing with that on price is much more difficult.

These are not issues that can be addressed in a year or even a presidential term. Nor are they issues that will have a large predictable impact over a period of several years. But over the long run, few issues are as important...

Update: In comments, dale says:

Save US superiority in the life sciences. Save US superiority in financial services. Why didn't we act to save US superiority in manufacturing? Why aren't ordinary Americans deserving of such centralized industrial planning projects?

Why would foreign dominance in the more intellectual and ethereal pursuits be worse for the US than the Chinese ascendancy in manufacturing (for example)? We are told by some economists that outsourcing and other aspects of economic globalization are good for Americans. But Summers and others now say we must save certain industries.

I suspect class bias is in play.

Is dale right, or is there some fundamental difference in the two industries that justifies a different level of government response (e.g. a market failure in research that is not present in manufacturing)?

Friday, January 26, 2007

Asymmetric Loss Functions

This neuroscience research finds that we react to potential losses more strongly than to equal-sized potential gains. That supports prospect theory and has implications for how economists model loss functions. Often, for mathematical convenience, loss functions are assumed to be symmetric (usually quadratic), but this research implies that symmetry may not reflect how we actually evaluate potential gains and losses:

How does your brain respond when you think about gambling or taking risks?, EurekAlert: ...In the Jan. 26 issue of the journal Science, UCLA psychologists present the first neuroscience research comparing how our brains evaluate the possibility of gaining versus losing when making risky decisions.

Participants in the study, mostly UCLA students in their 20s, were given $30 and then asked whether they would agree to ... gambles... Would they, for example, agree to a coin toss in which they could win $30 but lose $20? While the 16 participants were considering the possible wagers, they were in a functional magnetic resonance imaging (fMRI) scanner...

On average, participants needed to be offered a 50 percent chance of winning about $19 to risk losing $10, but that amount varied widely among the subjects. ... The researchers could predict people's tolerance to risk by analyzing their brain patterns.

"Looking at how your brain responds to potential gains versus potential losses, we can predict how risk-averse you are going to be in your choices," said study co-author Russell Poldrack, UCLA associate professor of psychology... "Brain activity predicts behavior."

"Individual differences in brain activity correspond very closely to individual differences in participants' actual choices," said co-author Craig Fox... "The people who show much more neural sensitivity to losses relative to gains are the same people who are very reluctant to gamble..."

Thinking about the possibility of winning money turns on some of the same areas of the brain that are activated when people take cocaine, eat chocolate or look at a beautiful face, Poldrack said.

The researchers studied which parts of the brain became more active or less active as the amount of money participants could win or lose increased. Regions that become more active as the amount increases are considered "reward centers" in the brain...

The researchers also found that reward centers in the brain respond not only when we actually receive rewards but also when we make decisions about potential rewards, and that when we make decisions, the reward circuitry in the brain is more sensitive to possible losses than to possible gains. ...

What happens in our brain when we think about potentially losing money? Some of the same areas that get turned on when we think about winning money get turned off when we think about losing money.

A surprising finding is that as the amount of a potential loss increases, the parts of the brain that process fear or anxiety, such as the amygdala or the insula, are not activated.

"What we found instead," Poldrack said, "is you don't turn anything up. You turn down the reward areas of the brain, and you turn them down more strongly for losses than you turn them up for gains. Just as people respond more strongly to a $100 potential loss than a $100 potential gain, the brain responds more strongly to a $100 potential loss versus a $100 potential gain." Fox, a behavioral decision theorist, said the study ... provides, for the first time, neural evidence to support this pattern.

When Fox was an undergraduate at the University of California, Berkeley, his faculty mentor was Daniel Kahneman, who later won the 2002 Nobel Memorial Prize in Economic Sciences. A key principle from Kahneman's seminal prospect theory, which describes how individuals evaluate losses and gains, is loss aversion: When people consider future actions, they are more sensitive to potential losses than to potential gains. Most people are about twice as sensitive to potential losses as to potential gains, which leads to risk aversion.

"In this new study, we found for the first time neurophysiological evidence for prospect theory, the most important behavioral model of decision-making to emerge in the past 50 years, whose components include the asymmetry between how losses and gains are valued," Fox said. ...
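The asymmetry the study reports can be sketched with a prospect-theory-style value function, set against the symmetric quadratic benchmark mentioned above. The lambda of 2 reflects the "about twice as sensitive to losses" figure in the article, and alpha = 0.88 is Kahneman and Tversky's conventional curvature estimate; both are illustrative parameter choices, not values fitted to the UCLA data:

```python
LAMBDA = 2.0   # loss-aversion coefficient: losses weigh twice as much
ALPHA = 0.88   # diminishing sensitivity to larger amounts

def prospect_value(x):
    """Subjective value: steeper for losses than for equal-sized gains."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def quadratic_loss(x):
    """Symmetric benchmark: +$100 and -$100 get equal weight."""
    return x * abs(x)

print(f"prospect:  +$100 -> {prospect_value(100):+.1f}, -$100 -> {prospect_value(-100):+.1f}")
print(f"quadratic: +$100 -> {quadratic_loss(100):+.1f}, -$100 -> {quadratic_loss(-100):+.1f}")

# The study's average subject demanded a $19 gain to risk a $10 loss on a
# coin toss. The loss-aversion coefficient that makes that bet exactly
# break even under this value function is (19/10)**ALPHA:
implied_lambda = (19 / 10) ** ALPHA
print(f"implied loss-aversion coefficient: {implied_lambda:.2f}")
```

The implied coefficient of about 1.8 from the subjects' own wagers is close to the "twice as sensitive" rule of thumb, while the quadratic benchmark treats the two sides of the bet identically.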

Tuesday, January 23, 2007

Why We're Inconsistent

This isn't economics-related, and it's not about politics either; it's just something I thought was interesting about why we're hard-wired to be inconsistent in the execution of physical tasks:

On the golf tee or the pitcher’s mound, brain dooms motion to inconsistency, by David Orenstein, Stanford Report: If you've ever wondered why your golf swings, fastballs or free throws don't quite turn out the same way each time, even after years of practice, there is now an answer: It's mostly in your head. That's the finding of new research published in the ... journal Neuron by electrical engineers at Stanford University.


Saturday, January 20, 2007

"The Hitchhiker's Guide to Altruism"

Why would evolution favor traits, such as altruistic behavior, that are costly to the individual?

The hitchhiker's guide to altruism -- Study explains how costly traits evolve, EurekAlert: Darwin explained how beneficial traits accumulate in natural populations, but how do costly traits evolve? In the past, two theories have addressed this problem. The theory of hitchhiking suggests that genes that confer a cost to their bearer can become common in natural populations when they "hitch a ride" with fitter genes that are being favored by natural selection. Conversely, the theory of kin selection suggests that costly traits can be favored if they lead to benefits for relatives of the bearer, who also carry the gene.

"Animal traits are not always independent. For example, people with blond hair are more likely to have blue eyes," explains Andy Gardner (Oxford University). "This is a nuisance for natural selection, which could not, for instance, favor blond hair without also indirectly favoring blue eyes, and this is the idea of genetic hitchhiking."

Kin selection is similar, but here the genetic associations are between different individuals: "If I have a gene that makes me more altruistic, then I can also expect my relatives to carry it. So while the immediate effect of the gene is costly for me, I would benefit by receiving altruism from my relatives, and so the gene is ultimately favored," Gardner explains. New research ... shows that both processes are governed by the same equations. This reveals that kin selection can be seen as a special form of genetic hitchhiking. ... This insight raises the possibility of using the tools of hitchhiking theory to explore social problems that have so far been too complicated to analyze using traditional kin selection techniques.
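Gardner's verbal argument about kin selection is usually written as Hamilton's rule: an altruistic gene spreads when r*b > c, where r is relatedness to the beneficiary, b the benefit conferred, and c the cost borne. The numbers below are made-up illustrations, not figures from the study:

```python
def altruism_favored(r, b, c):
    """Hamilton's rule: kin selection favors the trait when r*b > c."""
    return r * b > c

# Helping a full sibling (r = 1/2): a cost of 1 that pays a benefit of 3
# satisfies the rule (1.5 > 1), so the altruism gene spreads.
print(altruism_favored(0.5, b=3, c=1))    # True
# The same act toward a cousin (r = 1/8) does not (0.375 < 1).
print(altruism_favored(0.125, b=3, c=1))  # False
```

The hitchhiking reinterpretation in the paper amounts to reading r as a genetic association between individuals, the same mathematical object as the within-genome associations that drive hitchhiking.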

Friday, January 12, 2007

"The Origin of Utility"

An attempt at an explanation for utility maximizing behavior based upon evolutionary biology:

The Origin of Utility, Gianni De Fraja, University of Leicester, CEPR No. 5859, January 2007: Abstract: This paper proposes an explanation for the universal human desire for increasing consumption. It holds that it was moulded in evolutionary times by a mechanism known to biologists as sexual selection, whereby a certain trait - observable consumption - is used by members of one sex to signal their unobservable characteristics valuable to members of the opposite sex. It then shows that the standard economics problem of utility maximisation is formally equivalent to the standard biology problem of the maximisation of individual fitness, the ability to pass genes to future generations. [open link]

Wednesday, January 10, 2007

NOAA: Global Warming Contributed to the Warmer than Usual Temperatures in 2006

For the first time under the Bush administration, NOAA acknowledges that global warming is real.

The Scientific Basis for Race

Moving outside the usual realm momentarily, this is from a colleague over in the Physics Department, Steve Hsu:

Metric on the space of genomes and the scientific basis for race, by Steve Hsu: Suppose that the human genome has 30,000 distinct genes, which we will label as i = 1, 2, ..., N, where N = 30k. Next, suppose that there are n_i variants or alleles (mutations) of the i-th gene. Then, each human's genetic information can be described as a point on a lattice of size n_1 x n_2 x ... x n_N, or equivalently an N-tuple of integers, the i-th of which ranges from 1 to n_i. For the simplified case where there are exactly 10 variants of each gene, the number of points in this N-dimensional space is 10^N, or 10^{30k} - one for each distinct 30k-digit number. It's a space of very high dimension, but this doesn't stop us from defining a metric, or definition of distance between any two points in the space. (For simplicity we ignore restrictions on this space which might result from incompatibility of certain combinations, etc.)

Note that the genomes of all of the humans who have ever lived occupy only a small subset of this space -- most possible variations have never been realized. For this reason, the surprise expressed by biologists that humans have so few genes (not many more than a worm, and far fewer than the 100k of earlier estimates) is no cause for concern -- the number of possible organisms that might result from 30k genes is enormous -- far more than the number of molecules in the visible universe.

To define a metric, we need a notion of how far apart two different alleles are. We can do this by counting base pair differences -- most mutations only alter a few base pairs in the genetic code. We can define the distance between two alleles in terms of the number of base pair changes between them (this is always a positive number). Then, we can define the distance between two genomes as the sum of the individual gene distances over i = 1, 2, ..., N. It is natural, although perhaps not always possible, to choose the labeling of alleles to reflect relative distances, so that variants 1 and 2 are close together, and both very far from variant 10.

The exact definition of the metric and the allele labeling is somewhat arbitrary, but you can see it is easy to define a meaningful measure of how far apart any two individuals are in genome space.
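A toy version of this construction (with invented alleles and base-pair counts, purely to make it concrete): each genome is an N-tuple of allele labels, each gene carries a table of pairwise base-pair differences between its alleles, and the distance between two genomes is the sum of the per-gene allele distances:

```python
# Base-pair differences between alleles of each gene (symmetric, 0 on the
# diagonal). Three genes here stand in for the 30,000 of the full problem.
ALLELE_DIST = [
    {(1, 1): 0, (1, 2): 2, (1, 3): 7, (2, 2): 0, (2, 3): 5, (3, 3): 0},
    {(1, 1): 0, (1, 2): 1, (2, 2): 0},
    {(1, 1): 0, (1, 2): 3, (1, 3): 4, (2, 2): 0, (2, 3): 1, (3, 3): 0},
]

def gene_distance(i, a, b):
    """Base-pair distance between alleles a and b of gene i."""
    key = (a, b) if a <= b else (b, a)   # tables store each pair once
    return ALLELE_DIST[i][key]

def genome_distance(g1, g2):
    """Sum of per-gene allele distances between two genomes."""
    return sum(gene_distance(i, a, b) for i, (a, b) in enumerate(zip(g1, g2)))

alice = (1, 2, 3)   # allele carried at each of the three genes
bob = (3, 2, 1)
print(genome_distance(alice, bob))  # 7 + 0 + 4 = 11
```

This is the sense in which "how far apart two individuals are in genome space" becomes a single number, and clustering is then an ordinary exercise over those numbers.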

Now plot the genome of each human as a point on our lattice. Not surprisingly, there are readily identifiable clusters of points, corresponding to traditional continental ethnic groups: Europeans, Africans, Asians, Native Americans, etc. (See, for example, Risch et al., Am. J. Hum. Genet. 76:268–275, 2005.) Of course, we can get into endless arguments about how we define European or Asian, and of course there is substructure within the clusters, but it is rather obvious that there are identifiable groupings, and as the Risch study shows, they correspond very well to self-identified notions of race.

From the conclusions of the Risch paper (Am. J. Hum. Genet. 76:268–275, 2005):

Attention has recently focused on genetic structure in the human population. Some have argued that the amount of genetic variation within populations dwarfs the variation between populations, suggesting that discrete genetic categories are not useful (Lewontin 1972; Cooper et al. 2003; Haga and Venter 2003). On the other hand, several studies have shown that individuals tend to cluster genetically with others of the same ancestral geographic origins (Mountain and Cavalli-Sforza 1997; Stephens et al. 2001; Bamshad et al. 2003). Prior studies have generally been performed on a relatively small number of individuals and/or markers. A recent study (Rosenberg et al. 2002) examined 377 autosomal microsatellite markers in 1,056 individuals from a global sample of 52 populations and found significant evidence of genetic clustering, largely along geographic (continental) lines. Consistent with prior studies, the major genetic clusters consisted of Europeans/West Asians (whites), sub-Saharan Africans, East Asians, Pacific Islanders, and Native Americans. ... [A recent study found that genetic cluster membership corresponded almost perfectly with self-identified race/ethnicity among] ethnic groups living in the United States, with a discrepancy rate of only 0.14%.

This clustering is a natural consequence of geographical isolation, inheritance and natural selection operating over the last 50k years since humans left Africa.

Every allele probably occurs in each ethnic group, but with varying frequency. Suppose that for a particular gene there are 3 common variants (v1, v2, v3), all the rest being very rare. Then, for example, one might find that in ethnic group A the distribution is v1 75%, v2 15%, v3 10%, while for ethnic group B the distribution is v1 2%, v2 6%, v3 92%. Suppose this pattern is repeated for several genes, with the common variants in population A being rare in population B, and vice versa. Then, one might find a very dramatic difference in expressed phenotype between the two populations. For example, if skin color is determined by (say) 10 genes, and those genes have the distribution pattern given above, nearly all of population A might be fair skinned while all of population B is dark, even though there is complete overlap in the set of common alleles. Perhaps having the third type of variant v3 in 7 out of 10 pigmentation genes makes you dark. This is highly likely for an individual in population B with the given probabilities, but highly unlikely in population A.
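The "highly likely" versus "highly unlikely" claim in that illustration can be computed directly. Using the frequencies given (v3 at 92% in population B, 10% in population A) and treating the 10 genes as independent, the chance of carrying v3 at 7 or more of them is a binomial tail:

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

p_dark_B = p_at_least(7, 10, 0.92)
p_dark_A = p_at_least(7, 10, 0.10)
print(f"P(dark) in population B: {p_dark_B:.3f}")   # nearly everyone
print(f"P(dark) in population A: {p_dark_A:.2e}")   # a few per million
```

With these numbers about 99.4% of population B crosses the 7-of-10 threshold versus roughly one in 100,000 of population A - a near-categorical phenotype difference despite full overlap in the set of alleles.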

We see that there can be dramatic group differences in phenotypes even if there is complete allele overlap between two groups - as long as the frequency or probability distributions are distinct. But it is these distributions that are measured by the metric we defined earlier. Two groups that form distinct clusters are likely to exhibit different frequency distributions over various genes, leading to group differences.

This leads us to two very distinct possibilities in human genetic variation:

Hypothesis 1: (the PC mantra) The only group differences that exist between the clusters (races) are innocuous and superficial, for example related to skin color, hair color, body type, etc.

Hypothesis 2: (the dangerous one) Group differences exist which might affect important (let us say, deep rather than superficial) and measurable characteristics, such as cognitive abilities, personality, athletic prowess, etc.

Note that H1 is under constant revision, as new genetically driven group differences (particularly in disease resistance) are being discovered. According to the mantra of H1, these must all (by definition) be superficial differences.

A standard argument against H2 is that the 50k years during which groups have been separated is not long enough for differential natural selection to cause any group differences in deep characteristics. I find this argument quite naive, given what we know about animal breeding and how evolution has affected the (ever expanding list of) "superficial" characteristics. Many genes are now suspected of having been subject to strong selection over timescales of order 5k years or less. For further discussion of H2 by Steve Pinker, see here.

The predominant view among social scientists is that H1 is obviously correct and H2 obviously false. However, this is mainly wishful thinking. Official statements by the American Sociological Association and the American Anthropological Association even endorse the view that race is not a valid biological concept, which is clearly incorrect.

As scientists, we don't know whether H1 or H2 is correct, but given the revolution in biotechnology, we eventually will know. Let me reiterate, before someone labels me a racist: we don't know with high confidence whether H1 or H2 is correct.

Finally, it is important to note that any group differences are statistical in nature and do not imply anything about particular individuals from any group. Rather than rely on the scientifically unsupported claim that we are all equal, it would be better to emphasize that we all have inalienable human rights regardless of our abilities or genetic makeup.

Monday, January 01, 2007

Making the Right Choices

Do we have free will?:

Free to choose?, The Economist: In the late 1990s a previously blameless American began collecting child pornography and propositioning children. On the day before he was due to be sentenced to prison for his crimes, he had his brain scanned. He had a tumour. When it had been removed, his paedophilic tendencies went away. When it started growing back, they returned. When the regrowth was removed, they vanished again...

His case dramatically illustrates the challenge that modern neuroscience is beginning to pose to the idea of free will. The instinct of the reasonable observer is that organic changes of this sort somehow absolve ... responsibility that would accrue to a child abuser whose paedophilia was congenital. But why? The chances are that the latter tendency is just as traceable to brain mechanics as the former; it is merely that no one has yet looked. Scientists have looked at anger and violence, though, and discovered ... a particular messenger molecule in the brain, that are both congenital and predisposing to a violent temper. Where is free will in this case?

Free will is one of the trickiest concepts in philosophy, but also one of the most important. Without it, the idea of responsibility for one's actions flies out of the window, along with much of the glue that holds a free society (and even an unfree one) together...

Continue reading "Making the Right Choices" »

Wednesday, December 13, 2006

The Age of the Robot

Bill Gates writing in Scientific American about the coming age of robots. Here's a shortened version of the article:

A Robot in Every Home, by Bill Gates, Scientific American: Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies... But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when—or even if—this industry will achieve critical mass. If it does, though, it may well change the world.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. ... But what I really have in mind is something much more contemporary: the emergence of the robotics industry... [S]ome of the world’s best minds are trying to solve the toughest problems of robotics, such as visual recognition, navigation and machine learning. And they are succeeding. ...

What is more, the challenges facing the robotics industry are similar to those we tackled in computing three decades ago. Robotics companies have no standard operating software that could allow popular application programs to run in a variety of devices. The standardization of robotic processors and other hardware is limited, and very little of the programming code used in one machine can be applied to another. Whenever somebody wants to build a new robot, they usually have to start from square one.

Despite these difficulties, when I talk to people involved in robotics—from university researchers to entrepreneurs, hobbyists and high school students—the level of excitement and expectation reminds me so much of that time when Paul Allen and I ... dreamed of the day when a computer would be on every desk and in every home. And as I look at the trends that are now starting to converge, I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. ...

From Science Fiction to Reality

The word “robot” was popularized in 1921 by Czech playwright Karel Capek, but people have envisioned creating robot-like devices for thousands of years. ... Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov’s I, Robot, movies such as Star Wars and television shows such as Star Trek. ... Nevertheless, although robots play a vital role in industries such as automobile manufacturing— where there is about one robot for every 10 workers—the fact is that we have a long way to go before real robots catch up with their science-fiction counterparts.

One reason for this gap is that it has been much harder than expected to enable computers and robots to sense their surrounding environment and to react quickly and accurately. It has proved extremely difficult to give robots the capabilities that humans take for granted—for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot.

But researchers are starting to find the answers. One trend that has helped them is the increasing availability of tremendous amounts of computer power. ... As computing capacity continues to expand, robot designers will have the processing power they need to tackle issues of ever greater complexity.

Another barrier to the development of robots has been the high cost of hardware, such as sensors that enable a robot to determine the distance to an object as well as motors and servos that allow the robot to manipulate an object with both strength and delicacy. But prices are dropping fast. ...

Now robot builders can also add Global Positioning System chips, video cameras, array microphones ... and a host of additional sensors for a reasonable expense. The resulting enhancement of capabilities, combined with expanded processing power and storage, allows today’s robots to do things such as vacuum a room or help to defuse a roadside bomb—tasks that would have been impossible for commercially produced machines just a few years ago.

A BASIC Approach

In February 2004 I visited a number of leading universities ... to talk about the powerful role that computers can play in solving some of society’s most pressing problems. ... At each university, after delivering my speech, ...[a]lmost without exception, I was shown at least one project that involved robotics.

At that time, my colleagues at Microsoft were also hearing from people in academia and at commercial robotics firms who wondered if our company was doing any work in robotics that might help them with their own development efforts. We were not, so we decided to take a closer look. I asked Tandy Trower, a member of my strategic staff ... to go on an extended fact-finding mission ... What he found was ... an industry-wide desire for tools that would make development easier. ... Tandy wrote in his report to me after his fact-finding mission. “[T]he hardware capability is mostly there; now the issue is getting the software right.”

Back in the early days of the personal computer, we realized that we needed ... Microsoft BASIC. When we created this programming language in the 1970s, we provided the common foundation that enabled programs developed for one set of hardware to run on another. BASIC also made computer programming much easier...

After reading Tandy’s report, it seemed clear to me that before the robotics industry could make the same kind of quantum leap that the PC industry made 30 years ago, it, too, needed to find that missing ingredient. So I asked him to assemble a small team that would work with people in the robotics field to create a set of programming tools that would provide the essential plumbing...

Tandy’s robotics group has been able to draw on a number of advanced technologies... One such technology will help solve one of the most difficult problems facing robot designers: how to simultaneously handle all the data coming in from multiple sensors and send the appropriate commands to the robot’s motors, a challenge known as concurrency. ...

Concurrency is a challenge that extends beyond robotics. Today, as more and more applications are written for distributed networks of computers, programmers have struggled to figure out how to efficiently orchestrate code running on many different servers at the same time. And as computers with a single processor are replaced by machines with multiple processors and “multicore” processors ..., software designers will need a new way to ... fully exploit the power of processors working in parallel; the new software must deal with the problem of concurrency. ...
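As an aside, the coordination problem Gates describes can be sketched in miniature. The toy below is purely illustrative (it has nothing to do with Microsoft's actual CCR): two simulated sensors post readings to a shared queue while a single control loop consumes them concurrently and issues motor commands.

```python
import asyncio
import random

# Illustrative only: two simulated sensors feed a shared queue while one
# control task consumes readings -- the fan-in coordination pattern a
# robotics runtime has to handle for many sensors and motors at once.

async def sensor(name: str, queue: asyncio.Queue, n: int) -> None:
    for _ in range(n):
        await asyncio.sleep(random.uniform(0.001, 0.005))  # simulated read latency
        await queue.put((name, random.random()))
    await queue.put((name, None))  # sentinel: this sensor is finished

async def controller(queue: asyncio.Queue, n_sensors: int) -> list:
    commands, done = [], 0
    while done < n_sensors:
        name, reading = await queue.get()
        if reading is None:
            done += 1
        else:
            # Turn each reading into a (hypothetical) motor command.
            commands.append((name, "fast" if reading > 0.5 else "slow"))
    return commands

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    ctrl = asyncio.create_task(controller(queue, 2))
    await asyncio.gather(sensor("range", queue, 5), sensor("camera", queue, 5))
    return await ctrl

commands = asyncio.run(main())
print(f"{len(commands)} readings handled concurrently")
```

The sensors run independently and interleave in arbitrary order, yet the controller sees a single coherent stream; scaling this pattern to dozens of devices is exactly the kind of plumbing a robotics runtime is meant to standardize.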

The answer that Craig’s team has devised to the concurrency problem is something called the concurrency and coordination runtime (CCR). ... Designed to help programmers take advantage of the power of multicore and multiprocessor systems, the CCR turns out to be ideal for robotics as well. ...

In addition to tackling the problem of concurrency, the work that Craig’s team has done will also simplify the writing of distributed robotic applications through a technology called decentralized software services (DSS). DSS enables developers to create applications in which the ... parts of the program that read a sensor, say, or control a motor—operate as separate processes that can be ... aggregated on a Web page. ... Combined with broadband wireless technology, this architecture makes it easy to monitor and adjust a robot from a remote location using a Web browser. ...

As a result, the robot can be a relatively inexpensive device that delegates complex processing tasks to the high-performance hardware found on today’s home PCs. I believe this advance will pave the way for an entirely new class of robots that are essentially mobile, wireless peripheral devices that tap into the power of desktop PCs to handle processing-intensive tasks such as visual recognition and navigation. And because these devices can be networked together, we can expect to see the emergence of groups of robots that can work in concert to achieve goals such as mapping the seafloor or planting crops. These technologies are a key part of Microsoft Robotics Studio...

Should We Call Them Robots?

How soon will robots become part of our day-to-day lives? According to the International Federation of Robotics, about two million personal robots were in use around the world in 2004, and another seven million will be installed by 2008. ...

Although a few of the robots of tomorrow may resemble the anthropomorphic devices seen in Star Wars, most will look nothing like the humanoid C-3PO. In fact, as mobile peripheral devices become more and more common, it may be increasingly difficult to say exactly what a robot is. Because the new machines will be so specialized and ubiquitous—and look so little like the two-legged automatons of science fiction—we probably will not even call them robots. But as these devices become affordable to consumers, they could have just as profound an impact on the way we work, communicate, learn and entertain ourselves as the PC has had over the past 30 years.

Saturday, December 09, 2006

Bumbling, Impractical, Disconnected Economists

More on how clueless economists are. My comments are in brackets and bold:

The Autistic Economist, by Stanley Alcorn and Ben Solarz, Yale Economic Review:

How many economists does it take to change a lightbulb?

  • Two: One to change the bulb and one to assume the existence of a ladder.
  • Eight: One to screw in the light bulb and seven to hold everything else constant.
  • None: They are all waiting for an invisible hand.

The caricature of the economist – bumbling, impractical, disconnected from the object of his work – underpins a set of surprisingly sophisticated criticisms leveled against the discipline, particularly its realism, method, and ideology. None of these critiques is particularly new, nor is any entirely unique to economics. But over the last few years, they have been asserted against the dominant economic pedagogy in general and the neoclassical framework in particular with new force...

[I hope you'll forgive me if I don't see myself as bumbling, impractical, disconnected...]

“We wish to escape from imaginary worlds!” proclaimed a group of French economics students in 2000, petitioning for broad changes in their economics curricula. “We no longer want to have this autistic science imposed upon us.”

The use of the French term “autisme” harkens back to an older meaning – “abnormal subjectivity, acceptance of fantasy rather than reality” – but it also refers to the continuum of neurological disorders. Steve Keen ... at the University of Western Sydney and the author of Debunking Economics: The Naked Emperor of the Social Sciences, sees the aptness of the term as the strongest point of the critique. “It asserts that neoclassical economics has the characteristics of an autistic child,” he said, criticizing the manner in which the discipline “hangs on to its preconceptions, when serious analysis shows that they are untenable.” ...

[... or autistic.]

It Takes Two: The Realism Critique

Go back to the light bulb jokes and look at the economist who “assumes the existence of a ladder”; the analogy here is with the many simplifying assumptions made in the course of developing an economic model. But as Keen is quick to point out, “There’s a very big difference between a simplifying assumption and a counterfactual one.” By way of example, he notes the Capital Asset Pricing Model, developed by Bill Sharpe in 1964, and challenges the implicit assumption that investors agree on the future prospects of shares with correct expectations. This is equivalent, he argues, to assuming consumers could predict the future. For Keen and other leaders of the movement, however, this is not an isolated example of confusing simplifying assumptions for counterfactual ones: “Neoclassical economics makes many of the latter and then defends them as if they’re the former.” ...

Continue reading "Bumbling, Impractical, Disconnected Economists" »

Saturday, December 02, 2006

Robot-Human Sociology

Have you hugged your sociable robot today?:

Robotic pets may be bad medicine for melancholy, by Stephanie Schorow, MIT News Office: In the face of techno-doomsday punditry, Sherry Turkle has long been a proponent of the positive. In her books ... Turkle has explored the relationship between human and machine and found much to ponder and even praise.

But now the director of the MIT Initiative on Technology and Self has a confession: "I have finally met technology that upsets and concerns me."

Turkle ... outlined her concerns about the implications of increasingly personal interactions between robots and humans during a ... lecture... Turkle, a clinical psychologist, spoke earnestly and openly about her fears, acknowledging that some parts of her research "gave me the chills" on a very personal level and that she is "struggling to find an open voice."

A pioneer of the now accepted notion that "technologies are never just tools," Turkle set the stage with a discussion of her work on machines as "evocative objects" and "relational artifacts." ... From Furbies to robotic dogs like Aibo to pocket "pets" like Tamagotchis to Paro, a robotic baby seal that responds to touch, children and even adults are forming bonds with machines, showing that the killer app may be "nurturing." That is, rather than the computer taking care of us, we take care of the computer, Turkle said.

Increasingly sophisticated robots--with big eyes that follow our faces or which respond to human voice and touch--trigger "Darwinian" responses in us; we are "wired" to react to objects that track our movement, Turkle said. "This is not about building AI with a lot of smarts," she said. The impact is "not on what it has but how it makes people feel."

One of Turkle's concerns was triggered by the effect of a sophisticated interactive doll, Hasbro's "My Real Baby," and of the Paro seals on the elderly. She left a few "My Real Baby" dolls (which were not a big retail hit with children) in a local nursing home, and when she returned later, she found that the staff had bought 25 of them because of the soothing effect on the residents.

"The only one who's not happy here is the sociologist," said Turkle, raising her hand. That soothing response was based on a sham, she believes. "What can something that does not have a life cycle know about your death, or about your pain?"

She cited the case of a 72-year-old woman who, because she is sad, says she sees that her robotic toy is also sad. "What are we to make of this relationship when it happened between a depressed woman and a robot?" Turkle asked.

The Q&A period triggered a lively debate over whether such bonding is necessarily bad. A questioner brought up the issue of how the elderly bond to pets. "A dog doesn't talk. A dog doesn't say, 'I love you,'" Turkle said, although at least one listener insisted his dog does talk, in a fashion.

Turkle isn't sure what a dog can sense. "What I do know is Paro knows nothing. That sense of self soothing was with an object that knows nothing," she stated. Ultimately, human-like robots will be "test objects by which we are finding out new stuff about ourselves," Turkle said.

Econophysicists

Given recent discussions here, and given that I can't even agree with the claim in the opening sentence that "Predicting financial markets is more of a gamble than traditional economists will admit," I think I'll just present this without further comment:

It's a gamble: UH econophysicists meld science, economics, by Joseph McCauley, Eurekalert: Predicting financial markets is more of a gamble than traditional economists will admit, and making sense of such numbers is more like trying to decipher noise blasting from a loudspeaker, says one University of Houston econophysicist, who leads one of the world's preeminent groups of its kind.

Joseph McCauley, a UH physics professor with a dual appointment as a senior fellow in the economics department at the National University of Ireland, Galway, leads the UH group. The team's main discovery, backed up by empirically based modeling of market dynamics, is that financial markets are unstable. Associate Professor Kevin Bassler, Professor Gemunu Gunaratne and Professor George Reiter – all of the physics department – round out the UH econophysics group that applies their newly discovered models and methods to solve problems in economics.

McCauley will be the only invited physicist to speak at an economics workshop – "Financial fragility and technological progress with heterogeneous agents and social interactions" – Dec. 14-15 in Trento, Italy. He will weigh in with his perspective on the subjects of macroeconomics (the overall aspects and workings of a national economy) and microfoundations (in which the macroeconomic model is built up from the actions of individual agents).

McCauley and his colleagues contend that a market is made up of "noise" in the strictest mathematical sense of a random and persistent disturbance that obscures clarity. Using techniques developed in physics, such as entropy – a measure of randomness or disorder – challenges the common belief in economics that market statistics have structure and tend toward equilibrium.

"Traditional economics is based far less on empirical studies than its econophysics counterpart," McCauley said. "For instance, deregulation is an example of economists relying on a belief and not hard analyses. Also, histograms – a traditional economist's tool – do not represent normal distributions. Even Nobel Prize-winning economists approach market statistics with a wrong mathematical model already in mind, and the model always fails. Physics helps us understand the information that goes into these models better."

Gunaratne uses the analogy of a pollen grain suspended in water to illustrate how the randomness of motion is analogous to what happens in a market. Just as a physicist observes how raising or lowering the temperature of the water agitates or slows the motion of the pollen grain, an econophysicist applies these sorts of principles to the variables in a financial market. In trying to understand this randomness, he said, it is apparent that market fluctuations do not follow symmetric, bell-shaped normal distributions. Instead, financial markets are more like radio static, but with non-bell-shaped noise, with stock prices continually moving up and down in ways that puzzle standard statisticians.
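The "non-bell-shaped noise" claim can be illustrated with a quick simulation. This is my own toy comparison, not the UH group's models: it contrasts how often extreme one-step moves occur under Gaussian noise versus a heavy-tailed Student-t alternative often used as a stand-in for fat-tailed market returns.

```python
import random
import statistics

random.seed(1)
N = 50_000

# Baseline: Gaussian increments (the classical Brownian-motion picture).
gauss = [random.gauss(0, 1) for _ in range(N)]

def student_t(df=3):
    """Crude heavy-tailed draw: a normal over sqrt(chi-square/df) is Student-t."""
    num = random.gauss(0, 1)
    denom = (sum(random.gauss(0, 1) ** 2 for _ in range(df)) / df) ** 0.5
    return num / denom

fat = [student_t() for _ in range(N)]

def tail_freq(xs, k=4):
    """Fraction of draws more than k sample standard deviations from zero."""
    s = statistics.pstdev(xs)
    return sum(abs(x) > k * s for x in xs) / len(xs)

print(f"P(|move| > 4 sd), Gaussian:     {tail_freq(gauss):.5f}")
print(f"P(|move| > 4 sd), heavy-tailed: {tail_freq(fat):.5f}")  # far larger
```

Under the bell curve, four-sigma moves are vanishingly rare; under fat tails they happen orders of magnitude more often, which is why "extreme" market days arrive far more frequently than a Gaussian model predicts.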

Focusing primarily on the foreign exchange (FX), a 24-hour-a-day traded world market, McCauley, Gunaratne and Bassler say studying the FX yields better information as the largest, most liquid market that dominates other markets because of its sheer volume and volatility. They model both the market dynamics and option pricing by deducing correct models from real market statistics, which is the opposite of what economists do. Broadening the UH econophysics program, their colleague Reiter focuses more on models of the economy, including production and consumption with results that show how individuals' preferences adapt to economic circumstances, a part of reality he said is missing from standard economic models.

"Whatever the specific focus, this relatively young subfield that merges the two disciplines of physics and economics helps us move toward applicable models for use in analyzing markets and economies more effectively and accurately," Bassler said.

Having established one of only a handful of Ph.D. programs across the globe with a specialization in econophysics, UH's physics department in the College of Natural Sciences and Mathematics has recognized a need for educating physics students in this area since the modeling, analytical and computational skills of physicists are exactly the skills needed to study financial markets and the dynamics of the economy in a practical way.

"We realize this is still met with skepticism in more traditional arenas, but we're convinced econophysics will play the leading role as world economies become increasingly more complex and harder to decipher, and the misleading notion of ‘self-regulating markets' will be slaughtered," McCauley said. "To the extent possible in the social realm, we want to create economic theory as science and avoid the ‘mathematized ideology' – as I've coined it – of mainstream economics that is currently the ideology used in the unregulated free market."

Okay, one comment besides the fact that economics is not the stock market. If the econophysicists have something better to offer, great. I'm listening. But until they do have something better, a little less criticism of the models we use would be appreciated (e.g. "Nobel Prize-winning economists approach market statistics with a wrong mathematical model already in mind, and the model always fails. Physics helps us understand..."). Because when your "main discovery, backed up by empirically based modeling of market dynamics, is that financial markets are unstable," I'm not sure that a condescending attitude has yet been earned.

Friday, December 01, 2006

Looking for Risk in All the Wrong Places

How we get snookered by our fears and by those who exploit them:

Why We Worry About The Things We Shouldn't... ...And Ignore The Things We Should, by Jeffrey Kluger, Time: It would be a lot easier to enjoy your life if there weren't so many things trying to kill you every day. ... There's the fall out of bed that kills 600 Americans each year. ... Will a cabbie's brakes fail when you're in the crosswalk? Will you have a violent reaction to bad food? And what about the ... father and grandfather who died of coronaries in their 50s and probably passed the same cardiac weakness on to you? ...

Shadowed by peril as we are, you would think we'd get pretty good at distinguishing the risks likeliest to do us in from the ones that are statistical long shots. But you would be wrong. ... We pride ourselves on being the only species that understands the concept of risk, yet we have a confounding habit of ... building barricades against perceived dangers while leaving ourselves exposed to real ones. ...

Sensible calculation of real-world risks is a multidimensional math problem that sometimes seems entirely beyond us. And while it may be true that it's something we'll never do exceptionally well, it's almost certainly something we can learn to do better.

AN OLD BRAIN IN A NEW WORLD

Part of the problem we have with evaluating risk, scientists say, is that we're moving through the modern world with what is, in many respects, a prehistoric brain. We may think we've grown accustomed to living in a predator-free environment in which most of the dangers of the wild have been driven away..., but our central nervous system--evolving at a glacial pace--hasn't got the message.

Continue reading "Looking for Risk in All the Wrong Places" »

Wednesday, November 29, 2006

What Economists Can Learn from Evolutionary Theorists

Continuing with the topic of evolutionary theory and economics, Paul Krugman emails to remind me of this interesting piece he wrote on the topic for the European Association for Evolutionary Political Economy:

What Economists Can Learn from Evolutionary Theorists Synopsis, by Paul Krugman: (A talk given to the European Association for Evolutionary Political Economy)

Good morning. I am both honored and a bit nervous to be speaking to a group devoted to the idea of evolutionary political economy. As you probably know, I am not exactly an evolutionary economist. I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations.

Why, then, am I here? Well, partly because my research work has taken me to some of the edges of the neoclassical paradigm. When you are concerned, as I have been, with situations in which increasing returns are crucial, you must drop the assumption of perfect competition; you are also forced to abandon the belief that market outcomes are necessarily optimal, or indeed that the market can be said to maximize anything. You can still believe in maximizing individuals and some kind of equilibrium, but the complexity of the situations in which your imaginary agents find themselves often obliges you - and presumably them - to represent their behavior by some kind of ad hoc rule rather than as the outcome of a carefully specified maximum problem. And you are often driven by sheer force of modeling necessity to think of the economy as having at least vaguely "evolutionary" dynamics, in which initial conditions and accidents along the way may determine where you end up. Some of you may have read my work on economic geography; I only found out after I had worked on the models for some time that I was using "replicator dynamics" to discuss the problem of economic change.
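For readers unfamiliar with the term, "replicator dynamics" is easy to sketch. The toy below is a textbook illustration, not Krugman's actual geography model: the population share of one of two types grows in proportion to its payoff relative to the average, and with coordination payoffs small initial differences get locked in, which is exactly the path dependence he describes.

```python
# Coordination payoffs: PAYOFFS[i][j] is type i's payoff when meeting type j.
# Each type does better the more common it is, so history matters.
PAYOFFS = [[2.0, 0.0],
           [0.0, 1.0]]

def replicator_step(x, payoffs, dt=0.01):
    """One Euler step of replicator dynamics for the share x of type 0."""
    f0 = payoffs[0][0] * x + payoffs[0][1] * (1 - x)   # fitness of type 0
    f1 = payoffs[1][0] * x + payoffs[1][1] * (1 - x)   # fitness of type 1
    avg = x * f0 + (1 - x) * f1                        # population average
    return x + dt * x * (f0 - avg)

results = {}
for x0 in (0.3, 0.4):
    x = x0
    for _ in range(5_000):
        x = replicator_step(x, PAYOFFS)
    results[x0] = x
    print(f"initial share {x0:.1f} -> long-run share {x:.3f}")
# Starts on opposite sides of the unstable mixed equilibrium (x = 1/3)
# end up at opposite corners: initial conditions determine the outcome.
```

With these payoffs the mixed equilibrium at x = 1/3 is unstable: a start of 0.3 collapses toward 0 while a start of 0.4 converges toward 1, so "accidents along the way" determine where the system ends up.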

But there is another reason I am here. I am an economist, but I am also what we might call an evolution groupie. That is, I spend a great deal of time reading what evolutionary biologists write - not only the more popular volumes but the textbooks and, most recently, some of the professional articles. I have even tried to talk to some of the biologists, which in this age of narrow specialization is a major effort. My interest in evolution is partly a recreation; but it is also true that I find in evolutionary biology a useful vantage point from which to view my own specialty in a new perspective. In a way, the point is that both the parallels and the differences between economics and evolutionary biology help me at least to understand what I am doing when I do economics - to get, to be pompous about it, a new perspective on the epistemology of the two fields.

Continue reading "What Economists Can Learn from Evolutionary Theorists" »

Sunday, November 26, 2006

Milton Friedman: The Methodology of Positive Economics

Let's go back a bit in time to one of the classic papers on the "Is economics a science" question discussed below. Here's the first part of Milton Friedman's essay from 1953, "The Methodology of Positive Economics." It runs a bit long, but I thought some of you might be interested in it:

"The Methodology of Positive Economics," by Milton Friedman, In Essays In Positive Economics (Chicago: Univ. of Chicago Press): Introduction In his admirable book on The Scope and Method of Political Economy John Neville Keynes distinguishes among “a positive science … a body of systematized knowledge concerning what is; a normative or regulative science…, a body of systematized knowledge discussing criteria of what ought to be…; an art…, a system of rules for the attainment of a given end”; comments that “confusion between them is common and has been the source of many mischievous errors”; and urges the importance of “recognizing a distinct positive science of political economy.”[1]

This paper is concerned primarily with certain methodological problems that arise in constructing the “distinct positive science” Keynes called for - in particular, the problem of how to decide whether a suggested hypothesis or theory should be tentatively accepted as part of the “body of systematized knowledge concerning what is.”  But the confusion Keynes laments is still so rife and so much of a hindrance to the recognition that economics can be, and in part is, a positive science that it seems well to preface the main body of the paper with a few remarks about the relation between positive and normative economics.

Continue reading "Milton Friedman: The Methodology of Positive Economics" »

Thursday, November 09, 2006

Moral Values and Market Exchange

Is market exchange morality in action? The author, "a fellow at the Gruter Institute and director of Claremont Graduate University's Center for Neuroeconomics Studies," says it is:

Moral 'bastards' have brain hormone problems, by Paul J. Zak, Project Syndicate: Recent revelations that many corporate executives have backdated their stock options ... are the latest examples of bad business behavior. ...[A] cynical public ... wonder[s] where big business has gone wrong.

The answer may be quite simple: Too many bosses have abandoned basic human values and embraced the credo famously uttered by Gordon Gekko in the movie "Wall Street:" "Greed is good." But a growing body of research concludes that greed is not always good, and that moral values are a necessary element in the conduct of business. ...

Continue reading "Moral Values and Market Exchange" »

Wednesday, November 01, 2006

David Altig Deflates Ball

Updating this post from a few days ago, David Altig reacted to Philip Ball's column in the Financial Times much as I did:

Fisking Philip Ball, by David Altig, macroblog: A few days ago Mark Thoma posted an article by Philip Ball, "consultant editor of Nature", taking economists to task for, well, being economists.  Although Mark handled the rebuttal quite nicely, and there was lots and lots of fine commentary on Mark's site (a good chunk of which is summarized by Dave Iverson), I found the article so wrong-headed that I just can't help myself from commenting.  So, a-commenting I go... [continue reading...]

David has a long, thorough rebuttal illustrating Ball's "wrong-headed" assertions. Here's a taste of David's reaction:

Don't like calling it a cycle?  Fine. Call it Aggregate Fluctuations.  Call it Co-Movement In Business Statistics.  Call it a Kumquat.  Just don't pretend a picky semantic point is serious criticism. ...

Tuesday, October 31, 2006

A Turning Point?

Yesterday, we heard from a physicist about economics. It was not a generally positive review of our discipline. Today, Chemistry World brings us scientists who, though they appear less than enthusiastic in some cases, suddenly find economics useful:

Economist’s review marks turning point, Chemistry World: Scientists have welcomed an economist’s review into the costs of climate change, which warns of global recession if greenhouse gas emissions are not stabilised.

A proper economic analysis was long overdue, providing independent support for the views of scientists accused of hyping up climate change, Chris Reay, National Environment Research Council (Nerc) research fellow at Edinburgh University, UK, told Chemistry World. ‘If this is the tipping point, I don’t mind if it comes from an economist,’ he said.

Thanks. We don't mind your help either. The statement refers to the Stern Review on the potential costs of global warming:

The government-commissioned report, carried out by former World Bank chief economist Sir Nicholas Stern, warns that the global economy could shrink by up to 20 per cent unless action is taken now to reduce greenhouse gas emissions; Stern estimates an R&D investment of one per cent of global GDP is needed.

‘The Stern Review finally closes a chasm that has existed for 15 years between the precautionary concerns of scientists, and the cost–benefit views of many economists,’ commented Michael Grubb, professor of climate change and energy policy at Imperial College London and the University of Cambridge, UK. And, said Grubb, it was encouraging that although Stern saw the problem as massive and urgent, it could be solved...

Sunday, October 29, 2006

Neoclassical Theory under Fire from the Sciences

This is Philip Ball, "consultant editor of Nature and the author of Critical Mass," with a criticism of neoclassical theory:

Baroque fantasies of a most peculiar science, by Philip Ball, Commentary, Financial Times (free): It is easy to mock economic theory. Any fool can see that the world of neoclassical economics, which dominates the academic field today, is a gross caricature in which every trader or company acts in the same self-interested way – rational, cool, omniscient. The theory has not foreseen a single stock market crash and has evidently failed to make the world any fairer or more pleasant.

The usual defence is that you have to start somewhere. But mainstream economists no longer consider their core theory to be a “start”. The tenets are so firmly embedded that ... it is ... rigid dogma. To challenge these ideas is to invite blank stares of incomprehension – you might as well be telling a physicist that gravity does not exist.

That is disturbing because these things matter. Neoclassical idiocies persuaded many economists that market forces would create a robust post-Soviet economy in Russia (corrupt gangster economies do not exist in neoclassical theory). Neoclassical ideas ... may determine ... how we run our schools, hospitals and welfare system. If mainstream economic theory is fundamentally flawed, we are no better than doctors diagnosing with astrology.

Neoclassical economics asserts two things. First, in a free market, competition establishes a price equilibrium that is perfectly efficient: demand equals supply and no resources are squandered. Second, in equilibrium no one can be made better off without making someone else worse off.

The conclusions are a snug fit with rightwing convictions. So it is tempting to infer that the dominance of neoclassical theory has political origins. But ... the truth goes deeper. Economics arose in the 18th century in a climate of Newtonian mechanistic science, with its belief in forces in balance. And the foundations of neoclassical theory were laid when scientists were exploring the notion of thermodynamic equilibrium. Economics borrowed wrong ideas from physics, and is now reluctant to give them up.

This error does not make neoclassical economic theory simple. Far from it. It is one of the most mathematically complicated subjects among the “sciences”, as difficult as quantum physics. That is part of the problem: it is such an elaborate contrivance that there is too much at stake to abandon it.

It is almost impossible to talk about economics today without endorsing its myths. Take the business cycle: there is no business cycle in any meaningful sense. In every other scientific discipline, a cycle is something that repeats periodically. Yet there is no absolute evidence for periodicity in economic fluctuations. Prices sometimes rise and sometimes fall. That is not a cycle; it is noise. Yet talk of cycles has led economists to hallucinate all kinds of fictitious oscillations in economic markets. Meanwhile, the Nobel-winning neoclassical theory of the so-called business cycle “explains” it by blaming events outside the market. This salvages the precious idea of equilibrium, and thus of market efficiency. Analysts talk of market “corrections”, as though there is some ideal state that it is trying to attain. But in reality the market is intrinsically prone to leap and lurch.

One can go through economic theory systematically demolishing all the cherished principles that students learn... [I]t is abundantly clear that herding – irrational, copycat buying and selling – provokes market fluctuations.

There are ways of dealing with the variety and irrationality of real agents in economic theory. But not in mainstream economics journals, because the models defy neoclassical assumptions.

There is no other “science” in such a peculiar state. A demonstrably false conceptual core is sustained by inertia alone. This core, “the Citadel”, remains impregnable while its adherents fashion an increasingly baroque fantasy. As Alan Kirman, a progressive economist, said: “No amount of attention to the walls will prevent the Citadel from being empty.”
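
Before turning to a response, it may help to pin down the first of the two assertions Ball summarizes. "Demand equals supply" just says the price settles where the two schedules cross. A minimal sketch, with linear demand and supply curves invented purely for illustration:

```python
# Toy competitive market: price adjusts until quantity demanded
# equals quantity supplied. Functional forms and numbers are invented.

def demand(p):
    """Quantity demanded at price p (falls as price rises)."""
    return 100 - 2 * p

def supply(p):
    """Quantity supplied at price p (rises with price)."""
    return 10 + 4 * p

# Solve demand(p) = supply(p):  100 - 2p = 10 + 4p  =>  p = 15.
equilibrium_price = (100 - 10) / (2 + 4)
equilibrium_quantity = demand(equilibrium_price)

print(equilibrium_price)     # 15.0
print(equilibrium_quantity)  # 70.0
```

At that price no resources are "squandered" in the model's sense: every unit produced is bought, and no willing buyer at the going price is left unserved.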

The author seems to believe he has a better theory of aggregate fluctuations:

[I]t is abundantly clear that herding – irrational, copycat buying and selling – provokes market fluctuations.

That's fine, but when he says "The theory has not foreseen a single stock market crash" as his argument against neoclassical theory, he should first realize macroeconomists don't focus on predicting the stock market, and then he ought to put his theory to the same test. Can he predict stock market crashes? If he can predict the stock market's random walk behavior, will he then go on to show he can provide improved forecasts of the things macroeconomists care about? More generally, can physicists predict earthquakes, etc.? If I ask, why is there gravity, will physicists be able to tell me? Should I accept string theory as evidence of their success and superiority? The existing paradigm in physics doesn't work and the new one, string theory, doesn't produce testable implications and may be little more than mathematically sophisticated philosophic musings - is that where we want to head?

I don't mind the criticism; it's good for us, and there are truths in what is said. But I always resent the arrogance of scientists from other fields thinking they can mosey on over to economics for a few minutes, diagnose our ills, and solve all our problems. I'd suggest they solve the problems in their own discipline first, or show a bit more humility when giving advice to others, especially when, as above, they are clearly unaware of vast swaths of literature such as the published work on corruption. And along those lines, and for the record, we're well aware of and have models for the list of things he mentions in his "abundantly clear" theory of market fluctuations. If he actually tried to build these models rather than simply casting aspersions at the existing paradigm, he'd find it isn't as clear as he thinks.

Update: I'd characterize this as a typical response to this post - Thoma was off the mark, but interesting discussion!:

Economics vs physics chez Thoma, by Felix Salmon: Mark Thoma's readers are certainly busy on weekends! A post of his on the general subject of physics vs economics went up on Sunday, and already it's got 52 comments and counting.

The impetus for the blog entry is a piece by Philip Ball in the FT, wherein a hard scientist attacks the soft scientists in the economic realm for their dogmatism and lack of falsifiability.

Thoma initially takes a slightly dubious tack in response, complaining that physicists can't predict earthquakes (they never claimed that they could) and that "the existing paradigm in physics doesn't work" (it does). But things rapidly get more subtle and interesting in the comments: a prime example of the blog format really enriching the debate.

Let me join in and also trumpet the quality of the comments I get here, and say thanks while I'm at it.

Saturday, October 28, 2006

Vibrating Filaments of Imagination

Some science. Brian Greene defends string theory against charges that it is of little use because it fails to deliver testable implications:

The Universe on a String, by Brian Greene, Commentary, NY Times: ...Einstein's belief that he'd one day complete the unified theory rarely faltered. Even on his deathbed he scribbled equations in the desperate but fading hope that the theory would finally materialize. It didn't.

In the decades since, the urgency of finding a unified theory has only increased. Scientists have realized that without such a theory, critical questions can't be addressed, such as how the universe began or what lies at the heart of a black hole. These unresolved issues have inspired much progress, with the most recent advances coming from an approach called string theory. Lately, however, string theory has come in for considerable criticism. And so, this is an auspicious moment to reflect on the state of the art. ...

Continue reading "Vibrating Filaments of Imagination" »

Friday, October 13, 2006

A Professor of Biocomplexity Has Advice for Economists

A professor of biocomplexity says economists won't get anywhere unless they stop acting like physicists and start adopting the models and analytical techniques of biology:

The Evolution of Future Wealth, by Stuart A. Kauffman, Scientific American: When the world changes unpredictably over the course of centuries, no one is shocked... Yet monumental and surprising transformations occur on much shorter timescales, too. Even in the early 1980s you would have been hard-pressed to find people confidently predicting the rise of the Internet or the fall of the U.S.S.R. Unexpected change bedevils the business community endlessly, despite all best efforts to anticipate and adapt to it—witness the frequent failure of companies’ five-year plans.

Economists have so far not been able to offer much help to firms trying to be more adaptive. Although economists have been slow to realize it, the problem is that their attempts to model economic systems focus on those in market equilibrium or moving toward it. They have drawn their inspiration predominantly from the work of physicists in this respect (often with good results, of course). For instance, the Black-Scholes model used since the 1970s to predict the volatility of stock prices was developed by trained physicists and is related to the thermodynamic equation that describes heat.

As economics attempts to model increasingly complicated phenomena, however, it would do well to shift its attention from physics to biology, because the biosphere and the living things in it represent the most complex systems known in nature. In particular, a deeper understanding of how species adapt and evolve may bring profound—even revolutionary— insights into business adaptability and the engines of economic growth.

One of the key ideas in modern evolutionary theory is that of preadaptation. The term may sound oxymoronic but its significance is perfectly logical: every feature of an organism, in addition to its obvious functional characteristics, has others that could become useful in totally novel ways under the right circumstances. The forerunners of air-breathing lungs, for example, were swim bladders with which fish maintained their equilibrium; as some fish began to move onto the margins of land, those bladders acquired a new utility as reservoirs of oxygen. Biologists say that those bladders were preadapted to become lungs. Evolution can innovate in ways that cannot be prestated and is nonalgorithmic by drafting and recombining existing entities for new purposes—shifting them from their existing function to some adjacent novel function—rather than inventing features from scratch.

A species’ suite of adaptive features defines its ecological niche through its relations to other species. In the same way, every economic good occupies a niche defined by its relations to complementary and substitute goods. As the number of economic goods increases, the number of ways in which to adaptively combine those goods takes off exponentially, forging possibilities for all-new niches. The autocatalytic creation of niches is thus a main driver of economic growth.

We do not yet know what makes some systems more adaptable than others, but research on complexity has yielded some clues. Some of my own work on physical systems called spin glasses suggests that the level of central control over subsidiary parts of a system is an important consideration. Too much control freezes the system into limited configurations; too little causes it to wander aimlessly. Only systems that hover on the border between order and chaos exhibit the needed general stability and capacity to explore the universe of possible solutions to challenges.

The path to maximum prosperity will depend on finding ways to build economic systems in which new niches will generate spontaneously and abundantly. Such an approach to economics is indeed radical. It is based on the emergent behavior of systems rather than on the reductive study of them. It defies conventional mathematical treatments because it is not prestatable and is nonalgorithmic. Not surprisingly, most economists have so far resisted these ideas. Yet there can be little doubt that learning to apply these lessons from biology to technology will usher in a remarkable era of innovation and growth.
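
Kauffman's Black-Scholes example refers to the standard formula for pricing a European call option, whose derivation does indeed run through a transformation into the heat equation. A minimal sketch of the textbook formula (the numerical inputs below are arbitrary illustrations, not anything from the article):

```python
import math

def norm_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (annualized)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call with 20% volatility and a 5% rate.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 2))  # 10.45
```

Note that the model takes volatility as an input and returns an option price; it is an equilibrium-pricing tool of exactly the physics-inspired kind Kauffman describes.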

When Stuart says "We do not yet know what makes some systems more adaptable than others" and that researchers are just discovering that the "level of central control" matters (his "spin glasses"), I have to wonder whether biocomplexologists shouldn't spend more time talking to economists - we have some pretty good ideas about how that works.

As for niches, or profit opportunities in economic terms, I think there is some value in that concept, particularly when combined with ideas such as those from an article by Olivia Judson, where the existence of niches, or the lack thereof, explains differential rates of evolutionary change (i.e., different rates of innovation). For example:

Newly erupted islands are famous for this. Over and over again, archipelagos see explosive bursts of evolutionary change and the rapid appearance of species found nowhere else. New Zealand is full (and was fuller) of an amazing array of unique flightless birds... Hawaii has an abundance of unique fruit flies, spiders, silverswords ... and birds. Madagascar has all manner of lemurs... And everyone knows about the Galápagos.

Rapid bursts of evolution can also happen in new lakes... Indeed, right now, the great lakes of tropical Africa are the backdrop for the fastest known radiation of vertebrates, the cichlid fishes. Lake Victoria, for example, ... has cichlids that eat algae, cichlids that eat other cichlids, cichlids that eat fish eggs — cichlids, in short, that have evolved to eat everything that can be eaten. Some fish live in shallow water; others prefer the deeps...

Ideas about adaptive radiation can also be tested in experiments. ...[M]any bacteria can whiz through hundreds of generations in a month. This makes it relatively easy to use bacteria to look at radiations. Here’s what you do. You create two sets of environments, one simple, and one complex. The complex environment might have several different places to live, or a variety of sources of carbon. The simple environment has just one habitat or foodstuff. Then, since bacteria reproduce asexually, you take genetically identical individuals, and release them into the two different environments. Sure enough, mutations happen, and the bacteria rapidly evolve to exploit the different niches. After a month, you will find that bacteria from the complicated environment have become genetically diverse. Those from the simple environment, in contrast, remain unevolved.

In short, empty niches are a license for evolutionary change. Once the new niches are full, natural selection acts to stop further change, and the rate of evolutionary change slows. Fossils, islands and test tubes — they all show the same dynamics. ...
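
The logic of the bacterial experiment - identical founders diversifying only when there are niches to occupy - can be caricatured in a toy simulation. The mutation rate, population size, and niche mechanics below are all invented purely to illustrate the idea, not taken from the experiments Judson describes:

```python
import random

def radiate(n_niches, generations=200, pop=100, seed=1):
    """Toy adaptive radiation: each individual carries one trait
    (the niche it occupies). Each generation, with small probability,
    an individual mutates into a random available niche. Returns the
    number of distinct niches occupied at the end."""
    rng = random.Random(seed)
    traits = [0] * pop                       # genetically identical founders
    for _ in range(generations):
        new = []
        for t in traits:
            if rng.random() < 0.05:          # occasional mutation...
                t = rng.randrange(n_niches)  # ...into any available niche
            new.append(t)
        traits = new
    return len(set(traits))

simple = radiate(n_niches=1)       # one habitat: nowhere to diverge to
complex_env = radiate(n_niches=8)  # many habitats: diversity accumulates
```

With a single niche, mutation has nowhere to go and the population stays uniform; with several, the same mutation process fills them all - the "empty niches are a license for evolutionary change" dynamic in miniature.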

But this tells us very little about how the islands and lakes that supply the new niches - the opportunities for profit - come to exist in relation to the biological system, and it does not acknowledge the additional complexity that arises when the actors within the system can respond rationally to change. Once again, I think economists have something to say about this - more than simply beginning with the exogenous emergence of lakes, islands, etc., and then modeling how these niches are filled.

Friday, October 06, 2006

Magnetic Personalities and Selfish Behavior

With enough magnets and metal hats, we may be able to induce people to behave just as our theories predict:

Selfish Impulse Set Free by Magnetic Pulse to Brain, Scientific American: The ultimatum game brings out conflicting impulses in human beings. In the game, a researcher offers two players a set amount of money and explains that if they agree on how to divvy it up they will keep that money for themselves. If they don't, neither will get anything. One player then offers the other a split. Our thirst for fairness dictates that most players will reject a patently unfair division--such as offering only $4 out of a total of $20. Yet, self interest would argue that even $4 is better than nothing... Brain imaging studies have shown that the prefrontal cortex is engaged when players ponder an offer and now new research finds that damping down activity in that region can set free our selfish side. Neuroscientist Daria Knoch and economist Ernst Fehr of the University of Zurich, along with colleagues, studied 52 young men as they mulled offers in the ultimatum game. ...[T]he scientists divided the recipients into three groups: those who would receive transcranial magnetic stimulation (TMS) to suppress the right side of their prefrontal cortices, those who would receive the treatment on the left side and, as controls, those who would receive no stimulation at all.

TMS affects electrical activity in the brain, altering neuron firing in the area where it is applied. Because previous research had shown that the prefrontal cortex played an active role during the ultimatum game, the team thought that interfering with its activity would release an innate tendency either to reject unfair offers or to accept them. During subsequent tests, 44.7 percent of the young men who experienced TMS on the right side of their prefrontal cortex accepted the most unfair offers--a split of 16 to four compared with just 14.7 percent of those whose left side had been stimulated and 9.3 percent of the controls. Further, 37 percent of those who underwent right side stimulation accepted all unfair offers--judged as any split less than 10 to 10--whereas no one was so accepting in the other groups. And they made the decision to accept an unfair offer as quickly as a fair one, while their colleagues needed much longer to decide.

Despite being unable to resist the temptation of selfishness in order to enforce social norms of fairness, the students were no less aware that they were being cheated; subsequent surveys revealed that all subjects considered an offer of four Swiss francs to be woefully inadequate. This marks the first time that brain researchers have controlled a specific behavior by using TMS on a specific region of the brain, the researchers state in the paper presenting the findings published online in Science on October 5. But the technology is not likely to show up in salesrooms anytime soon, thankfully; it takes at least 15 minutes of direct application to the skull to induce the changes, Knoch notes, and they only last a short while.
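
The payoff structure of the ultimatum game described above is simple enough to sketch. The fixed "fairness threshold" below is an invented stand-in for the responder's impulse to punish unfair splits, not a model taken from the study:

```python
def ultimatum(total, offer, min_acceptable_share=0.25):
    """One round of the ultimatum game. The proposer offers `offer`
    out of `total` to the responder. If the responder rejects (the
    offer falls below their fairness threshold), both get nothing;
    otherwise they split as proposed."""
    accepted = offer >= min_acceptable_share * total
    if accepted:
        return total - offer, offer   # (proposer payoff, responder payoff)
    return 0, 0

# The article's example: $4 offered out of $20.
# A purely self-interested responder (threshold 0) takes the money...
print(ultimatum(20, 4, min_acceptable_share=0.0))   # (16, 4)
# ...while a fairness-minded responder leaves both with nothing.
print(ultimatum(20, 4, min_acceptable_share=0.25))  # (0, 0)
```

The TMS result amounts to saying the stimulation lowered subjects' effective threshold: they still judged the offer unfair, but accepted it anyway.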

Friday, September 29, 2006

Friday Cats Control Your Mind Blogging

I'll admit I haven't quite figured out what 'Friday cat blogging' is all about, but I'll try to play anyway. Via a link in the Flash section of the Discover Magazine web site: are cats contributing to mass personality changes?

A University of California at Santa Barbara study finds countries with high rates of Toxoplasmosis are more neurotic and suggests the cat-borne parasite could be causing mass personality changes.

Here's the article from the Mind and Brain section of Seed Magazine:

Continue reading "Friday Cats Control Your Mind Blogging" »

Wednesday, September 27, 2006

Science and Religion

To me, this isn't that controversial, but apparently this editorial is generating a large response at Scientific American:

Science and Religion, Editorial, Scientific American, October, 2006: It is practically a rite of passage that scientists who reach a certain level of eminence feel compelled to publicly announce and explain their religious beliefs. The new books by Owen Gingerich and Francis Collins, reviewed this month on page 94, follow in the footsteps of Arthur Eddington and Max Planck. Yes, these authors say, they believe in God, and no, they see no contradiction between their faith and their research— indeed, they see each as confirming the other.

Why this enduring fascination? Doubtless it is partly a reaction to the tensions that always seem to arise between science and religion: the recurring war over the teaching of evolution and creationism, the statements by physicists that they are plumbing the instant of “creation” or searching for a “God particle,” the reassurances of some evangelicals that a Second Coming will make global warming irrelevant. In writing books about their own faith, religious scientists may be hoping to point the way to reconciliations for the rest of society.

Yet the tension may be greatly exaggerated. Americans are famously religious, but according to studies by the National Science Foundation, they say that they hold science in higher regard than do the people of any other industrial country. Surveys indicate that scientists are only half as likely as the general public to describe themselves as religious, but 40 percent still do. As Albert Einstein wrote, it takes fortitude to be a scientist—to persevere despite the frustrations and the long lonely hours—and religious inspiration can sometimes provide that strength.

Unquestionably, the findings of science conflict with certain religious tenets. Cosmology, geology and evolutionary biology flatly contradict the literal truths of creation myths from around the world. Yet the overthrow of religion is not a part of the scientific agenda. Scientific research deals in what is measurable and definable; it cannot begin to study what might lie beyond the physical realm or to offer a comprehensive moral philosophy. Some scientists (and some nonscientists) are led to atheism in part by contemplation of the success of science in accounting for observable phenomena.

Some find support for their spiritual beliefs in the majesty of the reality revealed by science. Others are unmoved from agnosticism. Which philosophy an individual embraces is a personal choice, not a dictate of science. Science is fundamentally agnostic, yet it does not force agnosticism even on its practitioners.

No matter how earnest their testimonies, when researchers write about their faith in God, they are not expressing a strictly scientific perspective. Rather they are struggling, as people always have, to reconcile their knowledge of a dispassionate universe with a heartfelt conviction in a more meaningful design.

As for healing a social rift, most of the debates that are commonly depicted as religion versus science are really not questions of science at all; they are disagreements among various systems of beliefs and morals. The policy fight over embryonic stem cells, for example, centers on when and how one segment of a pluralistic society should curtail the behaviors of those who hold different values. Our attention should focus not on the illusory fault line between science and religion but on a political system that too often fails to engage with the real issues.

Friday, September 15, 2006

"A Free Pass on Editorial Irresponsibility"

Jeff Sachs, realizing how much it is needed, offers to educate the Wall Street Journal editorial board on global warming:

Fiddling while the Planet Burns, by Jeffrey D. Sachs, Scientific American: Another summer of record-breaking temperatures... Only one place seemed to remain cool: the air-conditioned offices of the editorial board of the Wall Street Journal. As New York City wilted, the editors sat insouciant and comfortable, hurling editorials of stunning misdirection at their readers, continuing their irresponsible drumbeat that global warming is junk science.

Now, I have nothing against the Wall Street Journal. It is an excellent paper, whose science column and news reporting have accurately and carefully carried the story of global climate change. The editorial page sits in its own redoubt, separated from the reporters—and from the truth. ...

The Wall Street Journal editorial page for years has railed against these scientific findings on climate change, even as the global scientific consensus has reached nearly 100 percent, including the reports commissioned by the skeptical Bush White House. ...

The Wall Street Journal is the most widely read business paper in the world. Its influence is extensive. Yet it gets a free pass on editorial irresponsibility. The Earth Institute at Columbia University has repeatedly invited the editorial team to meet with leading climate scientists. On many occasions, the news editors have eagerly accepted, but the editorial writers have remained safe in their splendid isolation.

Let me make the invitation once again. Many of the world’s leading climate scientists are prepared to meet with the editorial board of the Wall Street Journal and to include in that meeting any climate skeptics that its editorial board wants to invite. The board owes it to the rest of us to conduct its own “open-minded search for scientific knowledge.”

The editors should accept the offer:

Global warming could ignite sudden calamity, by Fiona Harvey, Financial Times: ...Within a year, the US space agency disclosed this week, an area of sea ice “the size of Texas” had been lost from the Arctic... A growing body of scientific opinion suggests the world may be about to experience not a gradual rise in temperatures over several decades but a wild careering into climate chaos.

That is because some of the changes triggered by warming temperatures create a “feedback” effect of their own. These feedbacks can cause the warming trend to accelerate further or bring serious disruption to regions of the world. ... [A] NASA report appears to confirm this feedback loop. There is more apparent confirmation in a study last month in Science ... that found the speed at which the Greenland ice sheet was melting had risen threefold in the past two years compared with the previous five.

Peter Smith, special professor in sustainable energy at Nottingham University, told the British Association science festival last week: “We could reach the tipping point within 15 to 20 years from now...” Jay Gulledge, senior research fellow at the Pew Centre on Global Climate Change, says climatologists “have dramatically under-estimated how responsive the climate is to warming”...

“It is not too late to save the Arctic, but it requires that we begin to slow CO2 emissions this decade,” says James Hansen, director of NASA’s Goddard Institute for Space Studies. ... Though some warn against overstating the feedback effect and the near approach of tipping points, most climate scientists accept the possibility that the climate will change abruptly rather than warm gradually. ...

Tuesday, September 12, 2006

Reasoning with Your Inner Compulsive Self

Neuroeconomic theories of decision-making hypothesize that within us there are "two warring sides: the first deliberative and forward-looking, the second impulsive and myopic" and the side that wins - the rational side or the compulsive side - determines our choices in a particular circumstance. One goal of this research is to discover how to provide incentives to reduce or eliminate the impulsive, irrational choices:

What neuroeconomics tells us about money and the brain, by John Cassidy, New Yorker: ...A few weeks ago, ... at New York University’s Center for Brain Imaging ..., I met Peter Sokol-Hessner, a twenty-four-year-old graduate student... Sokol-Hessner is ... currently working on a research project in the emerging field of neuroeconomics, which uses state-of-the-art imaging technology to explore the neural bases of economic decision-making. ...

In order to depict economic decisions mathematically, economists ... assume that human behavior is both rational and predictable. They imagine ... a representative human, Homo economicus, endowed with consistent preferences, stable moods, and an enviable ability to make only rational decisions. This ... yielded ... theories that had genuine predictive value, but economists were obliged to exclude from their analyses many phenomena that didn’t fit the rational-actor framework, such as stock-market bubbles, drug addiction, and compulsive shopping...

Richard Thaler ... ; beginning in 1987, ... published a series of influential articles describing various types of apparently irrational behavior... Acknowledging that people don’t always behave rationally was an important, if obvious, first step. Explaining why they don’t has proved much harder...

Not long ago, I drove to Princeton University to speak to Jonathan Cohen, a ... neuroscientist... Cohen has collaborated with economists on several imaging studies. “The key idea in neuroeconomics is that there are multiple systems within the brain,” Cohen said. “Most of the time, these systems coöperate in decision-making, but under some circumstances they compete with one another.” ...

Today, most economists agree that, left alone, people will act in their own best interest, and that the market will coordinate their actions to produce outcomes beneficial to all. Neuroeconomics potentially challenges both parts of this argument. If emotional responses often trump reason, there can be no presumption that people act in their own best interest. And if markets reflect the decisions that people make when their limbic structures are particularly active, there is little reason to suppose that market outcomes can’t be improved upon.

Consider saving for retirement. ... Saving money is difficult, because it involves giving up things that we value now—a new car, a vacation, fancy dinners—in order to secure our welfare in the future. All too often, the desire for immediate gratification prevails. ... [Harvard's David] Laibson has collaborated with Loewenstein, Cohen, and Samuel McClure, another Princeton psychologist, to examine what happens in people’s brains when they are forced to choose between immediate and delayed rewards. ...

The results provide further evidence that reason and emotion often compete inside the brain, and they also help explain a number of puzzling phenomena, such as the popularity of Christmas savings accounts, which people contribute to throughout the year. “Why would anybody put money into a savings account that offers zero interest and imposes a penalty if you withdraw cash early?” Cohen said. “It simply doesn’t make sense in terms of a ... rational economic model. The reason is that there is this limbic system that produces a strong drive. When it sees something it likes, it wants it now. So you need some type of pre-commitment device to make people save.” ...
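The present bias behind Laibson's work is often formalized as quasi-hyperbolic ("beta-delta") discounting: delayed rewards all take a one-time hit of beta on top of ordinary exponential discounting. A minimal sketch - the parameter values here are illustrative, not taken from the article:

```python
def discounted_value(amount, delay_days, beta=0.7, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounting: immediate rewards are
    taken at face value; any delayed reward is additionally penalized
    by the present-bias factor beta."""
    if delay_days == 0:
        return amount
    return beta * (delta ** delay_days) * amount

# Today, the impulsive beta penalty makes the immediate 100 beat 110 tomorrow...
assert discounted_value(100, 0) > discounted_value(110, 1)

# ...but shift the same pair 30 days into the future and the chooser
# prefers to wait -- a preference reversal that a pure exponential
# discounter could never exhibit, and the reason pre-commitment
# devices like Christmas clubs can be attractive.
assert discounted_value(100, 30) < discounted_value(110, 31)
```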

There is ... a ... fundamental objection to neuroeconomics and the Platonic view of decision-making. “There is no evidence that hidden inside the brain are two fully independent systems, one rational and one irrational,” Paul W. Glimcher, a neuroscientist ...[at] N.Y.U.’s Center for Neuroeconomics, and two of his colleagues ... wrote in a recent paper. “There is, for example, no evidence that there is an emotional system, per se, and a rational system, per se, for decision making at the neurobiological level.”

In place of the reason-versus-passion model, Glimcher and his colleagues have adopted a view of decision-making that, paradoxically, bears a striking resemblance to orthodox economics. In one experiment, Glimcher and a colleague trained thirsty monkeys to direct their eyes to one of two illuminated targets, which earned them differing chances of getting juice rewards...

Glimcher ... used electrodes to track neural firing... He discovered that ... their brains act as if they were solving a mathematical problem, which is what economists assume when they depict people as rational agents trying to maximize their ... “utility.” “What seems to be emerging from these early studies is a remarkably economic view of the primate brain,” Glimcher and his colleagues wrote. “The final stages of decision-making seem to reflect something very much like a utility calculation.”

If Glimcher’s results could be demonstrated in human brains, they might undermine a lot of neuroeconomics... Economists who have staked their careers on neuroeconomics [reply]... “It isn’t a wholesale rejection of the traditional methodology,” David Laibson said... “It is just a recognition that decision-making is not always perfect. People try to do the best they can, but they sometimes make mistakes. The idea that a single mechanism maximizes welfare and always gets things right—that concept is on the rocks. But models that I call ‘cousins’ of the rational-actor model will survive.”

The modified theories to which Laibson referred assume that people have two warring sides: the first deliberative and forward-looking, the second impulsive and myopic. Under certain circumstances, the impulsive side prevails, and people succumb to things like drug addiction, overeating, and taking wild gambles in the stock market. For now, the new models await empirical verification, but neuroeconomists are convinced that they’re onto something...

I'm resistant to neuroeconomics, but also interested in what its practitioners are finding. This captures the essential ideas, but it's only around 15% of the original - there's quite a bit more in the article.

Update: Greg Mankiw discovers this article and says a few words about it.

Open Access Academic Publishing

This discussion of open access academic publishing came by email (thank you). The question is whether the scientific community should move away from the conventional form of publishing to an open access model and, if so, how to fund the online publications, a question faced by economics as well. This is from a blog devoted mainly to string theory, Not Even Wrong:

Not Even Wrong: Open Access Publishing: There’s a big debate within the scientific community in general about how and whether to move away from the conventional model of scientific publishing (journals supported by subscriptions paid by libraries, only available to subscribers) to a model where access to the papers in scientific journals is free to all ("Open Access"). The main problem with this is figuring out how to pay for it. ...

The CERN task force has gathered a lot of interesting data about the particle physics literature, counting roughly 6000 papers/year, of which about 80% are theoretical. They found that about half of the journals publishing most particle physics papers are willing to move to an open-access model, at a cost of between $1,000 and $3,000 per paper. These included APS and IOP journals, but did not include Elsevier journals... The APS has announced a program that would make papers in its journals open access at a cost of $975-$1,300 per paper, and Elsevier has announced something similar at around $3,000/paper. The CERN task force proposes raising $6-8 million/year over the next few years to start supporting the half of the journals (not including Elsevier ones) that it has identified as ready for Open Access.
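The proposed budget is roughly consistent with the paper counts and per-paper costs quoted above. A back-of-the-envelope check - the assumption that the "ready" half of the journals carries about half of the papers is mine:

```python
papers_per_year = 6000            # CERN task force count for particle physics
share_ready = 0.5                 # assume the "ready" journals carry ~half the papers
cost_low, cost_high = 1000, 3000  # reported per-paper cost range, in dollars

low = papers_per_year * share_ready * cost_low    # 3.0 million
high = papers_per_year * share_ready * cost_high  # 9.0 million

# The $6-8M/year proposal sits inside this range.
print(f"${low/1e6:.1f}M - ${high/1e6:.1f}M per year")
```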

What is being proposed here is basically to give up on what a lot of people have hoped would develop: a model of free journals, whose cost would be small since they would be all-electronic, small enough to be supported by universities and research grants. Instead the idea here is to keep the current journals and their publishers in place, just changing the funding mechanism from library subscriptions to something else, some form that would fund access for all.

The CERN task force suggests various sources for funds over the next few years, in a transition period, but doesn’t address the long term funding problem. If you fund these things out of, say, NSF grants, when Congress decides to cut the NSF budget, there’s a serious danger of the plug getting pulled on a field’s entire scientific literature. One popular idea is that researchers themselves should pay the cost. The problem with this is that the bulk of the literature is theory papers, mostly from people who can’t afford this. ...

The CERN task force doesn’t seem to me to be providing a viable long-term plan for moving to the kind of open access model they are supporting. It doesn’t address the fundamental problem of keeping a system where physicists hand over the scientific literature to Elsevier, then have to figure out how to buy it back...

The CERN report also contains a lot of highly debatable arguments. It claims that the current refereeing process is extremely important, valuable, and must be maintained at all costs, ignoring the fact that virtually everyone accesses papers at the arXiv, not at the journal. It’s true that the refereed version in a journal may be improved and have errors fixed, but authors are generally free to replace the original preprint version by a corrected one on the arXiv. The description given in the report of the “high standards of peer review” doesn’t agree with the reality of what is going on (see the Bogdanov affair). The mathematics literature still has a functional peer-reviewing system and it plays a very important role of keeping the number of incorrect proofs and unreliable results to a minimum, but the particle physics literature is very different. The report does continually make the point that the refereed journal system is crucial to the ways institutions evaluate people and decide whether to hire or promote them, but it doesn’t address the issue of whether this is a good thing. ...

Links to a lot more information and discussion on this issue can be found in the original post. For the economics, one recent paper is "The Economics of Open-Access Journals," May 2006, by Mark McCabe & Christopher Snyder (the paper is fairly technical). They also cite recent related work.

In economics, the role of the refereeing process and journal publications in assessing research quality is essential. Whatever we do, that function needs to be preserved and hopefully improved. One area that could be improved, though certainly not the only one, is the average time from paper submission to its ultimate publication in a journal. The time it takes is far too long for any age, let alone a digital one.

Friday, August 04, 2006

The Framing Effect

Here's new research on the framing effect connecting it to particular regions of the brain. How active is your orbital and medial prefrontal cortex? I added the graphs from the more detailed version of the article:

The Emotional Brain Weighs Its Options, by Greg Miller, Neuroscience News of the Week, Science Magazine: Faced with a decision between two packages of ground beef, one labeled "80% lean," the other "20% fat," which would you choose? The meat is exactly the same, but most people would pick "80% lean." The language used to describe options often influences what people choose, a phenomenon behavioral economists call the framing effect. Some researchers have suggested that this effect results from unconscious emotional reactions.

Now a team of cognitive neuroscientists reports findings on page 684 that link the framing effect to neural activity in a key emotion center in the human brain, the amygdala. They also identify another region, the orbital and medial prefrontal cortex (OMPFC), that may moderate the influence of emotion on decisions: The more activity subjects had in this area, the less susceptible they were to the framing effect. "The results could hardly be more elegant," says Daniel Kahneman, an economist at Princeton University who pioneered research on the framing effect 25 years ago (Science, 30 January 1981, p. 453).


Fig. 1. The financial decision-making task. At the beginning of each trial, participants were shown a message indicating the starting amount of money that they would receive (e.g., "You receive £50")... Subjects were instructed that they would not be able to retain the whole of this initial amount, but would next have to choose between a sure option and a gamble option... The sure option was presented in the Gain frame trials (A) as an amount of money retained from the starting amount (e.g., keep £20 of the £50) and in the Loss frame trials (B) as an amount of money lost from the starting amount (e.g., lose £30 of the £50). The gamble option was represented as a pie chart depicting the probability of winning (green) or losing (red) all of the starting money. The expected outcomes of the gamble and sure options were equivalent. Gain frame trials were intermixed pseudo-randomly with Loss frame trials. No feedback concerning trial outcomes was given during the experiment.

In the new study, a team led by Benedetto De Martino and Raymond Dolan of University College London used functional magnetic resonance imaging (fMRI) to monitor the brain activity of 20 people engaged in a financial decision-making task. At the beginning of each round, subjects inside the fMRI machine saw a screen indicating how much money was at stake in that round: £50, for example. The next screen offered two choices. One option was a sure thing, such as "Keep £20" or "Lose £30." The other option was an all-or-nothing gamble. The odds of winning--shown to the subjects as a pie chart--were rigged to provide the same average return as the sure option. In interviews after the experiment, participants said they'd quickly realized that the sure and gamble options were equivalent, and most said that they had split their responses 50-50 between the two choices.
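The "rigged" equivalence between the sure option and the gamble is simple expected-value arithmetic. A sketch using the article's example amounts - the win probability of 0.4 is implied by the equivalence, not stated in the article:

```python
start = 50.0       # starting stake: £50
sure_keep = 20.0   # "Keep £20" (equivalently framed as "Lose £30")

# For equal expected value, the all-or-nothing gamble must pay the full
# stake with probability sure_keep/start.
p_win = sure_keep / start  # 0.4

gamble_ev = p_win * start + (1 - p_win) * 0.0
assert gamble_ev == sure_keep  # the frames differ; the arithmetic does not
```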


Fig. 2. Behavioral results. (A) Percentages of trials in which subjects chose the gamble option in the Gain frame and the Loss frame. Subjects showed a significant increase in the percentage of trials in which the gamble option was chosen in the Loss frame with respect to the Gain frame [61.6% > 42.9% (P < 0.001, t19 = 8.06)]. The dashed line represents risk-neutral behavior (choosing the gamble option in 50% of trials). Error bars denote SEM. (B) Each bar represents, for each individual subject, the percentage difference between how often subjects chose the gamble option in the Loss frame as compared to the Gain frame. A hypothetical value of zero represents a complete indifference to the framing manipulation (i.e., fully "rational" behavior). All participants, to varying degrees, showed an effect of the framing manipulation.

But they hadn't. When the sure option was framed as a gain (as in "Keep £20"), subjects played it safe, gambling only 43% of the time on average. If it was framed as a loss, however, they gambled 62% of the time.

When the researchers examined the fMRI scans, the amygdala stood out. This brain region fired up when subjects either chose to keep a sure gain or elected to gamble in the face of a certain loss. It grew quiet when subjects gambled instead of taking a sure gain or took a sure loss instead of gambling. De Martino suggests that the amygdala activity represents an emotional signal that pushes subjects to keep sure money and gamble instead of taking a loss.


Fig. 3. fMRI results. (A) Interaction contrast [(Gsure + Lgamble) – (Ggamble + Lsure)]: brain activations reflecting subjects' behavioral tendency to choose the sure option in the Gain frame and the gamble option in the Loss frame (i.e., in accordance with the frame effect). ... (C) Reverse interaction contrast [(Ggamble + Lsure) – (Gsure + Lgamble)]: brain activations reflecting the decision to choose counter to subjects' general behavioral tendency. ... Effects in (A) and (C) were significant at P < 0.001; for display purposes they are shown at P < 0.005. (B and D) Plots of percentage signal change for peaks in right amygdala... (B) and ACC... (D). Error bars denote SEM.

De Martino says he expected to find that subjects with the most active amygdalas would be more likely to keep sure gains and gamble when faced with a certain loss. But no such correlation turned up. Instead, activity in OMPFC best predicted individuals' susceptibility to the framing effect. De Martino speculates that OMPFC integrates emotional signals from the amygdala with cognitive information, such as the knowledge that both options are equally good. "People who are more rational don't perceive emotion less, they just regulate it better," he says.


Fig. 4. Rationality across subjects: fMRI correlational analysis. Regions showing a significant correlation between rationality index [between-subjects measure of susceptibility to the framing manipulation...] and the interaction contrast image [(Gsure + Lgamble) – (Ggamble + Lsure)] are highlighted. (A) Orbital and medial prefrontal cortex (OMPFC) ... Effects were significant at P < 0.001; for display purposes they are shown at P < 0.005. (B) Plot of the correlation of parameter estimates for R-OFC with the rationality index for each subject (r = 0.8, P < 0.001).

"It's a nice, strong correlation between individual differences in behavior and individual differences in the brain," says Russell Poldrack, a neuroscientist at the University of California, Los Angeles. Yet Elizabeth Phelps, a cognitive neuroscientist at New York University, cautions that fMRI studies alone can rarely prove a brain region's causal role. She suggests examining people with damage to the amygdala or OMPFC to clarify how these regions contribute to the framing effect.

Wednesday, July 26, 2006

Do You Have an Expert Mind?

Something a bit different. Scientific American looks at the characteristics of expert minds. Summarizing, "The preponderance of psychological evidence indicates that experts are made, not born," i.e. motivation seems to matter more than talent. I cut quite a bit - the full article is available for free:

The Expert Mind, by Philip E. Ross, Scientific American: A man walks along the inside of a circle of chess tables, glancing at each for two or three seconds before making his move. On the outer rim, dozens of amateurs sit pondering their replies until he completes the circuit. The year is 1909, the man is José Raúl Capablanca of Cuba, and the result is a whitewash: 28 wins in as many games. The exhibition was part of a tour in which Capablanca won 168 games in a row. How did he play so well, so quickly? And how far ahead could he calculate under such constraints? "I see only one move ahead," Capablanca is said to have answered, "but it is always the correct one."

He thus put in a nutshell what a century of psychological research has subsequently established: much of the chess master's advantage over the novice derives from the first few seconds of thought. This rapid, knowledge-guided perception, sometimes called apperception, can be seen in experts in other fields as well...

But how do the experts in these various subjects acquire their extraordinary skills? How much can be credited to innate talent and how much to intensive training? ... The collected results of a century of such research have led to new theories explaining how the mind organizes and retrieves information. What is more, this research may have important implications for educators. Perhaps the same techniques used by chess players to hone their skills could be applied in the classroom to teach reading, writing and arithmetic.

The Drosophila of Cognitive Science ...[W]hen expertise undoubtedly exists--as in, say, teaching or business management--it is often hard to measure, let alone explain. Skill at chess, however, can be measured, broken into components, subjected to laboratory experiments and readily observed in its natural environment, the tournament hall. It is for those reasons that chess has served as  ... the "Drosophila of cognitive science," as it has been called. ...

The feats of chess masters have long been ascribed to nearly magical mental powers. This magic shines brightest in the so-called blindfold games in which the players are not allowed to see the board. In 1894 French psychologist Alfred Binet, the co-inventor of the first intelligence test, asked chess masters to describe how they played such games. He began with the hypothesis that they achieved an almost photographic image of the board, but he soon concluded that the visualization was much more abstract. Rather than seeing the knight's mane or the grain of the wood from which it is made, the master calls up only a general knowledge of where the piece stands in relation to other elements of the position. ...

Let us say he has somehow forgotten the precise position of a pawn. He can find it, as it were, by considering the stereotyped strategy of the opening--a well-studied phase of the game with a relatively limited number of options. Or he can remember the logic behind one of his earlier moves--say, by reasoning: "I could not capture his bishop two moves ago; therefore, that pawn must have been standing in the way...." He does not have to remember every detail at all times, because he can reconstruct any particular detail whenever he wishes by tapping a well-organized system of connections.


Monday, July 24, 2006

"Scientific Communities Include Tortoises and Hares, Mavericks and Mules"

Historians of science would be surprised if there weren't people unwilling to concede that global warming is caused by human activity, no matter how much evidence accumulates against their position:

Global Warming -- Signed, Sealed and Delivered, by Naomi Oreskes, Commentary, LA Times: An Op-Ed article in the Wall Street Journal a month ago claimed that a published study affirming the existence of a scientific consensus on the reality of global warming had been refuted. This charge was repeated again last week, in a hearing of the House Committee on Energy and Commerce.

I am the author of that study, which appeared two years ago in the journal Science, and I'm here to tell you that the consensus stands. The argument put forward in the Wall Street Journal was based on an Internet posting; it has not appeared in a peer-reviewed journal — the normal way to challenge an academic finding. (The Wall Street Journal didn't even get my name right!)

My study demonstrated that there is no significant disagreement within the scientific community that the Earth is warming and that human activities are the principal cause. .... Not a single paper in a large sample of peer-reviewed scientific journals between 1993 and 2003 refuted the consensus position...

To be sure, there are a handful of scientists, including MIT professor Richard Lindzen, the author of the Wall Street Journal editorial, who disagree with the rest of the scientific community. To a historian of science like me, this is not surprising. In any scientific community, there are always some individuals who simply refuse to accept new ideas and evidence. This is especially true when the new evidence strikes at their core beliefs and values.

Earth scientists long believed that humans were insignificant in comparison with the vastness of geological time and the power of geophysical forces. For this reason, many were reluctant to accept that humans had become a force of nature, and it took decades for the present understanding to be achieved. Those few who refuse to accept it are not ignorant, but they are stubborn. They are not unintelligent, but they are stuck on details that cloud the larger issue. Scientific communities include tortoises and hares, mavericks and mules.

A historical example will help... In the 1920s, the distinguished Cambridge geophysicist Harold Jeffreys rejected the idea of continental drift on the grounds of physical impossibility. In the 1950s, geologists and geophysicists began to accumulate overwhelming evidence of the reality of continental motion... By the late 1960s, the theory of plate tectonics was on the road to near-universal acceptance.

Yet Jeffreys, by then Sir Harold, stubbornly refused to accept the new evidence, repeating his old arguments about the impossibility of the thing. He was a great man, but he had become a scientific mule. For a while, journals continued to publish Jeffreys' arguments, but after a while he had nothing new to say. He died denying plate tectonics. The scientific debate was over.

So it is with climate change today. As American geologist Harry Hess said in the 1960s about plate tectonics, one can quibble about the details, but the overall picture is clear.

Yet some climate-change deniers insist that the observed changes might be natural, perhaps caused by variations in solar irradiance or other forces we don't yet understand. Perhaps there are other explanations for the receding glaciers. But "perhaps" is not evidence. ... Climate-change deniers can imagine all the hypotheses they like, but it will not change the facts...

None of this is to say that there are no uncertainties left — there are always uncertainties in any live science. Agreeing about the reality and causes of current global warming is not the same as agreeing about what will happen in the future. There is continuing debate in the scientific community over the likely rate of future change: not "whether" but "how much" and "how soon." And this is precisely why we need to act today: because the longer we wait, the worse the problem will become, and the harder it will be to solve.

Sunday, July 23, 2006

Nature vs Nurture: Poverty Matters

This is interesting and important for our approach to social problems. We have come to believe that genetics largely determines our fate, but this research shows that environment can matter too, particularly poor environments that do not allow genetic potential to be fully expressed:

After the Bell Curve, by David L Kirp, Sunday Magazine, NY Times: When it comes to explaining the roots of intelligence, the fight between partisans of the gene and partisans of the environment is ancient and fierce. ... What is at stake is not just the definition of good science but also the meaning of the just society. The nurture crowd is predisposed to revive the War on Poverty, while the hereditarians typically embrace a Social Darwinist perspective.

A century’s worth of quantitative-genetics literature concludes that a person’s I.Q. is remarkably stable and that about three-quarters of I.Q. differences between individuals are attributable to heredity. This is how I.Q. is widely understood — as being mainly “in the genes” — and that understanding has been used as a rationale for doing nothing about seemingly intractable social problems like the black-white school-achievement gap and the widening income disparity. ... In their 1994 best seller, “The Bell Curve,” Richard Herrnstein and Charles Murray relied on this research to argue that the United States is a genetic meritocracy and to urge an end to affirmative action. Since there is no way to significantly boost I.Q., prominent geneticists like Arthur Jensen of Berkeley have contended, compensatory education is a bad bet.

But what if the supposed opposition between heredity and environment is altogether misleading? A new generation of studies shows that genes and environment don’t occupy separate spheres — that much of what is labeled “hereditary” becomes meaningful only in the context of experience. “It doesn’t really matter whether the heritability of I.Q. is this particular figure or that one,” says Sir Michael Rutter of the University of London. “Changing the environment can still make an enormous difference.” If heredity defines the limits of intelligence, the research shows, experience largely determines whether those limits will be reached. And if this is so, the prospects for remedying social inequalities may be better than we thought.

When quantitative geneticists estimate the heritability of I.Q., they are generally relying on studies of twins. Identical twins are in effect clones who share all their genes; fraternal twins are siblings born together — just half of their genes are identical. If heredity explains most of the difference in intelligence, the logic goes, the I.Q. scores of identical twins will be far more similar than the I.Q.’s of fraternal twins. And this is what the research has typically shown. Only when children have spent their earliest years in the most wretched of circumstances, ... has it been thought that the environment makes a notable difference. Otherwise, genes rule.
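The twin-study logic sketched above is usually operationalized with Falconer's formula, which estimates heritability as twice the gap between identical-twin and fraternal-twin correlations (since identical twins share all their genes and fraternal twins share half, on average). A minimal illustration - the correlation values are of the kind the classical literature reports, not data from the article:

```python
def falconer_h2(r_identical, r_fraternal):
    """Falconer's estimate of heritability: twice the difference between
    identical-twin and fraternal-twin trait correlations."""
    return 2 * (r_identical - r_fraternal)

# Illustrative middle-class-sample correlations (hypothetical values):
print(f"{falconer_h2(0.85, 0.48):.2f}")  # about three-quarters of the variation
```

Turkheimer's point is that this kind of estimate, computed on mostly middle-class twin samples, need not carry over to poor families, where the two correlations converge and the estimated heritability collapses.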

Then along came Eric Turkheimer to shake things up. Turkheimer, a psychology professor at the University of Virginia, is the kind of irreverent academic who gives his papers user-friendly titles like “Spinach and Ice Cream” and “Mobiles.” He also has a reputation as a methodologist’s methodologist. In combing through the research, he noticed that the twins being studied had middle-class backgrounds. The explanation was simple — poor people don’t volunteer for research projects — but he wondered whether this omission mattered.

Together with several colleagues, Turkheimer searched for data on twins from a wider range of families. He found what he needed... In a widely-discussed 2003 article, he found that, as anticipated, virtually all the variation in I.Q. scores for twins in the sample with wealthy parents can be attributed to genetics. The big surprise is among the poorest families. Contrary to what you might expect, for those children, the I.Q.’s of identical twins vary just as much as the I.Q.’s of fraternal twins. The impact of growing up impoverished overwhelms these children’s genetic capacities. ... home life is the critical factor for youngsters at the bottom of the economic barrel. “If you have a chaotic environment, kids’ genetic potential doesn’t have a chance to be expressed,” Turkheimer explains. “Well-off families can provide the mental stimulation needed for genes to build the brain circuitry for intelligence.”

This provocative finding was confirmed in a study published last year. An analysis of the reading ability of middle-aged twins showed that even half a century after childhood, family background still has a big effect — but only for children who grew up poor. ...

In seeking to understand the impact of nature and nurture on I.Q., researchers have also looked at adopted children. Consistent with the proposition that intelligence is mainly inherited, these studies have almost always found that adopted youngsters more closely resemble their biological than their adoptive parents. ...

But researchers in France noted a shortcoming in these adoption studies and set out to correct it. Since poor families rarely adopt, those investigations have had to focus only on youngsters placed in well-to-do homes. What’s more, because most adopted children come from poor homes, almost nothing is known about adopted youngsters whose biological parents are well-off.

What happens in these rare instances of riches-to-rags adoption? To answer that question, two psychologists, Christiane Capron and Michel Duyme, combed through thousands of records from French public and private adoption agencies. ...

Regardless of whether the adopting families were rich or poor, Capron and Duyme learned, children whose biological parents were well-off had I.Q. scores averaging 16 points higher than those from working-class parents. Yet what is really remarkable is how big a difference the adopting families’ backgrounds made all the same. The average I.Q. of children from well-to-do parents who were placed with families from the same social stratum was 119.6. But when such infants were adopted by poor families, their average I.Q. was ... 12 points lower. The same holds true for children born into impoverished families... These studies confirm that environment matters — the only, and crucial, difference between these children is the lives they have led.

A later study of French youngsters adopted between the ages of 4 and 6 shows the continuing interplay of nature and nurture. Those children had little going for them. Their I.Q.’s averaged 77, putting them near retardation. Most were abused or neglected as infants, then shunted from one foster home or institution to the next.

Nine years later, they retook the I.Q. tests, and contrary to the conventional belief that I.Q. is essentially stable, all of them did better. The amount they improved was directly related to the adopting family’s status. Children adopted by farmers and laborers had average I.Q. scores of 85.5; those placed with middle-class families had average scores of 92. The average I.Q. scores of youngsters placed in well-to-do homes climbed more than 20 points, to 98 — a jump from borderline retardation to a whisker below average. That is a huge difference — a person with an I.Q. of 77 couldn’t explain the rules of baseball, while an individual with a 98 I.Q. could actually manage a baseball team — and it can only be explained by pointing to variations in family circumstances.

Taken together, these studies show that the issue has changed: it is no longer a matter of whether the environment matters but when and how it matters. And poverty, quite clearly, is an important part of the answer.

That is not to say that an affluent home is necessarily a good home. ... On average, though, well-off households have the resources needed to provide better settings for the fullest development of a child’s natural abilities...

Is there a way to reduce such gaps? In recent years, the case for investing in early-childhood education has become stronger and stronger. The federal Early Head Start program for infants and toddlers is effective when it is well implemented — in part because it succeeds in getting parents more involved with their children. Recent research also shows that one year of high-quality state prekindergarten can give children as much as a seven-month advantage in vocabulary; this, in turn, is a good predictor of how well they will read when they are in primary school. As you would expect, poor children benefit the most, especially when they are in classes with middle-class youngsters.

The push for universal preschool is not a red-state-blue-state issue; the pioneers in the area are Oklahoma and Georgia, not generally known for social progressivism. And with the support of business groups and prominent philanthropists ..., it may enter the national agenda. If it does, it will be a small step toward a society in which not only the most fortunate children will be able to “max out” their potential.

I always liked Brad DeLong's takedown of Michael Barone's claim that "maybe" the fall in social mobility in America is due to the fact that a high-IQ genetic elite has risen to the top of the fair meritocracy that is our society.

Thursday, July 20, 2006

Swift Boating the Planet

That's a title from a Paul Krugman column. Here's what he was talking about. This is "Senator Inhofe's Pet Weasel" from Scientific American's blog:

Senator Inhofe's Pet Weasel, by John Rennie, SciAm Blog: Here's a follow-up to my previous post about the misleading press release from Senator James Inhofe and the Environment and Public Works Committee. While writing the post, I was tempted to use the term "swift-boating" to describe the release's attempts to slime the reputation of climatologist James Hansen. I refrained, because why drag even more political baggage into the discussion unnecessarily?

Turns out that my impulse may have had more foundation than I'd realized. Darren Samuelsohn, writing for Greenwire, reports the following...:

A 71-year-old former insurance executive, Inhofe has never been shy about confronting climate scientists, environmentalists, Hollywood producers and fellow senators. But in setting his sights on the press, Inhofe appears to be incorporating a strategy hatched by the committee's new communications director, Marc Morano.

As a reporter for the conservative Cybercast News Service from 2001 until earlier this year, Morano peppered his climate reporting with skeptics' views that have surfaced as themes in Inhofe's recent press attacks. Earlier this year, for example, Morano wrote about NASA scientist James Hansen's contributions to the 2004 Democratic presidential nominee, Sen. John Kerry (Mass.). Inhofe's press release questions why Brokaw failed to mention the political ties of Hansen and other scientists interviewed for the Discovery report.

Morano, who worked as a producer in the mid-90s for radio commentator Rush Limbaugh, was also among the first reporters to write about the Swift Boat Veterans for Truth campaign scrutinizing Kerry's Vietnam War record. And earlier this year, Morano penned an article questioning the Purple Heart medals of Rep. John Murtha (D-Pa.), a leading critic of Bush's Iraq policy.

Kudos to Inhofe for hiring someone who knows how to conduct important political discourse with the highest respect for facts, honesty and integrity.

By the way, if you wonder how this Greenwire story came to my attention, it was apparently e-mailed to me by Marc Morano's own office. I guess at EPW they believe there really is no such thing as bad publicity.

Thursday, July 13, 2006

Putting Thoughts to Work

Face to face, human input-output devices aren't too bad. My brain doesn't get too far ahead of my mouth most of the time, and even when I talk fast people generally seem to follow what I am saying.

But when it comes to electronic communications, it's different. Think of all the time you spend just on email. Wouldn't it be easier if you could just think the words and have them appear on the screen through some kind of magic electronic dictation process? Scientists have taken the first steps in that direction in a study published today in Nature. Using a brain implant, a paralyzed man was able to control the movement of a cursor and, in doing so, perform several simple tasks:

Paralyzed Man Uses Thoughts to Move a Cursor, by Andrew Pollack, NY Times: A paralyzed man with a small sensor implanted in his brain was able to control a computer, a television set and a robot using only his thoughts, scientists reported yesterday. ... “If your brain can do it, we can tap into it,” said John P. Donoghue, a professor of neuroscience at Brown University who has led development of the system...

In a variety of experiments, the first person to receive the implant ... moved a cursor, opened e-mail, played a simple video game called Pong and drew a crude circle on the screen. He could change the channel or volume on a television set, move a robot arm somewhat, and open and close a prosthetic hand. Although his cursor control was sometimes wobbly, the basic movements were not hard to learn. “I pretty much had that mastered in four days,” Mr. Nagle, 26, said ... He said the implant did not cause any pain. ...


The sensor measures 4 millimeters by 4 millimeters — less than a fifth of an inch long and wide — and contains 100 tiny electrodes. The device was implanted in the area of Mr. Nagle’s motor cortex responsible for arm movement and was connected to a pedestal that protruded from the top of his skull. When the device was to be used, technicians plugged a cable connected to a computer into the pedestal. So Mr. Nagle was directly wired to a computer, somewhat like a character in the “Matrix” movies.

Mr. Nagle would then imagine moving his arm to hit various targets. The implanted sensor eavesdropped on the electrical signals emitted by neurons in his motor cortex as they controlled the imaginary arm movement. Obstacles must be overcome, though, before brain implants become practical. ...
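To give a rough feel for what the decoding step involves, here is a toy sketch; this is my own illustration, not the study's actual algorithm, and all the numbers in it are made up. The idea is to learn a least-squares map from the firing rates of the 100 electrodes to an intended two-dimensional cursor velocity, using simulated "calibration" data in which the true mapping is known:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration data: 2000 time steps of firing rates on 100
# electrodes, and the cursor velocities the user intended at each step.
n_samples, n_channels = 2000, 100
true_W = rng.normal(size=(n_channels, 2))          # hidden rates-to-velocity map
rates = rng.poisson(5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_W + rng.normal(0.0, 1.0, size=(n_samples, 2))

# Fit a linear decoder by least squares, then decode neural activity
# back into predicted 2-D cursor velocities.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ W
```

With enough calibration samples relative to the noise, the decoded velocities track the intended ones closely; real systems face nonstationary signals and far noisier recordings, which is part of why the article says obstacles remain.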

If implants were available that enabled you to process information more efficiently, would you want one? If implants gave workers a productivity advantage, would you feel pressured to get one even if you weren't fully comfortable with it? I imagine I would get the implant with little hesitation.

Sunday, July 09, 2006

Setting Goldberg and Sullivan Straight on Global Warming

More on global warming and pundits from SciAm Observations:

Two More Pundits Whistle Past the Graveyard, SciAm Observations: Robert J. Samuelson's daft column about global warming last week (which I addressed previously) elicited some like-minded punditry from Jonah Goldberg of National Review Online and Andrew Sullivan of Time, so I might as well comment on those, too. Goldberg wonders:

What if science could prove 100% that the earth was warming dangerously but that this was 100% natural...? I suspect this would scatter the current environmental coalitions and antagonists... To be sure, many environmentalists would still be concerned. But, I think, a large amount of the passion would be gone in certain quarters once the fun of blaming capitalism and mankind was out of the equation. I think the reluctance on the part of some on the right to fix the problem would evaporate while the reluctance to "tamper" with nature would cause at least some environmentalists to second-guess global warming science.

He's using this thought experiment in part to try to start an argument about whether environmentalists are hypocritical about global warming. I'll leave it to others to take that bait. What he doesn't seem to recognize is that his "what if" as phrased is so vague that it's almost meaningless, because he's wishing away much of what climate science knows to be true. And in the process, he distorts what the reasonable responses to global warming could be.

Specifically...

Unfortunately, ... we have already been blindly driving climate change for decades (at least), and our future economic development guarantees that we will continue to have our foot on the accelerator for a long time to come. Not adjusting climate isn't really an option for us anymore; we're stuck with trying to moderate our influence and hope for the best.

Andrew Sullivan, meanwhile, wrote this:

It occurs to me that the global warming debate is not unlike the WMD-terrorist debate, except the sides are reversed. According to Ron Suskind, Dick Cheney's "one percent doctrine" means that if there's a one percent chance that a terrorist could have access to a WMD, we must act as if it were a certainty - because the outcome, however unlikely, would be too disastrous to risk. On global warming, Gore expresses a not-too-dissimilar equation: if there's a small chance that human behavior could lead to environmental catastrophe, we should act as if it were a certainty - because waiting too long is too big a risk to take.

[...]

In both cases, however, the evidence is complicated and hard to pin down with absolute certainty. We know we are at much greater risk now from Islamist terror than we were a decade ago - but measuring how much, and where from specifically, is very hard. Equally, we know that global warming is real, but whether it has reached or will soon reach a dangerous tipping point is not a given. And in both cases, the entire argument rests a great deal on what we do not and cannot know. It seems to me prudent to take both risks seriously, but not so seriously that we abandon objective, empirical judgment.

What is there to say any more about smart people like Sullivan who maintain their agnosticism on global warming by shrugging that "the evidence is complicated and hard to pin down with absolute certainty"? That there is a difference between reasonable uncertainty and a willful refusal to draw an unwelcome conclusion?

Sullivan rushes for comfort into the arms of scientist skeptics like Richard Lindzen when he knows that Lindzen's position is very much at odds with the climatological consensus--the "objective, empirical judgment" he says we should heed. The irony of Sullivan stretching for scientific justification is that he himself draws a parallel with the Iraq "weapons of mass destruction" fiasco but doesn't take the lesson. Matt Stoller has had the pithiest response to that:

This is rich. The rush to war was premised on the assumption that the judgment of the Bush administration (and Sullivan) was superior to that of professional weapons inspectors like Hans Blix. This turned out to be false. Now, the foot-dragging on global warming is premised on the assumption that the judgment of the Bush administration (and Sullivan) is superior to that of the global scientific community.

Sullivan writes:

On global warming, Gore expresses a not-too-dissimilar equation: if there's a small chance that human behavior could lead to environmental catastrophe, we should act as if it were a certainty - because waiting too long is too big a risk to take.

But there isn't a "small chance" that human activity could cause catastrophic climate changes. There's a very large chance based on the extrapolation of what continuing to pump CO2 into the atmosphere will do--it's like Sullivan is saying there's a small chance a car will crash when it is racing toward a brick wall. Maybe Sullivan is playing games with words by reserving "environmental catastrophe" for the worst-case scenarios of ice-cap melting and so on; but 13 million Bangladeshis who stand to lose their homes if sea levels rise just one meter, for example, might disagree.

Setting Samuelson Straight on Global Warming

The Scientific American blog Observations plays "bad cop in responding to the prattle" by Robert Samuelson in his recent column on global warming:

Samuelson's Wishful Thinking on Global Warming, SciAm Observations: My good-cop colleague George Musser is doing the saintly work of reasoning with global-warming skeptics by calmly laying out the evidence for them. That leaves an opening for somebody to be bad cop in responding to the prattle of more politically influential skeptics and deniers. ...

This past Wednesday, Robert J. Samuelson, contributing editor to Newsweek and columnist for the Washington Post, published "Global Warming's Real Inconvenient Truth." You can read it all here. Shorter version: "Trying to do anything about global warming now would be hard, and therefore stupid; the smart strategy is to wait for a magical technology to make our problem go away effortlessly."

Several points on this:

1. Samuelson does a fine job of keeping to the hardcore skeptic game plan of denial-in-depth, which you may recall goes like this:

(a) Global warming is not real.
(b) Even if it is real, it is entirely natural.
(c) Even if people are causing it, it is nothing to worry about.
(d) Okay, it is something to worry about, but there's nothing we can do about it (optional: anymore) except adapt. Economic growth and technology will eventually make it all okay.
-- Start at the top and work down only as necessary; whenever possible, find opportunities to jump higher up the list again and repeat.

Samuelson's column perches at (d) but he manages to make a backward swipe at (b) ... when he writes,

I'm unqualified to judge between those scientists (the majority) who blame man-made greenhouse gases and those (a small minority) who finger natural variations in the global weather system. But if the majority are correct, the IEA report indicates we're now powerless.

Bravo! Well played, sir!

2. Several times in his column, Samuelson seems to invoke an odd definition of "hypocrisy." He approvingly quotes himself from 1997:

Global warming may or may not be the great environmental crisis of the next century, but -- regardless of whether it is or isn't -- we won't do much about it. We will ... make some fairly solemn-sounding commitments to avoid it. But ... little will be done. ... Global warming promises to become a gushing source of national hypocrisy.

And follows up with:

Ambitious U.S. politicians also practice this self-serving hypocrisy. Gov. Arnold Schwarzenegger has a global warming program. Gore counts 221 cities that have "ratified" Kyoto. ... None of these programs will reduce global warming. They're public relations exercises and -- if they impose costs -- are undesirable.

Samuelson wants to paint saying one thing while doing the opposite and trying but failing to accomplish something as morally equivalent. Only by denying the sincerity of people working against global warming can he deride acts meant to show leadership and commitment as empty "public relations exercises." ...

4. The real hope for the future, according to Samuelson, is yet-to-be-identified technology. In his words, global warming is really just "an engineering problem." ... What that new technology might be and when it might emerge, Samuelson doesn't really know and doesn't seem disposed to fret about. He's confident that it will arrive, however. So the Samuelson plan for global warming in a nutshell (no jokes, please!) works like this:

  • Do nothing to curb CO2 emissions now because only hypocrites would want to try.
  • Invent new technologies that make CO2 go away.
  • Enjoy world saved from global warming.

At least it sounds realistic. ... But by delaying doing anything about global warming, we leave ourselves fewer options about how to deal with it later. ...

Wednesday, July 05, 2006

Evidence for Global Warming

This is part of a much longer commentary in the NY Times on the evidence for global warming:

The Evidence for Global Warming, by Philip M. Boffey, Commentary, NY Times: While the debate over what to do about global warming heats up ..., scientists have made substantial progress in recent years in defining the threat and estimating its likely impacts. The picture they paint is worrisome. The evidence suggests that humans are altering the atmosphere in ways never before seen. The only question is how damaging the consequences might be, and what can be done to head off or adapt to the worst...

Skeptics say these things are most likely part of the natural variation of Earth's climate, unrelated to man-made warming. ... [G]iven the huge potential consequence of the debate, it's important to examine all the evidence carefully. So let's look at the various pieces of the global warming debate one at a time.

The biggest question is the one on which there is least dispute. The leading scientific organizations with relevant expertise have overwhelmingly adopted the view that human-induced global warming is a serious problem. ... Only the American Association of Petroleum Geologists, with deep ties to the fossil fuel industry, has demurred.

Meanwhile, the vast majority of research reports in leading scientific journals tend to support the prevailing view that human activities are mostly responsible for driving up temperatures. An analysis of 928 abstracts from leading scientific journals between 1993 and 2003 found that ... [n]ot a single paper disagreed with the consensus. ...

Still, there is plenty of disagreement over how fast the climate will change and how dire the consequences might be...

Analyses of the gases trapped in ancient ice cores from Antarctica have revealed that important greenhouse gases have reached their highest atmospheric concentrations in at least 650,000 years. The concentrations will only get worse... Other things being equal, the rise in these gases will cause temperatures to rise. That's simple physics, agreed to by all sides.

What's not agreed to is how worrisome the temperature increase will be. The global average surface temperature rose about 1 degree Fahrenheit over the 20th century. The change hardly seemed noticeable, except in polar regions where the increases were larger. Yet even that seemingly small increase is affecting the global environment by thawing the frozen tundra, melting mountain glaciers, adding to stress on coral reefs, causing some species to change habitats, and increasing the number of hot days while decreasing the number of cold days, to cite a few examples. And the warming trend may be picking up speed. The last few decades of the 20th century were probably the warmest in a thousand years.

Skeptics have an answer for this. They say surface temperatures were probably as high or higher during the Medieval Warm Period that ushered in the last millennium, well before humans emitted vast amounts of greenhouse gases. That suggests to them that today's warming might simply be a continuation of long-term natural cycles. But the magnitude and geographic extent of the warmth back then is uncertain. ...

And for the rest of this century, temperatures will almost certainly keep rising. The Earth has been storing heat in its oceans, which means there is about 1 degree Fahrenheit more warming ... that will occur during this century even without any additional greenhouse emissions. All major components of the climate system are warming — the lower atmosphere, the surface, and the seas — so the heating cannot readily be attributed to natural mechanisms that transfer heat from one part of the globe to another.

The projections for the future also get far more worrisome than that 1 degree. Various scenarios used by climate modelers suggest that average surface temperatures could easily rise another 4 to 8 degrees Fahrenheit by the end of the century, based on mid-range projections. That is a level that many experts deem dangerous.

If the warmer climate increases the destructive power of hurricanes and typhoons, as two studies indicate it already has, the storm devastation could get worse... If the massive ice sheets on Greenland and Antarctica melt faster than long estimated — a trend that some recent studies suggest has already started — the added water could drive up sea levels by several feet in this century, inundating some low-lying coastal areas. If mountain glaciers around the world continue to shrink rapidly, ... areas that rely on them to store water and release it slowly may face shortages of drinking water. If high temperatures allow disease-carrying insects and plant pests to invade new areas, as some studies show is beginning to happen, or if higher temperatures increase the frequency of heat waves and heavy rainfall, as the world's science academies deem likely, then the health and environmental consequences could be significant.

None of this is settled science or sure to happen. But these and other potential risks show what's at stake in the climate debate, and underscore the need to act promptly to head off the worst dangers. ... [big cut]

With all of the most prestigious scientific organizations convinced that global warming is an increasing menace — and with the vast majority of research articles in leading scientific journals tending to support that consensus — it would seem wildly irresponsible not to believe it is important to curb emissions. These are the institutions with the most expertise, and they have been studying the issue in unparalleled depth and breadth. Their judgment deserves the utmost respect and attention. ...

The world keeps pumping greenhouse gases into the atmosphere in what amounts to a huge uncontrolled experiment, and a gamble that all will turn out fine. But ... if the worst-case scenarios turn out to be accurate, we could be dooming much of the planet to a very unpleasant future.

To answer the global warming question, scientists have to separate the cyclical part of temperature variation from the trend, and then understand the sensitivity of the trend (and cycle) to changes in greenhouse gases.

Economists face a similar problem. An important debate in economics is how much of the variation in GDP is caused by supply shocks, and how much is caused by demand shocks. To answer this and other important questions, the cyclical part of GDP must be separated from the trend component. (Demand shocks have short-run, or cyclical effects, but do not affect the long-run trend; supply shocks can have both short-run and long-run effects. Thus, the trend is dependent upon supply side factors while the cycles can be affected by both demand and supply shocks. The cyclical variation is generally thought to be dominated by demand shocks, though that is not universally accepted).

In order to differentiate a change in the trend for GDP or other macroeconomic variables from a change in GDP around the trend, long time-series are needed, and the longer the better. For example, are recent increases in GDP growth driven by high levels of productivity part of a cycle where growth and productivity will return to lower historical levels with time, or is this a change in trend so that we can expect permanently higher productivity and growth? The answer is important for all sorts of questions such as how high tax collections - and hence the deficit - will be in the future.

Unfortunately, we do not have the equivalent of samples from ice cores from the distant past to guide us -- reliable economic data don't exist prior to around sixty years ago, and we are often limited to forty or so years of data (since 1959, because money data don't exist before then). Because of this, our ability to differentiate between the trend and cyclical components of economic variables is not as precise as we would like. New theory could help, but a longer span of data would be even better.
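As a concrete sketch of one standard trend-cycle decomposition -- the Hodrick-Prescott filter, which the discussion above does not name, so treat the choice of method as my own illustration -- the trend is the series that best fits the data subject to a smoothness penalty on its second differences:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split a series into trend and cycle.

    The trend tau minimizes sum((y - tau)**2) + lam * sum((second diffs of tau)**2),
    which has the closed-form solution (I + lam * K'K) tau = y,
    where K is the second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    K = np.zeros((n - 2, n))            # second-difference matrix, (n-2) x n
    for i in range(n - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * K.T @ K, y)
    return trend, y - trend

# Toy example: a linear trend plus a sinusoidal "cycle" plus noise,
# standing in for 30 years of quarterly GDP-like data.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 0.5 * t + 2.0 * np.sin(2 * np.pi * t / 20) + rng.normal(0.0, 0.3, 120)
trend, cycle = hp_filter(y, lam=1600.0)   # lam=1600 is conventional for quarterly data
```

The point about short samples shows up directly here: with only a few decades of data, a slow-moving cycle and a change in trend are hard to tell apart no matter which filter you use.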

Tuesday, July 04, 2006

Son of Mr. Green Genes

Genetically engineered crops don't bother me. But they worry some people. Here's an alternative in which scientists still identify promising genes in the laboratory, but rely on nature, rather than laboratory splicing, to bring the genes together:

Beyond Genetically Modified Crops, by Jeremy Rifkin, Commentary, Washington Post: For years the life science companies -- Monsanto, Syngenta, Bayer, Pioneer Hi-Bred, etc. -- have argued that genetically modified food is the next great scientific and technological revolution in agriculture and the only efficient and cheap way to feed a growing population.... Nongovernmental organizations, including my own, the Foundation on Economic Trends, have been cast as the villains ..., and often categorized as modern versions of the English Luddites, accused of continually blocking scientific and technological progress because of our opposition to genetically modified food.

Now, in an ironic twist, new, cutting-edge technologies have made gene splicing and transgenic crops obsolete... The new frontier is called genomics, and the new agricultural technology is called marker-assisted selection, or MAS. This technology offers a sophisticated method to greatly accelerate classical breeding. A growing number of scientists believe that MAS -- which is already being introduced into the market -- will eventually replace genetically modified food. Moreover, environmental organizations ... are guardedly supportive of MAS technology.

Rapidly accumulating information about crop genomes is allowing scientists to identify genes associated with traits such as yield, and then to scan "crop relatives" for the presence of those genes. Instead of using molecular splicing techniques to transfer a gene from an unrelated species into the genome of a food crop ..., scientists are using MAS to locate desired traits in other varieties of a particular food crop, or its relatives that grow in the wild. Then they cross-breed those related plants with the existing commercial varieties to improve the crop.

With MAS, the breeding of new varieties always remains within a species, thus greatly reducing the risk of environmental harm and potential adverse health effects associated with genetically modified crops. Using MAS, researchers can upgrade classical breeding and reduce by 50 percent or more the time needed to develop new plant varieties by pinpointing appropriate plant partners at the gamete or seedling stage. ...

The wrinkle here is that the continued introduction of genetically modified crops could contaminate existing plant varieties, making the new MAS technology more difficult to use. A 2004 survey conducted by the Union of Concerned Scientists found that non-genetically modified seeds from three of America's major agricultural crops -- corn, soybeans and canola -- were already "pervasively contaminated with low levels of DNA sequences originating in genetically engineered varieties of these crops." Cleaning up contaminated genetic programs could prove to be as troublesome and expensive in the future as cleaning up the viruses that invade software programs.

As MAS technology becomes cheaper and easier to use, and as knowledge in genomics becomes more dispersed and easily available over the next decade, plant breeders around the world will be able to exchange information about "best practices"... Already, plant breeders are talking about "open source" genomics, envisioning the sharing of genes. The struggle between a younger generation of sustainable agriculture enthusiasts anxious to share genetic information and entrenched company scientists determined to maintain control over the world's seed stocks through patent protection is likely to be hard-fought, especially in the developing world. ...

I don't know enough about the underlying science to know whether MAS restricts the types of crops that can be derived relative to laboratory splicing, whether genetic engineering really does threaten MAS technology, or to adequately compare the two techniques generally. So, while it sounds promising, I'm guarded about jumping aboard until I know more, particularly since this group has an agenda to eliminate genetically engineered crops entirely.
_____________________
Title note: Mr. Green Jeans ... was the right-hand man to Captain Kangaroo ... on the popular children's television program, Captain Kangaroo. Mr. Green Jeans earned his moniker from his distinctive apparel, a pair of farmer's overalls in his signature green. He was a talented and inquisitive handyman who provided assistance... He frequently visited the Captain with the latest addition to his menagerie of zoo animals.

Mr. Green Jeans was the subject of an urban legend that claimed he was the father of the late musician Frank Zappa. The confusion probably arose from the title of a song by Zappa, "Son of Mr. Green Genes," from Zappa's 1969 album Hot Rats. Zappa was, in fact, the son of Francis Vincent Zappa, Sr.

Friday, June 30, 2006

Taboo Research Topics

This discussion is about taboo research topics in biology, but such taboos exist in economics as well. Thus, these comments could just as easily be directed at our profession. Even with the protection of tenure, there are some topics that few, if any, economists will dare address. As noted in the essay, just ask Larry Summers:

The Subject is Taboo, by Olivia Judson, Commentary, NY Times: ...I was 7 or 8, and ... had just spent the day walking around a golf course with a great friend of my mother’s ..., a man called Tim, and his opponent, a woman called Nora... Nora ... trounced him. Worse, she didn’t do it from ... the “ladies’ tees.” ... She did it from the hardest of all, the “tiger tees.”

I was chatting happily about this, ... not knowing that Nora’s tigerish defeat ... was, in Tim’s mind, an exasperating humiliation. I soon found out. As I relived Nora’s victory yet again, Tim leaned over to me and said, “Olivia. The subject is taboo. Do you know what that means?” And he explained.

Looking back, it seems somehow fitting that I learned this word in the context of male versus female performance. For certain subjects in science are taboo — and research into genetic differences in ability or behavior between different groups of people is one of the biggest of all.

The reasons for this are obvious. Some of the most ghastly atrocities of the 20th century were carried out under the banner of the “master race” and nasty pseudoscientific notions about genetic superiority. Sexual and racial discrimination still persist. ... Many geneticists I know are scared — really scared, and with reason — of having their careers ruined if they ask any other questions. Look no further than Lawrence Summers, former president of Harvard University, who was pilloried ... for wondering if mathematical ability in men and women might have some genetic underpinning. A sign has been hung on the door that says “Area Closed to Research.”...

Research into the genetics behind certain sorts of group differences — skin color, the ability to digest milk, the underpinnings of autism and the like — is now starting to be published. But other subjects remain ferociously contentious. Let me tell you a tale of three papers.

Last September, the journal Science published two papers that claimed natural selection had acted recently and strongly on two human genes involved in brain development. Let’s look at what this claim means. ...

The two [genes] featured in the Science papers are among those thought to affect brain growth. ... What do we know about these genes...? Not much. ... both appear to be involved in cell division, for example — but no one knows ... exactly. We also know ... these genes come in several subtly different forms. Whether these subtle differences matter is unknown. ...

Now, what does it mean to say that natural selection has acted on these genes? As I’ve been discussing ... Sometimes, natural selection promotes rapid change: a mutant form of a gene appears and spreads quickly — within a few hundred generations, say. Evidence of a rapid spread — within the last several thousand years — of a new version of each of the two genes is what the Science papers announced.

The papers caused a stir. For the papers also claimed that the new versions of the genes ... were more rare in sub-Saharan Africa than elsewhere. All this means is that, in populations outside Africa, the new forms of the genes may have conferred some sort of advantage — perhaps related to head size, perhaps not... But it didn’t take long for the whisperings to start that the new forms of the genes must be involved in intelligence.

The whispering has no basis: there is no evidence whatsoever that the variants have anything to do with intelligence. ... But brains, genes and race form an explosive mixture. So much so that ... the lead scientist on the papers, Bruce Lahn, will now be retiring from working on brain genes.

Meanwhile, another paper has appeared ... in ... the online journal Public Library of Science Biology. This paper failed to confirm the earlier result. However, the authors found that versions of other genes, also thought to be involved in brain function or structure, have been under recent natural selection ... and this time, the population is not outside Africa, but in it. ... Again, we have no idea what this means. But strangely, these results have received almost no attention: there has been no whispering this time.

I offer this story as ... an illustration of some of the grave difficulties in this field of research. ... As you can imagine, it is virtually impossible to work in an area as poisonously political as this one. On one side, you have neo-fascist groups twisting the most innocuous data out of shape; on the other, well-intentioned anti-racists who don’t even want the questions asked. Worse still, as ... the “intelligent design” movement shows, it is not always easy to make sure that science is discussed rationally. Result: most geneticists are totally unnerved — and who can blame them?

Perhaps, if open debate is impossible, declaring the area taboo is the best way to proceed. I don’t pretend to have a solution. But here are some thoughts. ...

If we declare brain genetics out of bounds, it will make it harder to understand how our brains are built ... and treat the diseases that affect people’s brains, especially in old age. ...[T]he study of human genetics has already illuminated a lot that is interesting and important about our evolutionary past, and how we have come to be. Handled well, this is a tremendously exciting area for research. Do we want to limit it? ...

[G]enetic information is pouring in. Questions about the genetics of human differences are not going to go away. ... Scientists have an essential role to play in mediating understanding. Do we really want to scare good scientists from this field? Then the only people left researching it could be those whose agendas genuinely are sinister.

Now that is a frightening thought.
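As an addendum: the column's claim earlier that a favored gene variant can spread "within a few hundred generations" is easy to check with a toy haploid selection model. This is my own sketch, not from the column, and the 5% fitness advantage and 1% starting frequency are assumed numbers:

```python
# Toy haploid selection model (illustrative only): a variant with a modest
# fitness edge, starting rare, takes over the population surprisingly fast.

def generations_to_fix(p=0.01, advantage=1.05, threshold=0.99):
    """Generations for a variant at frequency p with relative fitness
    `advantage` to exceed the given frequency threshold."""
    gens = 0
    while p < threshold:
        # standard one-locus selection update for a haploid population
        p = p * advantage / (p * advantage + (1.0 - p))
        gens += 1
    return gens

print(generations_to_fix())  # roughly 190 generations
```

A 5% advantage is strong but not extreme, and it still fixes the variant in a couple hundred generations, consistent with the "rapid spread" the papers claimed to detect.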

Wednesday, June 21, 2006

'Niches' and Adaptive Innovation

It's interesting to think of this article about why evolution progresses at different rates both over time and across regions in terms of how competition among firms for survival drives innovation in a competitive marketplace. The term 'niche' as used below would be, for example, an industry where firms are earning greater than normal profits, or a profitable market opportunity that has not yet been exploited:

No Vacancy, No Evolution, Evolution 101, by Olivia Judson, Commentary, NY Times: Yesterday, I claimed that a major reason large evolutionary changes often don’t happen is that competition from the creatures around you stops you from changing. In other words, in environments that are already rich in different species, natural selection often prevents large changes. My piece of evidence for this was a claim that, when you take other organisms away — when you reduce competition for food, or space — evolution explodes. Today, I want to examine the truth of these claims more closely.

I want to start with a question: Why aren’t there more insects in the sea? Insects, after all, make up the bulk of all known animal species — most animals are insects — yet hardly any insects live in the sea.

It’s not because of the water. Many insects live in freshwater for at least part of their lives. Think of mayfly nymphs, or caddis fly larvae... Or think of water striders, skating along the top of a pond... Could it be the salt? No. Brine fly larvae live in Utah’s Great Salt Lake. Some mosquito larvae do fine in salt, too — they can even live in the hypersalty Dead Sea. ...

I put it to you that the reason so few insects get beyond rock pools or dip their toes beneath the low tide mark is that the niches they would occupy in the ocean proper are taken already. Crabs, lobsters, shrimp, barnacles, water fleas (which are ... little crustaceans) and company have got all niches covered. My prediction — which I hope we won’t be testing — is that if all the crustaceans were wiped out of the oceans, insects would move straight in.

My prediction rests on the fact that it is typically difficult to evolve to occupy a niche that is already full. An invader must be better at exploiting the niche than the current occupant, who has already evolved to make effective use of it. By contrast, when a niche is empty — when seeds are falling to the ground and no one is eating them, say — it doesn’t matter if, at first, an animal is a bit inept at finding and opening the seeds.

Consistent with this idea is the observation that when new niches open up — perhaps because new islands or lakes or cave systems have formed, or because an asteroid has hit the earth and eradicated millions of species — the first organisms to become established in the new environments evolve quickly and reliably into all sorts of new species. This phenomenon is known as adaptive radiation.

Newly erupted islands are famous for this. Over and over again, archipelagos see explosive bursts of evolutionary change and the rapid appearance of species found nowhere else. New Zealand is full (and was fuller) of an amazing array of unique flightless birds... Hawaii has an abundance of unique fruit flies, spiders, silverswords ... and birds. Madagascar has all manner of lemurs... And everyone knows about the Galápagos.

Rapid bursts of evolution can also happen in new lakes... Indeed, right now, the great lakes of tropical Africa are the backdrop for the fastest known radiation of vertebrates, the cichlid fishes. Lake Victoria, for example, appears to have dried up and then refilled around 14,600 years ago. Since then, perhaps 500 different species of cichlid have evolved... Lake Victoria has cichlids that eat algae, cichlids that eat other cichlids, cichlids that eat fish eggs — cichlids, in short, that have evolved to eat everything that can be eaten. Some fish live in shallow water; others prefer the deeps...

Ideas about adaptive radiation can also be tested in experiments. ...[M]any bacteria can whiz through hundreds of generations in a month. This makes it relatively easy to use bacteria to look at radiations. Here’s what you do. You create two sets of environments, one simple, and one complex. The complex environment might have several different places to live, or a variety of sources of carbon. The simple environment has just one habitat or foodstuff. Then, since bacteria reproduce asexually, you take genetically identical individuals, and release them into the two different environments. Sure enough, mutations happen, and the bacteria rapidly evolve to exploit the different niches. After a month, you will find that bacteria from the complicated environment have become genetically diverse. Those from the simple environment, in contrast, remain unevolved.

In short, empty niches are a license for evolutionary change. Once the new niches are full, natural selection acts to stop further change, and the rate of evolutionary change slows. Fossils, islands and test tubes — they all show the same dynamics. ...

This implies that it is not competition per se that produces rapid innovation, but competition coupled with emergent profitable 'niches.'
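The full-versus-empty niche logic can be made concrete with a minimal invasion-condition sketch. This is my own construction, not from the column; the growth rates, carrying capacity K, and death rate d are made-up numbers:

```python
# Sketch: a less-efficient invader can establish in an empty niche but not in
# one held by a better-adapted incumbent. Per-capita growth in a niche with
# total population N is r*(1 - N/K) - d.

def grows(r_invader, resident_pop, K=1000.0, d=0.1):
    """Can a rare invader increase when the niche already holds resident_pop?"""
    return r_invader * (1.0 - resident_pop / K) - d > 0.0

r_incumbent, r_invader = 1.0, 0.5   # the incumbent exploits the niche better
K, d = 1000.0, 0.1

# Incumbent's equilibrium: r*(1 - N/K) = d  =>  N = K*(1 - d/r) = 900
resident_at_equilibrium = K * (1.0 - d / r_incumbent)

print(grows(r_invader, resident_pop=0.0))                     # True: empty niche
print(grows(r_invader, resident_pop=resident_at_equilibrium)) # False: full niche
```

With the niche empty, even the less-efficient invader's growth clears the death rate; at the incumbent's equilibrium it does not, which is Judson's point about why invading a full niche is hard.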

Tuesday, June 13, 2006

Why Don't Birds Do It?

If you like genetics, evolutionary biology, and so on, the NY Times has been running a good series of columns by Olivia Judson. (To get a reading on their accuracy, I had one of our molecular biologists read one of the columns on genetic structure and active/passive copying, as I had a question anyway. She said in reply: "This is exactly right! Genomes are architectural jumbles -- and we're the better for it! Well, mostly. Cancer is probably an unavoidable by-product of our penchant for sloppiness.") Last week the topic was the building of the genome - to me, horizontal gene transfer through bacteria and other means is particularly intriguing and solves lots of puzzles involving uneven evolutionary changes - and this week it's experiments the author would like to see performed. This one proposes a way to find out why birds don't get pregnant:

Why Don’t Birds Get Pregnant?, by Olivia Judson, Commentary, NY Times: Welcome back to Mad Scientist Week. Today, I want to look at ... why is it that birds don’t get pregnant... Or to put it another way, why do all birds lay eggs?

Questions like this matter because the answer tells us something important about the paths evolution can take. Some people suppose that from time to time, evolution gets stuck — that certain evolutionary directions are impossible (or at least, very difficult). According to this school of thought, birds can’t evolve pregnancy: some aspect of their biology stops them. Others argue that when such a phenomenon fails to evolve, the reason is not that it can’t, but that it’s not beneficial. This view says birds don’t evolve pregnancy because there’s presently no advantage in it.

At the moment, we can’t easily tell which view is correct: no one has done any experiments. Today, I’m going to propose one.

First, I should say what I mean by pregnancy. I mean: giving birth to live young. The alternative is laying eggs. However, different animals have different kinds of pregnancies. Humans, mice, dogs and other mammals ... make tiny eggs; the young draw nourishment from their mothers as they grow. In contrast, pythons give birth to live young — but rather than nourishing the developing embryos..., a female python makes large eggs which she keeps inside her body until the young have finished developing. ... Marsupial mammals — kangaroos, koalas, opossums and that crowd — do a mix of the two. Marsupial eggs are (relatively) large and yolky, but the mother also transfers nutrients to the embryo. In short, there’s a spectrum of ways to be pregnant.

And a lot of organisms have taken up the practice... Birds, however, are missing. Of the five major groups of animals with backbones, only birds have never evolved pregnancy. Why not?

When I raised this question in my first column, a number of readers answered it by observing that birds fly. But ... the answer is not so simple. Bats fly — yet they have pregnancy. Moreover, many birds do not fly. ... Yet none of these has switched from eggs. Antarctic penguins, it seems to me, would do especially well with live birth — they wouldn’t have this silly business of trying to keep their eggs warm when the temperature is 50 below. And they’ve had plenty of time: the ancestors of modern penguins abandoned flight at least 100 million years ago.

So, is it because they can’t? Or is it that they could but they don’t? Both schools of thought can make good arguments. ... But although both sides can point to this and that, neither has proof their argument is right. To tell which side is right, we need to do an experiment.

The ideal strategy would be to try to evolve a pregnant bird. But this might take a rather long time. A more practical aim would be to try to discover whether the imagined constraints are real. Here’s one way we could do that.

When lizards switch from eggs to pregnancy, they don’t do it overnight. They start gradually, by keeping the eggs inside their bodies for a bit longer and laying the eggs when the embryos are more advanced. In short, the first step in evolving pregnancy is becoming egg-retentive.

Interestingly, egg retention is virtually unknown in birds. In almost all species, the female lays the egg as soon as it’s been fertilized. So I suggest that we take a bird, such as a cuckoo, which does sometimes keep its eggs inside for longer than is usual, and see whether we can stretch out its egg-retention time.

To do this, we’d breed cuckoos as horse breeders breed racehorses — except that instead of choosing the fastest animals to breed, we’d choose the most retentive ones. Cuckoos that keep their eggs for longest before laying would be rewarded by having their chicks go into the next round.

If it turns out to be very difficult to shift the retention time, we’d know there is little genetic variation for the trait — and the constraint is real. Then we could start to explore the reasons why ... If shifting egg retention turns out to be easy — say, after 50 generations we’ve got healthy cuckoos that can hold an egg for two weeks — then we know that holding an egg doesn’t evolve because it’s not useful to the animal.

Such an experiment would be massive — you’d need lots of birds and lots of years. It probably won’t get done. But just in case, here’s my prediction: we would be able to evolve egg-retentive birds.
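Judson's proposed breeding program amounts to simple truncation selection, and it can be sketched in a few lines. This simulation is my own construction (the population size, heritability h2, and starting retention time are all assumed numbers). The point is the breeder's equation: the response to selection is heritability times the selection differential, so zero additive genetic variation means zero response, i.e. a real constraint:

```python
# Sketch of the cuckoo experiment as truncation selection (illustrative only).
import random

def breed(pop, keep_frac=0.2, h2=0.5, env_sd=1.0):
    """One generation: keep the most egg-retentive birds; offspring regress
    toward the selected parents' mean by heritability h2 (breeder's equation)."""
    pop = sorted(pop, reverse=True)
    parents = pop[: max(1, int(len(pop) * keep_frac))]
    parent_mean = sum(parents) / len(parents)
    base = sum(pop) / len(pop)
    offspring_mean = base + h2 * (parent_mean - base)
    return [random.gauss(offspring_mean, env_sd) for _ in range(len(pop))]

random.seed(0)
pop = [random.gauss(2.0, 1.0) for _ in range(500)]  # hours of egg retention
for _ in range(50):                                  # 50 generations, as in the column
    pop = breed(pop)

print(sum(pop) / len(pop))  # well above the starting mean of 2.0
```

Setting h2 = 0 (no heritable variation) leaves the mean wandering around its starting value, which is the "constraint is real" outcome Judson describes.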

Update: More from the molecular biologist:

This is good. I never thought about that particular feature of birds before. It is odd, and I think I'd bet on the prediction. Another odd biological pattern is the complete absence of sexual reproduction in rotifers (small aquatic animals that whirl around in pond water). Maybe this was in the blogs on why the evolution of sex is reasonable or not. In any case, it's being studied by some high-profile people. By the criteria of ubiquity and diversity of types, rotifers are among the most successful living things. Yet, they've done it all by relying on vegetative reproduction. So much for the advantages of variety-inducing recombination!

Saturday, June 03, 2006

You Don't Know Everything

I can't resist this stuff. String theory and the “sunk-cost fallacy”:

Nothing gained in search for ‘theory of everything’, by Robert Matthews, Commentary, Financial Times: ...A ... scientific community that has completely lost touch with reality and is robbing us of some of our most brilliant minds. Yet if you listened to its cheerleaders – or read one of their best-selling books or watched their television mini-series – you, too, might fall under their spell. You, too, might come to believe they really are close to revealing the ultimate universal truths, in the form of a set of equations... Or, as they modestly put it, a “theory of everything”.

This is not a truth universally acknowledged. For years there has been concern within the rest of the scientific community that the quest for the theory of everything is an exercise in self-delusion. This is based on the simple fact that, in spite of decades of effort, the quest has failed to produce a single testable prediction...

For many scientists, that makes the whole enterprise worse than a theory that proves to be wrong. It puts it in the worst category of scientific theories, identified by the Nobel Prize-winning physicist Wolfgang Pauli: it is not even wrong. ... it is impossible to tell if it is a turkey, let alone a triumph.

It is this loss of contact with reality that has prompted so much concern among scientists – at least, those who are not intimidated by all the talk of multidimensional superstrings and Calabi-Yau manifolds that goes with the territory. But now one of them has decided the outside world should be told about this scientific charade. As a mathematician at Columbia University, Peter Woit has followed the quest for the theory of everything for more than 20 years. In his new book Not Even Wrong he charts how a once-promising approach to the deepest mysteries in science has mutated into something worryingly close to a religious cult.

It began in the mid-1980s, with the emergence of so-called superstring theory... By the mid-1990s, superstring theory had been subsumed into something called M-theory. Not even its inventor – the charismatic Edward Witten of the Institute for Advanced Study in Princeton – knows what the M stands for. Nor has he, or anyone else, succeeded in persuading M-theory to make a single testable prediction. As such, it has more in common with a religious conviction than science.

Most theorists pay at least lip-service to falsifiability, popularised by the philosopher Karl Popper, according to which scientific ideas must open themselves up to being proved wrong. Yet those involved in the quest for the theory of everything believe themselves immune from such crass demands. Mr Woit quotes a superstring theorist dismissing the demand for falsifiability as “pontification by the ‘Popperazi’ about what is and what is not science”. ...

Mr Woit has shown that some very smart people in academia have lost the plot. But why should the rest of us care? The reason is simple: the quest for the theory of everything has soaked up vast amounts of intellectual effort and resources at a time when they are desperately needed elsewhere. We can ill afford to let more brilliant talent vanish into the morass that is M-theory.

Those who have show signs of having fallen prey to the “sunk-cost fallacy”, the huge intellectual effort needed to enter the field compelling them to plough on regardless of the prospects of success. It is time they were put out of their misery by being told to either give up or find funding from elsewhere (charities supporting faith-based pursuits have been suggested as one alternative).

Academic institutions find it hard enough to fund fields with records of solid achievement. After 20-odd years, they are surely justified in pulling the plug on one that has disappeared up its Calabi-Yau manifold.

Tuesday, May 23, 2006

Science versus Spin Doctors

This commentary says when science and entrenched interests collide, it's not a fair fight:

Scientists have no chance against spin doctors, by David Bodanis, Commentary, Financial Times: Last week, touched by winning a science prize at the UK’s Royal Society, I donated it to the family of David Kelly, the British scientist who committed suicide after governmental criticism associated with his research into weapons of mass destruction in Iraq.

Not everyone thinks mine was the right decision, on the grounds that science should not be sullied by bringing politics into it. From my years looking at the history of science, I do not agree. For science often leads to technologies that can undermine the established powers in society – and when those powers fight back, they fight to win.

Sometimes that retaliation is deadly and scientists die for the truth. Soviet authorities of the 1930s, for example, hated biologists who pointed out that changing a plant’s environment did not alter its genetic nature. That truth undercut the authorities’ belief that by altering society, they would be able to create a new Marxist man in a single generation. If there were any exceptions to this idea at all ... then those opponents had to be crushed. Many were demoted; others were sent to prison, beaten or killed.

George W. Bush’s attitude to science is less deadly, of course, but similar in essence. The US president and many of his supporters know that if the public were to be convinced that present uses of coal and oil were putting the planet in grave danger, there would be an outcry...

Two worlds are set on a collision course. One is the world of science... In that world, what counts is finding the truth and adjusting your actions – and, if need be, changing established industries – accordingly.

But in the world of politics, what is most important is what you have previously decided you are going to hold to. Anyone who threatens those goals has to be blocked, for they get in the way of what you consider the greater good. Often that is for the best – just think of any political change or institution you especially like that had to be pushed through against strong opposition.

The problem comes when the two worlds collide. For in the short-term, the world of politics almost always wins. Politicians are good at pressing the buttons of emotion, or group feeling, or character assassination, or selective evidence... Very few scientists can fight back. Although in their private lives they might be psychologically astute, their profession teaches them that arguments are ultimately won by appeals to the truth. That is their reflex: it is what they are habituated to do. Against spin doctors, leaked governmental whispers, smooth lobbyists and the like they have scarcely any defence.

There is an added twist. These two worlds operate on different timescales. Scientists are exceptionally good at picking out small indicators of what is happening in the outside world, and accurately foretelling their consequences. That is the enormous power that centuries of development in instrumentation and analytic technique have given them. Politicians, however, naturally take more of the layman’s attitude, where only evidence that is large-scale and immediately obvious is truly important.

In my books I have written about many people who, like Kelly, abided by the logic of science, confident that what they saw would be justified as time went on. Yet so often they crashed up against the very different world of politics and established power, and they ended up crushed by it...

Sunday, May 14, 2006

I Just Want to Get it Over with as Soon as Possible

Given a choice, will people choose to delay a painful event? Not necessarily. There are two costs to balance: the discounted value of the future pain and the negative impact of waiting (the "dread," to use the article's terminology). In some cases, the waiting may be more dreadful than the event itself:

Study finds people will take pain over dread, by Jamie Talan, Newsday: Root canal, anyone? Every day, people are faced with such pain, but until now no one had studied how the brain actually copes with the prospect.

Scientists at Emory University ... recruited volunteers willing to have their brains scanned while awaiting electrical shocks to their feet. The aim was to pinpoint the neurobiology of dread in the brain. The assumption was that given the choice, people will delay a bad event.

But that didn't happen. When Dr. Gregory S. Berns and colleagues gave people the option of delaying the shock, only one in 32 accepted the offer. "Virtually all of them wanted to get it over with," Berns said. The finding, published in Science, goes against the current theory, Berns said, which is: If it's unpleasant, sweep it into the future.

"This study lends a new piece of evidence," said George Loewenstein, professor of economics and psychology at Carnegie Mellon... "Delaying or speeding up unpleasant things is not a matter of weighing the present and the future, but of how someone feels while they are waiting." That, Berns said, is dread.

He also found that some people were willing to withstand a more severe shock if it meant eliminating the dread endured while waiting. The scientists gave the volunteers a choice: a mild shock later or a harsher one sooner. A third of the volunteers chose the greater shock: more pain, but sooner. ...

When the study was completed, they returned to the brain scans to find differences between the one-third of volunteers who chose the more severe shock sooner and the rest who waited for a milder shock. "The thing that distinguished the two groups were areas that govern attention," Berns said. ...

Makes sense. Those who dwell on things are less likely to choose to wait.

One interpretation is that this explains why people have different discount rates, i.e. different values for future versus present consumption. Some people dwell on having to wait to consume a good (how soon do you check the mail when you get home to see if your order arrived?), others are better able to forget about future events and go about their lives. But perhaps there is another way to explain this.

One of the assumptions in standard utility theory is free disposal, i.e. that having more of a good does not make you worse off - if it does, it can always be freely discarded. But information that bothers you cannot be forgotten once learned in the same way that a good can be thrown away or left on the table if it makes you worse off. Thus, when analyzing information the usual predictions of standard utility theory may need to be amended to account for the lack of free disposal, in this case to account for the "dread" of waiting for "bads" or "goods". The bottom line of the article for those who dwell on negative events is to find a way to, in essence, throw the information away:

...people can alter brain responses to prevent the dread. "If you are going to do something that you don't like, divert your attention."
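To see how a per-period dread cost can reverse the usual discounting logic, here is a hedged numerical sketch. The numbers are my own, not from the study:

```python
# Illustrative only: once waiting itself carries a per-period "dread" cost,
# taking a larger shock now can be cheaper than a milder shock later.

def total_cost(pain, delay, dread_per_period=2.0, discount=0.95):
    """Discounted pain of the shock plus discounted dread while waiting."""
    waiting = sum(dread_per_period * discount**s for s in range(delay))
    return waiting + discount**delay * pain

mild_later = total_cost(pain=10.0, delay=5)   # mild shock after 5 periods
harsh_now  = total_cost(pain=13.0, delay=0)   # 30% harsher shock, immediately

print(harsh_now < mild_later)  # True: "get it over with" is rational
```

With dread_per_period set to zero, standard discounting takes over and delaying the mild shock is again preferred, which is exactly the "sweep it into the future" prediction the study contradicted.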

Saturday, April 22, 2006

Intelligently Designed Clues Point to Evolution

The intelligent designers were really good at hiding signs of their participation in creation. To throw us off track, they left all sorts of clues making it appear incontrovertible that humans had evolved over millions of years:

Evolution's case evolves, by Ann Gibbons, Commentary, LA Times: It's been a tough month for creationists. On April 6, evolutionary biologists announced the discovery of a fossil of Tiktaalik roseae, a giant fish whose fins were evolving into limbs when it died 375 million years ago. This scaly creature of the sea was in transition to becoming a land animal, the discoverers wrote in Nature.

A day later, molecular biologists reported in Science that they had traced the origin of a key stress hormone, found in humans and all vertebrates, back 450 million years to a primitive gene that arose before animals emerged from oceans onto land.

Both teams of scientists stressed that their findings contradicted creationists — and demonstrated how small, incremental steps over millions of years could indeed produce complex life... But even as they were touting their results as yet another validation of Charles Darwin's theory of evolution, biochemist Michael Behe, a leading advocate of "intelligent design," dismissed the hormone discovery as "piddling."

As if in response to Behe's challenge, paleoanthropologists raised the stakes last week ... In the journal Nature, a team of researchers ... found an "intermediate" member of the human family that they say unambiguously fills the gap in the fossil record between two early types of human ancestors. Australopithecus anamensis was a creature the size of an orangutan that walked upright in the Rift Valley of eastern Africa about 4 million years ago...

The team found the species in a mile-deep stack of sediment in northeastern Ethiopia, which has become the Comstock Lode of human evolution. Across 11 separate layers, researchers unearthed several types of early human ancestors, with the anamensis bones sandwiched between layers containing two other species — Australopithecus afarensis, the species whose most famous member was the diminutive skeleton Lucy that lived 3.2 million years ago; and the 4.4-million-year-old Ardipithecus ramidus. ...When researchers compared the teeth and bones of these various human ancestors, they saw a clear path from primitive to modern. ...

Thursday, April 20, 2006

Neuro-Economics

Tyler Cowen of Marginal Revolution, who is replacing Virginia Postrel as one of the contributors to the Economic Scene column at The New York Times, looks at neuro-economics and its potential uses in his inaugural column:

Enter the Neuro-Economists: Why Do Investors Do What They Do?, by Tyler Cowen, Economic Scene, NY Times: Las Vegas uses flashing lights and ringing bells to create an illusion of reward and to encourage risk taking. Insurance company offices present a more somber mood to remind us of our mortality. Every marketer knows that context and presentation influence our decisions. For the first time, economists are studying these phenomena scientifically. ... using a new technology that allows them to trace the activity of neurons inside the brain and thereby study how emotions influence our ... economic choices like gambles and investments.

For instance, when humans are in a "positive arousal state," they think about prospective benefits and enjoy the feeling of risk. ... Camelia Kuhnen and Brian Knutson, two researchers at Stanford University, have found that people are more likely to take a foolish risk when their brains show this kind of activation. But when people think about costs, they use different brain modules and become more anxious. They play it too safe, at least in the laboratory. Furthermore, people are especially afraid of ambiguous risks with unknown odds. This may help explain why so many investors are reluctant to seek out foreign stock markets...

If one truth shines through, it is that people are not consistent or fully rational decision makers. Peter L. Bossaerts, an economics professor at the California Institute of Technology, has found that brains assess risk and return separately, rather than making a single calculation of what economists call expected utility. ...

Neuro-economics is just getting started. ... Investors are becoming interested in the money-making potential of these ideas. Imagine training traders to set their emotions aside or testing their objectivity in advance with brain scans. Futuristic devices might monitor their emotions on the trading floor or in a bargaining session and instruct them how to compensate for possible mistakes. Are the best traders most adept at reading the minds of others? Or is trading skill correlated with traits like the ability to calculate and ignore the surrounding caldron of human emotions? ...

Not all of neuro-economics uses brain scans. Andrew W. Lo, a professor at the Sloan School of Management at the Massachusetts Institute of Technology, applied polygraph-like techniques to securities traders to show that anxiety and fear affect market behavior. ... But do most economists care? Are phrases like "nucleus accumbens" — referring to a subcortical nucleus of the brain associated with reward — welcome in a profession caught up in interest rates and money supply? Skeptics question whether neuro-economics explains real-world phenomena.

The neuro-economists admit that their endeavor is in its infancy. It is difficult to identify brain modules and their roles. Even if one part of the brain is active at a particular moment, how is that incorporated into a person's broader method for making decisions? The number of people scanned in any study is typically small, if only because the hookups cost about $500 an hour ... Furthermore, the setting may matter. Perhaps we cannot equate choices made on the New York Stock Exchange trading floor with choices made under a hospital scanner...

That said, neuro-economics will make huge strides as technology allows researchers to identify more brain regions and read brains more accurately and at lower cost. It is a growth area in a profession that knows human feelings matter, but does not always know what to do with them. The next step? Perhaps neuro-economics should turn its attention to political economy. Do people use the same part of their brains to vote as to trade? Is voting governed by fear, disgust or perhaps the desire to gain something new and exciting?
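Bossaerts's distinction can be illustrated with a toy calculation. This is my own sketch; the gamble, the square-root utility function, and the risk-aversion parameter lam are all assumptions for the example. Expected utility fuses probabilities and payoffs into a single number, while a mean-variance evaluation keeps return and risk as separate terms:

```python
# Illustrative contrast: one fused expected-utility number versus the
# separate return/risk bookkeeping Bossaerts describes.
import math

gamble = [(0.5, 100.0), (0.5, 0.0)]  # (probability, payoff) pairs

def expected_utility(lottery, u=math.sqrt):
    """Single calculation: sum of p * u(x)."""
    return sum(p * u(x) for p, x in lottery)

def mean_variance_value(lottery, lam=0.01):
    """Return and risk assessed separately, then combined."""
    mean = sum(p * x for p, x in lottery)
    var = sum(p * (x - mean) ** 2 for p, x in lottery)
    return mean - lam * var  # mean and variance enter as distinct terms

print(expected_utility(gamble))     # 5.0
print(mean_variance_value(gamble))  # 25.0  (= 50 - 0.01 * 2500)
```

Both rules rank gambles, but only the second makes "how much return" and "how much risk" separately observable, which is what the brain-imaging evidence suggests the brain actually tracks.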

Tuesday, April 18, 2006

Could Global Warming Be Worse Than You Think?

From the comments to the Krugman post on Exxon and global warming, this appears to be a topic that could use more discussion. Here's a place to start. This was posted at Scientific American's blog two days ago. It's somewhat long, so here's a very condensed version followed by a longer version in the continuation page. The principle being invoked here is the same as for monetary policy in the face of model uncertainty: choose the policy that is robust across models and avoids the chance of a catastrophic outcome. As the San Francisco Fed states, "A policy can be made "robust" to model uncertainty by designing it to perform well on average across all of the available fully specified models ... (McCallum 1988)":

SciAm Observations: One of the questions that came up in the earlier global warming thread was whether climate models have been tested against historical data. ... Climatologists who think global warming is serious and human-driven actually agree with skeptics who say that models have not been adequately tested. But whereas the skeptics think that the models overstate the threat, the mainstream researchers think they could understate it...

Now, what should we make of all this? ... To me, the main lesson of worst-case scenarios is that uncertainty cuts both ways. Skeptics often invoke uncertainty as a reason to defer action because global warming may not be as bad as the headline predictions. But uncertainty equally well means that the outcome could be even worse. Our response should be neither complacency nor panic, but risk-management -- exactly what we do when we buy insurance or strap on seat belts. As David Wasdell of the Meridian Programme said at a workshop I went to last weekend, the scenarios are alarming but not alarmist.
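The robustness principle from the monetary-policy literature can be sketched in a few lines. This is my own construction; the policies, candidate models, and loss numbers are all made up for illustration. The robust choice minimizes either the worst-case or the average loss across the models:

```python
# Illustrative model-robust policy choice: each candidate model assigns a
# loss to each policy; pick the policy that does best across models.

policies = ["aggressive cuts", "moderate cuts", "do nothing"]
losses = {  # hypothetical losses under each candidate model
    "mild model":        {"aggressive cuts": 3.0, "moderate cuts": 2.0, "do nothing": 1.0},
    "mainstream model":  {"aggressive cuts": 2.0, "moderate cuts": 2.0, "do nothing": 4.0},
    "catastrophe model": {"aggressive cuts": 3.0, "moderate cuts": 5.0, "do nothing": 50.0},
}

def robust_choice(policies, losses, worst_case=True):
    """Minimize worst-case loss (minimax) or average loss across models."""
    agg = max if worst_case else (lambda xs: sum(xs) / len(xs))
    return min(policies, key=lambda p: agg([m[p] for m in losses.values()]))

print(robust_choice(policies, losses))                    # minimax criterion
print(robust_choice(policies, losses, worst_case=False))  # average criterion
```

The catastrophe model is what drives the result: "do nothing" looks fine under the mild model, but a policy chosen for robustness has to perform tolerably even if the worst-case model turns out to be the right one.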

Here's the longer version:

Continue reading "Could Global Warming Be Worse Than You Think?" »