Journalism, Now.

I’ve been thinking and talking a lot recently about the future of journalism, and of newspapers. It’s something that I’ve touched on once or twice here, and for a number of reasons I suspect that it will be a growing focus of mine going forward.

But in my conversations I found myself relying on a number of shorthand assumptions, and at some point I started wondering if some of those assumptions could be examined in greater detail.  And the more I poked and prodded at them, the more I became convinced that in order to write about this in the future, I first had to set down what I think is going on now.

This post, then, is an attempt to define my current thinking – about journalism in general and newspapers in particular. Some of this I know to be true. Some of it I suspect to be true. And, near the end, I’ll offer some thoughts on what I believe will prove to be true in the future.

I pick on the New York Times and the Press Democrat throughout because they’re the newspapers I read the most.  I feel that they’re broadly representative of other national and local papers, respectively.

I’ll start, as perhaps I always should, with the obvious.

1.

Newspapers are in a great deal of trouble.

On this point there is complete (well, nearly complete) agreement. There isn’t a newspaper in the country that isn’t staring at the same array of unpalatable phenomena: falling advertising revenue, increased operating costs, online competition from faraway papers, and a sharp drop in readership across the board. A quarter of American journalists have lost their jobs in the last decade. Formerly major newspapers, with circulations in the hundreds of thousands, have been forced to close their doors – among them the Rocky Mountain News and the Baltimore Examiner. The Paper Cuts blog keeps track of layoffs and closures across the country; clicking through their maps, with colored pins representing entire vanished papers, is a sobering experience.

Journalism is expensive.

Mother Jones took a look at a recent investigative report that ran in The New York Times Magazine and concluded that it cost in the neighborhood of $400,000 – and that covers only the salaries of the various editors, reporters, and lawyers involved. Those costs are dwarfed by the other operating costs of the newspaper: the land it owns or leases, at the headquarters and at various field offices around the world; the printing, manufacturing, and distributing of the dead-tree version; the server, bandwidth, and maintenance costs for its online services. All in all, it is a phenomenally expensive affair.

2.

Many newspaper articles are poorly written.

I’m not speaking from a purely stylistic perspective, though offenses against the mother tongue are hardly rare. Instead, I’m talking about the increasingly archaic conventions of proper newspaper form that a) wordily provide context while obscuring recent events and b) insist on filling column-inches with recycled quotes from people I’ve never heard of. This problem was detailed by Michael Kinsley in last month’s Atlantic, where he deconstructed an article from November about health-care reform:

The 1,456-word report begins:

“Handing President Obama a hard-fought victory, the House narrowly approved a sweeping overhaul of the nation’s health care system on Saturday night, advancing legislation that Democrats said could stand as their defining social policy achievement.”

Fewer than half the words in this opening sentence are devoted to saying what happened. If someone saw you reading the paper and asked, “So what’s going on?,” you would not likely begin by saying that President Obama had won a hard-fought victory. You would say, “The House passed health-care reform last night.” And maybe, “It was a close vote.” And just possibly, “There was a kerfuffle about abortion.” You would not likely refer to “a sweeping overhaul of the nation’s health care system,” as if your friend was unaware that health-care reform was going on. Nor would you feel the need to inform your friend first thing that unnamed Democrats were bragging about what a big deal this is—an unsurprising development if ever there was one.

Providing this kind of context used to be important – when people only read about a given issue once every couple of months, they would understandably need a refresher course before diving headlong into the subject. But that’s not how most people consume news today. Even the most casual observer knows that the Democrats have been trying to pass health-care reform and the Republicans have been trying to block it, and when they pick up a paper in November they’re looking to find out what happened in November, not six months previous. It’s like having to sit through all of A New Hope again when you want to find out what happens in The Empire Strikes Back.

Newspapers offer a great deal of redundant content.

I recently browsed through the front page of the Saturday edition of the Press Democrat, my local newspaper. Out of the twenty-three articles in the section, only four were written by Press Democrat staff. The remaining nineteen were reprinted from larger institutions – like the New York Times Company, which owns the Press Democrat – or from wire services like McClatchy or the Associated Press. The articles written by the staff predictably focused on local stories (“Unsafe to Swing? Injury to Marin County teen has reignited debate over metal baseball bats”), while those from larger services all concerned national or international issues.

This arrangement is absolutely unique to newspapers. I don’t subscribe to a magazine called The San Franciscan that reprints features from The New Yorker and The Los Angeleno, plus some original stuff. Magazines don’t work that way; only newspapers do.  It’s a system that was worked out when it was difficult for me, in Northern California, to read a newspaper published in New York City.  But that’s not true anymore.

The relationship between a reader and a news organization is based on trust. I trust that the New York Times is going to give me accurate information on what’s happening in the Middle East because I know the Times is a vast organization with thousands of reporters across the globe. Similarly, I trust the Press Democrat to give me accurate local news because I know that most of their staff is based in the immediate area and is deeply conversant in local issues.

In the same way that I would never rely on the Times to inform me about what’s happening in Sonoma County, I would never turn to the Press Democrat to give me a good picture of what’s happening in Kabul. Yet the Press Democrat continues to devote the vast majority of their publication to reprinting national and international news – a product that virtually no one is asking them to provide.

That certain things have always been done is not justification for continuing to do them.

3.

Until very recently, readers paid for the news every single day – sometimes twice a day.

This simple but oft-forgotten fact is, I think, the best rebuttal to the people who think that the subscription model is inapplicable to online news sources. Far from being entrenched, the idea of free news is actually very recent. For the first several centuries of American journalism, readers were accustomed to paying for each and every scrap of newsprint they read. Only in the last fifty years did scraps of news start being beamed into people’s homes, and only in the last decade did the average reader come to expect all of the vast amounts of journalistic work produced in America to be available instantly and free of charge.

There are a number of people who will pay to read content that is currently free.

There are a lot of reasons why people will pay for online news. Publications like the Wall Street Journal have proven that when a publication is renowned for its reporting on a particular subject, interested consumers will pay for that content. People are often willing to pay for the feeling of legitimacy – it’s the reason why people buy mp3s from Amazon or iTunes instead of downloading them from PirateBay. People are willing to pay for ease of use and access. And people are willing to pay for features above and beyond the baseline.

The number of people willing to pay for online news content is greater than many people realize.

However, subscription proceeds were never the main revenue stream for the newspaper industry. In fact, the newspaper business model has, for the last century, sought to conceal the true cost of journalism from the reader.

Despite what Dave Eggers may tell you, the newspaper industry has always been built upon advertising dollars; it’s why Craigslist was such a poison pill. By some estimates, 70% – 80% of newspaper revenue comes from advertising.

For centuries, then, the newspaper industry has been underselling their own product – charging the reader 25¢ for a $3.75 product. This has created a fundamental misconception in most people’s minds about journalism’s true cost. Readers are used to paying for news – but they’re not used to paying very much. And having first been given something for very cheap, and then having been given it for free, it’s wishful thinking to suppose that readers will now shoulder the full cost of journalism.

Given that subscription fees didn’t support the old newspaper business model, it’s unlikely that they will support the new one.

This is the fundamental truth that most news publishers are unwilling, or unable, to accept.  It doesn’t mean that they’re wrong to move toward a subscription-based model, exactly.  But it does mean that subscriptions are a stopgap, and not a solution.

4.

Despite being among the first organizations to have a web presence, newspapers have done a poor job adapting to the internet.

Check out these screenshots of the New York Times website in 2000 and in 2010.

[Screenshots: the NYTimes.com homepage in 2000 and in 2010]

The formatting has changed. The functionality is identical – down to the stock ticker in the right-hand sidebar.

Though a remarkably transformative decade has passed, the online version of the Times is still little more than the print version copied wholesale into a webpage. They’ve had a web presence since 1995, three times longer than YouTube (2005) or Facebook (2004) and significantly longer than Google (1998) – yet each of those companies manages more innovation in any six-month period than the Times (or any other newspaper) has in a decade and a half.

What is missing here is an acknowledgment that people interact with the Times in different ways. I’ve been a member of NYTimes.com since it looked the way it does in the 2000 screenshot above; except for Amazon, it’s the oldest logon I have. Yet the New York Times knows nothing about me. It doesn’t know if I’m a 55-year-old woman living on the Upper East Side or a 22-year-old Texan man. It doesn’t know whether I’m interested in the NFL or NASA, the Metropolitan Opera or Myanmar, cricket or Cuba, or all of the above – despite the fact that I have been feeding them this information, through my reading habits, for over a decade. The Times has even built an experimental reader with a comely interface, but it is as dumb as the web interface is.

Of course NYTimes.com should recommend articles based on my reading history. That’s not a new idea. But I should also be able to hide sections of the Times that don’t interest me, custom-tuning their homepage. If I read a story that interests me, I should be able to request to be shown more stories like it in the future. I should be able to subscribe to stories or topics – so that if I were interested in, say, Michael Moss’s October 3 expose of the beef industry, I could have some way to be notified when he published his December 30 follow-up.

Articles should link to prominent bloggers who have used them as a jumping-off point, supporting (instead of merely putting up with) the conversation. I should be able to link my preferred social networks to my NYTimes account, so that this thing can actually become useful. And so on.
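Just to make the complaint concrete, here’s a minimal sketch – in Python, with entirely made-up Article and Reader objects, since the Times exposes nothing like this – of the kind of topic-affinity and follow-the-author logic I’m describing:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Article:
    headline: str
    author: str
    topics: list

@dataclass
class Reader:
    history: list = field(default_factory=list)          # everything this reader has read
    followed_authors: set = field(default_factory=set)   # writers the reader asked to follow

    def read(self, article):
        self.history.append(article)

    def follow(self, author):
        self.followed_authors.add(author)

    def recommendations(self, candidates):
        """Rank unread articles: followed authors first, then topic affinity."""
        # Weight each topic by how often it shows up in the reading history.
        topic_weights = Counter(t for a in self.history for t in a.topics)
        unread = [a for a in candidates if a not in self.history]
        return sorted(
            unread,
            key=lambda a: (a.author in self.followed_authors,
                           sum(topic_weights[t] for t in a.topics)),
            reverse=True,
        )
```

Nothing here is sophisticated – it’s a weighted counter and a sort – which is rather the point: follow Michael Moss after the October piece and the December follow-up floats to the top of the pile; read ten NASA stories and the eleventh finds you. A decade of reading history is a lot of signal to leave on the table.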

The point is not that the New York Times should cater to my particular whims.  But it needs to be scalable – that is, I should be able to engage with the Times website as much or as little as I like, without being limited to superficial interactions.

I have little sympathy for an industry that has so completely failed to figure out how their customers want to interact with their product.

5.

The newspaper business model, in its current form, is not robust enough to justify its continued existence.

This is very bad for newspapers, especially in the short term. But I think ultimately it’s good for news readers.

Journalism itself is in no immediate danger.

Changes in distribution rarely result in changes in content – music didn’t die when cassettes did. Unlike many art forms (like, say, sculpture) there is nothing about journalism that is anathema to online distribution. (Rather the opposite, in fact.)

The publishing industry is in straits at least as dire as the newspaper industry, yet it would be absurd to talk about the “death of literature”. Similarly, reports of the “death of journalism” are little more than hyperbole.

Most newspapers will adapt, not disappear.

Most sources (the aforementioned Paper Cuts included) equally mourn newspapers that have closed completely (like the Rocky Mountain News) and those that have moved to an online-only model (like the Seattle Post-Intelligencer). That’s not quite the right way to look at it. When the Post-Intelligencer stopped producing a dead-tree edition they were forced to let go more than a hundred employees – talented people, editors and layout designers and plant managers, who spent years honing their crafts and whose lives were thrown into upheaval. That’s bad.

But today, the Post-Intelligencer still exists. It has a robust website, with dozens of blogs for different neighborhoods of Seattle, and customization options not unlike those I was asking for above. In their unfortunate situation they discovered a new sort of agency – opportunities and areas into which they could expand. And in doing so they have rediscovered their relevance in a way no one expected last year.

To treat their story as only a tragedy is to very much miss the point. And in the next few years most people will come to accept that the Post-Intelligencer was ahead of the curve.

6.

A few final predictions.

There won’t be any single product that will save the industry.

Personally, I think the iPad will be a revolutionary product on par with the iPod and iPhone before it – the sort of product whose power, features, and design raise the entire industry onto an entirely new plane.

What I don’t think is that it will save newspapers, magazines, publishing houses, or any of the other businesses whose executives have, in casting about for a messiah, lit upon Steve Jobs. The architecture they’re looking for has to be larger than a single product; the revolution, when it comes, will be platform-agnostic.

The latest system to change how I read content online is Instapaper. Instapaper – which was developed by Marco Arment, the same man who developed Tumblr – elegantly solves what was a vexing problem: there was no easy way to save articles for later reading. When I came across an article on the internet, I could leave the tab open (messy), email the link to myself (clutters up my inbox), bookmark it (overly permanent), or print it (environmentally detrimental). Instapaper gives me a button in my browser that, when pressed, scrapes the article text and saves it for me to browse through later.
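For the curious, the save-for-later idea itself is almost trivially simple. Here’s a toy sketch in Python – standard library only, and emphatically not Instapaper’s actual implementation, just the general shape of “grab the text, stash it in a queue”:

```python
import json
import time
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crudely collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def save_for_later(url, queue_file="reading_list.json"):
    """Fetch a page, keep its readable text, and append it to a local reading queue."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    extractor = TextExtractor()
    extractor.feed(html)
    entry = {"url": url, "saved_at": time.time(), "text": " ".join(extractor.chunks)}
    try:
        with open(queue_file) as f:
            queue = json.load(f)
    except FileNotFoundError:
        queue = []
    queue.append(entry)
    with open(queue_file, "w") as f:
        json.dump(queue, f)
    return entry
```

The hard parts – telling article text from navigation cruft, syncing across devices, doing it all with one click – are exactly where Arment’s polish comes in.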

What’s great about Instapaper – and what’s led to its enthusiastic adoption – is that it lends itself to a variety of platforms. There’s a web interface. There’s an iPhone (and soon, an iPad) application. There’s a Kindle application. There are various Android applications. And – best of all – Arment has made the Instapaper API freely available, so various other designers have started to integrate Instapaper compatibility into their applications.

People want to be able to interact with media in their own idiosyncratic ways, and the best path to widespread adoption for these sorts of services – Instapaper, Twitter, Flickr – is to support as wide a userbase as possible. Anyone who’s talking about tying their business model to a single product is trying to disguise their deficit of real ideas.

Readers will benefit from the coming innovations.

After a long period of denial, news organizations are finally engaging with the new media landscape in a serious way. This means more smart bloggers working at national newspapers; more incredible visualization tools to interact with; and high-quality niche publications that deliver absolutely relevant content. After decades of being promised that the content revolution was just around the corner, we have finally made our way to the cusp of it. And it’s exciting.

7.

There are more people consuming more journalism than at any other time in history.

This, to me, is the greatest reason for optimism.  We’re living through a revolution, and that’s unnerving; uncertainty begets insecurity. But industries don’t disappear when demand is high.  We may not recognize the journalism of the next decade, but rest assured: it will exist.

(Top Photo Credit: Marcel Germain, Flickr)



This Glorious Struggle.

Two months ago, in the wake of Scott Brown’s election in Massachusetts, I wrote this:

The election of Scott Brown to Ted Kennedy’s Senate seat wasn’t, in and of itself, a great setback. But what was utterly demoralizing was the craven, cowardly, and chaotic way that Democrats on the Hill responded. Suddenly, the entire fate of health-care reform seemed doomed. Barney Frank and a number of liberal Congressmen announced they couldn’t support passing the Senate’s version of the bill. And health-care reform – which is, at this very moment, one single fucking roll-call vote away from passage – seemed more and more to be DOA.

It was a disgusting performance. I’ve been following politics closely for nearly a decade, and this is the most despondent – the most furious – the most ashamed I have ever felt. I voted for these people. I worked to put them there. And they have done worse than fail – they’ve run away before having the chance to fail.

I have to give credit where it’s due: after their collective freak-out, the Democratic leadership picked up the pieces, buckled down, and put it in the hole. Last night, the House passed the Senate’s health-care bill 219-212, and then passed a number of reconciliation changes 220-211. It’s not a perfect bill. But it is a good bill. And it’s the bill we sent them to Washington to pass. The pundits have been calling this the greatest progressive victory since Lyndon B. Johnson passed Medicare in the mid-sixties; I suspect it’s greater still.

Though a vast number of people contributed to the passage of this bill, the debate has, to a remarkable extent, centered on – and been driven by – individual legislators. Some of these legislators ended up having little impact on the bill. Check out this graph of Google search traffic for Max Baucus, Olympia Snowe, Harry Reid, Rahm Emanuel, and Bart Stupak:

[Graph: Google search traffic for Max Baucus, Olympia Snowe, Harry Reid, Rahm Emanuel, and Bart Stupak]

The peaks on this graph directly correlate with the momentary fame of the corresponding officials. There was an uptick for Max Baucus (purple) last summer, when health-care reform seemed to be spinning its wheels in his “Gang of Six” talks. It peaked in mid-September, when he released the “America’s Healthy Future Act” – which was widely anticipated but turned out to have little effect on the proceedings.

Olympia Snowe (blue) was once expected to be a pivotal vote on health-care reform, especially after she voted for Baucus’s bill in mid-October; eventually, though, her decision to stick with Republican obstructionism doomed her input to irrelevancy.

Bart Stupak (green) went from being unknown to a household name in November, when he attached an amendment with stricter abortion language to the House bill. It was stripped out of the version that passed last night.

Harry Reid (red) passed the Senate’s bill on Christmas Eve – back when the Democrats still had the filibuster-proof 60 Senators – but his surge in search traffic came with Scott Brown’s election in late January.

And Rahm Emanuel has been the focus of no fewer than four profiles in the last few weeks, all detailing his argument – rejected by the President – that health-care reform would be better handled incrementally.

I mention this because, over the last year, there has been a tremendous effort by the press to wrap the narrative of reform around the narrative of a particular person. Baucus as the lone moderate struggling for bipartisanship, Emanuel as the canny operative whose advice was ignored.

In the end, though, the only two people who really mattered were Barack Obama and Nancy Pelosi. Obama made the decision – rare for Presidents – to actually spend his political capital, and to stake his Presidency on an issue that was always riskier than it should have been. And Pelosi got him the votes. For whatever reason Pelosi has been vilified by both parties over the years; this more than anything should cement her position as the ablest Speaker in decades. And Obama – well, it’s early yet to judge his Presidency. But he has already reshaped the American political system more profoundly than any President since Franklin Roosevelt.

By all rights the Republican opposition to this effort should kill the party.  But the Republican party is not unlike the Terminator: they survived their opposition to Social Security in the ’30s, to Medicare and the Civil Rights Act in the ’60s, and to SCHIP and Clintoncare in the ’90s.  They have, in other words, opposed every single attempt to improve the lives of Americans over the last century, and paid very little for it.  I don’t think that their opposition will usher in the kind of sweeping electoral victory they predict in November, but neither do I think that they will be seriously punished for their obstructionism.

The health-care reform debate has basically made the career of Ezra Klein.  Over its course he’s gone from being a blogger for The American Prospect to being the Washington Post’s premier voice on health-care.  Last night, on the eve of the vote, he wrote, “At that point, the decades-long struggle to pass a universal health-care system into law will finish, and the decades-long work of building and improving our system will begin.”

Hear, hear.



Weird Facts.

While researching the last post, I found out something very odd about the Purple Heart:

During World War II, nearly 500,000 Purple Heart medals were manufactured in anticipation of the estimated casualties resulting from the planned Allied invasion of Japan. To the present date, all the American military casualties of the sixty-five years following the end of World War II — including the Korean and Vietnam Wars — have not exceeded that number. In 2003, there were still 120,000 of these Purple Heart medals in stock. There are so many in surplus that combat units in Iraq, Afghanistan, and the United States are able to keep Purple Hearts on-hand for immediate award to wounded soldiers on the field.

Call me nostalgic, but I think there’s something very powerful about a soldier wounded today being given a medal manufactured sixty-five years ago, in the midst of the last armed conflict that America truly believed in.  Don’t get me wrong: I know there’s a lot of World War II romanticism out there, and that a lot of it is misplaced.  But it’s powerful all the same.



DARPA Researches Human Zombie Serum.

Not really. Well, kind of:

The Defense Advanced Research Projects Agency (DARPA) is now funding research that may one day bring humans to a zombie-like form of hibernation. The motivation, however, is not so much space travel as emergency trauma care for wounded soldiers on the battlefield.

As reported by Popular Science, nearly half of soldiers killed in action die of severe blood loss after being wounded by gunshots or IEDs. When emergency trauma care is administered during that first “golden hour,” soldiers’ odds of survival are relatively good, but after that their odds begin to drop quickly. That’s why TIPS researchers are looking for a way to send the human body into a state of suspended animation, essentially “shutting down” the heart and brain until proper care can be administered.

I just finished reading Atul Gawande’s Better, a series of meditations (adapted from articles for the New Yorker) about the ways in which both he and modern medicine are striving to improve.  One of the points he makes is that, whatever your feelings are on the wars in Afghanistan and Iraq, the military deserves a great deal of credit for vastly decreasing the number of battlefield deaths.  In the Revolutionary War, a wounded soldier had a 42% chance of dying.  For Vietnam and the first Gulf War, he had a 24% chance of dying.  But an American soldier wounded in Iraq today has just a 10% chance of dying – despite the fact that injuries caused by IEDs and machine guns are much worse than those caused by muskets and bayonets.

In previous wars, decreases in battlefield mortality had come about because of improved scientific techniques – better sanitation in field hospitals, new antibiotics, blood transfusions. But that’s not what happened here. Instead, the military realized that battlefield trauma is so bad that there isn’t really a “Golden Hour” – it’s more like a golden ten minutes. So they started sending surgeons – or Forward Surgical Teams – into battle.

In Iraq and Afghanistan, they travel in six Humvees directly behind the troops, right out onto the battlefield.  They carry three lightweight, Deployable Rapid-Assembly Shelter (“drash”) tents that attach to one another to form a nine-hundred-square-foot hospital facility… The teams must forgo many technologies normally available to a surgeon, such as angiography and radiography equipment.  (Orthopedic surgeons, for example, have to detect fractures by feel.)  But they can go from rolling to having a fully functioning hospital with two operating tables and four ventilator-equipped recovery beds in under sixty minutes.

Throughout Better, Gawande points out ways – as with the FSTs – in which changes in protocol can have more substantive effects than advances in technology. You don’t need to improve surgical techniques – just get the soldiers to surgery faster. You don’t need to develop a better polio vaccine – just get the vaccine that does exist to as many people as humanly possible. Bench science, in Gawande’s world, is less important than diligence, ingenuity, and organization.

DARPA, though, is not like most research institutions. They specialize in taking real-world problems and finding effective solutions, and their zombie-serum scheme does have their signature feel to it. By taking a serious problem – soldiers need to get to doctors faster – and finding a crazy solution – slow down their bodily functions to a point just above death – DARPA might just find the bench science that really can bring the battlefield mortality rate down to 5%. Or 3%.

A final note – I would be the last to argue that these advancements have had a net negative effect. But at the same time, you have to wonder about the effect – on the country – of fighting a war with such a low casualty rate. Would this war have gone on so long if three times as many soldiers were coming back dead? After all, it’s not like our guns are a third as deadly, and there’s only one side that’s benefiting from these advances in wartime medicine.

In World War II, everyone in the country, soldier or not, lived, breathed, and slept war; it was in their thoughts both when they awoke and when they were drifting off to sleep. The day the war ended was a day of celebration the likes of which nobody had ever lived through. Would the push to end the war have been as strong if the casualty rate had been 10%?

I’m not saying that we shouldn’t be trying to keep our soldiers alive. I am saying that it’s worth considering what happens when you lower the consequences of war.

(Photo credit: isafmedia.  Licensed under Creative Commons.)

Welcome To The Monkey House.


I went to the San Francisco Zoo a couple weeks ago.  One of the nice things about going to the zoo on a weekday in January is that there aren’t too many people about, and the animals engage in behavior that they might otherwise be too riled up or embarrassed to exhibit.  Case in point:  I wandered up to the lion-tailed macaque exhibit just in time to see the little monkey masturbate furiously, ejaculate (with a disconcertingly human-like expression on his face), scoop the evidence up from the wooden plank on which he was lounging, and eat it.

This may have been regular behavior for the macaque, but as a frequent visitor to the zoo, I can confirm it is a rather unique thing to witness and the experience has affected me accordingly.  Specifically, it has led to a sort of furtive, bemused interest in animal sexuality, and the ways in which it differs from our own.  (Not that the above behavior is entirely out of the question for human beings.)

All of which is a roundabout way of explaining how, last night, I found myself quietly absorbed in “Sperm Competition And The Function Of Masturbation in Japanese Macaques”, a dissertation by a researcher at the University of Munich named Ruth Thomsen.

Thomsen spent the better part of two years hiding in the forests of Yakushima Island, stealing ejaculatory fluid from macaques before they had a chance to eat it.  (This behavior is apparently pretty universal – among macaques, not researchers.)  The thesis undoubtedly had a long and embarrassing road to publication, but Thomsen’s findings are interesting – and have, dare I say, important implications on the subject of human masturbation.

What Thomsen discovered was that during the mating season (roughly February – September) masturbation was nearly universal among macaques, and whether through mating or masturbation the monkeys ejaculated on average once a day.  On first glance the frequent masturbation doesn’t seem to make much evolutionary sense – ejaculate itself is finite, of course, and there didn’t seem to be much reason for the males to be wasting it on what amounted to a recreational activity.  It was only by analyzing the makeup of the fluid itself that Thomsen was able to arrive at the answer.

It turns out that male macaques can be roughly separated into two groups, which Thomsen calls “guarders” and “sneakers”. Guarders are dominant males; they mate with females on a very regular basis and use their social stature to block other males from mating. They masturbate infrequently and at no regular time of day, and so semen builds up in their testes. When they do masturbate, they produce a large volume of ejaculate, but it’s chock-full of dead or malformed sperm.

Sneakers, on the other hand, are second-class males who almost never get the opportunity to mate with females; when they do so it is done quickly and in secret.  They masturbate very frequently, and produce less seminal fluid per ejaculation.  But!  When Thomsen analyzed their ejaculate, she found that it was far more robust than that of the guarders.  By masturbating on such a regular basis, the sneakers were clearing the tubes (so to speak) to make room for newer, healthier ejaculate, and were ensuring that when they did have a chance to mate, their sperm would be in much better shape to make the journey upstream (so to speak).


What implications does Thomsen’s research have for us?  Well, first of all, Thomsen fairly convincingly lays to rest the idea that masturbation in primates – including humans – is somehow unnatural or caused by mental illness.  “Masturbation occurs in non-human primates at an astonishingly high frequency of 65.4 %, in 34 of 52 investigated species,” she writes. “As this study is the first in which data concerning masturbation in wild living primates has been systematically collected, I wish to emphasise that the common perception of masturbation as a mainly pathological or abnormal behaviour in primates can now be considered defunct.”

Second, Thomsen noticed much higher rates of masturbation in primate groups who live in multi-male / multi-female social groups (as opposed to something like multi-female, single-male, like gorillas; or single-female-with-offspring, like orangutans).  The theory is that males in this social system face much more sperm competition, and the likelihood that they are mating with a female who has already mated with another male is higher, so they want to keep their sperm more vital.

Considering that most humans live in multi-male / multi-female groups, there’s no real reason not to assume that this instinct is at least partially responsible for human male masturbation.  So, men: next time you feel bad about masturbating, just remember that you’re really just fulfilling your evolutionary desire to keep your sperm strong and lively.

Lastly, it’s tempting to note that most human males could probably be classified as either a guarder (high frequency of mating, relatively low frequency of masturbation) or a sneaker (low frequency of mating, extremely frequent masturbation).  But that’s probably taking the comparison too far.

The Times Goes Paid.

Everyone knew it was coming: the New York Times is going to start charging customers to read online content.

Starting in early 2011, visitors to NYTimes.com will get a certain number of articles free every month before being asked to pay a flat fee for unlimited access. Subscribers to the newspaper’s print edition will receive full access to the site.

First, I think this is a smart way to handle the subscription model.  It ensures that people can still link to Times articles without the fear that everybody will get stuck on the wrong side of the pay wall.  And if the Times can hit that number right, and set the cut-off such that it forces only truly regular readers to pay, it just might work.
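Mechanically, the meter they’re describing is very simple. Here’s a rough sketch in Python – the twenty-article limit, the monthly reset, and the in-memory storage are all invented for illustration, not anything the Times has announced:

```python
from collections import defaultdict
from datetime import datetime

FREE_ARTICLES_PER_MONTH = 20           # hypothetical cut-off

# (user_id, "YYYY-MM") -> set of article ids read free that month
views = defaultdict(set)

def can_read(user_id, article_id, is_subscriber=False, now=None):
    """Casual readers never see the wall; only heavy readers hit the cut-off."""
    if is_subscriber:
        return True
    month = (now or datetime.utcnow()).strftime("%Y-%m")
    seen = views[(user_id, month)]
    if article_id in seen or len(seen) < FREE_ARTICLES_PER_MONTH:
        seen.add(article_id)           # re-reads don't count against the meter
        return True
    return False                       # past the limit: show the subscription pitch
```

The whole game, as I said, is in choosing that one number.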

Beyond that, this is probably a good move for the Times and a bad move for the newspaper world in the long term.  I like the New York Times.  Their reporting is excellent.  They have correspondents all over the world and run stories on subjects other newspapers don’t.  Their opinion writers are (for the most part) reasonably intelligent.  I understand the Times is having financial trouble, and as a regular consumer of their product, I would absolutely pay for content.

However: what I would not pay for is the New York Times and the Washington Post.  Or, the NYT, the WP, and the Boston Globe.  Nor would I pay for the Cincinnati Enquirer or the New Orleans Times-Picayune, as much as I love how the latter rolls off the tongue.  Why would I?  On national and international issues, their reporting is identical.  The truth is, most news is redundant, and when it comes to larger issues, newspapers can no longer depend on their communities for subscriptions.  They certainly can’t compete with the NYT.

What I think we’ll see, then, is that local papers – even in big cities – will focus more and more on reporting truly local news.  A few large papers – I’d guess the Times, the Washington Post, and the LA Times – will more and more become the go-to sources for nationwide and international news.  This isn’t entirely a bad thing – in many ways, it’s more logical – but it does consolidate a lot of power into a few hands, and that, frankly, makes me nervous.

Timidity Makes For Lousy Journalism.

I don’t know too much about Dave Eggers.  I like McSweeneys.  I’ve never read A Heartbreaking Work of Staggering Genius.  I liked Where the Wild Things Are.  But this answer from a recent interview he did with the Onion A.V. Club – about the newest edition of McSweeneys, which is a mock-newspaper called the San Francisco Panorama – really made me angry:

To me, the print business model is so simple, where readers pay a dollar for all the content within, and that supports the enterprise. The web model is just so much more complicated, and involves this third party of advertisers, and all these other sources of revenue that are sort of provisional, but haven’t been proven yet. We’ve lost that very simple transaction that’s so pure, where a reader can say, “I support what you’re doing, here’s my dollar. I know that you guys are gonna be watchdogs or keep the government accountable, so here’s my 50-cent contribution each day.” It’s just so tidy, and I think so inspiring.

The idea that newspapers have heretofore been making money solely on circulation and incidental purchases – without the silly gimmick of advertisers, which are some newfangled internet thing – is insane.  Newspapers have always made the vast majority of their money from advertising, both retail and classified – indeed, my hometown paper is only still afloat because of arcane rules requiring things like name changes to be printed in a public forum.  Suggesting that the half-dollar sale price was what kept newspapers afloat is like saying that I support the Washington Post’s website when I pay my cable bill every month.

This is such a fundamental point, and one so key to understanding what’s happening in the journalistic world today, that I simply can’t believe that Eggers isn’t aware of it.  So I’m forced to conclude that Eggers was simply being disingenuous in order to score a cheap rhetorical point.  Which is bad enough.  But to make matters worse, instead of asking Eggers to explain what the hell he’s talking about, the interviewer just moved on to the next question!

This is my problem with most interviews I read: the complete disinclination of the interviewer to dynamically respond to the subject’s answers.  I’m not saying every interview has to be like Frost/Nixon, but at the same time, there needs to be some recognition that an interview is not an invitation for the subject to blather blithely on without being asked to explain themselves when they say things that are obviously false. This kind of answer doesn’t just ruin Eggers’s credibility on newspaper business models – it calls into question whether Eggers believes anything that he says in the whole interview, and makes the whole thing an exercise in futility.

Five Thoughts About Avatar.

If you’re trying to decide whether or not to see Avatar, consider this: the climactic scene of the film features hand-to-hand combat between a man in a robot exoskeleton and a six-legged alien panther.  Whether you find that prospect intriguing or silly will largely determine your response to the entire film.

If you’ve already seen the film, or you’re not too concerned about spoilers (and, I gotta say, there’s not too much to spoil), then click below the jump for a few of my thoughts on the film.

1.  Avatar is, from a visual perspective, one of the most awe-inspiring films ever made. My hatred of 3D technology is a matter of public record, but even I have to admit when I’m wrong.  Instead of being a gimmick, the 3D technology here is used almost as an extension of the depth of field, to create a separation between foreground and background that’s surprisingly subtle.  It’s used in nearly every single shot, but often only on bits and pieces of a particular frame.  The effect adds a depth to the visuals that is, frankly, stunning.  They also appear to have solved many of the problems with the glasses that made them so painful for me in the past.  (I should point out that I was sitting dead-center in the theater, and I’ve read that the technology suffers both from flanking and from being too close to the screen.)

But I saw the film in 2D, too, and the visuals are strong enough to carry it in that medium alone.  James Cameron knows that it’s okay for things to not seem realistic if they can be beautiful instead, and the alien jungle moon of Pandora (think the redwoods crossed with the Amazon) is so fully realized, and so full of marvels, that even on repeat viewings you’ll find yourself astonished.  The motion-capture technology used to animate the faces of the alien Na’vi works well enough to be believed, and though its application here isn’t as impressive as it was in The Lord of the Rings, that’s mostly because we had no human actor to compare Gollum to.  The computer graphics are so seamlessly integrated that it’s impossible to tell what’s real and what was added in (although at any particular moment I’d bet on computer-generated).  It is worth seeing for the spectacle alone, and that spectacle almost makes up for the film’s other failings.

2.  Sam Worthington and Zoe Saldana are having a hell of a year. Worthington was little-known even in his native Australia before the release of Terminator: Salvation earlier this year, and though critics were generally unkind to the film, the consensus seemed to be that he definitely out-acted Christian Bale.  Saldana had a small role in Pirates of the Caribbean: The Curse of the Black Pearl, but was definitely not a household name before her one-two punch of Star Trek and Avatar in 2009.  Worthington’s solid but not particularly exciting, and I’d like for somebody to let him start using his native accent because his American one is terrible.  But Saldana’s a great actress, charismatic and convincing even behind the CGI, and I really hope she starts starring in her own films – in her roles so far she mostly props up the male characters around her, and it’s a shame.

3. I can’t fucking believe they used the Papyrus typeface for the subtitles. As if they were designing the flier for a small-town high school play instead of a half-a-billion-dollar motion picture.  Even Comic Sans would have been preferable.

4.  The screenplay is bad. Really bad.  However bad you’re thinking it is, it’s worse.  The dialogue is wooden and trite.  The storyline is predictable.  The characters are one-dimensional.  The themes are unsettling.  (More on that in a minute.)  It is at least as good an argument against writer-directors as the abysmal Lady In The Water was, and it’s only Cameron’s directorial skill that rescues the film from being the biggest laughingstock of the last ten years.

5. The film is not explicitly racist; this isn’t Birth of a Nation.  But it relies on any number of outdated and offensive racial stereotypes, without which the story does not work. The plot, as summarized by io9’s Annalee Newitz, is “a classic scenario you’ve seen in non-scifi epics from Dances With Wolves to The Last Samurai, where a white guy manages to get himself accepted into a closed society of people of color and eventually becomes its most awesome member.”  The Na’vi – in religion, in dress, in weaponry – at first seem to be a blend of all the cultures European colonists were confused by in 1650, from Native Americans to African tribes.  But the narrative trap laid by Cameron is actually more cunning: the Na’vi actually don’t even have a culture, in the sense that we use the word.

Instead, they are the literal, biological manifestation of Cameron’s idea of the noble savage, which is based on a superficial reading of some strains of Native American mythology with a bit of Judeo-Christian morality thrown in.  The Na’vi live in a state of grace, literally inside a tree, uncontaminated by the war, pestilence, and evil that the invading humans bring with them.  They don’t merely commune with nature – using a sort of planetwide neural network, they actually establish a physical connection with the plants and animals around them.  Men are biologically compelled to pick their favorite female, with whom they will form a lifelong, monogamous relationship.  Though they are scantily clad, their genitalia remain tucked chastely away (and are presumably rather puny, relative to their body size).  They are a race of Adams and Eves: they want for nothing, because everything has been provided for them; they need neither faith nor intelligence, because everything they believe in, Cameron has made reality.

This brand of primitivism – which sees native peoples not as real human beings but as embodiments of a perfect ideal – is both moronic and narratively dull.  That’s why Avatar has to take it even one step further: when trouble comes, and the Na’vi way of life is truly threatened, they are absolutely helpless until the arrival of a white man.  And not just any white man – an American Marine, full of pluck and derring-do and a cheerful contempt for these silly scientists with all their tests and samples. Jake Sully (check out those initials!) is chosen to lead the Na’vi not only because he has more experience with human technology but because he’s just better than they are: a better warrior, a better flier, a better leader.  He even takes over the role of spiritual leader: when he asks Eywa (the god of all living things) to help him win the climactic battle, the Na’vi chuckle at his naivete – but the gesture saves the day.  It’s a bait-and-switch: the film first claims to be enlightened and post-racial, then merrily goes on to prove that the white guy is superior to everybody else anyway.

It’s fair to wonder whether Avatar really deserves this level of scrutiny – it’s not a process that we routinely put blockbuster action films through, although maybe it should be.  But ultimately it’s important to analyze the racial assumptions the film makes, if only because Avatar – like Crash before it – absolutely begs to be taken as a Big Important Movie About Race.  But when you tease out and fully explore the allegory, it says more about Cameron than it does about race.  Avatar is a movie made by a man maniacally bent on proving how not racist he is; in the end, of course, it does exactly the opposite, right down to the casting choices.  (As the blogger SEK points out, nearly all the speaking roles among the humans were played by light-skinned actors, while all the speaking roles among the Na’vi were played by dark-skinned actors.  It’s evident that Cameron knows what his savages look like.)

There was, of course, another 2009 film that explicitly set up the metaphor of humans-as-whites and aliens-as-blacks: the cheaper and much, much better District 9.  (Coincidentally, the climactic battle of that film also involved a mech suit.)  District 9 also made some uncomfortable racial statements, but at least I felt that it was trying to engage with racism and racial stereotypes on a serious and intellectual level.  Avatar doesn’t want to make people think about race.  It wants to make them feel better without the trouble of thinking about it.

(Incidentally, James Cameron also comes off as being an egomaniacal asshole in this New Yorker profile, and between that and Avatar, he’s pretty much been knocked off the list of my personal heroes.  Sad when that happens.)

The Mechanisms of Horror Films.


This post contains a very few minor spoilers.  Consider yourselves warned.

Of the two low-budget horror films I’ve seen in the last few weeks – Paranormal Activity and The House of the Devil – the latter was better, and scarier.  But both films were top-notch, tense and surprisingly confident in their command of the horror film genre.

Paranormal Activity is about Micah and Katie, a young couple whose San Diego house is haunted by a malevolent entity.  The film was made for $15,000 by director Oren Peli, a former software engineer who had no prior film experience.  It was also shot entirely on a single camera, operated mostly by Micah; like The Blair Witch Project, this is supposed to be found footage. The House of the Devil, written and directed by Ti West, is set in 1982 and tells the story of Samantha Hughes, a college student who – mostly out of desperation – takes a babysitting job at a very creepy house.

Both films are scary – not in a Hostel-style gorefest way, but in a chilling, growing-sense-of-dread way.  But the films went about scaring the audience in very different ways.  Paranormal Activity is scary because it takes a setting that we think of as safe – the bedroom – and suggests that evil lurks within.  There are daylight scenes in PA, sure, but the acting and the story aren’t strong enough to make these very compelling.  But each night, Micah sets the camera up on a tripod, the couple goes to sleep, and the camera captures – in lime-green nightvision – the events that go on around them.  Paranormal Activity is scary because it makes us wonder: what happens around us while we’re asleep?  And if something were to happen – if a door were to open, or if a voice, disembodied, were to whisper our names – would we wake?

Paranormal Activity is scary because its setting – the bedroom – is innocuous.  The House of the Devil takes a completely opposite tack: it takes a likeable protagonist and dumps her into the scariest place imaginable.  From the moment we see the titular house, looming out of the forest, we know it’s a place where bad things happen, and that those bad things take so long to occur only makes the atmosphere more tense; the whole movie is basically an exercise in for Christ’s sakes, don’t go in there! There’s a lot of horror-film shorthand in The House of the Devil – a haunted house, a dim-witted coed, a conveniently-timed eclipse – and it’s to West’s credit that the movie never slips into cliche.  (West is helped by creepily great performances, particularly from Tom Noonan.)

Where both films fall short is in establishing any deeper meaning.  Horror is a genre that already requires such a suspension of disbelief that it has a remarkable capacity to carry weighty symbolism without seeming ham-handed.   Rosemary’s Baby, for example, is about a woman’s right to choose, while The Shining is actually about alcoholism.  But these films keep things pretty superficial: the take-away lesson from Paranormal Activity seems to be that some people are just plain doomed, while The House of the Devil can be boiled down to satanic cults are bad.  I’m not saying, of course, that each and every film needs to have a deeper message, but there’s obviously a big difference between a horror film that’s great because it scares you, and a horror film that’s great because of what it says about society.


The Know-Nothings Redux.

“There are also those who claim that our reform efforts would insure illegal immigrants. This, too, is false. The reforms — the reforms I’m proposing would not apply to those who are here illegally.”

This, of course, is the line from President Obama’s health-care speech that prompted the now-infamous “You lie!” outburst.  What got lost in the chaos that Rep. Joe Wilson set off was the larger question of whether health-care reform should cover illegal immigrants – ground that Obama has obviously already conceded.  But as Andrew Romano of Newsweek and others have pointed out, there’s a strong economic case to be made for insuring people working here illegally.  Illegal immigrants are more likely to be younger and less expensive to insure; when they do get sick, they’re forced to go to the emergency room, which drives premiums up for everyone; and excluding illegal immigrants from the individual mandate creates a strong incentive for employers to hire more illegal immigrants, because they don’t have to provide health insurance for them.  There’s also, I think, a strong moral case to be made here as well: is it right to deny people who are working for and supporting American companies the benefits (dental, eye care, preventative care) that are only attainable with full coverage?

That’s not a debate we got to have, though, because the Democrats are so scared of being painted as pro-immigrant that they’ve caved to nearly every Republican position on illegal immigrants and health-care.  (On several of the bills that are up to be voted on, illegal immigrants are barred from buying even unsubsidized healthcare on the private market – even though nobody’s been able to explain quite how more money moving from consumer to health-insurance company, without any government involvement, is at all a bad thing.)  But I was thinking about it recently while I was reading that Republicans on the Finance Committee tried to add an amendment to that health-care bill that would have barred federal subsidies not only to illegal immigrants, but to legal immigrants as well:

Last week, Mike Lillis caught a remarkable scene during the Senate Finance Committee debate: Republicans attempting to insert amendments that would bar legal immigrants — you caught that, legal immigrants — from accessing health-care exchanges, leaving those very immigrants Republicans say they are not hostile to, those who have “played by the rules” so to speak, without access to a reformed health-care system.

The amendment failed along strict party lines, with 13 Democrats voting against and 10 Republicans voting for.

There’s an old and ugly and dangerous nativist sentiment in this country, and it seems to me that the Republican party is increasingly trying to tap into it.  It’s evident in this vote.  It’s evident in the “birther” conspiracy theories.  It’s evident in the Minutemen Project.  It’s the fear of the other, the stranger, the Muslim, the African, the Mexican.  And I’m glad that the party who’s pandering to this sentiment isn’t in power any more, but it disturbs me all the same.

Is Science Fiction Mainstream?

So asks Damien Walter, writing at the Guardian website:

Sci-fi has made many predictions about the future, but did any of them forecast that in the early years of the 21st century everyone would be watching … sci-fi? Our TV screens are filled with Dr Who, Lost and now FlashForward. Each summer brings more blockbusters in the Lord of the Rings and Star Trek vein, and a flood of superhero franchises. In comics and video games, sci-fi is the norm. It’s not just part of mainstream culture, it is arguably the dominant cultural expression of the early 21st century…

The walls that defined speculative fiction as a genre are quickly tumbling down. They are being demolished from within by writers such as China Miéville and Jon Courtney Grimwood, and scaled from the outside by the likes of Michael Chabon and Lev Grossman. And they are being ignored altogether by a growing number of writers with the ambition to create great fiction, and the vision to draw equally on genre and literary tradition to achieve that goal. The post-sci-fi era is an exciting one to be reading in.

My thoughts on sci-fi’s recent move into the mainstream are complicated, and I don’t disagree per se with anything that Walter says in the article.  I do have two qualifications, though.

First of all, what Walter here is calling “science-fiction” would better be called “speculative fiction”, since he’s also talking about genres like horror or fantasy.  And I would argue that, for most of human history, speculative fiction has been mainstream.  Shakespeare wrote it (A Midsummer Night’s Dream).  So did Poe, Mark Twain, and Henry James.  And I think that the impulse to write speculative fiction – to create stories in which men are endowed with supernatural powers through magic or technology – is a very old one indeed, and may be at the root of most mythologies.

Secondly, I think that science-fiction in particular was never as fringe as it seemed to be, particularly in regard to the written word.  For years it has driven me crazy that, whenever someone writes a science-fiction book that’s halfway decent, it’s immediately taken out of the sci-fi genre and called “Literature” instead.  So the same people who read Cat’s Cradle and Fahrenheit 451 then walk past the sci-fi section of the bookstore without a second glance and claim that they just don’t “get” science fiction.  And who can blame them, really – that sci-fi section is usually full of Star Wars novelizations and volume after volume of the Bio of a Space Tyrant series, because all the Vonnegut and Saramago and so on have been filed with the “real” authors.  But it’s a shame all the same, and it contributed to the marginalization of science-fiction that went on throughout the 20th century.

Blood Sport.


I only played football for one year, when I was twelve.  I was a lineman – mostly because I didn’t see very well, and had trouble catching and throwing the ball with much accuracy.  I certainly wasn’t built for the line.  I was also on the kickoff team, and I remember a particular hit – the hit, really, of my short football career.  The kickoff was to the other side of the field, and I was keeping an eye on the ball carrier, seeing that he was surrounded but making sure he didn’t cut back across the field.  Then something hit me from my blind side very hard, and I was briefly airborne before landing hard.  And then somehow it was five minutes later and I was sitting on the sideline drinking a cup of Gatorade.  I didn’t remember how I got there.  My head hurt.  I played the rest of the game, and after the game I laughed about the hit, but the whole experience was unsettling and it had a lot to do with my decision (to my mother’s eternal relief) not to go out for football again.

Malcolm Gladwell has an article out in the New Yorker trying to link football and dogfighting, and in that regard it isn’t quite successful.  But he also reports on new research on the brains of ex-football players that should be deeply disturbing to anyone who enjoys the sport.  It turns out that football is uniquely traumatic for the brain not because of the number of hard, concussive hits (which also occur in hockey or rugby) but because of the unrelenting number of sub-concussive hits that linemen take on every single play.

The HITS data suggest that, in an average football season, a lineman could get struck in the head a thousand times, which means that a ten-year N.F.L. veteran, when you bring in his college and high-school playing days, could well have been hit in the head eighteen thousand times: that’s thousands of jarring blows that shake the brain from front to back and side to side, stretching and weakening and tearing the connections among nerve cells, and making the brain increasingly vulnerable to long-term damage. People with C.T.E. (chronic traumatic encephalopathy), Cantu says, “aren’t necessarily people with a high, recognized concussion history. But they are individuals who collided heads on every play—repetitively doing this, year after year, under levels that were tolerable for them to continue to play.”

This sort of chronic trauma has very serious long-term effects:

“There is something wrong with this group as a cohort,” Omalu says. “They forget things. They have slurred speech. I have had an N.F.L. player come up to me at a funeral and tell me he can’t find his way home. I have wives who call me and say, ‘My husband was a very good man. Now he drinks all the time. I don’t know why his behavior changed.’ I have wives call me and say, ‘My husband was a nice guy. Now he’s getting abusive.’ I had someone call me and say, ‘My husband went back to law school after football and became a lawyer. Now he can’t do his job. People are suing him.’ ”

And perhaps most disturbingly, most people who play football long-term start when they are very young, when the brain is at its most fragile.

She pulled out a large photographic blowup of a brain-tissue sample. “This is a kid. I’m not allowed to talk about how he died. He was a good student. This is his brain. He’s eighteen years old. He played football. He’d been playing football for a couple of years.” She pointed to a series of dark spots on the image, where the stain had marked the presence of something abnormal. “He’s got all this tau. This is frontal and this is insular. Very close to insular. Those same vulnerable regions.” This was a teen-ager, and already his brain showed the kind of decay that is usually associated with old age. “This is completely inappropriate,” she said. “You don’t see tau like this in an eighteen-year-old. You don’t see tau like this in a fifty-year-old.”

The research also suggests that the problem is endemic to football, endemic to the manner in which the offensive and defensive lines come together – a foundation of the game itself.  It suggests that the material to build a helmet that will properly protect the brain under these conditions simply doesn’t exist.  And it suggests that practice can be as dangerous as games, or more so.

To me, as a football fan, the implications of these findings are – well, they’re sickening.  Of course, I never believed that football was healthy, exactly.  But I did think that there were ways that we could minimize the risks.  Heat stroke is an easy thing to fix – give the players more water and monitor their workouts more closely on hot days.  So are concussions – modify the rules so that the kind of hits that bring them on are rare.  But this suggests that the game of football itself is damaging in fundamental ways, and I don’t know how you get around that.

The fact that linemen are the most affected is particularly tragic.  If you think about football heroes, they’re never linemen.  The men that we idolize are people like Joe Montana, Walter Payton, Joe Namath, or Barry Sanders – quarterbacks and running backs, the people who get hit the least.  Nobody ever asks the left tackle where he’s going after the Super Bowl.  Guards are paid a small fraction of what quarterbacks make.  And yet they are literally sacrificing their ability to think for the game.  There’s a stereotype that football players are big and dumb, and that linemen in particular are big and dumb – it’s the basis for the high school jock stereotype.  But what this research suggests is that maybe that’s not just a coincidence; maybe we made them that way.

The basic question is what we’re willing to do to people – what we’re willing to do to kids – in the name of entertainment.  I love football; I’ve loved football for fifteen years.  But I don’t know how I can balance that love with what’s in this article.  I do know that if I had a kid there’s no way I’d let him strap on a helmet.  And if I were a player I’d have to think long and hard about what I was giving up down the line.  And as a viewer – well, as a viewer I still don’t know what to do.

Photo Credit: Schlüsselbein2007

Our Man In Havana.


I have been blessed recently with an abundance of great literature, and my most recent diversion was Graham Greene’s Our Man in Havana.  Greene liked to divide his books into ‘novels’, which were about serious things (The End of the Affair, Brighton Rock), and ‘entertainments’ (The Third Man, The Confidential Agent), which he considered more frivolous.  By the end of his career, though, he’d begun to blur the lines between the two, and my favorites of his works – The Human Factor, The Quiet American, and now Our Man… – are emotional case studies masquerading as thrillers, stories about lonely men in strange lands, far from home.

Our Man In Havana starts off as a comedy.  James Wormold is a vacuum cleaner salesman in Havana who lives in a state of quiet desperation: his trade is almost nonexistent, his daughter Milly spends his money and cavorts with dangerous government officials, and his only friend is a drunk.  He barely exists – as he himself puts it, “It always seemed strange to Wormold that he continued to exist for others when he was not there.”  But then Wormold is recruited as a spy for MI6 by the suave but incompetent Hawthorne, a position he accepts mostly so he can squeeze expense money out of the British government.  Not knowing any valuable information, and without much idea of where to get his hands on any, Wormold starts filing fake dispatches.  He makes up imaginary operatives and gives them names and detailed character backgrounds.  He copies the schematics of a vacuum cleaner and submits them as “military installations under construction in the Sierra Maestra”.  (“He said one of the drawings reminded him of a giant vacuum cleaner”, says an MI6 official, during an interlude in London.  “Hawthorne, I believe we are dealing with something so big that the H-bomb will become a conventional weapon.”)  Over time Wormold comes to love his creative outlet, but as his masquerade spirals out of control, the situation takes a dangerous turn.

I was particularly struck by this passage, which is from about a third of the way through the book:

The long city lay spread along the open Atlantic; waves broke over the Avenida de Maceo and misted the windscreens of cars.  The pink, grey, yellow pillars of what had once been the aristocratic quarter were eroded like rocks; an ancient coat of arms, smudged and featureless, was set over the doorway of a shabby hotel, and the shutters of a night-club were varnished in bright crude colours to protect them from the wet and salt of the sea.  In the west the steel skyscrapers of the new town rose higher than lighthouses into the clear February sky.  It was a city to visit, not a city to live in, but it was the city where Wormold had first fallen in love and he was held to it as though to the scene of a disaster.  Time gives poetry to a battlefield, and perhaps Milly resembled a little the flower on an old rampart where an attack had been repulsed with heavy loss many years ago.

This passage just floors me every time.  It is such a perfect evocation of so many things – of Havana; of loneliness; of love lost.  But most of all it’s what I want my writing to sound like; I’m jealous of not having written it myself.

Roman Polanski Should Go To Jail.

In 1977, Roman Polanski – then 43 – hired a thirteen-year-old girl as a model, convinced her mother to leave them alone for a photo shoot, plied her with champagne and quaaludes, and then performed a number of sexual acts on her – including forced anal sex.  When the case came to court, Polanski agreed to plead guilty to “unlawful intercourse with a minor” in exchange for having the other charges – sodomy and providing drugs to a minor – dropped.  After pleading guilty but before he could be sentenced, he fled the country, and he has spent the intervening years in whichever European countries were least likely to extradite him.

Sounds like a crime that warrants some serious jail time, right?  And his arrest on entering Switzerland over the weekend was a victory for the judicial system, right?  Wrong! says The Washington Post’s Anne Applebaum, in an article entitled “The Outrageous Arrest of Roman Polanski”:

Polanski, who panicked and fled the U.S. during that trial, has been pursued by this case for 30 years, during which time he has never returned to America, has never returned to the United Kingdom, has avoided many other countries, and has never been convicted of anything else. He did commit a crime, but he has paid for the crime in many, many ways: In notoriety, in lawyers’ fees, in professional stigma. He could not return to Los Angeles to receive his recent Oscar. He cannot visit Hollywood to direct or cast a film.

He can be blamed, it is true, for his original, panicky decision to flee. But for this decision I see mitigating circumstances, not least an understandable fear of irrational punishment.

Sorry, Anne, but the law doesn’t work like that.  You don’t get to atone for your crimes by being really sorry, or by virtue of your life (as an international fugitive from justice) being really hard.  People who rape children go to jail; missing the Oscars is not an equivalent punishment.

It’s a hard thing to accept, this idea that people who create beautiful things can still act in evil ways.  It doesn’t seem possible that the same person who sang “California Dreamin’” could also have raped his own daughter; that the man who made films like Chinatown could have had his way with a drugged-out child; that someone who could make playing football look like dancing could have killed his wife.

But the correlation doesn’t exist.  Artistic talent doesn’t track with moral fortitude.  And the idea that it does is how someone like Polanski can live in relative comfort for decades after he committed his crime and how, after his arrest, people like Applebaum can make this argument – not that he’s innocent, but just that he should not be punished.  And that is really what is outrageous.

Update: In the twenty minutes or so since I published this I’ve been trying to figure out what it was about this case in particular that set me off.  I wasn’t around during the Manson murders; I don’t have a good sense of how Polanski used to be viewed, or why people might have an attachment to him.  And so the view being espoused by Applebaum – that Polanski should simply be let off the hook – is not only one that I have trouble understanding, it’s one that I didn’t anticipate anybody would make.  And that this viewpoint is being taken seriously is what is really making me angry.  A columnist at The Washington Post – one of the biggest newspapers in the country – is taking to the editorial page to argue that a child rapist shouldn’t be punished, and for no other reason that I can see than: but he seems like such a gentleman.  Or: but The Pianist was such a moving film.  Or: why would they target such an old man, anyway?  And if people want to argue that we shouldn’t be punishing child rapists, that’s fine with me, but I don’t see why they have to be given a venue like the Post in which to do it.

Who's Asking For All These 3D Movies?



Don’t get me wrong: I’m giddy as a schoolboy that Toy Story and Toy Story 2 are being re-released in theaters.  And I’m reluctant to criticize Pixar, which, with the possible exception of Apple or Google, has the most impressive collection of pure genius in the world.  But this is damn dispiriting:

The Mouse House is giving Pixar’s “Toy Story” franchise the 3-D treatment.

As part of an aggressive move by the studio to turn more of its toons into 3-D releases, company will convert “Toy Story” into the format and re-release the pic in theaters on Oct. 2, 2009. Its sequel will get the same makeover and bow Feb. 12, 2010.

A confession: I loathe 3D movies.  This is in part because I have appallingly bad eyesight (“Wow,” said the optometrist at my last checkup, “You’re farsighted and you have an astigmatism!”  Tell me something I don’t know, lady), and so the visuals seldom land with the intended aplomb, and I usually have a headache by the midway point of the film.  But even from an artistic point of view I think 3D is seldom warranted.  Sure, a film like Coraline, in which hardly thirty seconds went by without something flitting prettily across the screen, made justifiable use of the technology, but few films are (or should be, really) filled with those sorts of frenetic visuals.  And the fact remains that the meat of most movies – the dialogue, the exposition, the character development – takes place in scenes that have no business being in 3D.

This wouldn’t be cause for so much concern if the 3D craze remained confined to children’s movies – kids are more easily wowed by the spectacle, and it’s not like you can’t just see the thing in 2D anyway.  (Although in New York this can be surprisingly difficult.)  But the last few years have given us films like Beowulf, The Final Destination, Journey to the Center of the Earth, and Hannah Montana & Miley Cyrus: Best of Both Worlds Concert, all of which were live-action (except for Beowulf, which was performance-captured, which is a whole other point of contention).  The other common denominator among these films?  They all sucked.  But that may not matter enough to staunch the tide.

And if I haven’t made the case enough against 3D technology, this should strike fear into the heart of every reasonable human being:

The Force is about to be unleashed in three dimensions.

George Lucas’ camp recently confirmed his plan to recast all six episodes of Star Wars — the original trilogy and the prequels — in an all-new, eye-popping 3-D light.

Writing in the Internet Age.

Clive Thompson at Wired (far and away one of my favorite magazines, as it happens) has a post up about the work of Andrea Lunsford, a professor at Stanford University.  Lunsford has been looking at how people write in the internet age, and to do so she’s collected 15,000 student writing samples – letters, blog posts, tweets, emails, etc. – from 2001 to 2006.  Her conclusion?  “I think we’re in the midst of a literacy revolution the likes of which we haven’t seen since Greek civilization.”

Let me quote at length for a moment from Thompson’s post:

The first thing she found is that young people today write far more than any generation before them. That’s because so much socializing takes place online, and it almost always involves text. Of all the writing that the Stanford students did, a stunning 38 percent of it took place out of the classroom—life writing, as Lunsford calls it. Those Twitter updates and lists of 25 things about yourself add up.

It’s almost hard to remember how big a paradigm shift this is. Before the Internet came along, most Americans never wrote anything, ever, that wasn’t a school assignment. Unless they got a job that required producing text (like in law, advertising, or media), they’d leave school and virtually never construct a paragraph again.

But is this explosion of prose good, on a technical level? Yes. Lunsford’s team found that the students were remarkably adept at what rhetoricians call kairos—assessing their audience and adapting their tone and technique to best get their point across. The modern world of online writing, particularly in chat and on discussion threads, is conversational and public, which makes it closer to the Greek tradition of argument than the asynchronous letter and essay writing of 50 years ago.

Now, it’s true that this study focused exclusively on Stanford University students, who were presumably better writers than the general public even before the internet came along.  But the core cause that Lunsford cites – that the text-based nature of the web is forcing people to write much more than ever before – is true regardless of education level, and I honestly wouldn’t be surprised if the effects are even more pronounced among people who aren’t required, by school or by their job, to write on a regular basis.

It is pretty gratifying, though, to hear a solid refutation of the “the web is making us dumber” crowd.  And a lot of what the study finds is just common sense.  No one is writing “2morow” in their academic papers, because we know to modify our tone based on our intended audience; I write differently at this blog than I do in my instant message conversations.

I wouldn’t be surprised, though, if a specific type of writing actually is decreasing in quality: the academic research paper.  And the reason for that is, I think, that research papers require a tone – dry, factual, neutral – that we don’t get much practice with in our internet lives.  I think, too, that for me the idea of writing without an audience holds very little appeal.  In school, I was always frustrated to put time and effort into writing a paper that was only going to be read by a single professor (and then perhaps only superficially); I’ve poured much more of myself into blog posts or comment sections than I did into most of my schoolwork.

The New STAR TREK Film.

I’ve now seen it thrice, and I think that only on the last viewing was I able to really examine it critically.  The first time I was too swept up in geek fervor to have many coherent thoughts at all; the second time, in IMAX, was such sheer spectacle that I forgot to think about anything else.  But the third time I was able to sit back and really think about the film, and while I’ve cooled toward aspects of it somewhat, I still think that it is a tremendously successful reboot, one that stays true to what made Star Trek so appealing in the first place.

The casting is impeccable to a fault.  Chris Pine appropriates the swagger and ego of Shatner’s Kirk but none of the odd speech patterns or hammy acting.  Zachary Quinto’s Spock is a younger and altogether more human thing, with more than a hint of contempt beneath his arched eyebrows.  Zoe Saldaña’s Uhura has more to do in this film than Uhura did in the first six; Karl Urban manages to sound shockingly like a young DeForest Kelley; and Simon Pegg turns in a typically excellent performance as Montgomery Scott.  Acting in this film must have been no easy thing, considering that it required both a respect for the original cast and a new sensibility, but the (mostly very young) cast pulls it off with aplomb.  (I would be remiss, too, if I didn’t mention Bruce Greenwood’s gruff turn as Christopher Pike, which manages to take an exceedingly minor character from the Trek canon and turn him into something altogether different and more important.)

SPOILERS AFTER THE JUMP.


J.J. Abrams deserves credit, too, for pacing the picture at breakneck speed.  Star Trek has always had an uneasy relationship with action – space fights unfolded in elegant-but-unexciting slow motion, and hand-to-hand fight scenes involved a lot more punching than was really believable for the 23rd century.  Abrams still doesn’t quite know how to stage action scenes – during the opening space battle in particular it was almost impossible to tell what, exactly, was going on – but I did appreciate the way in which he gave each shot, digitally created or not, equal weight.  Too often, I think, special effects shots are treated as an afterthought, and they come out looking like it.  But there were several CGI shots in Star Trek that were jaw-dropping: the Enterprise warping in with all guns blazing; the extreme wide shot of the shuttlecraft abandoning the Kelvin while the squidlike Narada lists.

The screenplay is a deft but, in the end, disappointingly empty thing.  It is exceedingly well-written: it was a stroke of genius to introduce the parallel-universe aspect, because it allowed the screenwriters (Roberto Orci and Alex Kurtzman) to basically ignore the entirety of the Star Trek canon.  And I appreciated, too, the way that every action had a direct impact on the plot.  When Sulu stalls the ship, it’s not only a funny moment – it’s also the reason why the Enterprise arrives late to Vulcan and avoids the massacre.  From a cinematic standpoint, there was a lot that I could admire.

But the film hums along at such a merry pace that there’s hardly any room for explanations or ideas, and it’s in this area that the film disappointed me the most.  Star Trek has always aimed to be intelligent.  God knows that it hasn’t always succeeded (I would trade a considerable amount to be able to erase Star Trek V: The Final Frontier from existence) but at its best it drew inspiration from science, literature, and mythology.  Highbrow is probably too severe a term to apply to a show that gave us the Tribble and the Vulcan Nerve Pinch, but in some ways that’s what Star Trek was aiming for.

There’s none of that, really, in the new film, and indeed the plot elements that are there are scantily sketched out.  The time-travel element displays a depressing lack of knowledge of the workings of both time paradoxes and black holes.  The “red matter”, for all its importance and power, is not afforded even the barest Treknobabble explanation, and exists as a plot point and nothing else.  And for all Nero’s bluster, his plan to destroy the Federation is pretty stupid: with almost two hundred planets and many more colonies and space stations, he would have been at it for quite a while if Kirk and Spock hadn’t intervened.

Speaking of which – all kudos to Eric Bana, who did the best he could with what he was given.  (I particularly liked his strange speech patterns; the way that he responded to Pike’s hail with “Hi Christopher, I’m Nero” was perfect.)  But the best Star Trek films have had the best villains – the Borg from First Contact, the Klingons from The Undiscovered Country, and (of course!) Khan – and Nero simply does not rank among them.  He’s neither tragic nor frightening, and his ship is so powerful that he’s never given the opportunity to be particularly smart or cunning.  He’s just sort of there, chewing the scenery and snarling at people, all bluster and no substance.

But in all honesty these points are minor.  This film is an origin story, after all, and origin stories are always more about character than plot.  And I can’t really put down a film that knocked me flat twice.  Now that at least two sequels are almost certain to be greenlit, I’m excited for the future of the Star Trek franchise for the first time since I saw First Contact in 1997, when I was ten.  Indeed, I left the theater this time feeling much the same way that I remember feeling then.

So I’ll take what I can from this film, and hope that what’s missing makes it into the next one.  After all, Batman Begins was pretty good – but it was The Dark Knight that really blew everyone away.

Incidentally, I agree with much of what Wil Wheaton has to say, and I would point you in the direction of two typically terrific posts from The House Next Door if you want to read a positive review and then a negative one, both of which I somehow agree with on some level.

The Voice Of The Team.

Back in December I somehow missed the news that Joe Starkey would stop calling play-by-play for the 49ers:

The tipping point for Starkey was the weekend of Nov. 22-23, with the Big Game in Berkeley on Saturday and the 49ers in Dallas on Sunday. Starkey said he could not get a flight to Dallas from San Francisco until 12:45 a.m. PST Sunday, with an arrival time around 7 a.m. CST. He could not get a hotel room at that time of day and got no sleep prior to the noon kickoff at Texas Stadium. Starkey developed laryngitis returning from Dallas and ended up missing the next two 49ers games, at Buffalo and against the Jets at Candlestick.

I’m impressed that Starkey has been able to juggle calling both Cal games and 49er games for so long (since 1975 for Cal, and 1989 for San Francisco). Still, I think this is a loss for Bay Area sports as a whole, and I know that I’ll miss listening to Starkey call Niners games. Football is not a sport suited to radio; it’s faster and more visual than, say, baseball. But Starkey somehow managed to perfectly sketch out the action on the field, in only a few words and always with palpable excitement. I followed football much more closely when I was in high school, and on those agonizing fall Sundays when I wasn’t able to watch the game, Starkey was there. He was far and away the best football announcer I’ve ever heard, and the 49ers will miss him.

Resident Evil 5 and Racism.



There has been a reasonably serious controversy (among people who are reasonably serious about these sorts of things) brewing about the new Resident Evil since an extended trailer was released in 2007.  The storyline is similar to the other titles in the series: the main character, an inevitably grizzled and impeccably badass American man, is sent to investigate a small village whose inhabitants have all been infected by eel-like parasites and turned into zombies.  The catch?  In RE5, that village is in Africa, and there were a lot of people (myself included) who were made uncomfortable by the footage of a white man shooting hordes of savage black people (with machetes!) in the face.

The game came out a week ago Friday, and with its release have come reviews both defending and attacking the game.  Here’s Seth Schiesel, writing in the New York Times:

Let’s get this out of the way: Resident Evil 5 is not a racist game.

So Resident Evil 5 exposes the perhaps uncomfortable truth that blacks and Arabs can become zombies too, just like anyone else. Blacks and Arabs do not have a secret anti-zombie gene. And just like all the thousands of white, Asian and Hispanic zombies that have been dispatched in innumerable other games before them, the African zombies must also be destroyed, or at least neutralized.

This is true enough, but is, it seems to me, a little beside the point.  I think that Evan Narcisse, who writes for Crispy Gamer, gets closer to the problem:

For my part, I’ve never called RE5 racist, and I probably won’t. Throwing the word around oversimplifies what I think is a more complex reality. What I will stand by is my assertion that this game will make plenty of people uncomfortable in racially specific ways.

This black videogame journalist has never said that black people aren’t fair game for being enemy antagonists in videogames. What’s problematic is, the way that RE5 chooses to make them antagonists pounces on fears that were promulgated about black people in the not-so-distant past. Sure, we’re all susceptible to zombie virus, as Schiesel’s NYT write-up blithely notes, but the subtext of the game seems to whisper: “Yeah, but those Africans don’t have as far to go to become savages.” This subtext feeds on awful, previously understood notions about black people.

Now, I haven’t played RE5, so I’m not going to comment on the specifics of is-or-isn’t-the-game-racist.  But there’s clearly a lot of racially-charged imagery in the game, and I think that this was a pretty massive miscalculation by Capcom, which developed the game.  A couple of points:

First, let’s talk about zombies.  The main appeal of zombies – and in particular, zombie video games – is that they take people and move them squarely into the uncanny valley; that is, from being human to being something human-like but distinctly not.  Consider this helpful graph:

Masahiro Mori’s uncanny valley graph: emotional affinity plotted against human likeness, with a sharp dip just before full human resemblance.

The distinct alien-ness of zombies is what allows the player to kill them with such complete impunity, and with a total lack of guilt.  People, however evil they may act, are still people, with mothers and fathers and thoughts and so on.  But zombies are soulless, parasitic killing machines!  They’re evil incarnate!  You’re actually doing the world a favor by putting a well-placed bullet squarely inside their craniums.

But by introducing this sort of imagery into the game, the creators have (by all accounts inadvertently) reintroduced the guilt.  They have destroyed the very principle that allows their genre to succeed.  And that outweighs any improvements in gameplay or graphics or story.  If the player is thinking too much about the people they’re killing, the game can’t succeed.

The other thing I find interesting is that the designers at Capcom, who are Japanese, were reportedly rather confused when people began talking about the racial aspects of the game.  This is rather clearly a cultural thing; I don’t think it’s possible, as an American or as a white person, to watch that trailer and not admit that there are things there that could be found offensive.  But I think that this should be taken as a very serious lesson by the entire Japanese video game industry.  Video games are the only media form in which Japan dominates the American market.  There are subsets of people who are fans of Japanese television or films or music, of course, but millions of people with no particular interest in Japanese culture still play Japanese video games.  So I think that they should be far more mindful of these sorts of things in the future, especially given the ways in which video games are often held up as markers of a continuing cultural downfall.  (See: Grand Theft Auto.)  There’s something to be said for knowing your market, and I can’t help feeling that, somewhere during the lengthy development of RE5, somebody should have raised the point that maybe playing on distasteful cultural stereotypes wasn’t the best idea ever.

In Defense of Brendan Fraser.

The National Review Online just finished counting down their collection of the 25 “Best Conservative Movies”, and since they started the countdown people have been kicking around the picks.  And there’s some pretty strange stuff on there.  Their #1 favorite film is The Lives of Others, which is, incidentally, one of my favorite films too – but isn’t everyone pretty much against the Stasi by now?

I was going to just leave this alone, and indeed people soon started swooping to the rescue of films like Brazil or The Dark Knight, saving them from the indignity of having their ideologies perverted.  But then I saw this.

9. Blast from the Past (1999): Revolutionary Road is only the latest big-screen portrayal of 1950s America as boring, conformist, repressive, and soul-destroying. A decade ago, Hugh Wilson’s Blast from the Past defied the party line, seeing the values, customs, manners, and even music of the period with nostalgic longing. Brendan Fraser plays an innocent who has grown up in a fallout shelter and doesn’t know the era of Sputnik and Perry Como is over. Alicia Silverstone is a post-feminist woman who learns from him that pre-feminist women had some things going for them. Christopher Walken and Sissy Spacek as Fraser’s parents are comic gems.

If you’ve never seen Blast From the Past, let me tell you something: it is a great fucking film.  And this review manages, in five short sentences, to misinterpret almost every aspect of it.  So I was moved to set the record straight.

First of all, Adam (the Brendan Fraser character) is innocent not because he grew up in 1950s America but because he didn’t.  He grew up in a bomb shelter, an entirely artificial world that had the lovable aspects of the 1950s (the Perry Como, The Honeymooners) and none of the many things that blighted the era (McCarthyism, racism and segregation, the Korean War).  The film plays off our nostalgia for the 50s, sure, but it also satirizes those same feelings of nostalgia by showing how they have little to no basis in reality.  Adam isn’t just unprepared for life in the 1990s – he’s unprepared for life.  He’s been raised like Beaver Cleaver, and as a consequence he has profound trouble interacting with society.

James Bowman, who wrote the blurb, also seems to have labeled Eve a post-feminist for no reason other than that she is bitter toward men.  (Bonus fact: her dick-of-an-ex-boyfriend is played by a very young Nathan Fillion!)  In any case, Eve doesn’t learn much over the course of the film except that all men aren’t dirtbags, which is hardly an epiphany exclusive to the “pre-feminists”.  Incidentally, the film’s one example of a “pre-feminist” is Helen Webber, Adam’s mother (played by Sissy Spacek), and she has very few things going for her.  She spends the film trapped, psychologically and physically, by her well-meaning but wildly eccentric husband, with whom she has no real communication.  She takes to drinking to dull her misery, and her husband is so perpetually clueless that in the thirty-five years they’re locked in the bomb shelter he never realizes she has become a chronic alcoholic.  Call me crazy, but I think that Helen’s life could have been improved at least a little if she weren’t such a pre-feminist.

But what is worst about this blurb is that it attributes to the film a simplistic, 90s-bad-50s-good viewpoint that simply does not exist in the source material.  Adam is damaged because of his upbringing and Eve because of hers, but the two fall in love not because of their backgrounds but despite them.  Adam learns that there are things about present-day America that are good.  (For example, the Dave Foley character can be open about his homosexuality, and isn’t forced to, you know, marry a woman and be miserable.)  Eve learns that there are things about Adam that are good.  Very little is learned about the fifties.  (Not least because the film’s timeline starts in 1962.)  And though its aim is more comedy than drama, Blast From the Past hews closer to Revolutionary Road than it does to the film described by Bowman.