Society

World Wars Two

I’ve actually seen American history textbooks whose beginning-of-the-unit timeline says right there in print, “WORLD WAR II: 1941–45”. (The odd thing is that those same textbooks have to acknowledge that the war was already going on in 1940 so that they can teach Lend-Lease and the Neutrality Act.) I would imagine there are Russian textbooks that say the same thing.  Most Americans, I think, know that by the time the USA joined, the war between Britain, Germany and Italy had already been raging for years and had drawn in the Soviet Union months earlier, but we can still shake our heads at the insularity of actually telling children in history class that the war didn’t start until America entered in 1941, when in fact it had begun in 1939.

… Or had it? (Dunh dunh duuuuunh.)

I was a teenager when I learnt that Japan and China went to war with each other in 1937.  The expansion of the Asian war in 1941 to bring America and the British Commonwealth in on China’s side pretty closely parallels the expansion of the European war at the same time, with the Soviet Union and the USA being brought in on Britain’s side.  For China and Japan, 1937–45 represents a period of continuous conflict in the same way that 1939–45 does for Britain, Germany and Occupied Europe.  It bothered me that, though the two conflicts merged into a global World War II in December 1941, the name for the pre-1941 Asian conflict is “the Second Sino-Japanese War”, while the name for the pre-1941 European conflict is “World War II”.  English-language histories of the war would include the Phoney War and the London Blitz, but wouldn’t include the Marco Polo Bridge Incident or the Rape of Nanjing.

For a long time it didn’t seem to be such a big deal.  I would’ve liked the pre-41 European war to have its own name, but after Pearl Harbor, they both merged into a single global war, Axis vs. Allies, right?

… Or did they? (Dunh dunh duuuuuunh.)

Lately I’ve been thinking about how Anglo–Americentric it is to consider the Second World War a unified conflict after 1941.  Even leaving aside that there was virtually no coordination between the European Axis Powers and Japan, we can still look at the three major Allied Powers: Britain, the Soviet Union and the United States.  One of those Allied Powers in particular.

After 22 June 1941, the war in Europe was fundamentally a war between Germany and the Soviet Union.  In terms of men and materiel involved, after the German invasion of the Soviet Union, the Western Allies’ participation in the war—the North African and Mediterranean theatres, the strategic bombing campaign, the D-Day campaign—became peripheral, and there’s a real sense in which, in terms of the grand strategic outcome of the war, our central contribution was in how much we could handicap Germany’s war effort in Russia.  If the Wehrmacht had taken Moscow, or had won at Stalingrad and crossed the Volga and rolled into the Caucasus, and had then been able to transfer its millions of soldiers back to the West, we can’t reasonably expect that we’d ever have been able to dislodge them from Europe.  Even during the Battle of the Bulge in December 1944–January 1945, when the Germans pumped hundreds of thousands of additional troops into the Western Front in their last great push to turn back the British and American advance into Germany and knock the Western Allies out of the war, the total number of German troops fighting in the West was still just a small fraction of the number fighting against the Russians in Poland and East Prussia.  That’s part of why four million of the (very roughly) five million German soldiers killed in the Second World War died on the Eastern Front; it’s part of why four hundred thousand Americans and four hundred fifty thousand Britons were killed during the war, but twenty-seven million Soviet citizens were.

Whereas if we look at the Soviet Union in the Pacific War: Russia shared an extensive land border with Japan (the only one of the three major Allied Powers to do so), by way of Korea, at that time an outright Japanese possession, and Manchuria, a Japanese puppet state since 1931; in Vladivostok, the Russians had a naval and air base within easy strike range of the Japanese Home Islands, far closer than anything the Commonwealth or the United States possessed.  The two countries rubbed up against each other so closely that they were literally athwart each other’s supply lines: Vladivostok thrusts into the Sea of Japan between Japan to the east and Korea and Manchuria to the west, while the Trans-Siberian Railway, Vladivostok’s link to the rest of Russia, actually runs through Manchuria.

And yet the Soviet Union and Japan remained at peace with each other throughout the Pacific War.  Indeed, out of deference to the Soviet–Japanese neutrality pact of 1941, the Russians actually interned British and American airmen who landed in Soviet territory after conducting operations against Japanese targets, just as would happen to belligerent airmen who landed in neutral countries like Switzerland or Spain (though the Russians usually permitted interned Allied airmen to “escape” after a given period).

(Someone’s going to mention that the Soviet Union did ultimately declare war on Japan, on 9 August 1945, three months after Germany surrendered and six days before Japan did the same, finally ending the Second World War.  The Soviet invasion of Manchuria of 1945 is an important event, and in fact I’m mentally drafting a blog post about it as I write this, but it had no effect on the outcome of the war on either continent and is irrelevant to the discussion here.)

Both the Soviet Union and Japan materially hindered their allies by refusing to go to war with each other from 1941 to 1945: peace along the Manchurian–Siberian border meant that Japan was freeing up Soviet troops to fight against Germany, while Russia was allowing Japan to divert all its best troops to the south to fight in China, Southeast Asia and the Pacific islands.

I just can’t see Europe and the Pacific as separate theatres of a single war when one of those theatres saw the Soviet Union locked in a death struggle in the bloodiest and most destructive war humanity has ever fought, while the other saw it remain at peace with the enemy for the duration.  It’s bad historiography.  It assumes that the experience of Britain and America, the only two powers to conduct a unified war effort across both hemispheres, is the definitive one.

So I’m going to be calling them the Second World Wars.  Like “Napoleonic Wars”, that seems to me a good umbrella term under which to gather several separate conflicts which were clearly very closely related and overlapped considerably, but which did not share unified causes, participants, outcomes or even date ranges.  We acknowledge the separateness of, say, the Peninsular War, the War of the Fifth Coalition and the War of 1812, while also acknowledging how inextricably interlinked they are; we should be able to acknowledge the same thing about the wars in Europe and the Pacific.

The Second World Wars, then, to me include at least four conflicts: the European war of 1939–45, the Asian–Pacific war of 1937–45, the Spanish Civil War of 1936–39 and the Winter War of 1939–40. (Wikipedia’s article on the Napoleonic Wars groups the Anglo–American War of 1812 and the Latin American wars of independence as “subsidiary wars” of the Napoleonic conflicts, and I think that’s an excellent way to describe the Spanish Civil War’s relationship to the war in Europe.)

And I mean, let’s be honest.  We all already think of the Winter War, or the Battles of Khalkhin Gol or the Japanese occupation of French Indochina, as part of “World War II”, the cataclysmic period of global upheaval; they’re just not formally included in the definitions of the war itself.  By redefining the Second World Wars as an era rather than as a single conflict, we accord them a status we already know they should possess.


Dependence

I got off the plane at Heathrow last Tuesday morning and discovered that my iPhone utterly refused to receive any cell data signal in Britain.

I’m expecting this to be pretty beneficial to my cell phone bill—the last time I was home, for five days in 2011, my Android and I racked up a hundred forty bucks in data roaming charges—but it did mean that during my trip, I was completely cut off from the Internet or iMessage except when I could connect to wifi.

This was mostly fine.  Mostly.

Our hotel was in Borehamwood, just up the street from the Elstree & Borehamwood train station, so on Wednesday my mother and I decided to go to the National Portrait Gallery.  As we left the hotel room, my mum said, “And you know where we need to get off the train?” and I casually said, “Yeah.”

Reader, that was a lie.  What I had was a superficial knowledge of London geography (I can group a list of Central London landmarks into general categories like “this is in Westminster”, “this is in the West End”, “this is in the City”), and a reflexive assumption that, if I get lost, I can check for info on my smartphone.

Except that day I couldn’t.

We got on the train, and I checked the on-board map to figure out where we should get off.  What we should have done was get off at St. Pancras, so as to take the Tube from King’s Cross to Charing Cross, or else get off at Blackfriars to take the Tube to Embankment.  But I knew that the closest two stops we’d get to Trafalgar Square would be City and Blackfriars, so I had us get off at City because the picture of London I had in my head was one in which the City is close enough to Trafalgar Square for us to walk it.

(It’s close enough that I could have walked it, on my own, if I’d had the familiarity with the geography to know where I was going.  Figuring it out along the way and with my mum in tow, nope.)

So the upshot was that we emerged from the train station into Holborn Viaduct with no blessed idea how to get to the National Portrait Gallery, beyond perhaps, “figure out which direction is west”.

It wasn’t even that harrowing, in the end.  I managed to figure out which of the many bus routes that passed us would head to Trafalgar Square.  (The trickiest part of that was making sure we got on a bus headed in the right direction.)  After visiting the NPG, we decided to head to Bond Street to visit the shop that sells my sister’s jewelry, for which we got directions from the nice lady at the Trafalgar Square Waterstone’s.  (The trickiest part of that was that she told us to follow Cockspur Street and Pall Mall to Regent Street, but it turns out that Regent Street isn’t actually “Regent Street” at its intersection with Pall Mall; it is in fact “Waterloo Place”.)  Then after we got to the end of Bond Street, we turned into Oxford Street for some shopping, before taking the Tube back to King’s Cross and the train home.

But I felt a real disconnect, especially for that first quarter hour after we left City train station and had to figure out which end of the station we’d left from and which bus to take.  When Lisa and I spent a couple of days in Paris in 2009, for the first three or four hours I was really disconcerted by the fact that the conversations and signage that surrounded me were completely unintelligible to me.  I had a somewhat terrifying sense of isolation and helplessness.  Briefly in London last week, I got something of the same experience, just from not being able to pull up the internet on my phone.


On outnumbering and being Outnumbered

Unearnt privilege is real.  Unearnt privilege is also invisible.

Because privilege is invisible, those of us who have it (hi, straight white male here) can often be unaware of it, even when we’re actively exercising it; and this can lead us to think it isn’t real.  This can lead us to insist it isn’t real, particularly when we’re being called out for having (unwittingly or otherwise) profited by it.

But it is real.  If you’re in America and society perceives you as male, or white, or straight, or rich, or Christian (to name just a few big ones), then society affords you a latitude, society caters to your preferences and to your comfort, in ways that it simply doesn’t do for people it perceives as not belonging to those privileged groups.  It makes life for you easier and makes sure you feel more important.  That isn’t to say it makes life easy or makes you feel important, simply easier and more important than would be the case if you belonged to one of the non-privileged groups.

Gear change. I really love this piece in Cosmopolitan calling out Fox News’s Outnumbered for its paternalistic attempt to tell Cosmo to stay in its place and cover the issues women should be reading about (fashion and pleasing men in bed, obvs) while leaving politics with the men, where it belongs.  In its tone, in its substance, in its perceptiveness, the essay is perfect from start to finish.

And it got me thinking about the title of the show.  Outnumbered.  I’m already predisposed to dislike that title, because I don’t appreciate a cable news show appropriating the name of the most hilarious parenting sitcom ever televised.

But if you’re someone who, as I’ve claimed up above, receives unearnt privilege from our society just for being you, and you’re sitting there thinking that’s a load of bullshit, that what you have, you’ve earnt, and that it’s patent liberal hypocrisy of me to use claims of equality in order to give women or racial minorities or LGBTs special treatment, then think about the title of Outnumbered.

This is Fox News’s attempt to get women watching them in the middle of the day, since, after all, the daytime TV market is predominantly female.  And yet it’s not called Outnumbering or In the Majority or anything to emphasise the women who comprise most of its panel.  Instead it’s called Outnumbered.  The producers of this show, in their quest to appeal to women viewers, still take it totally for granted that even in something so fundamental as the show’s title, their audience are by default going to share the perspective of the one male panelist rather than his female colleagues.

That’s not the most pernicious, or pervasive, or harmful manifestation of privilege I could think of, not by a long shot.  It’s not even the worst instance of it just in the criticisms of Outnumbered cited in the Cosmopolitan essay.  But it’s a tremendously clear one.


Show them this

A friend on Facebook linked to Next Time Someone Says Women Aren’t Victims of Harassment, Show Them This, and I’m a big fan.

My first big takeaway is that my very presence as a man means that the women I know are less likely to get harassed while I’m around. Therefore, by definition, I only see them during their most harassment-free times, so it’s inevitable that the picture I have of a woman’s life involves her being subject to far less harassment than she in reality is.

It is therefore important that when a woman tells me she’s being harassed, I believe her. This falls under the basic principle that when a woman tells me something is sexist, I believe her; there are few things more prima-facie sexist than a man explaining to a woman how something isn’t actually an instance of sexism.

(See also: few things more prima-facie racist than a white person explaining to blacks or Hispanics or any other racial minority how something isn’t actually an instance of racism.)

My second big takeaway is that “Not all men” is a perfectly valid way to start off a sentence, as long as you’re not saying it to women, but instead to the men who are the problem. One of the special privileges I get as a male in Western society is that my voice is naturally treated with more authority than a woman’s. There are plenty of men who, when told that what they’re saying is sexist or creepy by a woman, would have no problem dismissing anything she says and concluding that their own behaviour is perfectly fine; but they’d have a much harder time doing that if it were a man who told them.  Sure, they’d most likely get defensive and angry, but being called out for their sexism by a man would stick with them far more than being called out by a woman.

It’s wrong that my voice gets that privilege, but unfortunately it’s true. I can’t change that, but what I can do is use my voice to try and build a world for my kids to live in where my daughter will be heard with just the same weight as my son.


Black Orchid

What my last post boiled down to, essentially, was that I’m old enough now, with around three and a half decades behind me, to have become aware of some of the ways that values and norms of acceptability have shifted just during my lifetime, such that people (of whom I am one) see the world differently now, when I’m thirty-four, than many of us did back when I was, say, fifteen.  Last time, I was talking about sport, but I recently came upon the same phenomenon again in a different context during my family’s multi-year Doctor Who rewatch.

[Image: The Cranleighs.  At least, those of them who survive the story.]

We’ve reached season nineteen in the rewatch, Peter Davison’s first season as the Doctor, and recently we watched “Black Orchid”.  It was first transmitted on 1–2 March 1982, and there’s simply no way the same story in the same way could be told now, in 2014.

(Ten-year-old David Tennant was probably still excitedly watching his future father-in-law’s time as the Doctor when “Black Orchid” premiered, though twenty-three-year-old Peter Capaldi is more likely to have outgrown the programme by then. And, literally, no one had even conceived of Matt Smith yet.)

There are spoilers ahead for “Black Orchid”.

In the story, the TARDIS materialises in the 1920s at the home of Lord Cranleigh, who lives in a huge country manor somewhere in the Home Counties with his fiancée, Miss Ann Talbot, and his mother, the dowager Lady Cranleigh.  (I apologise for referring to a mother-and-son pair as Lady and Lord Cranleigh, because I know that’s confusing, but it’s how they’re consistently referred to throughout the story, except for when the local police commissioner once addresses Lady Cranleigh as “Madge”.)  Lord Cranleigh is the younger brother of George Cranleigh, a famed botanist said to have been killed by natives during an exploratory expedition in the Amazon rain forest; Ann was engaged to George before she agreed to marry Lord Cranleigh after the elder brother died.

[Image: Nyssa, Adric, Tegan and the Doctor]

The TARDIS team (at this time consisting of the Doctor, Adric, Nyssa and Tegan) have arrived on the day of an annual masquerade ball at the Cranleigh residence.  At Lord Cranleigh’s insistence, they agree to attend; Cranleigh and Ann provide them with costumes from the house supply.

What neither the TARDIS crew nor Ann know, though, is that George Cranleigh is not dead; during his expedition to the Amazon, the natives tortured him in a way that left him physically deformed and mentally unbalanced.  Once George was returned to England, Lord and Lady Cranleigh decided to keep his survival a secret, and have been holding him captive in a secret room deep within their manorhouse in order to save both him and themselves the embarrassment of being made a public spectacle.

While the masquerade ball is going on, however, George manages to escape from his captivity, killing one of the household staff in the process.  He then sneaks through the secret passages that riddle the house until he arrives in the Doctor’s bedroom, where he dons the harlequin costume the Doctor is to wear to the ball.

The Doctor does not see George, but he does find the secret passageway that George used to get to his room.  He follows it back to George’s room, where he finds the body of the murdered servant.  He summons Lady Cranleigh and shows her the body; she expresses shock and mystification at the murder, but fails to tell the Doctor about the existence of George.  She promises him that she will call the police immediately, and asks the Doctor not to tell the other guests about the murder so as not to upset them.  The Doctor is reluctant but agrees and returns to his room.

[Image: George Cranleigh, disguised as the Doctor disguised as a harlequin, makes his move on his long-lost fiancée]

George, meanwhile, with his face covered by the harlequin mask, has infiltrated the masquerade, where he brutally attacks Ann Talbot and murders a second servant.  He then escapes back into the depths of the house, where, after he politely returns the harlequin costume to the Doctor’s room, he is secretly recaptured and returned to captivity by Lord and Lady Cranleigh.  The Doctor, meanwhile, has returned to his room, where he puts the harlequin costume on and arrives at the masquerade just in time for Ann to identify him as the man who attacked her.

This is followed by a fairly predictable twenty minutes in which Lady Cranleigh refuses to help the Doctor and covers up her knowledge that her son George is in fact the culprit, leading to the police arresting the Doctor for murder.  Matters come to a head when George escapes once more.  He sets fire to the house, then kidnaps Nyssa and retreats onto the roof with her as a hostage.  Lord Cranleigh redeems himself (apparently) when he and the Doctor follow George onto the roof of the burning manorhouse and persuade him to release Nyssa.  Lord Cranleigh, realising the error of his ways, steps forward to embrace his brother, but George instead hurls himself off the parapet and falls to his death.  I think we’re meant to take George’s suicide as demonstrating just how far his mind had gone, but it feels to me more like he was simply terrified of the man who has kept him tied to a bed in a darkened room for the past two years.

But it doesn’t matter why George has killed himself; he has, so now that’s cleared up, the TARDIS team and the Cranleighs can all be friends again, and there is much smiling as our heroes bid farewell and depart through the TARDIS doors.

Which, of course, points up the biggest problem with viewing “Black Orchid” nowadays—that this ending is considered happy.  The conflict has been resolved, and so everyone can move on with their lives.  This necessarily implies, then, that the conflict in “Black Orchid” is that George Cranleigh has survived his torture in a deformed and unbalanced state, and not—as I think any viewer in 2014 would expect—that his brother and mother are so monstrously inhumane that they have secretly kept him imprisoned in a tiny room with no natural light because admitting that he is still alive would embarrass them.

(You can make the argument that George Cranleigh might have been so proud a man that he would rather have the world think him dead than be exposed to public scrutiny in his present state; but his repeated and violent attempts at escape would seem to give the lie to that idea.)

It’s important when we look at “Black Orchid” to distinguish between ideas that the story thinks are A-OK by 1982 standards and ideas that the story presents as A-OK by 1920s standards.  We’re obviously not meant to think it’s all right for the Cranleighs to so callously imprison George, or for Lady Cranleigh to allow an innocent man to be arrested for murder rather than admit the truth, but the problem is that our reaction is meant to be one of disapproval rather than condemnation.  Once they stop engaging in their objectionable behaviour—ideally by seeing the light and setting George free, but, you know, I guess him throwing himself off a building and thus removing the dilemma works just as well—then there don’t need to be any consequences for what they’ve done, and it shouldn’t even occur to us that they’re morally responsible for their son/brother’s death.  So incongruous is the ending that I was certain, from my previous viewings of “Black Orchid”—on TV in the mid-90s and when the DVD first came out in 2008—that the Cranleigh brothers had fallen to their deaths together at the story’s climax.

There’s something really ghastly about that final farewell scene, with all the smiles and hugs goodbye.  Tegan, as the only human amongst the Doctor’s companions of the moment (and as a pretty outspokenly judgemental character), is the voice of the 1982 viewer, but the only emotions she displays here are excitement and gratitude when the Cranleighs let the TARDIS crew keep the costumes they wore to the masquerade.  (Read in a broad Australian accent: “D’ya really mean it?  We can keep them?”)

And then there’s the deep creepiness of Lord Cranleigh’s relationship with Ann Talbot—and when I say creepiness, that’s definitely something that we bring to it as 2014 viewers, because the script doesn’t expect the 1982 viewer to have any problem with it whatsoever.  Lord Cranleigh, a man in his mid-thirties, lumps his fiancée in with a group he refers to as “the children”, by which he means the teenagers who are too young to be served alcoholic beverages.  And yet not only is Ann, who we might therefore guess is twenty-one at the oldest (actress Sarah Sutton was twenty at the time the episodes were taped), old enough to be engaged and live with her fiancé, but she’s apparently old enough to have been engaged to an even older man, George Cranleigh, several years ago.

(I think we’re meant to conclude that Ann is the Cranleighs’ ward, which makes the idea of her living with them totally fine at the cost of making her engagement to successive Cranleigh brothers much, much skeevier on the men’s parts.)

And if we the viewers find it impossible to forgive the Cranleighs for what they have done, how much worse is it that Ann seems to forgive them in no time at all?  Sure, she has a tearful exclamation of, “How could you!” when first she finds out, and flees from the room, but her disgust with them seems to last approximately six or seven seconds.  The next time we see her in the sort of context that allows her to show us her state of mind, during the goodbye scene, she is snuggled comfortably in the arms of Lord Cranleigh, the man who knew that her fiancé was still alive but kept that knowledge from her, imprisoned her beloved and used that pretence as a cover to allow him to woo her himself.

That final scene isn’t the be-all and end-all of the story’s problems, but removing it would go a long way to rinsing out the bitter taste that “Black Orchid” leaves in the mouth.  In my last post I wrote from the perspective of being left behind as society changed around me; this time I’m glad that it is I who have changed with society and left behind the outlook that would have allowed us to think of this story as having a happy, or even an acceptable, ending.


New Orleans and the world that made it

Yesterday I finished The World That Made New Orleans: From Spanish Silver to Congo Square by Ned Sublette, a history of the first hundred years of the Crescent City, from its founding in 1718 through 1818.  It was a topic I went seeking out, I freely admit, because I’d been playing Assassin’s Creed: Liberation, which is set in New Orleans in the 1760s and has as its hero a femme de couleur libre.

Sublette opens his book by telling us that it’s “not about music per se, but music will be a constant presence in it, the way it is in New Orleans.”  This if anything understates the presence of music in the book, which shouldn’t be surprising for a city that has for two hundred years been known for the vibrancy, uniqueness and Africanness of its musical traditions (just like its religious and cultural traditions), through which it birthed the art form that is modern American music.  The book definitely comes across as a work written by someone who was brought to the history through a love of the music, rather than someone who was brought to the music through a love of the history; but as such, it gives you a perspective on the history of New Orleans that’s absolutely necessary and couldn’t have been achieved the other way around.  Sublette occasionally assumes that his readers will find a certain specific commonality between the musical and dancing traditions of New Orleans and Trinidad, or Cuba and Guadeloupe, as prima facie fascinating as he does, but that’s a small price to pay for that perspective.

(The other small price to pay is Sublette’s insistence on referring to foreign monarchs by their names translated into their own national languages, even for those monarchs who are known in English only by their English-language names.  So he refers to Felipe II of Spain, not Philip of Spain, and to Carlos III, not Charles III, making it tough to follow the fact that he’s talking about individuals who already have established names and identities in English-language historiography.  Maybe he worked for NBC during the 2006 Winter Games.)

(No, I’m never going to let that go, NBC.  We speak English, so we call the city Turin.)

The book’s title is an accurate one—this is a book about the world that made New Orleans, and as much time is spent on history elsewhere as is spent on the city itself.  This could well be because, for most of its first century, New Orleans was a small, distant outpost, and there wouldn’t be much more with which to fill four hundred pages than there would be for a history of the first century of Charleston, South Carolina, or Bridgetown, Barbados.  So what we get instead are introductions to all the distant places and events that poured themselves into New Orleans and forged the city’s unique character.

There’s a chapter on French court life during the regency of the duc d’Orléans (during the childhood of Louis XV, the only French king ever to rule over New Orleans), since it was the duke who first sent French settlers to the mouth of the Mississippi and for whom their settlement was named.  There’s a chapter on life in prerevolutionary Haiti and a chapter on the revolution itself, which led so many refugees, eventually, to resettle in New Orleans—white men and the black slaves and mixed-race concubines they brought with them.  (Those chapters made me look forward to playing Assassin’s Creed: Freedom Cry, whose hero is an escaped slave washed up on the shores of prerevolutionary St-Domingue.)  And when we get to 1803, there’s a chapter on Thomas Jefferson, architect of the Louisiana Purchase, and another on the booming American slave trade of which the Big Easy suddenly found itself the fulcrum.

These last two were the chapters that blew my mind.

First, Jefferson.  Sublette spends a chapter voicing, eloquently and incisively, exactly the same reaction I have whenever the morality or virtue or greatness of Thomas Jefferson is discussed.  Yes, Jefferson was the primary author of the most famous affirmation of political self-determination ever written.  Yes, he forcefully and repeatedly articulated that the only way for Americans to practise the freedom of religion that we hold so dear is for us to maintain a government that is wholly free from religion and entirely secular.  Yes, throughout his life he wrote against slavery and wrote of it as an evil that does harm to everyone it touches.

He also owned other human beings, his entire adult life.  He lived a life of leisure and comfort, made possible only by the labour (and lives and good health and children) he stole from them every day, a life in which he generated huge debts that he knew quite well would be paid by the breakup and sale of the families he owned after his death.  He raped at least one of his slaves.  (And yes, it is rape to have sex with a human being you own, full stop, and it deserves to be called out as such.  And the fact that the woman he raped was his dear deceased wife’s half-sister only makes it creepier.)  And through the Louisiana Purchase, as Sublette points out, not only did he significantly increase the extent of American slavery’s territorial grasp, but he gave the slave industry a crucial shot in the arm that was a major factor in allowing it to boom right up until the Civil War.

Whenever the moral hypocrisy of the man is pointed out, the first half of all that always gets brought up as if it somehow absolves him of the moral responsibility of the second half.  I’ve never understood why that would be, and apparently neither has Sublette.  Rather, the second half negates whatever praise he might have earnt from the first.  Sublette explains at length why that is, and my original idea for this post was simply to transcribe the entire Jefferson chapter verbatim, until I considered, you know, the law.  (Also all that typing.)  So I’ll content myself with just two paragraphs:

No, we don’t know absolutely for certain if Master Tom did impregnate Sally or not.  If the matter were tried in a court of law, with a presumption of innocence and an expensive law firm to defend Jefferson (which is how a number of mainstream American historians seem to have seen their role in this case), we might have to let him off the hook for lack of definitive proof.  On the other hand, if he were a poor man with substantial circumstantial evidence against him and a public defender, he’d accept a plea bargain, the way some 95 percent of criminal cases in the United States are resolved now, and get off with a guilty plea and a reduced sentence.

But then, no one has accused Jefferson of a crime.  After all, you can do with your property as you like.

And so we come to the chapter on the American system of chattel slavery.  I’ve done a bit of research on slavery in the past few years, though (like most Americans) I still don’t know nearly as much about it as I should.  I do have it on my reading list to read a book devoted to the institution, but I haven’t got there yet; so it’s entirely possible (hell, even likely) that the points Sublette makes, which have significantly shifted how I looked at American slavery, are points that are very commonly made in the literature about it.

I did already know a few things.  I knew that both abolitionists and slavery advocates believed strongly that slavery had to continually expand in order to survive.  This means, for instance, that when Abraham Lincoln reassured the South that he did not want to abolish slavery, merely contain it within its present extent, both Lincoln and the slaveowners were well aware that “containing” slavery was code for “condemn it to a slow, gasping death without the need for legislation”.  And I knew that, generally speaking, the American slave population expanded from the northern and eastern states of the South into the southern and western states.  And I knew that Congress forbade the slave trade—the importation of slaves from locations outside the United States—in 1808, the very earliest date allowed by the Constitution.

But I hadn’t put those three things together and carried them out to their logical extreme.  We all know—or we all should know—that Eli Whitney’s invention of the cotton gin in 1793 revitalised the American slave trade.  It industrialised the processing of cotton for its use in manufacturing, and so it vastly increased the demand for unprocessed cotton; and unprocessed cotton, because of the intensity of labour, miserable conditions and lack of education required to harvest it, is something that lends itself readily to slave labour.  Then, following close on the heels of the cotton gin was the Louisiana Purchase, opening up vast new lands to plantation cultivation, and therefore to the slave trade.

It’s easy, therefore, to see slavery and its hold on the South as an unfortunate accident of history—tragic, monstrous, criminal, but still also accidental.  Slavery, such an argument would go, only took such economic hold because it was needed to prop up the cotton industry, and it was to cotton that the Southern economy was dedicated.

But that ignores the facts.  Slavery very quickly became an industry in and of itself, an industry that was perpetuated just for its own sake.  Those plantations in Virginia and North Carolina and parts of Kentucky had been under cultivation for a hundred years—in the case of Virginia, two hundred.  Their soil was spent.  They could be more profitable planted with cotton than with tobacco, sure, thanks to the cotton gin; but they still wouldn’t be nearly as profitable as the cotton plantations in the virgin soil of Alabama, Mississippi, Louisiana or Arkansas.

But those new plantations presented an opportunity to the planters—to turn their existing slave populations into a source of profit, by using them as seed stock from which to breed the slaves who would fill up the new lands.  (Does that sound horrible and dehumanising?  Good.)  It’s not just that slavery thrived because it supported the thriving cotton industry; the cotton industry thrived because it supported the thriving slave industry.  We can talk of cotton plantations in Virginia and Carolina and Kentucky that operated on slave labour; but we might also talk of slave plantations that happened to grow cotton.  The cotton there was grown not as an end in itself, but as something for the slaves to do during the ten or fifteen years it took to raise a baby up into a saleable field hand.

That’s why slavery “needed always to expand in order to survive”; because as plantation lands filled up with slaves, their owners needed new, virgin lands opened up in which to sell their children.  That’s why Congress outlawed the importation of foreign blacks on literally the very first day allowed by the Constitution: because, like a tariff on foreign manufactures (the existence of which the Confederacy would denounce as being the other reason they were seceding), it kept the cost of the domestic good artificially high.  And that is why slave migration followed a basic north and east to west and south pattern: because slaveowners in the more settled regions were actively breeding slave populations who were always intended to be sold on down to newer plantations.  (In countless cases, the slaveowners were of course actively fathering parts of the population that they always intended to sell.)  We know that slave trading frequently caused the separation of families and we think their owners were monstrous for allowing this (the scene between Benedict Cumberbatch and Paul Giamatti in Twelve Years a Slave touches on this), but we are perhaps less cognizant of the idea that many families were created so that they could then be broken up—so that their children, when they reached an age where they’d be capable of a full day’s work, could be loaded onto flatbottom boats in Wheeling or Louisville and floated thousands of miles downriver, to be displayed in a showroom and sold on an auction block.

The World That Made New Orleans has twenty-two chapters, and those are only two of them.  The book had its weaknesses, but on the whole I’m glad I read it—and I’m really glad I read those two chapters, because they’re going to inform how I look at their topics for a long time.


The philosophy of spoilers

I talked a while ago about when I realised how much more enjoyable stories become when I avoid spoilers, and the basic principle I derived from that.

Right now spoilers are a big topic, because of the Olympics.  If, like me, you’re on the East Coast, you have to wait until 8PM EDT for NBC to start their broadcast of the day’s major events.  That’s 1AM BST–in other words, it’s right when actual competition is wrapping up for the day, and it’s hours and hours after many of the events we’re most interested in have finished.  You have to wait three hours longer on the West Coast.

But while you’re waiting, lots of your friends on Twitter and Facebook already know the outcome, either because they watched it live in Europe or because they’ve gone online–maybe even to NBC’s website itself–so they don’t have to wait.  And they’re talking about it.

I’ve seen both extremes in reaction to this.  I’ve had someone in my stream declare that we need to hold our tongues even after this stuff airs on NBC, to accommodate those who are watching on DVR(!).  And I’ve had someone tell us all that you can either have Twitter or not be spoilt, but that you’ve got no right to expect people online to consider others when spouting spoilers.

I think they’re both wrong.

I’ve thought about this quite a bit, and I’ve refined my position down to a basic standard:

If there’s a time we’re all supposed to gather together to watch something, I think it’s really rude to spoil it beforehand.  What this means, as far as the Olympics go, is that it’s my own responsibility to avoid what’s being said by the people I follow who are actually in Britain–they’ve all seen it live on TV (or in a few instances, in person).  But those in America, who are heading online to see it before the rest of us?  They should be taking the rest of us into consideration.  And I’m speaking here as someone who is far more interested in Team GB than Team USA, so this system leaves far more of the onus on me than it does on others.

Note that this does not mean that you can’t talk about what you know. Just have the politeness to ensure that people can clearly see that they’re about to read a spoiler before they read it.  The best way to do this is generally to start off with SPOILER in big, obnoxious capital letters.

For TV shows, that rule stands until the episode airs. (Yes, that includes not spoiling things that are being revealed in the adverts.) For a big movie, until it’s been in release for a week. For a book?  As long as it’s a new release (ninety days from publication), certainly, and then probably as long after that as it remains a top ten bestseller.

Note also that this is a minimum.  I for one have always tried to maintain a higher standard.  As far as movies, TV shows, books go?  I try always to include a spoiler warning in some form.  I was going on thirty the first time I saw The Third Man, over sixty years after the film’s first release.  Yet somehow I’d managed never to be spoilt on one of the most famous movie twists of all time, and it was brand new to me.  If I’d known what was coming, it’s entirely possible I wouldn’t have nearly the appreciation I now do for what’s become my all-time favourite film.  But as far as sport goes?  If I’m watching a live event on TV, and I have something to say about it, I say it.

We can talk about the things that engage us.  But we don’t have to trample all over everyone else’s engagement with them to do it.


Signature moment

Whenever a new actor is cast as the Doctor or as James Bond, one of the comments that invariably gets made is that now, that actor knows what the first line of his obituary will be. And it’s true–Matt Smith is twenty-nine years old, but he knows that no matter what else his life holds for him, his obituary will introduce him as “the eleventh actor to portray the title role in the BBC television programme Doctor Who”. There are really only two actors across the two roles who’ve accomplished enough else in their careers that most people don’t automatically think of the Doctor or 007 when they see their faces–Sir Sean Connery and Peter Davison–but even they know that those relatively brief periods of their early lives will still be the first thing that shows up in their obituaries.

Similarly, sportsmen and sportswomen have moments that define their career in much the same way. It’s pretty much impossible to run a news story about Joe Namath without showing the footage of him jogging off the field of Super Bowl III with the single finger raised over his head in victory. Brandi Chastain will for the rest of her life be the player who whipped her shirt off after scoring the goal that won the shootout against China in the final of the women’s World Cup. Whenever Michael Phelps gets mentioned on TV, we’ll see his one hundredth of a second victory over Milorad Čavić. Gordon Banks’s save against Pelé’s downward header at the 1970 World Cup finals in Mexico, when his body seemed to defy the laws of physics, is the signature moment for both players, as Pelé ruefully admits: “It’s amazing because it was 35 years ago, but people ask me about that save all the time–not just in England, but all over the world. You know, I scored a lot of goals in that World Cup, but people don’t remember them. Sometimes I watch TV and before games they show this save. I say, ‘Why don’t they show the goals?'”

There are several things I find interesting about these career-defining moments. The first is that we don’t know they’re coming. There was no reason, until it actually occurred, that the finish to Michael Phelps’s seventh final of the 2008 Summer Games should have been any more significant than the dozen or so races he’d already swum at those Games (counting both qualifiers and finals), during which he’d already won six gold medals, or than the following race, in which he hoped to win an eighth gold medal. There was no reason to expect that that one particular shot from Pelé would result in what most football analysts believe is the single greatest save a goalkeeper has ever made; indeed, it’s precisely because it was unexpected–because it looked, at the moment Pelé struck the ball, impossible–that it’s so great.

The second is that it’s not necessarily the player’s greatest moment. Phelps’s win was the first time, in seven attempts, that he failed to set a world record in a final race in Beijing. Brandi Chastain’s bra-baring celebration came after she scored a penalty kick, probably the most routine and pedestrian thing a goal scorer can do. Indeed, sometimes it’s a really low moment that becomes the first thing people associate with a sportsman–the blood trickling down Greg Louganis’s forehead; Paul Gascoigne’s blubbering tears upon receiving a yellow card in the World Cup semi-final against West Germany.

So if it’s not necessarily their most brilliant moment, then what makes that indelible instant that will come to define a player’s career in the years ahead? It ends up being a combination of factors. The spectacle of the moment is certainly important. But so is the importance and visibility of the context–who knows how many other acrobatic, apparently impossible saves Gordon Banks made that happened to be in league matches for Leicester City against Blackpool or Burnley rather than for England at a World Cup finals?

Or there’s the possibility of the moment running against expectations. Like Pelé having his shot saved. Or Denis Law, who scored about two hundred goals for Manchester United (his record as United’s most prolific scorer in European competition stood into the twenty-first century, when it was broken by Ruud van Nistelrooy), but the goal that always gets mentioned is the one he scored against United, when he moved on to a season at Manchester City in the twilight of his career, for that was the goal that, as legend has it, condemned United to relegation to the Second Division.

Wayne Rooney’s winning goal in Saturday’s Manchester derby has been getting reshown in sports coverage ever since he scored it. How big has it become? Big enough that it got discussed here on Washington, DC, talk radio, on Tony Kornheiser’s local show. And it didn’t even need to be introduced or given context–“Did you see Rooney’s goal?” was all Kornheiser was asked, to which he responded, “Yeah, I did.”

I think Rooney’s goal has a strong possibility to be that signature moment of his career–to be the first line of his footballing obituary, if you like; the one moment most likely to be referenced, to be replayed, every time Rooney is mentioned following his (eventual) retirement from football. So many factors are aligned in its favour.

It came against Manchester City. It came thirteen minutes from time, shortly after City had equalised. The eyes of the whole world were on that match; with United in first and City in third, it was the most significant Manchester derby since that 1974 meeting when Denis Law scored for City. For Wayne Rooney personally, it came after a very tough season–his troubles in the tabloids, his declaration (subsequently retracted) that he wished to leave Man United, and of course his ten months dry of goals scored in open play, a period he really only ended a couple of weeks ago with his two goals against Aston Villa.

And the goal itself is spectacular enough on its own that, even if it had come against Luton Town in the fourth round of the League Cup, it would still have made any Top Ten Goals of the Season list.

What it really depends on is how the rest of the season plays out. Should Man United lift the title in three months, then that goal will be cemented as the key image of Wayne Rooney’s career; only scoring the winner in a World Cup semi-final or final will be able to dislodge it.


Delkery

That was my tweet the other day, prompting a short flurry of conversation. The initial suggestions were bakery and deli, but I rejected these. Panera, the Atlanta Bakery and the Corner Bakery (and McAllister’s, which was shortly added to the list) might be considered a subset of delicatessens and bakeries, but they share a quality between them that other delis and bakeries don’t have. I suggested several names for this type of place–gourmet deli, hipster deli, pretentious deli–before Diane combined deli and bakery to get the title of this post.

But what is that quality that these four places have in common? Is it being gourmet? Is it being upscale? They’re definitely not actually gourmet, though I suppose they qualify as “gourmet” in the sense that marketing has taught us to use it nowadays. I guess upscale is as good a word to use as any, but I still question whether or not “upscale” has any actual meaning. How would we define upscale?

Let’s broaden our scope a little, to include other upscale places. Starbucks. Barnes and Noble. Borders. What do all these places have in common with the delkeries? They’ve all been constructed over the past twenty years to be places where the customers are encouraged to spend their free time.

There’s no reason to spend longer than twenty minutes in a sandwich shop or coffee shop–you stand in line, you place your order, then you either eat/drink and leave or take your food/drink with you. A bookshop might require slightly longer–browsing through the books, after all–but browsing should really be done standing in front of the shelves, not sitting in a cafe with a latte and a stack of books you haven’t paid for sitting on the table.

But the corporate offices want you to stay long enough that you end up spending more of your money, in bits and pieces over several hours. (Though speaking as a former Barnes & Noble employee, the staff generally don’t want you sitting around, getting underfoot, making more work for them without really spending enough to justify it.)

So they have crafted their stores with leather upholstery, non-intrusive lighting, ambient music and pretentiously-named, slightly overpriced food. And we walk in and look at what’s on offer in the bakery case and we feel refined, and cultured, and in comfort, and for a little while we feel slightly above our actual station as members of the Great American Middle Class. So we settle into our chairs with our mocha and our cranmelon scone and we chat with friends or do our homework or work our way through a stack of magazines we haven’t paid for.

I love the Corner Bakery, and I love Barnes & Noble, and while I don’t love Starbucks (because I can’t stand coffee), I was overjoyed when the Starbucks adjoining the B&N where I used to work started serving “gourmet” sausage McMuffins, because they were delicious. So I’m just as much a part of the phenomenon as everyone else–but it’s still a phenomenon I find fascinating. And something, I think, that’s really only come to be in the past two decades.


Anglophone privilege

It shouldn’t be a surprise to anyone that immigration is an issue I care a lot about. Most of the public debate about immigration in America gets me pretty angry, since it generally seems to comprise a whole lot of blame and ignorance directed at a minority powerless to defend themselves.

What is a bit more surprising, perhaps, is that I still care just as much once the immigration conversation zeroes in on language. After all, it’s easy to dismiss my interest in immigration as self-interest–I am an immigrant. But when it comes to languages, self-interest isn’t part of the deal; I emigrated from one English-speaking country and immigrated to another. I didn’t have to learn a new language at all (besides reducing my vocabulary somewhat, of course).*

But my hackles go up instantly whenever I hear people complaining about the increasing prevalence of Spanish (because it’s always Spanish) in American life. Always, always, it contains some variation of the sentiment, “If I moved to Germany, I’d definitely make some effort to learn German!”

There are two assumptions that always underlie people’s complaints about this, and they both infuriate me. The first is the implication that immigrants are simply choosing not to learn English–that it is, fundamentally, a function of laziness or apathy. Not that they can’t learn a foreign language, whether because of a lack of aptitude or money or time or competent teachers.

The second is the exceptionally First World picture of how migration works–that’s not migration so much as moving house, only with passports added. You pack up all your stuff and head off with your wife and kids to a nice, comfortable house Somewhere Else and immerse yourself in a new culture. That the penniless economic migrants or political refugees who have gathered in this country from Mexico, the Caribbean, South America, Africa, Asia, are simply living out their own versions of A Year in Provence. You cannot say the things these people say and not have that picture of migration in your head.

This is privilege at work, pure and simple. People are living much harder lives than you or me, purely because of where they were born, and they have done a very hard thing that neither you nor I have ever done–left behind their entire lives, frequently including their families, and migrated to a foreign, alien place to try and improve their lot. (I didn’t do it because my parents did–I simply came with them.) And their lives are made all the harder because they cannot readily communicate with the people around them, cannot ask for help, cannot explain what is wrong, cannot be hired for any number of jobs (you know, the jobs that actually pay a living wage and have prospects of advancement) because they can’t master the language.

And we’re complaining, not because their failure to learn English actually presents us with any real hardship, but because the presence of advertisements in Spanish or the phrase “For English, press one” prevents us from simply ignoring this large population of the unprivileged who surround us.


*That means using fewer words.**

**Bazinga.
