Archive for the ‘technology’ Category

The most recent data released by the U.S. Bureau of Justice Statistics (2008) disclosed that 11.7 million persons in the United States experienced at least one attempted or successful incident of identity theft during a two-year period.  The number of incidents increased 22% from 2007 to 2008.  For the past decade it has been a federal crime to “knowingly transfer, possess, or use, without lawful authority, a means of identification of another person.”  A steady stream of news stories reminds us of the dangers of identity theft and identity impersonation—from credit card and social security numbers stolen, to bank accounts hacked into, to child predators trolling social networking sites under false identities.

Our legal system and news media have sent us a clear message about identity theft: it is criminal.  It is unacceptable.  It is a menace to society.  You need to protect yourself, vigilantly, against it.  But in the imaginary world of popular culture, the ethics of identity theft is, well, far less rigid.  In the fictional stories we tell ourselves about identity fraud, the act itself can actually be a justified means to a moral end.  In the past year alone, three films have been released with plots that are propelled by characters who assume false identities, identities inhabited “knowingly” and without permission.  In the sci-fi film Source Code (2011), a soldier is transposed into the body of an everyday man who is riding a train that will blow up in eight minutes; he occupies the man’s body in order to try to find the bomber on the train.  In Unknown (2011), a doctor awakens after a car accident to find that another man has assumed his identity, is married to his wife, and has his career (the other man’s photograph even shows up when the doctor’s name is searched on the internet).  Finally, the documentary Catfish (2010) explores the world of Facebook and the ways in which users perform their identities for people they’ve never met in real life.  I don’t want to give too much more away about any of the three movies, as each contains various twists and turns, but suffice it to say that identity cloning and identity manipulation is a theme that runs through all of them.

And here’s what strikes me about this cultural content and its broader context: in each case, the ethics of assuming someone else’s identity is not exactly black and white.  In fact, in the moral logic of Source Code, posing as another person is wholly justified if it can help save the lives of others.  In Unknown, it turns out that identity fraud is ultimately a redeeming act for one of the characters.  And Catfish challenges the audience to reflect on whether the good works that can result in the process justify a questionable identity performance.  In short, if another person’s identity is assumed for reasons other than criminal purposes and/or economic gain, then it may just be okay.

It’s not just these three movies that explore the ethical gray area of identity theft.  We can go further back into the decade to find others.  As readers of the Harry Potter series know, the magical “Polyjuice Potion” transforms the imbiber into another person, at least in outward appearance.  While it is used for evil purposes in one of the books, it is also used by Harry and his friends Hermione and Ron throughout the series to obtain valuable information and pursue dangerous missions.  Their temporary acts of identity theft are deemed morally defensible.  Another example: the hero protagonist of the popular 2002 film Catch Me If You Can is a clever identity imposter—based on a real-life criminal—whom the audience finds itself rooting for.  And in the TV show Mad Men (spoiler warning…), the lead character is not actually the “real” Don Draper, but a Korean War vet named Dick Whitman who has assumed the identity of an officer killed in the war—an officer named Don Draper.

Of course, the theme and plot device of false identity is not a new preoccupation of literature and cinema.  Mark Twain explored it in Pudd’nhead Wilson, Herman Melville in “Benito Cereno,” Vladimir Nabokov in his dark novel Despair.  Older movies, like The Day of the Jackal, the film version of the play Six Degrees of Separation, and the science fiction story Gattaca, all revolve around identity fraud.  But in the world we live in today, where online identity theft is a rapidly growing criminal phenomenon, I find these contemporary iterations of the theme even more noteworthy.  As the performance of our identity becomes more fluid, unstable—indeed more unconfirmable—in this online age of social networking, it seems that our sense of the morality of identity performance has become more muddied and culturally contested.  The law and the news media are telling us one story about the ethics of identity theft and manipulation; our popular culture is telling us another; and, I daresay, what we post on our own Facebook and Twitter accounts is telling us a third.

Read Full Post »

Whenever someone (often on the internet, but not always) wants to emphasize how smart the collective masses are, or point to the effectiveness of the “hive mind,” they always end up looking at Wikipedia.

But why hold Wikipedia up as a bastion of either mass-generated wisdom or productivity? Why make Wikipedia the pinnacle of the possibilities of collective action?

Yes, Wikipedia is both a tremendous source of information and an example of the possibilities of a kind of open-ended collaborative effort to tap the massive labors of folks around the world who would, historically, have been shut out of the process of knowledge production.  And that opening-up of access is a good thing, generally speaking.  However, such opening-up comes at a price: the result may be a more generally accurate account, but it’s a less curious, less inspiring, and ultimately less informative product.

Wikipedia gives us information but beyond that, I’m not sure it gives us very much.

Pointing to Wikipedia or the Encyclopedia Britannica as the best of a culture’s intellectual output makes about as much sense as looking to sausage for the best of a culture’s protein sources.  Both are collections of abstract and often reductive information — no matter how extensive that information is.  Rendered, often, in the authoritative prose of clumsy journalism, both sources are and remain locations for fact-checking and the collection of surface-level information.  They’re useful, and (to extend the sausage metaphor) they might even be a little filling, but they are hardly the finest examples of intellectual production to which we can point.  Wikipedia is great for checking dates of major events or grabbing thumbnail sketches of historical figures, but it is not a terribly productive location for fostering curiosity or encouraging complex intellectual investigation.

Wikipedia offers information at its most basic and banal. It does not signal the “death of the expert,” but its eternal life. And I don’t mean the good kind of expert, either. I mean the niggling, nit-picking kind of expert who treasures the firmness of facts over the flexibility or rigor of intellectual labor.

Wikipedia is great at presenting gross information, but it is much worse at presenting nuanced argument.  Encyclopedia Britannica is not much better, for that matter, but the continual stoking of the Wikipedia v. Britannica debate suggests not the triumph of the collective mind over the individual brain, nor the death of the expert.  Instead, it illustrates the embrace of the middle by the mass — which doesn’t tell us anything we don’t already know, and which is exactly where Wikipedia belongs.

Coming Soon: The Death of the Death of the Expert

Read Full Post »

The great irony of The Social Network, of course, is that its central theme is not connectivity but disconnection.  A film about the genesis of a technology designed to bring people closer together features a main character who experiences the painful dissolution of social bonds.  The plot is driven by protagonist Mark Zuckerberg’s exclusion from social groups, the end of his romantic relationship, and the collapse of his close friendship.  This sense of disconnection is pervasive despite the fact that The Social Network is a crowded movie: characters are almost never seen alone (only when they are individually “wired in” to their computer screens), and certainly never seen enjoying a moment of quiet solitude.  Instead, the characters are regularly packed together in small places—a legal office, a dorm room—or in big, loud, impersonal places—a nightclub, a drunken party.  But these spaces, despite their capacities, are repeatedly portrayed as lonely and alienating.

While the characters may inhabit a decidedly unsocial non-network beneath a façade of constant social interaction, the film itself serves as a remarkably vibrant cultural network.  For the student of American culture, The Social Network is a fountainhead of intertextuality.  Perhaps appropriately for a film about Facebook, The Social Network functions as a busy crossroads of cultural referents, many readily recognizable, others unstated but nevertheless present.  The movie obviously plays on our familiarity with Facebook, but it also features appearances by Bill Gates and the creator of Napster (both portrayed by actors), a musical score by industrial rock luminary Trent Reznor of Nine Inch Nails, and a Harvard setting (even though campus scenes were mostly filmed at The Johns Hopkins University).  It is also directed by David “Fight Club” Fincher and written by Aaron “West Wing” Sorkin.  One of the students on Zuckerberg’s Facebook team is played by Joseph Mazzello, the actor who starred as the little boy Tim Murphy in Jurassic Park.  In other words, what is really “social” about The Social Network is the way in which it serves as a pulsating intersection for a range of icons, myths, and expressive forms that circulate in the audience’s collective imagination.  It is populated by cultural detritus and ephemera with which we are friendly, if you will.

I imagine these multiple and varied cultural associations may in part explain the source of the film’s pleasure for viewers. The experience of viewing The Social Network is akin to data-mining.  It rewards a 21st century audience accustomed to scanning mounds of digital information and quickly categorizing that information into familiar frames of reference. For example, the brief appearance of Bill Gates evokes our Horatio Alger myth of success and the American dream.  The presence of Sean Parker of Napster infamy conjures associations with the lone rebel, hacking the system and sticking it to the man.  And Zuckerberg himself comes across as a nerd pulling one over on the Olympic jocks.

Reznor’s musical score triggers memories of his earlier work on such Nine Inch Nails albums as Pretty Hate Machine (1989).  That record opens with the line, “god money I’ll do anything for you/god money just tell me what you want me to,” and builds to the chorus, “head like a hole/black as your soul/I’d rather die/than give you control.” Pretty Hate Machine, with its loud synthesizers, drum machines, and vocal wails, is not unlike The Social Network: an expression of male adolescent angst and rage confined inside an electronic world.

And there are still other resonances: Fincher’s directorship reminds us of his previous explorations of masculinity and antisocial behavior in Fight Club, The Game, and Zodiac.  Sorkin’s dialogue echoes the brainy loquaciousness of the political characters he developed for the television show The West Wing. Nearly twenty years ago, in Jurassic Park, actor Mazzello played a child victimized by a technology abused and gone awry.

As I watched The Social Network, I even found correspondences with The Lonely Crowd (1950), the sociological study of “other-directed” social character that became a bestseller in postwar America.  Co-authored by David Riesman and his team, the book argues that Americans are increasingly motivated by the need for peer acceptance.  More and more, our “inner gyroscope” is set in motion not by individualistic self-reliance, but by the drive to win approval and fit in.  Consequently, our time is spent trying to decode what is popular and adjust our personalities accordingly: “The other-directed person acquires an intense interest in the ephemeral tastes of ‘the others’… the other-directed child is concerned with learning from these interchanges whether his radar equipment is in proper order” (74). What is The Social Network if not a depiction of a lonely crowd?  Indeed, isn’t Facebook itself a kind of lonely crowd?

I can’t help but wonder if this way of reading the movie—this pleasurable scanning of its cultural allusions—in some way works to conceal its underlying critique of 21st century connectivity. The film’s central theme of social dissolution is undercut by its teeming network of familiar, friendly signifiers.  Its “ephemeral tastes.”  Yet we are “friends” with these signifiers in as much as we are “friends” with hundreds of people on Facebook—another text, by the way, that we scan with the same data-mining mindset.  As portrayed in The Social Network, Facebook doesn’t really bring people closer together in any meaningful way; it facilitates casual hook-ups in bar bathrooms, or it breeds addiction, as one character confesses, or it quickly delivers trivial information like the results of a crew boat race. You would think The Social Network, with its depiction of Zuckerberg and his creation, would compel a mass exodus from Facebook, or at least spark critical public reflection on our complicity with this technology of the lonely crowd. But instead it rewards and reinforces our ways of reading: our ingrained habits of consuming images, scanning information, reducing human experience to pannable nuggets of gold.

Read Full Post »

What would you think if someone told you that they were fighting for a “share” of your stomach?  Does it bring to mind organ harvesting?  Alien invasion?  Theft?

But this is in fact what the food industry is doing, and has been for some time.  I first heard this term last week when I took part in a to-remain-nameless gathering of food experts in the Bay Area.  It was in the context of a discussion of how we might, as eaters, make healthful choices in the American food marketplace.  Someone in the room recalled being at a food industry gathering, recently, where executives from a soda company were debating how to increase their “stomach share.”  They were seeking to expand their line of products (from sodas to juices, waters, and exercise drinks) to make sure that whenever someone put a beverage in their stomach it was from company X.  Rather than merely competing with another brand in, say, “the marketplace,” the “stomach share” metaphor takes the battle to the consumer’s own body.  The question is not just how we can ensure that the consumer is buying the maximum amount of our product, but also how we can ensure that whenever the consumer ingests anything, it is our product that holds the majority share of that space.

This stomach talk reminded me of another phrase I’d come across in my research for Empty Pleasures—“prosperity stomach.”  Coined in 1966 by Henry Schacht, an executive from a diet-food company, in a talk to newspaper editors called “How to Succeed in Business without Getting Fat,” the phrase referred to a troubling problem faced by the food industry.  Because people (at least the middle class) did less manual labor and had more money to buy the cheaper food produced by American industry, they had begun to gain weight.  That wasn’t really the problem from Schacht’s point of view.  More troubling was that this weight gain meant that they could not—or would not—buy all of the food they wanted—food that industry could profit by selling.  The answer?  Diet foods.  By developing more foods that had fewer calories, manufacturers and marketers could push their profits past the barrier of the full stomach.

“Stomach share” and “prosperity stomach”—terms invented nearly fifty years apart—remind us that the food industries have long viewed consumers as reducible to mere storage spaces for their products.  Within this climate, the wonder is not that our stomachs have expanded; it’s that they have not expanded even further.

Read Full Post »

The conversation about “selling out” in popular music has been dead for some time.  And I’m not interested in reviving that conversation now.  The last time it really flared up was around 1987, when Nike featured the Beatles’ “Revolution” in a commercial.  Since then, it’s basically been a done deal.

So, today’s New York Times story about Converse opening a recording studio did not come as that big a surprise.  It’s a pretty interesting story, actually, one that points out the real deadness of the “selling out” debate.  Given the state of the music industry — the rise of digital downloading, the bloatedness of the major labels, the constriction of radio outlets through consolidation (and companies like Clear Channel), the so-called “360 deals,” rampant product placement in pop music, and so on — why shouldn’t Converse enter the industry?  Why shouldn’t Whole Foods?  Barnes and Noble?  You or I?

It’s a rhetorical question, of course, but it raises three important issues that we ought to be clear about, if we’re thinking about the current state of popular music.

1. Converse will make music to sell shoes. The music is “successful” if it results in shoe sales. The Converse record label is the idea of Geoff Cottrill, Converse’s chief marketing officer. Cottrill is pretty plain about his intentions:

“Let’s say over the next five years we put 1,000 artists through here, and one becomes the next Radiohead,” he said. “They’re going to have all the big brands chasing them to sponsor their tour. But the 999 artists who don’t make it, the ones who tend to get forgotten about, they’ll never forget us.”

In other words, if the company has a 0.1% success rate in terms of music sales, but it builds brand loyalty for its shoes, then the music is a worthwhile investment.  It’s a strange approach to “arts patronage,” one with none of the trappings of the Renaissance or of human expression — it’s about creating art to sell shoes.  And I know (thanks, Warhol) that this, too, is old, self-referential, post-modern news.  Nevertheless, I think we ought to be clear about these new arrangements and about what is serving and what is being served.

2. Because it is in the business of selling shoes, Converse is actually being far more generous to its artists than the labels (at least it appears to be so). The article reported that Converse has little to no interest in owning the recordings that it makes. This is something new, and it does give more power to the artists than they typically have under contract with major labels — but good luck selling your song to Nike or Starbucks or VW if you’ve already sold it to Converse. And if you’re a musician, you’re probably not making money selling records, so where are you going to sell your music?

3. The entrance of Converse into this marketplace seems like evidence of the breaking-apart of the music industry as we knew it in the 20th century. Indeed, one of the great things about music these days is that anyone with a laptop and an internet connection can become a label. This is radically liberating for many artists. But what’s the real difference between Columbia and Converse? Amidst the sweeping changes in the music industry, it still seems to be about artists serving larger corporate interests. Converse, like Columbia or EMI or Decca or whomever, has the broadcast outlets; it has the power in the marketplace that independent musicians don’t have.

And though Converse seems to be more generous with its artists, it appears to care less about their music.

Read Full Post »

I don’t have a TV, but I do have access to TV programming, thanks to my Netflix subscription and various other online sources of traditional broadcast entertainment.  As a result, I found myself watching Hulu the other night, and after I selected the sitcom I wanted to watch, the screen went dark and I was presented with a decision:

Which advertising experience would you prefer?

And then, Hulu presented me with a choice of three ads to watch, each one for the same insurance company.

Is this really what all the hype about interactivity and the radical power of the internet has come to? If this is really what it means to “harness the power of the internet,” then I might just go back to dial-up. Has Web 2.0 been reduced to my ability to choose which advertisement to watch? The dimensions of absurdity of this encounter, framed as a “choice” of “experience,” are too manifold to list, so I’ll focus on just two:

1. It’s a choice of advertisements, but I still have to watch an advertisement.
2. It’s a choice between advertisements for the same insurance company.

In a sense, this is the epitome of choice within capitalism.  Which is to say: it’s not really a choice, or else it’s a choice so constrained that the choice itself doesn’t end up really mattering.  Theodor Adorno would be so proud and so perplexed.  And he’d be laughing.  (BTW: if you don’t “choose,” Hulu will choose for you… so even not choosing is a choice.)

Read Full Post »

There’s an article in a recent NY Times about this diagram:

[Slide: American military strategy in Afghanistan]


It came from a PowerPoint presentation about the complexities of the American military strategy in Afghanistan.  It’s safe to say that the slide, perhaps unintentionally and certainly unironically, does portray complexity….

The article then went on to discuss the military’s current obsession with and recent dissatisfaction with Microsoft’s PowerPoint slide show software because it is unable to convey complexity, which results in boring or irrelevant presentations full of “dum dum bullets.” In short, people seem upset with the technology because it cannot convey the complexities of culture.

They are right.  It can’t.  This is only the latest part of a longer and more varied critique of PowerPoint, offered first by no less than Edward Tufte, the genius scholar who was talking about visualizations and the display of data before the rest of us even thought of tweeting.

To expect that technology would be able to solve this problem is to deeply and profoundly misunderstand technology and to project one’s own hopes onto a tool that is, of course, never going to be up to the task. Perhaps the military’s problems in Kabul are not the fault of PowerPoint, but the fault of cultural and political differences between people and nations. Blaming PowerPoint for failing to represent cultural complexities is like blaming a cookie cutter for failing to make cake.

Blaming the technology is far easier than trying to appreciate the complex powers of culture, and technology is rarely better equipped to solve problems than the folks who are operating it.  And people of all kinds and on all sides of every conflict are notoriously difficult to represent in a set of slides, to capture in bullet points or diagrams, no matter how complex those diagrams become.

Read Full Post »

A few weeks ago I reorganized my bedroom closet.  This alone may be worthy of a blog post, but I won’t bore you with recounting the small joy that this task brought me.  What struck me about the process, from a cultural perspective, was the sheer volume of paper memories I found myself sorting through and reordering.  Ten photo albums.  Two file crates of stories and poems I wrote as a child and adolescent.  Four different memento boxes of written correspondence from friends, family, and former girlfriends dating from high school to the present.  A thick stack of letters from my grandmother, starting in college and continuing through her death in 1999.  A shoebox of love letters, another shoebox of random photographs, a pile of birthday cards.  All handwritten.  All saved.  All newly organized on a shelf in my closet.  All ready to be grabbed up in case of a fire.

I always imagined that I would re-read these letters someday on my porch sitting in my rocking chair when I was old and gray.  I would revisit the words, the thoughts, the feelings, the handwriting of people I know and love, of people I knew and loved.  But as I was organizing, I became painfully aware of a gap.  The collection had dwindled substantially over the past eight years, slowing to barely a drip of birthday cards and the occasional sweet letter sent by a former partner.  But essentially the paper trail dries up around 2002.  It noticeably dries up.  Why?  Because my correspondents and I stopped sending mail and instead used electronic means almost exclusively.

My question that day, surrounded by boxes on the floor and letters and photos strewn about my bed, was this: how will we organize, preserve, and retrieve our memories, our special moments, our correspondence, in the digital age?  We communicate via email, we post our digital photos on Flickr and Facebook, we text message quips and best wishes and intimate confessions.  But how many of these will be saved?  When we are old and gray, will we sit in rocking chairs and peruse our laptops, reread thousands of emails, and revisit texts stored on old cell phones?  And if so, will anything be lost?  Or gained?

Perhaps the age we live in has merely required us to be more selective.  To print out the few emails that mean something to us, file them away, and let the rest pass into the past.  On more than one occasion I’ve been accused of holding on too much to my personal history, as I’ve carted these memories around from apartment to apartment over the years.  Granted, I do.  But mail has meant something to me in a way email hasn’t.  Maybe it’s not the medium but the message that counts here.  Still, I think reading the handwriting of others evokes a certain kind of memory, connection, and closeness in us.  Do standard text fonts have the same power?  I suppose we’ll find out in the years to come.

Read Full Post »

Hours before Sunday’s broadcast of the Oscars, whole swaths of the New York viewing audience were cut off because ABC pulled its local affiliate’s signal.  In the wake of this event, a coalition of broadcast providers submitted a petition to the FCC, arguing that the current rules governing broadcasting and redistribution are ripe for reassessment.

The current rules were formulated in 1992, and it should be obvious that lots of things have changed with respect to telecommunications and broadcasting since 1992.

The lag between older legal structures and emerging technologies and practices is nothing new.  In this realm the law doesn’t, and never has, governed retroactively, but it always operates in the wake of history.  Laws about new or emerging communications technologies are built on older frameworks, and therefore have always lagged a few decades behind the actual communications practices and technologies that the laws are written to govern.

As telephony and telegraphy spread, the laws that governed them were originally written to regulate interstate commerce.  The only imaginable structure that could compare with telephony was trucking.  When the Radio Act was written in 1927, it was modeled on the wireless laws that controlled ship-to-ship communications, because that was the only available framework for understanding wireless communication.

The laws governing television followed the laws of radio and the network model, and the laws about cable were written with television in mind, and so on.  To be sure, there have been changes in the law, some innovative and some deeply troubling.  But as new laws are imagined and formulated, they are written as if new communications models will look just like older ones: as if telephony would be similar to trucking, or television would be like radio.

This is less a failure of law than a success for particular interests, since the legal frameworks were shaped from the very beginning by the desires of commercial players.  Here’s the punchline: the commercial interests (AT&T during the radio era, DirecTV now) are deeply invested in maintaining the status quo.  There is lots of money to be made (or lost) in controlling what communications look like.  There is little impetus for innovation as such, so why not build the new laws to look like the old ones?

Certainly, the internet is radically reshaping habits and practices of the consumption of popular culture, and the 17-year-old laws are deeply in need of an overhaul.  But is this round of law going to preserve the past or usher in the future?

Read Full Post »

Anyone who buys a ticket to see Avatar is going to see technology on display.  That’s what all the press has been about, that’s why the film has won awards, that’s why it sold out my local IMAX™ theater on Friday night (some two months after its original release).  Despite its pseudo-new-age lust for the natural, the film is a gluttonous celebration of technology.

Couched as a struggle between a money-hungry and heartless corporation and a peaceful tribe wholly in tune with nature, the film sells viewers the latter but delivers the former with more technological firepower than the mercenary (ex-)marines it features as bad guys.  The thin script and the cast’s paltry performances are literally no match for the fantastical animations and imaginary worlds brought to life by director James Cameron and his animation army.  Any shred of humanity or trace of emotion, connection, or affect is churned under the unrelenting barrage of computer-generated images (in 3D!) that seem to pile up, one after another, each trying to outdo the last without any sense of fun or excitement (indeed, rather than exploratory or curious, the film takes a rather triumphant and, dare I say, militaristic approach to showing just what technology can do).

That might be the film’s act of hubris, but here’s what I find even more troubling about my two-and-a-half-hour journey on Pandora: once you can do anything with computers and computer animation, I find it harder to be impressed.  Once the door to imagination is thrown completely wide open, and computers are capable of rendering anything imaginable on screen, then what’s the big deal about having 8-foot-tall blue characters or fiddlehead ferns the size of SUVs?  Once you can make computers do anything, what’s the big deal when they do anything at all?  Once anything is possible, who cares what happens?  It’s the cinematic equivalent of eternal life (lord knows, the film felt about that long) — it may last a long time, but why does it matter?

Avatar at once captures the gluttonous revelry of technology and its absolute failures. Cameron’s attempt to critique technology in the film ultimately collapses beneath the film’s bloated, burdensome reliance on technology to tell this story. Yet, at the same time, the film’s meta-emphasis on its own story-telling technology so radically opened up the possibilities of animation that it diminished its own ability to highlight those very possibilities. In this way, the film fails twice and twice as hard.

But ultimately, for all its technophilia and bloated self-promotion, and notwithstanding the awards it has won and will win, the film’s greatest failure seems to be not technological, but human. For all its armament and animation, the film’s greatest failure was its absence of any real, human imagination at all. There is still no technology powerful enough to hide hackneyed plot points, recycled dialogue, and flat acting. By letting technology tell the story, Avatar obliterates its desire to tell a human story, leaving only a trail of computer-generated fantasy worlds in its wake.

Read Full Post »
