Monday, September 24, 2007

I can't let you do that, Dave

Two articles caught my eye over the weekend because of their opposed views on the relationship of society to technology: Regina Lynn’s “Rude People, Not Tech, Cause Bad Manners” at Wired.com and George Johnson’s “An Oracle for Our Time, Part Man, Part Machine,” at NYTimes.com.

Lynn’s column argues against a common complaint: that an increasing number of people defer face-to-face interaction in order to connect via IM, cell phones, etc., often leading to obliviousness of people in their immediate, physical vicinity.

According to Lynn, technology doesn’t impede, but only enables interpersonal relationships. An IM-only friend is as real as an in-the-flesh friend. Indeed, for the socially awkward, electronically mediated relationships may be deeper and more open than those conducted face-to-face. And the real problem with too-loud-in-public cell phone conversations is less the gadget than the oaf holding it.

I tend to agree, with some reservations. If technology is capable of enabling “good” socialization, it must likewise be capable of magnifying our rudeness.

Lynn sees us as in control of our actions. The technology has no insidious impact on human behavior. It is powerless and thus blameless: cell phones don’t offend people, people offend people. Yet no one can claim that antipathy towards public cell phone use isn’t a social phenomenon, or that it could exist if there were no cell phones.

Johnson’s piece begins by breaking down the etymology of “algorithm” before ominously announcing, “It was the Internet that stripped the word of its innocence.”

He takes issue with two distinct but closely related phenomena of the internet age: the automation of judgment through the use of powerful algorithms, e.g. Google’s PageRank or NewsRank, and systems—both human- and machine-directed—which perform knowledge-intensive tasks through crowdsourcing, e.g. Wikipedia or Amazon’s Mechanical Turk.

Johnson fears that these entities are robbing us of judgment and enslaving us to a technological hive-mind. His is less an argument than an appeal to the viscera: on his account, these entities are horror-show symbiotes of man and machine, perhaps a “buzzing mechanism with replaceable human parts,” or “an organism with an immune system of human leukocytes.” The illustrations accompanying the article belong to a type commonly seen in the media of fearful technological essentialism: crude cyborgs of human tissue and forms superficially evocative of industrial tech.

It seems unlikely that Wikipedia will overthrow the nation-state as the leading technology for the subjugation of individual will (though virtual reality pioneer Jaron Lanier, for one, is troubled by the possibility). Systems can and do take on a direction of their own, but they are always created to satisfy a set of human interests. Rather than framing our fears in terms of man-versus-machine, let us ask who the machine is working for, and whether their values are our values.

2 comments:

Bella said...

"Systems can and do take on a direction of their own, but they are always created to satisfy a set of human interests. Rather than framing our fears in terms of man-versus-machine, let us ask who the machine is working for, and if their values are our values."

Wait a second: you can ask who the machine is working for "now", but what about "ten years from now" - no one can predict the future and you already stated that systems take on a "direction of their own" - just because your "idea" was "humanistic" does not mean your end result will satisfy that desire. For example: you can build a church, BUT is a church really a church? I would say it depends - could this lead one to think about typologies of systems - some "humanistic" and others that are "imitations" - not authentic.

Darren Abrecht said...

Hi Bella,

My question of “who the machine is working for” is admittedly simplistic, as is my claim that systems take on a “direction of their own.” In fact, technologies are points of conflict and negotiation between diverse human interests. Wikipedia cannot entirely “belong” to its admins, however much power they may have over it. Even a less obviously open web site, one where only the admin can edit, becomes subject to social forces once it becomes public—the admin can be compelled to change the site in order to get more pageviews.

Which is not to say that you can entirely reduce the machine down to the wills of the people involved. Matter always seems to have its say in the outcome, which is part of our fanatical drive to eliminate it by turning it into form (that idea will be getting its own series of blog posts). I can't entirely get rid of “nature,” either, no matter how much I might like to free myself to the postmodern revelry of textual play. But I don't think granting that technologies have unforeseen consequences necessarily leads to the hysterical techno-essentialism of the NYT piece.

As for the question of whether a church is really a church, that too will come down to the vote of the individuals involved—the creators and the users and the long-dead communities that established the typology. But the architect, by controlling the expression of that typology, has a powerful psy-ops weapon at her disposal for convincing the masses of the legitimacy of her interpretation of the space.