Monday, October 29, 2007

One to keep an eye on

Writer and comedian Stephen Fry has a new technology column at the Guardian. In his first outing, “Welcome to dork talk”, he objects to the distinctions so often made between humanists and technologists...
Well, people can be dippy about all things digital and still read books, they can go to the opera and watch a cricket match and apply for Led Zeppelin tickets without splitting themselves asunder. Very little is as mutually exclusive as we seem to find it convenient to imagine. ... So, believe me, a love of gizmos doesn’t make me averse to paper, leather and wood, old-fashioned Christmases, Preston Sturges films and country walks.
... and between design and engineering.
What do I think is the point of a digital device? Is it all about function? Or am I a “style over substance” kind of a guy? Well, that last question will get my hackles up every time. As if style and substance are at war! As if a device can function if it has no style. As if a device can be called stylish that does not function superbly.
There is, I think, a growing consciousness of this last point. The “utility” and “efficiency” of a technological solution are often touted as if these were virtues unto themselves, and not a means to a greater end. The engineer who cannot appreciate aesthetics is like a miser who dies with millions but without ever having known a single earthly decadence.

There are some who would justify design by tying it to the bottom line. While it is true that beauty can grease the wheels of functionality, and that pretty things sell, I think this approach has it backward. Aesthetics is the end; functionality is the means. There is no objective reason why we should bother to go on living at all—we do so out of sheer preference, out of our aesthetic appreciation for living over dying.

Utility cannot be divorced from fancy. As we have struggled for decades now with poorly designed, nearly unusable, “utilitarian” computer technology, we have gradually come around to acknowledging the importance of taking pleasure in design.

Wednesday, October 24, 2007

A harder look at the soft luddite

I introduced the idea of “soft luddism” in a recent post, and attempted to define it by alluding to a genre of writing that I assume to be widely recognizable. I should have worked harder to give a more explicit definition, because the soft luddite is too easily confused with a number of similar characters.

The old-fashioned, “hard” luddite dislikes a specific technology or class of technologies for the enumerable disadvantages they deal him. The soft luddite, however, is uneasy about “technology” as an abstract category, because of the affront to humanistic values that it is perceived to represent. His error is diametrically opposed to that of the gadget fetishist or other bleeding-edge enthusiast, who loves this abstract category, “technology,” without regard for the use to which a given technology may be put.

This “technology” never includes familiar or naturalized technologies, but only the bleeding edge, technology in the most obvious sense. The humanistic values that the soft luddite defends are almost always underdefined, since to define them explicitly would be to make them available for fair comparison with the advantages offered by the technology under critique.

There is almost always an unconscious element of classism in this refusal. The soft luddite is usually a member of the upper-middle class. He mingles with the wealthy and knows their ways. He is thus acutely self-conscious of his need to live by the working-class values of utility and efficiency, but romanticizes the inefficiency that is the leisure of the wealthy. That same inefficiency is also the prison of the down-and-out, which gives us what is perhaps the most common criticism of the “voluntary simplicity” movement: that most of the world’s people are already practicing simplicity of the involuntary kind!

Rebecca Solnit has an article called “Finding Time” in the present issue of Orion magazine. It’s more honest than most soft luddite pieces you’ll read, in that it actually offers up some of the values that must be compared in making a rational technological choice:
The gains are simple and we know the adjectives: convenient, efficient, safe, fast, predictable, productive. All good things for a machine, but lost in the list is the language to argue that we are not machines and our lives include all sorts of subtleties—epiphanies, alliances, associations, meanings, purposes, pleasures—that engineers cannot design, factories cannot build, computers cannot measure, and marketers will not sell.
You’ll notice, however, that the comparison is handicapped. On Solnit’s telling, convenience, efficiency, and the rest of the first list are not human values but machine values. We’re back to the bogeyman of a looming technological essence.

To compare these two lists is not to compare machines with humans, but to compare two different tiers of Maslow’s hierarchy of needs. People who are struggling to feed themselves can’t worry, just yet, about epiphanies, meanings, and pleasures. Solnit explicitly rejects the accusation of elitism, but her “nomadic and remote tribal peoples” and “cash-poor, culture-rich people in places like Louisiana” (and other flyover states, presumably) are acting out of necessity, not free choice.

Both the gadget enthusiast and the soft luddite are to be contrasted with the rational actor who does not recognize one category of technology as more “technological” than another. This person first defines his individual and social ends—including, perhaps, values from both of Solnit’s lists. He then chooses technologies—new and old—which seem to him to be the ones most likely to bring those ends about.

Tuesday, October 23, 2007

Oh yes, they Kan

The Christian Science Monitor published my letter in response to Dinesh D’Souza's “What atheists Kant refute.” D’Souza is a conservative author who, like Ann Coulter, uses over-the-top controversy to sell books and land College Republican speaking engagements.

His op-ed piece is based on a reading of the early sections of Kant’s Critique of Pure Reason. While it's probably not fair to refer to D’Souza as a pseudo-intellectual, he clearly hasn't done his homework here. Of course, his reason for taking up Kant is not to engage in serious and fair-minded Kantian scholarship; it’s to cherry-pick the philosophical tradition for arguments that may be made to serve the cause of social conservatism. D’Souza’s arguments rest on two fallacies.

The first is the alleged authority of Kant. D’Souza asserts that Kant has never been refuted. It's true that no work has ever been produced which is universally recognized as a refutation of Kant’s Critique, but philosophy doesn't really work that way. A lot of philosophers never saw Kant’s arguments as an adequate rebuttal of Humean empiricism. Others took the Kantian paradigm as far as it could go, but a critique can only bear so much fruit. There are certainly elements of Kantian philosophy still in currency, and I sympathize with quite a few of them. But unreconstructed Kantians are rare in the academy.

The second fallacy is inappropriately using the noumenon-phenomenon distinction to place one’s confidence in the existence of God outside the reach of human reason. Had D’Souza read the second half of the Critique, or perhaps read a commentary on it, he would know that Kant himself argues against this approach. When we’re debating whether or not God exists, we’re taking up an idea of God—a phenomenon—for consideration. D’Souza is arguing for a noumenal God. If my reading is correct, Kant allows that his Critique can free one for such a belief, but denies that it can be used in defense of such a belief. A noumenal God is not one that D’Souza would much care for, anyway: it would be unknowable, featureless, and unrelated to human life.

Note: This is a bit more purely philosophical than usual; I’ll get back to the tech talk soon!

Tuesday, October 16, 2007

The soft luddite

This month’s GQ—the 50th anniversary issue—has an article written by Scott Greenberger about the decline of the printed newspaper:
For many, it’s hard to imagine Sundays without a two-hour stint on the couch, surrounded by the detritus of the impossibly fat Sunday New York Times. ... There’s something to be said for rituals, and no good rituals involve staring at a twelve-inch screen.
I’m sure you've seen a lot of articles like this. I have come to refer to the attitude expressed in them as “soft luddism.” Unlike the classical luddite, who rejects technology because of its role in commoditizing labor, the soft luddite is all too happy to enjoy the benefits of technological advances. He just won’t admit it.

Gosh, I can’t live without my email these days, but wouldn’t it be nice if people still wrote letters? Has anyone stopped to consider what we’ve lost?

Soft luddite commentary falls into two recognizable sub-genres. The first is the rant of the befuddled writer, at the mercy of his computer because he’s indignant that he should have to invest any significant amount of time in learning how to use it. The second, much more common, is a creepy fetishism for industrial-age technology: lots of loving odes to ink-stained fingers, loud machinery, and the smell of developer fluid, moments lost in time like tears in rain. Greenberger again:
It’s depressing to think that future generations of men won’t know the joy of discovering the sports section left behind in a bathroom stall.
Yeah, isn’t it?

When I worked as a web producer at the Christian Science Monitor, most of our work consisted of converting material created “on the print side” for the web, and we were frequently made to run commentary pieces in the soft luddite spirit. The drumbeat of technophobic whining made me feel at times like the target of silent interdepartmental loathing.

Perhaps I’m naïve, but I doubt that was really the case. I knew that the editors were choosing pieces likely to appeal to the Monitor’s print subscribers, an aging demographic. And this was just prior to the industry-wide epiphany that brought web journalism to the forefront. At that time, it was still “The Newspaper,” not yet “the print product”—and the newsosaurs were confident that the web would remain forever marginal. Nowadays, as print budgets are being slashed and news companies are pouring money into web infrastructure, the hostility is more open, and in some cases becomes luddism of the good, old-fashioned, save-my-job-from-the-machines variety.

Of course, new technology shouldn’t always trump the old. Given an outcome we’re trying to reach, we should choose the technology which will help us reach it in the way that is most in line with our values. So the objection to the soft luddite is really an objection to nostalgia, a false consciousness that transforms the old and familiar into the natural and the good.

Tuesday, October 9, 2007

Bits become atoms

It is often said that the conceit of the first Matrix film—with the “whoa” it brings forth from both the protagonist and viewer—is a variation on the “brain in a vat” thought experiment often given in introductory philosophy classes to illustrate the position of the universal skeptic. As described in the Stanford Encyclopedia of Philosophy:
Consider the hypothesis that you are a disembodied brain floating in a vat of nutrient fluids. This brain is connected to a supercomputer whose program produces electrical impulses that stimulate the brain in just the way that normal brains are stimulated as a result of perceiving external objects in the normal way. (The movie ‘The Matrix’ depicts embodied brains which are so stimulated, while their bodies float in vats.) If you are a brain in a vat, then you have experiences that are qualitatively indistinguishable from those of a normal perceiver. If you come to believe, on the basis of your computer-induced experiences, that you are looking at a tree, then you are sadly mistaken.
I would argue that while this comparison immediately leaps to mind, the real problem raised by the Matrix is much deeper. The brain-in-a-vat Gedankenexperiment implicitly assumes both that there is a real world, and that the computer-generated illusion isn’t it. In the Matrix films, however, this isn’t quite so straightforward.

Most of the simulated worlds we’re familiar with mimic reality at a very high level of emergence. Physics engines used in video games, for instance, copy the effects of Newtonian physics with macroscopic objects treated as primitives. But in the real world, the behavior of macroscopic objects emerges as a consequence of the crowd behavior of microscopic objects. Since the physics engine does not simulate these, it must substitute other causes to reach the same effect.

Take, for instance, the fact that two solid objects that collide will bounce away from one another rather than pass through each other. In the real world, this happens because the particles that make up the solids electrically repel one another; in a physics engine, the effect must be created with explicit collision detection and response rules.
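To make the contrast vivid, here is a minimal sketch of the sort of rules a toy engine might use. It is my own illustration, not drawn from any actual engine, and every name and constant in it is invented. Notice that the bounce is legislated rather than emergent:

```python
from dataclasses import dataclass

@dataclass
class Circle:
    x: float; y: float    # position
    vx: float; vy: float  # velocity
    r: float              # radius

def colliding(a: Circle, b: Circle) -> bool:
    # Detection rule: two circles collide when their centers are closer
    # than the sum of their radii.
    dx, dy = b.x - a.x, b.y - a.y
    return dx * dx + dy * dy <= (a.r + b.r) ** 2

def respond(a: Circle, b: Circle) -> None:
    # Response rule (equal masses, perfectly elastic): exchange the
    # velocity components along the line joining the centers.
    dx, dy = b.x - a.x, b.y - a.y
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0:
        return
    nx, ny = dx / dist, dy / dist    # unit collision normal
    va = a.vx * nx + a.vy * ny       # a's speed along the normal
    vb = b.vx * nx + b.vy * ny       # b's speed along the normal
    a.vx += (vb - va) * nx; a.vy += (vb - va) * ny
    b.vx += (va - vb) * nx; b.vy += (va - vb) * ny

def step(bodies, dt):
    # Move everything, then patch up any overlaps by decree.
    for c in bodies:
        c.x += c.vx * dt; c.y += c.vy * dt
    for i, a in enumerate(bodies):
        for b in bodies[i + 1:]:
            if colliding(a, b):
                respond(a, b)
```

Nothing in this code knows anything about electrons; the macroscopic effect is simply decreed.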

But what if our physics engines had a great deal more computing power at their disposal, and simulated physics at the microscale, allowing macroscopic effects to emerge? In such a situation, the artificiality of the physical effects is less clear. They are not explicitly created by the programmer, but rather appear as a result of the same processes by which they come about in real life. As artificial intelligence researcher Steve Grand writes in his book Creation:
A computer simulation of an atom is not really an atom.... But if you make some molecules by combining those simulated atoms, it is not the molecules’ fault that their substrate is a sham; I think they will, in a very real sense, actually be molecules. ... the molecules made from it are second-order simulations because they are made not by combining computer instructions but by combining simulated objects.
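To make Grand’s point concrete, here is a companion sketch, again my own toy illustration and not anything from his book: drop the collision rule entirely, give point particles a short-range repulsive force, and integrate the motion.

```python
import math
from dataclasses import dataclass

@dataclass
class Particle:
    x: float; y: float    # position
    vx: float; vy: float  # velocity

def repulsion(p: Particle, q: Particle, strength=1.0, cutoff=1.0):
    # Inverse-square repulsion exerted on p by q, zero beyond the cutoff.
    # (Both constants are arbitrary; this is a stand-in for electromagnetism.)
    dx, dy = p.x - q.x, p.y - q.y
    d = math.hypot(dx, dy)
    if d == 0.0 or d > cutoff:
        return 0.0, 0.0
    f = strength / (d * d)
    return f * dx / d, f * dy / d

def step(particles, dt=0.001):
    # No collision test anywhere: sum the pairwise forces, then integrate.
    forces = []
    for p in particles:
        fx = fy = 0.0
        for q in particles:
            if q is not p:
                gx, gy = repulsion(p, q)
                fx += gx; fy += gy
        forces.append((fx, fy))
    for p, (fx, fy) in zip(particles, forces):
        p.vx += fx * dt    # unit mass: acceleration equals force
        p.vy += fy * dt
        p.x += p.vx * dt
        p.y += p.vy * dt
```

Fire two clumps of these particles at one another and they rebound, though no line of the program mentions collision. The bounce is, in Grand’s terms, a second-order effect.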
Suppose we were to take this to its logical conclusion and posit a simulation based on the still-undiscovered Grand Unified Theory of physics, that is, a perfect physics engine in which even the behavior of the tiniest particles emerges from underlying rules rather than being simulated directly.

If we found ourselves in such a simulation, there is no way we could ever find it out; indeed, there is no way we can state with confidence that we are not. We don’t know anything about the nature of the ultimate substrate of this world. It could be a computer, the mind of God, or turtles all the way down. Yet this is the world which we refer to as “real.” I think we’re justified in referring to it as such, but in so doing we have to accept that being real does not necessarily exclude the possibility of being a computer simulation.

Although its portrayal is not entirely consistent, the fidelity of physical behaviors in the Matrix suggests that it’s just such a perfect simulation. The human occupants of the Matrix are shown to be aliens to that world. But it can never be entirely clear that the “real world” to which they escape is less of a simulation than the one they left, or that the natural inhabitants of the Matrix—the programs—are any less flesh-and-blood than their jacked-in counterparts. Indeed, both of these conclusions are hinted at in the second and third films.

Wednesday, October 3, 2007

Metaphysics matters

Kant opens the first preface to the Critique of Pure Reason with a rumination on metaphysics:
Human reason has this peculiar fate that in one species of its knowledge it is burdened by questions which, as prescribed by the very nature of reason itself, it is not able to ignore, but which, as transcending all its powers, it is also not able to answer.
In the 20th century, as science rose to its golden age, the then-dominant philosophy of science, logical positivism, succeeded in expelling metaphysics from favor among English-speaking philosophers. It is still seen in some circles as a little hokey to profess an interest in the Queen of the Sciences, and metaphysics remains a dirty word in science departments everywhere.

But there’s a reason why Kant says that metaphysics can’t be ignored. Today’s “common sense”—or as I prefer, “vernacular philosophy”*—did not spring ready-made from nature. It reflects the outcome of the intellectual battles of yesteryear, and those battles were shaped by historical and political forces. To fail to examine metaphysics does not free you from holding metaphysical opinions; it only ensures that those opinions will be uninformed.

That’s why it was refreshing to see an op-ed, “Dualism and disease,” at STLtoday.com, which blames the lingering mind-body metaphysics of Descartes for inequity in today’s health insurance programs.

This blog is founded on the premise that figuring out how to live with our present and near-future technologies will require intellectual work beyond engineering and design, beyond sociology and economics, down to the very first principles of philosophy. What happens to epistemology when search becomes as fast as recollection, and knowledge moves from the cranium to the cloud? What happens to metaphysics when form learns to mimic matter, when simulations become indistinguishable from nature and algorithms become indistinguishable from machines? These are questions with real, practical consequences.

*Edit: In Small Pieces Loosely Joined, David Weinberger uses the phrase “default philosophy” to refer to the same phenomenon. His is a much more graceful expression, and I’ll probably crib it for future use.