Wednesday, December 12, 2007
The two Microsofts
By Darren Abrecht, McClatchy Interactive
In the recent series of “Get a Mac” ads from Apple, John Hodgman personifies the PC as a boring, gray-suited middle-management type, and few of us care strongly enough to protest. Many of us appreciate Microsoft products for their familiarity or utility, but no one ever really gets excited about them. At least, that was the case until a little over a year ago, when Microsoft began unveiling a series of flashy new products. Suddenly they were eliciting the kind of gasps that techies usually reserve for companies like Apple or Google.
On the hardware side, consider Surface, Microsoft’s tabletop computing project. Imagine a multitouch display—like Apple’s iPhone, but much bigger—set into the form factor of a coffee table. The result is a truly novel computing platform, as different from desktop machines and mobile devices as they are from each other. Multiple users can simultaneously manipulate photos, maps and music using just their fingertips. The device can also interact with objects placed on its surface, including Bluetooth-enabled phones.
On the software side, Microsoft has been generating a lot of buzz with a pair of projects from its Live Labs division. Photosynth can stitch together snapshots to build a three-dimensional model of a place. Photos of popular tourist destinations—already available on the web in large numbers—could be scooped up and used to create photo collages that could be “walked” through as if they were virtual worlds.
Meanwhile, Seadragon aims to create an infinitely zoomable graphical user interface. Imagine your collection of photo files arranged as a set of thumbnails on your desktop. Instead of opening a photo by clicking, you could just zoom in to view the photo in full resolution. When you’re done, zoom out, and keep zooming, until your entire collection of files becomes visible as if viewed from miles high.
Bringing these forward-looking initiatives to market will mean taking real risks, which makes it all the more amazing that they’re being produced by Microsoft. The company has a well-earned reputation for waiting on the sidelines of an emerging market, letting smaller, nimbler companies absorb the risks of development, before hopping in and throwing its weight around to become a dominant player in the space.
Critics of the company refer to its “embrace-extend-extinguish” strategy: Microsoft adopts an established technology, produces a proprietary version that’s only partially compatible with existing standards, and then leverages its near-monopoly on the desktop to establish its own version as the new, de facto standard. The result is that companies which previously dominated the market find themselves marginalized.
Examples?
Novell, which produced the WordPerfect word processor in the mid-’90s, claims that Microsoft’s Office suite made use of undocumented features of Windows that allowed it to outperform competing office products. WordPerfect, the industry-standard word processor of the DOS era, faded into obscurity as Microsoft Word ascended.
Netscape claimed in Microsoft’s U.S. antitrust trial that the bundling of Internet Explorer with Windows amounted to unfair competition. Netscape’s browser, released earlier and at one time the most popular, has gone the way of WordPerfect.
The products underlying Microsoft’s other core businesses—Windows, Windows Mobile, MSN, the Xbox—were late entries to the worlds of graphical operating systems, mobile operating systems, web portals and gaming consoles.
So how does one explain the two faces of Microsoft? With nearly 79,000 employees and a diverse product line, Microsoft is a galaxy of a corporation that can, at times, seem to lack a unified corporate voice. One thing is for certain: unless a project wins the support of the larger corporate culture, it will wither on the vine, regardless of the plaudits it receives from outsiders.
Consider the fate of Internet Explorer for Macintosh. When version 5 was released in 2000, it was widely praised as one of the best web browsers on any platform—the rare example of a Microsoft product for Macs that exceeded the Windows version. Then as now, the kind of people who usually take digs at Microsoft were giving the company a second look. But after the initial release, developers were diverted from the project, and the browser quickly began to lag behind the competition. Only minor updates and bug fixes followed until 2003, when Microsoft axed it.
The forces driving the new, radical innovation within Microsoft are swimming upstream against a culture deeply invested in the embrace-extend-extinguish strategy. It’s too soon to tell whether we’re seeing a new Microsoft or just a flash in the pan.
© 2007 McClatchy. Reprinted with permission.
Labels:
column,
McClatchy,
Microsoft,
multi-touch,
Photosynth
Tuesday, December 4, 2007
A conservative prediction
I’m just going to come out and say what we all know to be true. Let’s call it the Manifest Destiny of the Internet.
The Manifest Destiny of the Internet is this: Every bit of media in existence [1] will be digitized and made available to the world in a form that is instantly accessible from anywhere and free to use. [2]
The “pirates” will win, and win so completely that we will forget what the fuss was ever about. Grabbing media without impediment of any kind will come to seem as natural as breathing.
As Cory Doctorow likes to say, there is no future in which bits will get harder to copy. There was a brief period of time, starting with Gutenberg in the case of printed matter and Edison in the case of audio recordings, when mechanical reproduction was possible but difficult—hence, scarce and centrally controllable. That period of time is ending.
I chose the term “Manifest Destiny,” with all of its connotations both dark and bright, for a reason. The transition to this world will be ugly. It will take a long time, and won’t happen without a fight. A lot of good people will lose their livelihoods. The quality of media may suffer, at least in the short term, while new methods of quality control are worked out.
The old business models will survive for a generation on the back of consumer habit; they may survive for a generation more under the umbrella of government protection. In the end, though, as the efficiencies of the black and gray markets triumph, Big Media will run out of lobbying dollars. Intellectual property as we know it will be no more.
Massive efforts are already underway to digitize the world’s media and make it accessible online. Google, Microsoft, and Yahoo are working with the world’s libraries to digitize books. Virtually every audio recording ever released can be found online—a project realized without any centrally organized effort.
Many of these projects are not sanctioned by rights holders, but increasingly, copyright holders themselves are making their catalogs available, reasoning that traffic can be monetized in any number of ways—see the New York Times archive and NBC/Fox’s Hulu, for instance. Scarcities will arise in the post-copyright world, along with ways to leverage those scarcities for gain. In a world where the viewer has unlimited choice, eyeballs and attention spans will become the new commodities to be brokered.
Edit: Anthony Grafton presents a contrary view in the New Yorker.
1. I’ll limit this to media which has been published and survived long enough to be digitized.
2. Also: high-quality, searchable, and fully hypertextualized.
Monday, November 26, 2007
Surf to it, click on it, wait for it to appear on your doorstep
You survived Black Friday—welcome to Cyber Monday!
Monday, November 19, 2007
Taking the left turn at Uncanny Valley
This weekend, I went with some friends to see Beowulf in IMAX 3-D. It was a very satisfying visual experience, but there was quite a bit of talk afterward about the failure of the computer-generated graphics to appear completely lifelike.
There was quite a bit of talk on this subject in the press, as well. The same conversation seems to happen every time someone makes a CGI movie that aspires to photorealism rather than falling back on the graphical tropes of the cartoon—the 2001 Final Fantasy movie got the same treatment.
It’s dangerous territory. Between depictions of life that we accept because we find them clearly artificial, and therefore harmless, and those that we accept because they fool us into thinking they are real, lie those depictions which we reject because they seem real, but somehow off. Robotics researchers use the term “uncanny valley” to describe this hypothetical space, whose occupants inspire in us a sense of mild to extreme revulsion.
We haven’t been able to cross the uncanny valley yet, with either our robots or our computer graphics. There will be a lot to think about when we succeed—science fiction authors have been preparing us since the mid-twentieth century. But I’m more interested, at least for the moment, in the prospects for computer-generated art once crossing the valley ceases to be an interesting challenge.
Consider, in broad strokes, the development of painting. The pursuit of realism occupied painters during the Renaissance. But once the techniques for realistic depiction had been developed and became widespread, realist painting ceased to be a fine art and became a craft—or the favored style of conservative and anti-intellectual regimes. The arrival of the camera freed painting from the requirements of a practical art, and modern painting became abstract or non-representational, a fitting vehicle for theory.
I think that the recent explosion of visually innovative video games is a good sign that a similar maturation of computer graphics is just around the corner.
Tuesday, November 13, 2007
Link roundup: surface computing
In the interest of posting more frequently, I've decided to start posting link roundups. On any given day I run through 50+ RSS feeds and news sites, and compile lists of links that I think might be of use in writing future columns or blog posts.
Here’s the first batch: links dealing with the recent explosion in surface computing and multi-touch interfaces. I think there’s a widespread sense that this represents the first viable update to the basic computing interface since the popularization of the mouse.
Like that device, multi-touch interfaces allow us to interact with simulated space in the same way we interact with real space: by moving our bodies. Surface computing, however, represents a much more elegant way of doing so. By combining the point of physical contact and the point of visual representation, simulated objects become more phenomenologically material.
Jeff Han’s early (Feb. 2006) multi-touch table is now available in expanded form at Neiman Marcus for a cool $100,000 (found at Techdirt). Throughout his demo, Han emphasizes the intuitiveness of the interface, going so far as to claim that the interface has disappeared. Multi-touch computing is the rare example of a technology that seems to have arrived fully naturalized: it has appeared in our world as a new and fascinating species of cultural object, instantly graspable.
This effect may help explain why it’s already showing up in the toolkits of the barkeeper and the musician.
Invocations of the computing interface from “Minority Report” are becoming cliché, as in this examination of Microsoft's Surface table.
A couple more: Sharp combines multi-touch with optical scanning and, of course, Apple's iPhone—the most commercially successful implementation of multi-touch to date.
Edit: Somehow I missed this massive history of multi-touch computing from Microsoft researcher Bill Buxton.
Labels:
announcement,
interface,
links,
multi-touch,
surface computing
Wednesday, November 7, 2007
This space intentionally left blank
I’ve just got too much going on. Blogging will resume next week.
Monday, October 29, 2007
One to keep an eye on
Writer and comedian Stephen Fry has a new technology column at the Guardian. In his first outing, “Welcome to dork talk”, he objects to the distinctions so often made between humanists and technologists, and between design and engineering:
Well, people can be dippy about all things digital and still read books, they can go to the opera and watch a cricket match and apply for Led Zeppelin tickets without splitting themselves asunder. Very little is as mutually exclusive as we seem to find it convenient to imagine. ... So, believe me, a love of gizmos doesn’t make me averse to paper, leather and wood, old-fashioned Christmases, Preston Sturges films and country walks. ... What do I think is the point of a digital device? Is it all about function? Or am I a “style over substance” kind of a guy? Well, that last question will get my hackles up every time. As if style and substance are at war! As if a device can function if it has no style. As if a device can be called stylish that does not function superbly.
There is, I think, a growing consciousness of this last point. The “utility” and “efficiency” of a technological solution are often touted as if these were virtues unto themselves, and not a means to a greater end. The engineer who cannot appreciate aesthetics is like a miser who dies with millions but without having ever known a single earthly decadence.
There are some who would justify design by tying it to the bottom line. While it is true that beauty can grease the wheels of functionality, and that pretty things sell, I think this approach has it backward. Aesthetics is the end; functionality is the means. There is no objective reason why we should bother to go on living at all—we do so out of sheer preference, out of our aesthetic appreciation for living over dying.
Utility cannot be divorced from fancy. As we have struggled for decades now with poorly designed, nearly unusable, “utilitarian” computer technology, we have gradually come around to acknowledging the importance of taking pleasure in design.
Labels:
aesthetics,
design,
instrumental rationality,
two cultures
Wednesday, October 24, 2007
A harder look at the soft luddite
I introduced the idea of “soft luddism” in a recent post, and attempted to define it by alluding to a genre of writing that I assume to be widely recognizable. I should have worked harder to give a more explicit definition, because the soft luddite is too easily confused for a number of similar characters.
The old-fashioned, “hard” luddite dislikes a specific technology or class of technologies for the enumerable disadvantages they deal him. The soft luddite, however, is uneasy about “technology” as an abstract category, because of the affront to humanistic values that it is perceived to represent. His error is diametrically opposed to that of the gadget fetishist or other bleeding-edge enthusiast, who loves this abstract category, “technology,” without regard to the use to which a given technology may be put.
This “technology” never includes familiar or naturalized technologies, but only the bleeding edge, technology in the most obvious sense. The humanistic values that the soft luddite defends are almost always underdefined, since to define them explicitly would be to make them available for fair comparison with the advantages offered by the technology under critique.
There is almost always an unconscious element of classism in this refusal. The soft luddite is usually a member of the upper-middle class. He mingles with the wealthy and knows their ways. He is thus acutely self-conscious of his need to live by the working-class values of utility and efficiency, but romanticizes the inefficiency that is the leisure of the wealthy. That same inefficiency is also the prison of the down-and-out, which gives us what is perhaps the most common criticism of the “voluntary simplicity” movement: that most of the world’s people are already practicing simplicity of the involuntary kind!
Rebecca Solnit has an article called “Finding Time” in the present issue of Orion magazine. It’s more honest than most soft luddite pieces you’ll read, in that it actually offers up some of the values that must be compared in making a rational technological choice:
The gains are simple and we know the adjectives: convenient, efficient, safe, fast, predictable, productive. All good things for a machine, but lost in the list is the language to argue that we are not machines and our lives include all sorts of subtleties—epiphanies, alliances, associations, meanings, purposes, pleasures—that engineers cannot design, factories cannot build, computers cannot measure, and marketers will not sell.
You’ll notice, however, that the comparison is handicapped. On Solnit’s telling, convenience, efficiency, and the rest of the first list are not human values but machine values. We’re back to the bogeyman of a looming technological essence.
To compare these two lists is not to compare machines with humans, but to compare two different tiers of Maslow’s hierarchy of needs. People who are struggling to feed themselves can’t worry, just yet, about epiphanies, meanings, and pleasures. Solnit explicitly rejects the accusation of elitism, but her “nomadic and remote tribal peoples” and “cash-poor, culture-rich people in places like Louisiana” (and other flyover states, presumably) are acting out of necessity, not free choice.
Both the gadget enthusiast and the soft luddite are to be contrasted with the rational actor who does not recognize one category of technology as more “technological” than another. This person first defines his individual and social ends—including, perhaps, values from both of Solnit’s lists. He then chooses technologies—new and old—which seem to him to be the ones most likely to bring those ends about.
Tuesday, October 23, 2007
Oh yes, they Kan
The Christian Science Monitor published my letter in response to Dinesh D’Souza's “What atheists Kant refute.” D’Souza is a conservative author who, like Ann Coulter, uses over-the-top controversy to sell books and land College Republican speaking engagements.
His op-ed piece is based on a reading of the early sections of Kant’s Critique of Pure Reason. While it's probably not fair to refer to D’Souza as a pseudo-intellectual, he clearly hasn't done his homework here. Of course, his reason for taking up Kant is not to engage in serious and fair-minded Kantian scholarship; it’s to cherry-pick the philosophical tradition for arguments that may be made to serve the cause of social conservatism. D’Souza’s arguments rest on two fallacies.
The first is the alleged authority of Kant. D’Souza asserts that Kant has never been refuted. It's true that no work has ever been produced which is universally recognized as a refutation of Kant’s Critique, but philosophy doesn't really work that way. A lot of philosophers never saw Kant’s arguments as an adequate rebuttal of Humean empiricism. Others took the Kantian paradigm as far as it could go, but a critique can only bear so much fruit. There are certainly elements of Kantian philosophy still in currency, and I sympathize with quite a few of them. But unreconstructed Kantians are rare in the academy.
The second fallacy is inappropriately using the noumenon-phenomenon distinction to place one’s confidence in the existence of God outside the reach of human reason. Had D’Souza read the second half of the Critique, or perhaps read a commentary on it, he would know that Kant himself argues against this approach. When we’re debating whether or not God exists, we’re taking up an idea of God—a phenomenon—for consideration. D’Souza is arguing for a noumenal God. If my reading is correct, Kant allows that his Critique can free one for such a belief, but denies that it can be used in defense of such a belief. A noumenal God is not one that D’Souza would much care for, anyway: it would be unknowable, featureless, and unrelated to human life.
Note: This is a bit more purely philosophical than usual; I’ll get back to the tech talk soon!
Tuesday, October 16, 2007
The soft luddite
This month’s GQ—the 50th anniversary issue—has an article written by Scott Greenberger about the decline of the printed newspaper:
For many, it’s hard to imagine Sundays without a two-hour stint on the couch, surrounded by the detritus of the impossibly fat Sunday New York Times. ... There’s something to be said for rituals, and no good rituals involve staring at a twelve-inch screen.
I’m sure you’ve seen a lot of articles like this. I have come to refer to the attitude expressed in them as “soft luddism.” Unlike the classical luddite, who rejects technology because of its role in commoditizing labor, the soft luddite is all too happy to enjoy the benefits of technological advances. He just won’t admit it.
Gosh, I can’t live without my email these days, but wouldn’t it be nice if people still wrote letters? Has anyone stopped to consider what we’ve lost?
Soft luddite commentary falls into two recognizable sub-genres. The first is the rant of the befuddled writer, at the mercy of his computer because he’s indignant that he should have to invest any significant amount of time in learning how to use it. The second, much more common, is a creepy fetishism for industrial-age technology: lots of loving odes to ink-stained fingers, loud machinery, and the smell of developer fluid, moments lost in time like tears in rain. Greenberger again:
It’s depressing to think that future generations of men won’t know the joy of discovering the sports section left behind in a bathroom stall.
Yeah, isn’t it?
When I worked as a web producer at the Christian Science Monitor, most of our work consisted of converting material created “on the print side” for the web, and we were frequently made to run commentary pieces in the soft luddite spirit. The drumbeat of technophobic whining made me feel at times like the target of silent interdepartmental loathing.
Perhaps I’m naïve, but I doubt that was really the case. I knew that the editors were choosing pieces likely to appeal to the Monitor’s print subscribers, an aging demographic. And this was just prior to the industry-wide epiphany that brought web journalism to the forefront. At that time, it was still “The Newspaper,” not yet “the print product”—and the newsosaurs were confident that the web would remain forever marginal. Nowadays, as print budgets are being slashed and news companies are pouring money into web infrastructure, the hostility is more open, and in some cases becomes luddism of the good, old-fashioned, save-my-job-from-the-machines variety.
Of course, new technology shouldn’t always trump the old. Given an outcome we’re trying to reach, we should choose the technology which will help us reach it in the way that is most in line with our values. So the objection to the soft luddite is really an objection to nostalgia, a false consciousness that transforms the old and familiar into the natural and the good.
Tuesday, October 9, 2007
Bits become atoms
It is often said that the conceit of the first Matrix film—with the “whoa” it brings forth from both the protagonist and viewer—is a variation on the “brain in a vat” thought experiment often given in introductory philosophy classes to illustrate the position of the universal skeptic. As described in the Stanford Encyclopedia of Philosophy:
Consider the hypothesis that you are a disembodied brain floating in a vat of nutrient fluids. This brain is connected to a supercomputer whose program produces electrical impulses that stimulate the brain in just the way that normal brains are stimulated as a result of perceiving external objects in the normal way. (The movie ‘The Matrix’ depicts embodied brains which are so stimulated, while their bodies float in vats.) If you are a brain in a vat, then you have experiences that are qualitatively indistinguishable from those of a normal perceiver. If you come to believe, on the basis of your computer-induced experiences, that you are looking at a tree, then you are sadly mistaken.
I would argue that while this comparison immediately leaps to mind, the real problem raised by the Matrix is much deeper. The brain-in-a-vat Gedankenexperiment implicitly assumes both that there is a real world, and that the computer-generated illusion isn’t it. In the Matrix films, however, this isn’t quite so straightforward.
Most of the simulated worlds we’re familiar with mimic reality at a very high level of emergence. Physics engines used in video games, for instance, copy the effects of Newtonian physics with macroscopic objects treated as primitives. But in the real world, the behavior of macroscopic objects emerges as a consequence of the crowd behavior of microscopic objects. Since the physics engine does not simulate these, it must substitute other causes to reach the same effect.
Take, for instance, the fact that two solid objects which collide will bounce away from one another rather than pass through each other. In the real world, this will occur because the particles which make up the solids will electrically repel each other; in a physics engine, this effect must be created with explicit collision detection and response rules.
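To make the contrast concrete, here is a minimal sketch in Python of the kind of explicit rule a simple physics engine might apply. The names (Ball, resolve_collision) are invented for illustration and borrowed from no real engine: the code detects that two circles overlap and then imposes an equal-mass elastic bounce along the line between their centers. The repulsion is decreed by the programmer; nothing electrical is simulated.

from dataclasses import dataclass

@dataclass
class Ball:
    x: float   # position
    y: float
    vx: float  # velocity
    vy: float
    r: float   # radius

def resolve_collision(a: Ball, b: Ball) -> None:
    """Explicit collision rule: if the circles overlap and are approaching,
    exchange their velocity components along the collision normal
    (an elastic bounce for equal masses)."""
    dx, dy = b.x - a.x, b.y - a.y
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0 or dist >= a.r + b.r:
        return                        # no contact: no rule fires
    nx, ny = dx / dist, dy / dist     # unit collision normal
    rel = (b.vx - a.vx) * nx + (b.vy - a.vy) * ny
    if rel > 0:
        return                        # already separating
    a.vx += rel * nx
    a.vy += rel * ny
    b.vx -= rel * nx
    b.vy -= rel * ny

# Two balls heading straight at each other swap velocities on impact.
a = Ball(0.0, 0.0, 1.0, 0.0, 0.5)
b = Ball(0.9, 0.0, -1.0, 0.0, 0.5)
resolve_collision(a, b)
print(a.vx, b.vx)  # -1.0 1.0

In the kind of microscale simulation imagined next, no such rule would exist; the bounce would have to emerge from the interactions of the simulated particles themselves.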
But what if our physics engines had a great deal more computing power at their disposal, and simulated physics at the microscale, allowing macroscopic effects to emerge? In such a situation, the artificiality of the physical effects is less clear. They are not explicitly created by the programmer, but rather appear as a result of the same processes by which they come about in real life. As artificial intelligence researcher Steve Grand writes in his book Creation:
A computer simulation of an atom is not really an atom.... But if you make some molecules by combining those simulated atoms, it is not the molecules’ fault that their substrate is a sham; I think they will, in a very real sense, actually be molecules. ... the molecules made from it are second-order simulations because they are made not by combining computer instructions but by combining simulated objects.
Suppose we were to take this to its logical conclusion and posit a simulation based on the still-undiscovered Grand Unified Theory of physics, that is, a perfect physics engine in which even the behavior of the tiniest particles emerged from underlying rules rather than being directly simulated.
If we found ourselves in such a simulation, there is no way we could ever find it out; indeed, there is no way we can state with confidence that we are not. We don’t know anything about the nature of the ultimate substrate of this world. It could be a computer, the mind of God, or turtles all the way down. Yet this is the world which we refer to as “real.” I think we’re justified in referring to it as such, but in so doing we have to accept that being real does not necessarily exclude the possibility of being a computer simulation.
Although its portrayal is not entirely consistent, the fidelity of physical behaviors in the Matrix suggests that it’s just such a perfect simulation. The human occupants of the Matrix are shown to be aliens to that world. But it can never be entirely clear that the “real world” to which they escape is less of a simulation than the one they left, or that the natural inhabitants of the Matrix—the programs—are any less flesh-and-blood than their jacked-in counterparts. Indeed, both of these conclusions are hinted at in the second and third films.
Wednesday, October 3, 2007
Metaphysics matters
Kant opens the first preface to the Critique of Pure Reason with a rumination on metaphysics:
Human reason has this peculiar fate that in one species of its knowledge it is burdened by questions which, as prescribed by the very nature of reason itself, it is not able to ignore, but which, as transcending all its powers, it is also not able to answer.
In the 20th century, as science rose to its golden age, the then-dominant philosophy of science, logical positivism, succeeded in expelling metaphysics from favor among English-speaking philosophers. It is still seen in some circles as a little hokey to profess an interest in the Queen of the Sciences, and metaphysics remains a dirty word in science departments everywhere.
But there’s a reason why Kant says that metaphysics can’t be ignored. Today’s “common sense”—or as I prefer, “vernacular philosophy”*—did not spring ready-made from nature. It reflects the outcome of the intellectual battles of yesteryear, and those battles were shaped by historical and political forces. To fail to examine metaphysics does not free you from holding metaphysical opinions; it only ensures that those opinions will be uninformed.
That’s why it was refreshing to see an op-ed, “Dualism and disease,” at STLtoday.com, which blames the lingering, mind-and-body metaphysics of Descartes for inequity in today’s health insurance programs.
This blog is founded on the premise that figuring out how to live with our present and near-future technologies will require intellectual work beyond engineering and design, beyond sociology and economics, down to the very first principles of philosophy. What happens to epistemology when search becomes as fast as recollection, and knowledge moves from the cranium to the cloud? What happens to metaphysics when form learns to mimic matter, when simulations become indistinguishable from nature and algorithms become indistinguishable from machines? These are questions with real, practical consequences.
*Edit: In Small Pieces Loosely Joined, David Weinberger uses the phrase “default philosophy” to refer to the same phenomenon. His is a much more graceful expression, and I’ll probably crib it for future use.
Friday, September 28, 2007
After the Net comes the Mesh?
By Darren Abrecht, McClatchy Interactive
Wireless mesh networks hold promise for lowering barriers to network connectivity. They also hold the potential to alter the balance of power in cyberspace, and revive the hopes of those who once believed that the Internet could provide a forum for open communication beyond the reach of corporate or government censorship.
Verizon and AT&T have both expressed interest in bidding on wireless spectrum in the FCC’s January 2008 auction. So has Google. Many analysts see this unusual move as a ploy to force the telecoms to observe open-access rules that Google supports and the telecoms oppose. However, at least one technology pundit, PBS’s Robert Cringely, believes that Google is planning to use the spectrum to build a large-scale mesh network and bring wireless broadband to the masses.
Meshes behave differently than the communications networks we’re familiar with. In our existing communication infrastructure, most client devices connect directly to a service provider’s network. When you make a cell phone call, for instance, the first stop for the signal after leaving your handset is your carrier’s nearest tower. If your phone can’t find a tower owned by your carrier or one of its roaming partners, you don’t get to make the phone call.
With a wireless mesh, however, other client devices become links in the communication infrastructure. A cell phone that was part of a mesh network wouldn’t necessarily be stranded if it couldn’t find a tower right away. It could search for other cell phones within range, and then bounce a signal from mobile to mobile until it found a tower. If the person you’re trying to call isn’t too far away, you might be able to connect to them without even using a tower. A Swedish company called TerraNet is trying out a cell phone system based on this concept. TerraNet phone calls, which are free, are routed using only other cell phones and the Internet.
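For illustration only, here is a minimal sketch in Python of how a handset might hunt for such a route. The node names and link table are made up, and real mesh routing protocols (TerraNet’s included) are far more involved; the point is just the hop-by-hop search: treat devices that can currently hear one another as a graph, and run a breadth-first search until a node that can reach a tower turns up.

from collections import deque

# Hypothetical snapshot of the mesh: which devices are in radio range
# of each other right now, and which nodes act as gateways to a tower.
links = {
    "my_phone": ["phone_a", "phone_b"],
    "phone_a":  ["my_phone", "phone_c"],
    "phone_b":  ["my_phone"],
    "phone_c":  ["phone_a", "tower_1"],
    "tower_1":  ["phone_c"],
}
gateways = {"tower_1"}  # nodes connected to the wider network

def find_route(source: str) -> list[str] | None:
    """Breadth-first search outward from the caller, hopping phone to
    phone, until any gateway is reached. Returns the chain of hops."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in gateways:
            return path
        for neighbor in links.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no tower reachable through the mesh

print(find_route("my_phone"))
# ['my_phone', 'phone_a', 'phone_c', 'tower_1']

In practice, of course, such a route would have to be rediscovered and maintained as phones move in and out of range, which is where real mesh routing protocols earn their keep.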
The same principle has been used to extend the range of Wi-Fi Internet access points. MIT is working with the city of Cambridge, Mass., to provide free Internet access throughout the city by means of a wireless mesh. A company called Meraki is doing the same for San Francisco. Meraki is partially funded by Google, adding fuel to the rumor of a wireless ‘Googlenet.’
While a mesh network can provide a route to the Internet backbone, using a mesh to connect individual users in a peer-to-peer fashion may prove to be the more revolutionary application. The One Laptop Per Child project plans for its devices to be used in this way, communicating directly with each other to enable chat, voice-over-IP, and project collaboration without accessing the Internet.
Bringing the power of social computing to the world’s poor is just one revolution that mesh networking could bring about. If everyone used their laptop as a server, client and router, it just might breathe life back into the dream, once held by many, that the Internet could be a positive force for liberty.
In 1996, early in the Internet’s explosion as a popular medium and just prior to its broad commercialization, activist and Electronic Frontier Foundation founder John Perry Barlow released “A Declaration of the Independence of Cyberspace,” a cyber-libertarian manifesto. Addressed to the “governments of the Industrial World,” the declaration asserted the identity of the Internet as an autonomous democratic community, exempt from the authority of existing governments.
The declaration was widely read and its sentiments broadly shared among early adopters. However, the Wild West of the early Web was not to last. Its independence was a consequence of the cultural gap between computing enthusiasts and authority figures, not an intrinsic feature of the Net itself. No one has believed in the independence of cyberspace since the government of the world’s most populous nation erected the “Great Firewall of China”—an electronic censor monitoring all Net traffic in that country—to restrict the free flow of ideas to its people.
In the U.S., the government and Internet backbone corporations have worked together to compromise civil liberties, often using copyright, pornography and terrorism as pretexts to invade the privacy of civilians. AT&T is the target of a class-action lawsuit alleging that the company collaborated with the NSA to illegally spy on American citizens. A federal judge has ruled the lawsuit may go forward, despite the government’s attempt to have it dismissed on state secret grounds.
Wireless mesh networks could allow users to communicate by routing signals through computers held by private citizens, without the need to pass through a backbone controlled by corrupt government and corporate entities. The result would be to wrest control of the infrastructure from powerful interests and bring it under the domain of the public good.
©2007 McClatchy. Reprinted with permission.
Labels: column, cyberlibertarian, democratic technology, McClatchy, mesh
Thursday, September 27, 2007
Time and the Technium
I recently discovered that, in addition to his widely-read “Cool Tools” blog, former Wired executive editor and Long Now Foundation board member Kevin Kelly has a blog called “The Technium.” If you’ve never heard of it, go read a few entries. It’s everything I hope this blog could someday be.
In the inaugural post, “My Search for the Meaning of Tech,” Kelly introduces the idea of the Technium:
“It’s a word I’ve reluctantly coined to designate the greater sphere of technology—one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types.”
We all have a habit of referring to only the leading edge of innovation as “technology”—in part, no doubt, because of the process of naturalization I described earlier. Things that count as “technology,” in our vernacular way of speaking, include computers, biotech, advanced manufacturing processes, space-aged alloys. Things that don’t count include bricks, novels, roads, languages.
In order to understand how technology and society evolve and transform each other, however, the concept must be freed from the contemporary moment. Technology is more than a handful of arbitrary categories that history has brought to momentary prominence. Once you drop the “cutting-edge” requirement, it becomes difficult to draw a line between artifacts that count as technology and those that don’t. Eventually, one must concede that technology includes all structures, concrete and abstract, which humans have created in order to bring about some imagined, desired end.
Viewed in this way, technology becomes inseparable from techne, one of Aristotle’s five categories of human knowledge. The study of technology becomes the study, not of artifacts or a disembodied historical force, but of human being and knowing.
Labels: Aristotle, Kevin Kelly, technological essentialism
Monday, September 24, 2007
I can't let you do that, Dave
Two articles caught my eye over the weekend because of their opposed views on the relationship of society to technology: Regina Lynn’s “Rude People, Not Tech, Cause Bad Manners” at Wired.com and George Johnson’s “An Oracle for Our Time, Part Man, Part Machine,” at NYTimes.com.
Lynn’s column argues against a common complaint: that an increasing number of people defer face-to-face interaction in order to connect via IM, cell phones, etc., often leading to obliviousness of people in their immediate, physical vicinity.
According to Lynn, technology doesn’t impede, but only enables interpersonal relationships. An IM-only friend is as real as an in-the-flesh friend. Indeed, for the socially awkward, electronically mediated relationships may be deeper and more open than those conducted face-to-face. And the real problem with too-loud-in-public cell phone conversations is less the gadget than the oaf holding it.
I tend to agree, with some reservations. If technology is capable of enabling “good” socialization, it must likewise be capable of magnifying our rudeness.
Lynn sees us as in control of our actions. The technology has no insidious impact on human behavior. It is powerless and thus blameless: cell phones don’t offend people, people offend people. Yet no one can claim that antipathy towards public cell phone use isn’t a social phenomenon, or that it could exist if there were no cell phones.
Johnson’s piece begins by breaking down the etymology of “algorithm” before ominously announcing, “It was the Internet that stripped the word of its innocence.”
He takes issue with two distinct, but closely related phenomena of the internet age: the automation of judgment through the use of powerful algorithms, e.g. Google’s PageRank or NewsRank, and systems—both human- and machine-directed—which perform knowledge-intensive tasks through crowdsourcing, e.g. Wikipedia or Amazon’s Mechanical Turk.
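To give a sense of what automated judgment looks like on the inside, here is a deliberately tiny sketch of the idea behind PageRank: a page’s score is, roughly, the chance that a surfer who wanders the web by following random links ends up on that page. This is only the textbook power-iteration toy, run on an invented four-page web; it is not meant to resemble Google’s production system.

    # Toy PageRank by power iteration over an invented four-page web.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    pages = list(links)
    damping = 0.85  # chance the surfer follows a link rather than jumping at random
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):  # repeat until the scores settle down
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))  # pages with more, and better, inbound links rise to the top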
Johnson fears that these entities are robbing us of judgment and enslaving us to a technological hive-mind. His is less an argument than an appeal to the viscera: on his account, these entities are horror-show symbiotes of man and machine, perhaps a “buzzing mechanism with replaceable human parts”, or “an organism with an immune system of human leukocytes”. The illustrations accompanying the article belong to a type commonly seen in the media of fearful technological essentialism: crude cyborgs of human tissue and forms superficially evocative of industrial tech.
It seems unlikely that Wikipedia will overthrow the nation-state as the leading technology for the subjugation of individual will (though virtual reality pioneer Jaron Lanier, for one, is troubled by the possibility). Systems can and do take on a direction of their own, but they are always created to satisfy a set of human interests. Rather than framing our fears in terms of man-versus-machine, let us ask who the machine is working for, and if their values are our values.
Tuesday, September 18, 2007
Dedication
text–script–machine is dedicated to the memory of Ronald Anderson, SJ, my friend and advisor, who passed away in June at the age of 57.
Ron was a Jesuit priest and held PhDs in physics and philosophy. His research involved applying the techniques of literary analysis to 19th century scientific texts, in order to better understand the development of electrical theory. He was also deeply interested in how the Catholic faith might respond to the challenges of postmodernism.
It was Ron who introduced me to science and technology studies, and I will sorely miss not being able to correspond with him as I set out on this endeavor.
about that official launch...
I had intended to do a sort of “grand opening,” including a mass email announcement, a new design, and a rigorous posting schedule. But I seem to be easing into things—collecting notes for good topics to write on, letting people know one-by-one, and so on. This is working for me so far, and I think I’m going to keep it up.
I’m a trained graphic designer—if not a practicing one—so using the default blogger template bugs me. There will be a new design along at some point. I’m also going to do my best to post at least three times a week from here on out, though life will get in the way at times.
If you have a Google account, please feel free to log in and comment on or argue with any post you see here. If you’re just tuning in, the original post features some brief notes on what this blog is about.
Monday, September 17, 2007
Shiny plastic cases
One of the things that fascinates me about technology is what I call technological encapsulation, the process of obfuscation by which the engineering miracle is brought down to earth and made humane.
We’re really not all that interested in how the gadget got here. We only want it to exist, ready-to-hand, to help us in satisfying our wants and needs. And so we hide each of its Aristotelian causes, except the final cause, which is our own reason for employing it.
Like a grain of sand in an oyster shell, the technological artifact must be physically enclosed. Otherwise, its rude workings may remind us of the inferiority of our understanding to that of the technologists who created it. Thus its formal and material causes are hidden from view. And since we might object to the environmental or socioeconomic misdeeds that were likely involved in the creation of any modern artifact, its production process is hidden behind factory walls, in foreign nations, and concealed as an industrial secret.
Of course, we must be complicit in the process of technological encapsulation; if we wanted to open the case or research the production process, we could. And some people do—but this is a distinctly different attitude in approaching the object than the manufacturer intends, or most people practice.
If the process of encapsulation is complete, the technological artifact will enter our world as a “natural” object, or at least a humane one. But the process is almost never complete, and so most artifacts require the passing of a generation to become naturalized.
Witness, for instance, the phenomenon of typewritten text. An entire generation of writers railed against the dehumanizing aspects of the clattering mechanical beast. Of course, the pen it replaced was also unnatural, strictly speaking, but had the advantage of familiarity. Now, typewriters are quaint collectibles, calling to mind a more romantic era when the novel was king and writing was still performed with mechanical devices in physical space. And modern techies of a conservative bent swear by “plain text,” i.e., typewritten text, as the simplest, most honest incarnation of the written word.
Labels: Aristotle, encapsulation, Heidegger, naturalization
Wednesday, August 29, 2007
Pie in the sky
When Apollo 11 landed on the moon in 1969, it was a Copernican moment in a way. The keepers of the universe-on-paper, the universe of cosmological theory, had long ago banished the earth from its privileged position at the center of it all. The universe-as-lived, however, remained unavoidably Ptolemaic. For our ordinary comings and goings, and consequently for the liberal arts and social sciences, the earth has been the end-all and be-all.
But the moon landing, which fulfilled the project defined by Jules Verne in From the Earth to the Moon a little more than a century after he first penned it, expanded our realm of influence by a sphere. The center of gravity of human life shifted outward, and we began to think of ourselves as spacefaring.
Lately, we have begun to think of the moon—which might as well have been made of green cheese, for all we knew or cared—as a real economic resource.
As my brother reminds me, it’s still a long shot that mining the moon will ever prove worthwhile. But the mere fact that it has been proposed signals a sea change in our thinking.
Monday, August 27, 2007
The Spock Entity Resolution and Extraction Challenge
Newly launched people search engine Spock is offering $50,000 to the team that can create the best algorithm for identifying unique individuals from the ever-changing morass of personal data on the internet. The team of judges is composed of software engineers, computer science professors, and a venture capitalist.
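I have no inside knowledge of how the contestants will attack the problem, but the usual starting point for entity resolution looks something like the sketch below: treat each record scraped from the web as a bundle of attributes, score pairs of records by how much their bundles overlap, and merge the pairs that score above some threshold. The records, the fields, and the 0.5 threshold are all invented for illustration.

    # A naive entity-resolution sketch: merge records whose attribute
    # bundles overlap strongly enough. (All of the data here is invented.)

    def jaccard(a, b):
        """Overlap between two attribute bundles: 0 means disjoint, 1 means identical."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    records = [
        {"name": "j. smith", "city": "raleigh", "employer": "acme", "interest": "sailing"},
        {"name": "john smith", "city": "raleigh", "employer": "acme", "interest": "chess"},
        {"name": "j. smith", "city": "portland", "employer": "globex", "interest": "chess"},
    ]

    THRESHOLD = 0.5
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            score = jaccard(records[i].values(), records[j].values())
            verdict = "same person?" if score >= THRESHOLD else "probably different"
            print(i, j, round(score, 2), verdict)

Notice that the first two records, which most human readers would take to be the same person, already fall below the threshold because of a trivial name variant; that is exactly the sort of trouble the paragraphs below describe.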
It’s too bad that no philosophers or sociologists were invited to tag along. They could have advised the Spock team that personal identity poses some of the oldest problems around. A Buddhist or Heraclitean would argue that the entity Spock is chasing is a non-entity, that no enduring personality exists—only temporarily assembled bundles of individual properties.
Without going that far, one still finds it nearly impossible to identify a collection of features that an individual retains from infancy to elderhood that is unique enough to be essential to them. People change party affiliation, citizenship, gender, size, and hair color. People change interests, ideologies, addresses, names. Perhaps your DNA is coincidentally unique to you, but what if you had a twin (or a clone)? What if you changed your genetic makeup through gene therapy?
Of course, there is at least one identity that remains constant and trackable, for those of us who aren’t secret agents, at least—and I suspect it’s the one Spock is trying to identify us with. It’s the one that cashes the checks—our “official” identity for civic and financial purposes. Unfortunately for Spock, the figures that nail it down—DOB, SSN, etc.—are the ones we are well-advised to keep off the internet. And for those of us unlucky enough to have our identities stolen, even this “official” identity doesn’t always resolve to a unique individual.
Spock shows much promise. But identity as found on the internet is unavoidably a bundle of properties without a substance to adhere to. Insofar as people are multifaceted or conformist, Spock will never entirely be able to resolve them.
Labels: bundle theory, personal identity, search, spock
Tuesday, June 12, 2007
The uber-gadget, deferred
By Darren Abrecht, McClatchy Interactive
Two years ago, while working and studying in Boston, I considered buying a PDA but decided to wait. I had limited space in my pack and already had to contend with a cell phone, mp3 player, digital camera, and laptop. The U.S. Army Survival Manual advises:
“In preparing your survival kit, select items you can use for more than one purpose. If you have two items that will serve the same function, pick the one you can use for another function.”
In this case, what’s good for the soldier is good for the urban commuter. At the time, I could look at the trend lines of converging device functionality and predict that, in a year or two, I would be able to replace my aging gadget fleet with a single lump of shiny plastic. This uber-gadget would perform the functions of the PDA, cell phone, internet communicator, digital music player and camera well enough for serious everyday use, even if not as well as an agglomeration of dedicated devices.
The Apple iPhone debuts at the end of this month as the much-hyped fulfillment of convergence manifest destiny. Does its advent signal a brave new era of pants with only one bulging pocket?
Alas. As I was watching my brother-in-law trying to connect his Nintendo DS to my home wireless network the other day, it occurred to me that the reality of the situation is—and could only ever have been—something quite different. While the iPhone or a similar device would allow me to do away with all the little lumps of plastic in my life, no single gadget could ever be everyone’s end-all. While many people carry around multiple pieces of personal tech, most of them have a go-to device, closer to their hearts and the center of their digital lives than the others. Ask yourself: if you could take only one gadget with you, which would it be? For most people, this would be a cell phone. For others, it might be a music player, GPS device, or handheld game console.
Instead of a single class of uber-gadget, what seems to be emerging is a plurality of multifunction device types, each organized around a different primary function. Even the iPhone is, first and foremost, a cell phone—a point underscored by Steve Jobs at its announcement, when he declared that its “killer app” is making phone calls.
Whatever your gadget of choice, the odds are good that beyond its primary function it can do many of the things that other devices can do. A few years out, devices like your favorite will almost certainly continue to exist as a class—and will take on more and more functionality that overlaps with other device classes.
Who knows? Some day, we might even see an iPod with a built-in FM radio.
©2007 McClatchy. Reprinted with permission.
Friday, May 11, 2007
Moving merchandise like media
By Darren Abrecht, McClatchy Interactive
There is something about shopping online that makes obtaining physical things feel a lot like obtaining information: click, click, get this weekend’s weather report; click, click, get a pair of sunglasses. With just a few more clicks, you can compare prices on the same item at different stores, see the rest of the manufacturer’s product line, or view similar items from other manufacturers.
This way of shopping is tremendously liberating, a point that was driven home for me one day when I was standing in front of the wall of small kitchen items at Target, shopping for a basting brush. I realized suddenly that all of the products were sorted by manufacturer, meaning that if I wanted to compare basting brushes, I had to search up and down the entire wall. I wanted to click the top of a column and change the sorting options on my search results.
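What I wanted, in effect, was the one line of code that every decent shopping site runs on the customer’s behalf. The snippet below is a generic illustration with invented product data, not any particular retailer’s system.

    # Re-sort the same product records by whichever attribute the shopper clicks.
    # (The product data is invented for illustration.)
    brushes = [
        {"brand": "Acme", "price": 7.99, "rating": 3.1},
        {"brand": "Baste-Rite", "price": 4.49, "rating": 4.6},
        {"brand": "ChefCo", "price": 12.00, "rating": 4.2},
    ]

    by_price = sorted(brushes, key=lambda item: item["price"])
    by_rating = sorted(brushes, key=lambda item: item["rating"], reverse=True)
    print([b["brand"] for b in by_price])   # cheapest first
    print([b["brand"] for b in by_rating])  # best-reviewed first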
With the exception of a few items that truly need to be seen in the flesh, I would gladly buy everything online. The only thing that prevents me—and, I suspect, many others—from taking that leap is the high cost of shipping. The lower sticker prices that can often be found online are offset by the cost of getting your purchase to your door, especially in the case of anything larger than a bread basket.
This is the point at which the physicality of the thing you’re trying to “download” comes back into sharp focus. You can pay more to get it there faster, or pay less if you’re willing to wait, but you will have to pay to get it there. While different studies give different figures for the percentage of virtual shopping carts abandoned over “shipping shock,” all of the numbers are big ones. Finding a way to reduce the cost of shipping is a major challenge for the online retail industry.
Of course, moving data around isn’t free, either, despite appearances. Not only does the user have to pay for access, publishers also have to pay for the bandwidth their visitors use. But getting information online “feels free” in a way that getting objects shipped to your door doesn’t.
A recent story from Wired News presents an opportunity to think about these two very different species of deliverables in similar terms. Google and the Space Telescope Science Institute were trying to move a 120-terabyte repository of data from the Institute’s servers to its new home on Google’s. As anyone who has ever had a too-large e-mail attachment dropped en route could tell you, the Internet is woefully ill-suited to data transfers of that magnitude.
Google’s solution, christened “FedExNet,” was to ship an array of empty hard drives to the scientists, who shipped them back full. Commentators on various Internet community sites were quick to dig up a quote, attributed to computer scientist Andrew Tanenbaum: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”
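The arithmetic behind the quip is easy to check. Assuming a 10-megabit-per-second connection (my assumption; the story does not give the Institute’s actual link speed), moving 120 terabytes over the wire takes years, while a crate of hard drives crosses the country in a day or two no matter how much data is packed inside.

    # How long would 120 TB take over the wire? (The link speeds are assumptions.)
    terabytes = 120
    bits_to_move = terabytes * 1e12 * 8  # decimal terabytes converted to bits

    for mbps in (10, 100, 1000):
        seconds = bits_to_move / (mbps * 1e6)
        print(f"{mbps:>4} Mbps: about {seconds / 86400:5.0f} days")
    # 10 Mbps works out to roughly three years; even a gigabit link needs
    # more than a week. The station wagon wins.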
All this serves to remind us that it was only very recently that the commercial distribution of data came to be separable in principle from the movement of discrete physical objects. Newsprint, strips of magnetic tape, and plastic platters are still with us, but their importance is steadily diminishing. Not so long ago, they were the only game in town.
The story serves to remind us, too, that the Internet, revolutionary though it may be, is just the latest in a series of vascular networks built to facilitate communication and commerce between far-flung groups of people. Each of these networks—the highway system, the telephone lines, the railways, to name a few—has its own unique characteristics. Nonetheless, they share certain essential similarities that present opportunities to think about them in similar terms, such that it becomes meaningful to discuss the bandwidth of a station wagon.
Given these similarities, is there anything that can be learned from the business models of content distributors that could make moving boxes down the highway “feel free” for the end-user? This may seem an odd question to ask, given that web content has been notoriously difficult to monetize. But to the extent that it has generated revenue, it has done so by adapting techniques from older, more established content distribution models. These techniques have already proven their worth at paying for shipping, because they date from the plastic-and-paper era, when consumer-sized portions of content had to be shipped, just like Google’s hard drives or that set of cutlery you just ordered.
The first and most obvious of these techniques is bundled advertising. How much would a manufacturer pay for you to find their insert as you open your package and remember that, in addition to the new sweater, you will also need a pair of mittens? Most large retailers already keep extensive data on what items are likely to be purchased together, so enabling targeted advertising of this sort would require very little additional investment. The ads could be used to subsidize shipping, perhaps allowing a broader range of items to be shipped free-of-charge.
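The “likely to be purchased together” signal is itself cheap to compute; in its simplest form it amounts to counting pairs across past orders, as in the sketch below (the order history is invented).

    from collections import Counter
    from itertools import combinations

    # Invented order history: which insert should ride along with a sweater?
    orders = [
        {"sweater", "mittens", "scarf"},
        {"sweater", "mittens"},
        {"sweater", "hat"},
        {"mittens", "scarf"},
    ]

    pair_counts = Counter()
    for basket in orders:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # The most frequent co-purchases involving a sweater are the candidate inserts.
    sweater_pairs = [(p, c) for p, c in pair_counts.items() if "sweater" in p]
    for pair, count in sorted(sweater_pairs, key=lambda pc: -pc[1]):
        print(pair, count)  # ('mittens', 'sweater') comes out on top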
A second technique, the subscription model, is already being tested in a new program at Amazon. The service, called Amazon Prime, takes a cue from the free-content illusion associated with a home internet subscription. Internet content feels free because you pay the same, easily-forgotten price whether you download 100 gigabytes or 5. Amazon Prime translates this logic to the highway. Sign up for the service, and you pay $79 for a year of free, two-day shipping on all purchases, with a few exceptions.
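The bet is easy to quantify. If an average two-day shipment would otherwise run about $8 (my guess, not Amazon’s figure), the $79 fee pays for itself around the tenth order of the year, and every order after that deepens the feeling that shipping is free.

    # Break-even sketch for a flat-fee shipping subscription.
    # (The $8 per-order shipping estimate is an assumption for illustration.)
    annual_fee = 79.00
    per_order_shipping = 8.00

    print(f"Break-even at about {annual_fee / per_order_shipping:.1f} orders per year")

    for orders in (5, 10, 20, 40):
        saved = orders * per_order_shipping - annual_fee
        label = "saves" if saved > 0 else "loses"
        print(f"{orders:>3} orders: subscriber {label} ${abs(saved):.2f}")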
Amazon has a history of pioneering innovative methods for dealing with the high cost of shipping. They were among the first to offer free shipping on certain items or purchases above a certain price threshold, now standard industry practices. Although some experimentation will be necessary to find the proper price point for a service such as this, I predict that applying the subscription model to shipping will pay off both for Amazon and for other online retailers who follow in its footsteps.
©2007 McClatchy. Reprinted with permission.
Labels: Amazon Prime, column, McClatchy, online shopping
official launch in July August
I am hoping to have enough prep work done to begin posting regularly in early- to mid-July August. I hope, too, to debut a unique visual design for text–script–machine at that time.
In the meantime, this space will serve primarily as an archive for my technology columns appearing on McClatchy newspaper websites.
Technology is a humanism
text–script–machine is a blog dedicated to the premise that the boundaries between our physical, mental, and representational life-worlds are dissolving, giving rise to a new, emergent reality. This reality is prefigured, but not entirely realized, by such twentieth-century concepts as Gibson’s cyberspace and Derrida’s “enlarged” textuality.
The name text–script–machine is meant to draw attention to the script, i.e., the computer program, as a paradigm for the entities which will inhabit this new reality. The script dwells at an intersection of the previously distinct categories of text and machine. A work of both literature and engineering, the world it invokes is both fictional and real.
In particular, the “object” of object-oriented programming, which by virtue of its opacity gains metaphysical substance, has begun to serve as the building block of simulations which are more than simulations, enabling such characteristic developments as science in silico and a brisk commerce in virtual goods.
Meanwhile, computing is transforming our experience of physical space, not only through the influence of its output devices—screens, speakers, printers, rapid prototyping devices, and so on—but also by changing our habits, expectations, social interactions, and theoretical categories.
text–script–machine will take it for granted that engineering cannot be allowed to stand separate from humanistic questions; in particular, the methodological abstraction of “how” from “for whom” must be regarded as a mistaken notion from a now-fading historical era.
This will probably suffice for introductory remarks. At any rate, the character of the blog will be determined less by my mission statement for it than by what trends emerge as it develops. This is very much an exercise in exploration, and it’s far too early to determine what that exercise might turn up.