Monday, June 30, 2008
On hiatus for home renovations
Around the time of my last blog post, my wife and I bought a “fixer-upper.” Blogging to resume after fixer-upping.
Thursday, May 15, 2008
One more step on the road to the mirror world
When the Street View component of Google Maps debuted, I was amazed. More than anything else, I was shocked at what must be the extraordinary expense and effort of gathering and stitching together that much photographic data.
I was not optimistic about the prospects that the project could be maintained. Could even a corporation as well-heeled as Google afford the upkeep on a fleet of cars equipped with 360° cameras and hard drives, sending them out frequently enough to keep the imagery from falling terribly behind the times—and then give it away for free?
This week, a new component of Google Maps debuted. Along the top of the Google Maps window is a row of buttons offering alternate views: “Traffic”, “Satellite”, “Terrain”, and so on. A new button, “More”, appeared a couple of days ago. Roll over the button and you have the option of adding Wikipedia entries or geotagged photos.
The addition of photography is the most interesting to me, because I expect the availability of geotagged photos to balloon in the years to come. With the boom in sites that allow photos to be indexed and searched by location, how long before we see cameras with built-in GPS that automatically add latitude and longitude to the photo file’s metadata?
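The software side of that is already simple enough. Here is a rough sketch in Python of pulling the coordinates back out of a geotagged JPEG with the Python Imaging Library; the filename and the sample output are placeholders, and it assumes the camera wrote the standard EXIF GPS tags (degree/minute/second rationals plus a hemisphere reference):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_degrees(dms, ref):
    # EXIF stores each coordinate as (degrees, minutes, seconds) rationals
    deg, minutes, seconds = (float(x) for x in dms)
    value = deg + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def gps_from_photo(path):
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)  # 34853 is the EXIF "GPSInfo" tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(gps_from_photo("photo.jpg"))  # e.g. (35.78, -78.64) for a photo taken in Raleigh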
And with the explosion in the availability of these photos, how hard would it be to use a Photosynth-like program to stitch them into existing Street View photos, updating the view whenever someone, anywhere, takes a picture and uploads it?
Eventually, I expect that products like Street View will morph into a fully immersive 3D model of the world, with heavily-trafficked areas of the real world being updated in something close to real time. Wouldn’t it be great if all the security cameras in the typical urban space could feed into that model? We would be just as watched-over, and could even participate in the watching.
Labels: crowdsourcing, Google Maps, metaverse, mirror worlds, Photosynth
Thursday, April 24, 2008
Virtual money: first across the divide
Kant once wrote that “A hundred real thalers do not contain the least coin more than a hundred possible thalers.” (Critique of Pure Reason, A599)
These words might have been true in 1781, but at the time I’m writing this, a US dollar is worth 264 Linden dollars. Something unprecedented is going on with virtual currencies.
Kant’s point was that, given a concept, claiming that the object posited by the concept exists adds nothing to the concept itself. It just describes a particular relationship between the concept and the real world. This seems like a rather academic argument—and it is—but it points out the futility of trying to prove that something exists by armchair reasoning. Kant was specifically targeting Anselm’s ontological argument for the existence of God.
If we were to set about looking for 100 thalers, we wouldn’t have to specify that we’re looking for 100 thalers that exist. The last bit would be taken for granted. So trying to add existence as an additional predicate to the concept of our 100 thalers is pointless, according to Kant.
That would indisputably be the case, if we only ever approached concepts as necessarily referring to objects in the real world. But that’s not true; make-believe is another way that we engage with concepts. A child at play could indeed be searching for 100 thalers that do not exist. I’ve spent a lot of time lately trying to rustle up Gil while playing Final Fantasy XII.
In Kant’s time, the distinction between fiction and non-fiction was clear. But during the twentieth century, mass media allowed fiction to become the jumping-off point for new social realities. Fan communities made the production of entire fictional universes profitable. People began speaking Klingon and invested themselves in social role-playing games.
The social element is key to explaining how virtual currency has broken through to the real world. If I like something, it has value to me, whether it’s real or fictional. If the pool of people who value something is large enough, and trade can occur, then economic forces will come into play. Online role-playing games have allowed the creation of fictional goods that can be traded among massive numbers of people.
It cannot go unremarked that while fictional money has become real, our real money long ago became fictional. With the abolishment of the gold standard and the adoption of fiat currency, our money became nothing but a function of intersubjective perception of value—a move that prepared us to accept the possibility of virtual currencies.
Sunday, April 6, 2008
New feature
Instead of link roundup posts, I’ve decided to stream my most recent tech-related links on the sidebar to the right.
This should see quite a bit more activity than the mainbar. Given the subject matter and my actual posting frequency, I’m beginning to think that my initial goal of three posts a week was too optimistic. I can see to it that the links list is updated more or less every workday, and I will continue with my roughly once-per-week posting frequency.
Wednesday, March 19, 2008
R.I.P., Arthur C. Clarke
Clarke passed away today in Sri Lanka at the age of 90.
His three laws of prediction, as stated on Wikipedia:
- When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
Friday, March 7, 2008
Life on the Jumbotron
I went to a Carolina Hurricanes game with my wife and a couple from her office last night, and it gave me an occasion to make a few observations.
The modern professional sporting event is a hyperreal spectacle on a grand scale, in which reality and representation are reflected endlessly, back and forth onto one another, until they are impossible to disentangle.
The constructedness of pro hockey is relentless, and shows up in everything from the idea of professional sport itself—the salaries of men tied to their performance in a series of events with seemingly arbitrary rules—to the indoor climate control that severs hockey from its situatedness as a game of the North and of winter.
Yet the assembled audience gives the game the weight of intersubjectively verified reality, and the reporters on Press Row immediately elevate that verifiability from the level of shared anecdote to the level of published fact.
The game itself is only a fraction of the spectacle. The action below is reflected on the Jumbotron above, where it is given the malleability of media and becomes one element of a hyperkinetic multimedia collage. Live game footage, instant replays, movie clips, player interviews, commercials, celebratory graphics, and audience shots all tumble by at a frenzied pace.
I have sat on Press Row as a reporter, armed with a wifi connection, a television monitor, and a thick press packet full of statistics and news clippings. From that vantage the event becomes a hypertextual artifact, full of references to the communities which sustain it, to past and future games, and to the present game as both live and mediated spectacle.
For the ordinary spectator, the audience shots are one of the most important elements of the experience. We go to the game not so much to watch an event as to be a part of an audience, with all of its attendant rituals. By playing the role of the fan well, you might be rewarded with a moment on the Jumbotron, to see yourself being seen, to become as undeniably real as the game itself, if only for an instant.
Wednesday, March 5, 2008
Links: Web 2.0 freedom, VR interfaces, soft luddites
- The Web democratized media for the technologically adept. Web 2.0 is democratizing it for everyone else. But what happens when the regime notices?
- Researchers at the University of Tokyo have created a pair of goggles that recognizes and stores references to objects that you look at.
- Meanwhile, researchers at Carnegie Mellon have created an electromagnetic device that allows you to feel virtual objects.
- Readers report an emotional attachment to physical books. I wonder if this is simply a matter of familiarity, as the article implies. I doubt it. I suspect that the cultural mythos surrounding the book is adding a quasi-moral dimension here. Could you really bring yourself to abandon the good, true, trusty book, emblem of the learned and vehicle of the Word?
- BuzzFeed gently mocks the technology-fasting that seems to be all the rage lately, while providing several links to examples of the phenomenon.
Tuesday, February 26, 2008
Gaming goes hyperdimensional
My wife likes to make fun of my spatial reasoning skills. She’s an intern architect and a master closet packer, able to maximize the use of space in any situation. In contrast, whenever I try to imagine any but the simplest, most static of three-dimensional spaces, my brain starts moving very, very slowly. I mostly navigate the world by thinking about space in either the plan or elevation view.
As a kid, I always failed those tests where you had to imagine rotating a solid object and then pick it out from a list of similar solids. I never solved my Rubik’s cube, despite wasting man-months of my life in the attempt.
My condition couldn’t have been helped by the amount of time I spent racing around in the two-dimensional Nintendo dungeons of my formative years.
I’ve grown comfortable with the 3D interfaces that have become standard console game fare in the last decade or so. But it was not an easy transition. I remember one nauseous dream that consisted of being trapped in a sped-up Super Mario 64 level. I gave up after less than an hour of playing The Legend of Zelda: Ocarina of Time, declaring that no 3D Zelda could ever be as good as 2D A Link to the Past (now that I think about it, I’m not sure that this prediction was actually wrong).
But why stop with straightforward three-dimensional gameplay? One of the great things about virtual worlds is the possibility of bending even our most basic physical rules. Super Paper Mario for the Wii, and more recently the indie game Fez, have experimented with gameplay where the same virtual space can be 3D one minute, 2D the next, as if your in-game persona were the sphere from Abbott’s Flatland. One of last year’s biggest sleeper hits, Portal, throws you into a world where you can create wormholes that flatten distances and bend gravity around corners. 4D Tetris, anyone?
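To make the dimensional trick concrete, here is a toy sketch of my own (not taken from any of these games) showing how one set of 3D coordinates collapses into different 2D worlds depending on which axis the camera looks along:

```python
# Made-up level geometry: a handful of blocks in 3D space
blocks = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 3), (4, 1, 3)]

def flatten(points, view_axis):
    # Project 3D points to 2D by simply discarding the coordinate along view_axis
    keep = [i for i in range(3) if i != view_axis]
    return sorted({(p[keep[0]], p[keep[1]]) for p in points})

print(flatten(blocks, view_axis=2))  # looking down the z-axis: one 2D silhouette
print(flatten(blocks, view_axis=0))  # rotate the camera 90 degrees: a different 2D world
```

The same blocks that are unreachable in one flattened view sit right next to each other in another, which is exactly where the puzzles come from.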
I’m in no position to review these games, since I haven’t played any of them. If I did, I think I would have smoke coming out of my ears. But I’m interested to see if games like this catch on, and what advances in science and mathematics might come from a generation that grew up playing them.
There’s been quite a bit of evidence that video games can teach the brain new tricks. Will the kids of tomorrow be as comfortable working in Hilbert space as the rest of us are on a Cartesian graph? Quantum mechanics, cosmology, systems analysis, topology, and game theory, among countless others, are fields that could benefit from minds that are natively hyperdimensional.
Then again, all those hours with the Rubik’s cube didn’t teach me anything.
Monday, February 18, 2008
New issue of Information, Communication & Society
I’m trying to familiarize myself with the world of STS journals. Information, Communication & Society 11:1 is out, with a few articles that seem interesting:
In “Software defaults as de facto regulation”, Rajiv Shah and Christian Sandvig argue that software defaults may be seen as imposing something like the rule of law upon non-technical users who lack the sophistication to change them. Security and privacy experts must therefore recognize the ineffectiveness of recommendations that target the end-user.
In “Effects of Internet use and social resources on changes in depression”, Bessière et al. study how various types of Internet use affect depression sufferers. The not-too-surprising results: using the Internet for communicating with loved ones is more likely to improve your depression than using it for solitary web surfing.
David Beer looks at the personal MP3 player and its transformative effects on personal and social music experiences in “The iconic interface and the veneer of simplicity”.
Unfortunately, I can't get at any of these through my soon-to-expire university library proxy. Your mileage may vary. Beer’s article in particular, I think, may have me hoofing it down to D.H. Hill for the dead tree version.
Friday, February 1, 2008
Reflections on a haircut
When I arrived at the philosophy program at Boston College, I noticed that an unusual number of men associated with the department—both graduate students and professors—groomed themselves in one of two distinct styles, both of which struck me as somehow more “philosophical.”
The first was to allow one’s beard to grow long and unkempt. Unlike the unkempt beards I had seen on undergraduates at NCSU, this was not a token of laziness or disregard for one’s personal appearance. The individuals wearing beards were otherwise impeccably well-groomed, and oftentimes found in class wearing suits. I came to call this look “The Socrates.”
The second look was to be completely clean-shaven, and wear one’s hair gelled back and up, in order to create the illusion that one’s head was several inches taller than it really was. I called this “The Derrida.” My hair was kind of already going in that direction, and I thought it looked like a good, modern look for a man in a philosophy program, so this is the style I adopted.
(After finishing up at BC, I wore my hair down with a loose part for several months. I never really liked it, so I’m back to The Derrida, or the “Wall Street hair” as one of my coworkers calls it.)
Having formulated these observations, I recently enjoyed reading Barthes’ “The Iconography of the Abbé Pierre” in his Mythologies collection:
The Abbé Pierre’s haircut, obviously devised so as to reach a neutral equilibrium between short hair (an indispensable convention if one does not want to be noticed) and unkempt hair (a state suitable to express contempt for other conventions), thus becomes the capillary archetype of saintliness...
For among priests, it is not due to chance whether one is bearded or not; beards are chiefly the attribute of missionaries or Capuchins, they cannot but signify apostleship and poverty... Shaven priests are supposed to be more temporal, bearded ones more evangelical...

Barthes worries about the overconsumption of these signs, in place of true justice and charity. I think this is a legitimate concern. But I have also noticed that in our socially constructed world, truth must come with a little bit of rhetoric in order to find a foothold.
What does all this have to do with technology? Consider the industrial design explosion of the last decade. There are quite a lot of individuals who find it all a lot of fuss with no substance. If your box works well, who cares what it looks like? These are the folks who would never buy an iPod, because you would have to be an idiot to shell out for one when you can get a player that has more storage space for less.
But the market disagrees with the theory that an iPod is worth less than a SanDisk. The iPod signals to others, among other things, an appreciation for Apple products, for fashion, and for modern design. Your choice of MP3 player may not play a role in determining your identity, but it does play a considerable role in communicating your identity. It has value, in short, as a Barthesian myth.

If you need confirmation of this theory, just count how many times in a day your friends and coworkers emote—either in disgust or appreciation—at the personal appearances, affectations, and possessions of people they don’t know, or know only marginally. The things we surround ourselves with have utility not merely in pursuing our individual teloi, but in pursuing our goals as social actors as well. This is common sense, but too often disappears from discussions of an artifact’s utility as such.
Wednesday, January 30, 2008
from the paper to the cloud
For some time, I’ve noticed that what constitutes the “real” copy of a text has been slowly drifting into cyberspace and Platonic abstraction. Back in the early 90s, when I discovered computing, it was understood that you had to have a printer in order to take what was on the screen and make a real document out of it. Word processing was just moving the words around on-screen in preparation for printing. In high school and into my early undergraduate years, I seldom bothered to save a word processing document that had already been printed out and turned in.
It was the ubiquity of Microsoft Word, Adobe Acrobat, and email that changed all of this. We became accustomed to shuttling documents around, reading them onscreen, inserting our edits and then shuttling them back. Increasingly, you seldom printed a document out unless you wanted something handy to mark up. Once you were done marking it up, it went in the white paper recycling bin, because what else could you do with it? Type it back in? It was no longer a living, editable, emailable text. It was just dead paper.
Next up: the document moves from our individual hard drives and inboxes, where multiple copies of the same document in different drafts might exist in confusion, to online storage with versioning. Our one-off electronic copies are being pushed away from the center by group-editable, online-only master texts, just as they themselves once pushed off the paper copies.
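A minimal sketch of what that master-text model amounts to, as hypothetical toy code rather than any particular service’s API:

```python
import datetime

class VersionedDocument:
    def __init__(self, text=""):
        self.history = []          # list of (timestamp, author, text) tuples
        self.save(text, author="initial")

    def save(self, text, author):
        self.history.append((datetime.datetime.now(), author, text))

    @property
    def current(self):
        return self.history[-1][2]   # the latest revision is the "real" copy

    def revision(self, n):
        return self.history[n][2]    # any earlier draft is still recoverable

doc = VersionedDocument("First draft.")
doc.save("First draft, with edits from a collaborator.", author="coworker")
print(doc.current)      # the living master text
print(doc.revision(0))  # the superseded draft, kept for reference
```

There is one text, and every copy anyone holds is just a view onto it.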
In the post-cyberspace world, paper will be redeemed. Ubiquitous cameras combined with advanced OCR will pull in the static printed text and transform it again into a living document, and electronic ink displays will put the text back in our hands. But it will remain the case that any copy is just a shadow of the Form; destroy it and the text will live on.
Tuesday, January 15, 2008
Simulating cities from the Republic to Micropolis
Last week, the source code of the original SimCity video game was opened under the GPL license as “Micropolis.” SimCity is the prototypical simulation-as-entertainment game, but constructing imaginary cities to see how things turn out has a long history:
If we could watch a city coming to be in theory, would we also see its justice coming to be, and its injustice as well? (Plato’s Republic, 369a)

SimCity was also one of the first “god games,” strategy games where a player is given a bird’s eye view of a city, social group, or ecosystem and allowed god-like power in managing its development. The idea transfers well to political philosophy as state-simulation, for who is ultimately the ruler of the Republic? It is not the guardians but the theorist, Plato himself. The guardians were not free to imagine, for instance, that the best thing for their city might be to set up a representative democracy.
Indeed, so it is with all of the societies imagined by the political theorists. How could their utopias come about except through the single-party rule of ideologues? Every attempt to create a civilization that followed the blueprint of some singular political theorist has resulted in tyranny—at least, every attempt that was not checked by the actions of some other party equally bent on seeing their theory of government brought to fruition.

This is also why I have become wary of strict constructionist interpretations of the Constitution, for instance. I recognize the need for stability, particularly in the day-to-day operations of government and the guarantee of individual rights. But such a document also serves to give power to the intellectuals who drafted it, far in excess of what is fitting for a democratic society. The Constitution is, among other things, a tool for the long-departed to exert their political will on the people.
It would be interesting to see a SimCity which places you not in the role of a God-like mayor, but of a citizen. The player could have various means at their disposal to steer the direction of the city through persuasion and politicking.
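As a very rough sketch of what I have in mind (purely hypothetical, and nothing to do with the actual Micropolis code), policy in such a game might drift toward whatever the persuaders can collectively muster, rather than being set by mayoral fiat:

```python
import random

class Citizen:
    def __init__(self, preference, persuasiveness):
        self.preference = preference          # e.g. the tax rate this citizen wants
        self.persuasiveness = persuasiveness  # how much weight their lobbying carries

class City:
    def __init__(self, citizens, tax_rate=0.07):
        self.citizens = citizens
        self.tax_rate = tax_rate

    def tick(self):
        # Policy moves toward the persuasion-weighted average of what citizens want
        total = sum(c.persuasiveness for c in self.citizens)
        target = sum(c.preference * c.persuasiveness for c in self.citizens) / total
        self.tax_rate += 0.1 * (target - self.tax_rate)

citizens = [Citizen(random.uniform(0.02, 0.12), random.random()) for _ in range(500)]
city = City(citizens)
for _ in range(20):
    city.tick()
print(round(city.tax_rate, 3))  # settles near the weighted consensus
```

The player would be one citizen among the five hundred, spending their effort raising their own persuasiveness or recruiting allies.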
Labels: democratic technology, Plato, political philosophy, SimCity, simulation
Monday, January 7, 2008
from .txt to text
As an online news producer, I have often found myself using software tools that seem clumsily suited to working with journalistic content. It’s becoming somewhat cliché in the industry to say that all content management systems are bad, and I believe I have an idea why that is.
The problem is that there are a number of words that are shared between the humanistic disciplines and computer science—words which are deceptive in that they appear to be simple and straightforward in meaning, but which in fact have evolved different, if overlapping, meanings in each of those two traditions. I refer to words like “graphic” and “music” but especially to “text.”
To the writer, the word “text” can mean any number of things—a handwritten note, an article from a journal, a book. A text may include illustrations, photographs, snippets from other languages, unusual formatting choices, whatever is necessary to express the intent of the author. These are not add-ons to the content, but integral to it.
In contrast, to the computer scientist, “text” is what a text editor edits. The word refers to a binary representation of a typewritten text. There is a limited character palette, as each character must be assigned a number. These are then arranged in sequence, allowing for display only in rigid rows, left-to-right, top-to-bottom (some latitude is allowed for non-Latin alphabets, but that is a relatively recent development), with no other formatting allowed.
Any additional presentational features of the text—including hard line breaks, character formatting, kerning and leading, mathematical typesetting, included images—must be added through any of a number of conventions allowing metadata to be introduced into the flow of the text. Examples include LaTeX, HTML, and the Microsoft Word document format. The text then splits into code and WYSIWYG views, and your opinion as to which of these constitutes the “real” text is likely to be a function of whether you work as a writer or a computer programmer. The people who use the software want to manipulate the WYSIWYG text, but the people who create the software are concerned with manipulating the code text.
The theory behind WYSIWYG word processing is that this should make no difference, since the two forms are simply different expressions of the same structure. In reality, however, the code text represents the structure directly, and the WYSIWYG text is a translation. The situation is better with well-crafted software, but writing texts that differ markedly in structure from typewritten or “plain” texts is often impossible or extremely difficult without learning how to work directly in the code.
I’m not sure this situation can ever be fixed as long as WYSIWYG texts are coded as “plain” texts with embedded metadata. The writer will require a new model of textual representation that deals with characters and formatting information at the same level of abstraction.
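As a sketch of what I mean (a hypothetical model, not any existing word processor’s format), imagine the text as a sequence of runs, each carrying its formatting alongside its characters, with both the plain and marked-up views derived from the same structure:

```python
# Today's approach: metadata smuggled into the character stream
embedded = "The <em>real</em> text is elsewhere."

# The alternative: runs of characters and their attributes, side by side
runs = [
    {"text": "The ", "italic": False},
    {"text": "real", "italic": True},
    {"text": " text is elsewhere.", "italic": False},
]

def plain(runs):
    # The WYSIWYG view is just one possible rendering of the structure
    return "".join(r["text"] for r in runs)

def as_html(runs):
    # So is the markup view; neither is more privileged than the model itself
    return "".join(f"<em>{r['text']}</em>" if r["italic"] else r["text"] for r in runs)

print(plain(runs))
print(as_html(runs))
```

Neither rendering would be the “real” text; the model itself would be.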