Wednesday, January 30, 2008

For some time, I’ve noticed that what constitutes the “real” copy of a text has been slowly drifting into cyberspace and Platonic abstraction. Back in the early 90s, when I discovered computing, it was understood that you had to have a printer in order to take what was on the screen and make a real document out of it. Word processing was just moving the words around on-screen in preparation for printing. In high school and into my early undergraduate years, I seldom bothered to save a word processing document that had already been printed out and turned in.
It was the ubiquity of Microsoft Word, Adobe Acrobat, and email that changed all of this. We became accustomed to shuttling documents around, reading them onscreen, inserting our edits and then shuttling them back. Increasingly, you seldom printed a document out unless you wanted something handy to mark up. Once you were done marking it up, it went in the white paper recycling bin, because what else could you do with it? Type it back in? It was no longer a living, editable, emailable text. It was just dead paper.
Next up: the document moves from our individual hard drives and inboxes, where multiple copies of the same document in different drafts might exist in confusion, to online storage with versioning. Our one-off electronic copies are being pushed away from the center by group-editable, online-only master texts, just as they themselves once pushed off the paper copies.
In the post-cyberspace world, paper will be redeemed. Ubiquitous cameras combined with advanced OCR will pull in the static printed text and transform it again into a living document, and electronic ink displays will put the text back in our hands. But it will remain the case that any copy is just a shadow of the Form; destroy it and the text will live on.
Tuesday, January 15, 2008
Simulating cities from the Republic to Micropolis
Last week, the source code of the original SimCity video game was opened under the GPL license as “Micropolis.” SimCity is the prototypical simulation-as-entertainment game, but constructing imaginary cities to see how things turn out has a long history:
If we could watch a city coming to be in theory, would we also see its justice coming to be, and its injustice as well? (Plato’s Republic, 369a)

SimCity was also one of the first “god games,” strategy games where a player is given a bird’s eye view of a city, social group, or ecosystem and allowed god-like power in managing its development. The idea transfers well to political philosophy as state-simulation, for who is ultimately the ruler of the Republic? It is not the guardians but the theorist, Plato himself. The guardians were not free to imagine, for instance, that the best thing for their city might be to set up a representative democracy.
Indeed, so it is with all of the societies imagined by the political theorists. How could their utopias come about except through the single-party rule of ideologues? Every attempt to create a civilization that followed the blueprint of some singular political theorist has resulted in tyranny—at least, every attempt that was not checked by the actions of some other party equally bent on seeing their theory of government brought to fruition.
This is also why I have become wary of strict constructionist interpretations of the Constitution, for instance. I recognize the need for stability, particularly in the day-to-day operations of government and the guarantee of individual rights. But such a document also serves to give power to the intellectuals who drafted it, far in excess of what is fitting for a democratic society. The Constitution is, among other things, a tool for the long-departed to exert their political will on the people.
It would be interesting to see a SimCity which places you not in the role of a God-like mayor, but of a citizen. The player could have various means at their disposal to steer the direction of the city through persuasion and politicking.
Labels:
democratic technology,
Plato,
political philosophy,
SimCity,
simulation
Monday, January 7, 2008
from .txt to text
As an online news producer, I have often found myself using software tools that seem clumsily suited to working with journalistic content. It’s becoming somewhat cliché in the industry to say that all content management systems are bad, and I believe I have an idea why that is.
The problem is that there are a number of words that are shared between the humanistic disciplines and computer science—words which are deceptive in that they appear to be simple and straightforward in meaning, but which in fact have evolved different, if overlapping, meanings in each of those two traditions. I refer to words like “graphic” and “music” but especially to “text.”
To the writer, the word “text” can mean any number of things—a handwritten note, an article from a journal, a book. A text may include illustrations, photographs, snippets from other languages, unusual formatting choices, whatever is necessary to express the intent of the author. These are not add-ons to the content, but integral to it.
In contrast, to the computer scientist, “text” is what a text editor edits. The word refers to a binary representation of a typewritten text. There is a limited character palette, as each character must be assigned a number. These are then arranged in sequence, allowing for display only in rigid rows, left-to-right, top-to-bottom (some latitude is allowed for non-Latin alphabets, but that is a relatively recent development), with no other formatting allowed.
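To make that concrete, here is a minimal Python sketch of the computer scientist’s view (purely illustrative, not any particular editor’s internals): a “plain” text is just a sequence of characters, each stored as a number, in a fixed order.

    # A "plain" text, as the computer scientist sees it: a sequence of
    # characters, each stored as a number, and nothing more.
    text = "Call me Ishmael."

    # The numeric palette behind the characters (Unicode code points here;
    # in earlier days it would have been 7-bit ASCII).
    codes = [ord(ch) for ch in text]
    print(codes)  # [67, 97, 108, 108, 32, 109, ...]

    # The sequence is all there is: no fonts, no italics, no layout.
    # Reversing the numbers reverses the text; nothing else survives.
    print("".join(chr(n) for n in reversed(codes)))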
Any additional presentational features of the text—including hard line breaks, character formatting, kerning and leading, mathematical typesetting, included images—must be added through any of a number of conventions allowing metadata to be introduced into the flow of the text. Examples include LaTeX, HTML, and the Microsoft Word document format. The text then splits into code and WYSIWYG views, and your opinion as to which of these constitutes the “real” text is likely to be a function of whether you work as a writer or a computer programmer. The people who use the software want to manipulate the WYSIWYG text, but the people who create the software are concerned with manipulating the code text.
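As a concrete illustration (the sample sentence is my own, but the pattern is the general one), here is the same emphasized passage as plain text, as HTML, and as LaTeX. In each case the formatting travels as extra characters woven into the same flat character stream as the words:

    import re

    # One sentence, with one word emphasized, in three encodings. The
    # emphasis is carried by additional characters mixed into the flow
    # of the text itself.
    plain = "The pictures are integral to the text."
    html  = "<p>The pictures are <em>integral</em> to the text.</p>"
    latex = r"The pictures are \emph{integral} to the text."

    # Strip the markup back out and all three collapse to the same
    # "plain" text.
    print(re.sub(r"<[^>]+>", "", html))
    print(latex.replace(r"\emph{", "").replace("}", ""))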
The theory behind WYSIWYG word processing is that this should make no difference, since the two forms are simply different expressions of the same structure. In reality, however, the code text represents the structure directly, and the WYSIWYG text is a translation. The situation is better with well-crafted software, but writing texts that differ markedly in structure from typewritten or “plain” texts is often impossible or extremely difficult without learning how to work directly in the code.
I’m not sure this situation can ever be fixed as long as WYSIWYG texts are coded as “plain” texts with embedded metadata. The writer will require a new model of textual representation that deals with characters and formatting information at the same level of abstraction.
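I don’t know what such a model would look like in practice, but as a rough sketch of the idea (my own illustration, not an existing system): rather than a character stream with markup threaded through it, each run of characters could carry its formatting as a first-class attribute, and the “plain” reading would fall out as just one projection of the document.

    from dataclasses import dataclass

    # A toy representation in which formatting is not metadata embedded
    # in a character stream but an attribute carried alongside each run
    # of characters. (Hypothetical sketch, not a real system.)
    @dataclass
    class Run:
        chars: str
        italic: bool = False
        language: str = "en"

    document = [
        Run("The pictures are "),
        Run("integral", italic=True),
        Run(" to the text."),
    ]

    # The "plain" text is a projection of the document, not its master copy.
    print("".join(run.chars for run in document))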