Complexism involves the extension of the world-view suggested by complexity science into the problem space of the arts and humanities. In doing so complexism provides a higher synthesis that subsumes both modern and postmodern concerns, attitudes, and activities. Complexism provides an intellectual meeting ground where 20th century conflicts between science and the humanities can be reconciled.

For a provisional overview see Complexism and evolutionary art.

While I'll try my best to make entries here of value, please understand that I'm using this blog as a sort of scratch pad. I'm going to feel free at times to speculate wildly, change my mind, contradict myself, not include citations, and otherwise brainstorm.

Saturday, October 11, 2008

Networked art - new and more of the same

In places where discourse about the internet exercises now familiar postmodern moves, one finds some of the most peculiar notions about the underlying technology. It's not uncommon to hear technical protocols like TCP/IP described as mechanisms of political oppression and social control. But most of all one hears gushing descriptions of unprecedented revolution and radical paradigm shifts. And network-based art is posited as nothing less than the ultimate requiem for the author a la Barthes.

It seems to me it's time for some careful, and even skeptical, thought about how revolutionary the network experience really is. And I mean this in both a technological and intellectual sense.

In the early 70's I was able to create software that ran on a networked system called PLATO. It already had, albeit with less power, essentially all of the networking features many now think of as new and world-changing. These included: bit-mapped graphics, linking and hypertext, sound, an easy-to-use scripting/programming language, and user-definable fonts.

But most of all PLATO had an online community. It allowed multi-user real-time interaction across an international network. PLATO included what we now call bulletin boards, email, chat rooms, instant messaging, screen sharing, and even emoticons. And, of course, the killer app - interactive multi-user games. Again, all of this in the early 70's and some even a bit before.

I've always suspected that if PLATO had been born on a coast rather than in the Midwest (University of Illinois) on Midwestern hardware (Control Data Corp.) it would be better remembered. But that's not why I'm posting about it here.

Many of those with non-trivial exposure to PLATO came away with the same lessons being rediscovered today. First, people are much more interested in interacting with other people than with machines. (Perhaps that will change if we ever build Turing test-worthy machines.) Second, modes of human communication in and out of real-time are different, and each has its own virtues. Third, computer mediated experiences work best when they are small, distributed, independently developed, and interlinked rather than monolithic and controlled from the top down.

But how much of this new wisdom is really new, and how much of it is about networks per se? People have put interaction with other humans first both before and after networks. We've known that real-time and time-delayed modes of human communication are different and yet equally useful both before and after networks. And it's a basic insight from complexity science that human society is best understood as a bottom-up process, whether the medium of communication is digital, analog, or shouts and smoke signals.
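As a toy illustration of that last point, here is a minimal bottom-up model (my own sketch, nothing to do with PLATO or the Web). Agents on a ring repeatedly adopt the local majority opinion, and clusters of agreement emerge with no top-down control at all:

```python
import random

# Bottom-up toy model: each agent holds an opinion (0 or 1) and adopts
# the majority among itself and its two neighbors on a ring. There is
# no global controller; structure emerges from local interaction alone.
N = 100
state = [random.choice([0, 1]) for _ in range(N)]

for _ in range(5000):
    i = random.randrange(N)
    votes = state[i] + state[(i - 1) % N] + state[(i + 1) % N]
    state[i] = 1 if votes >= 2 else 0

print("".join(map(str, state)))  # long runs of 0s and 1s emerge
```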

If so little of this is truly revolutionary why did PLATO ultimately fail while the Web succeeded beyond its inventors' wildest dreams? It wasn't due to a lack of vision on the part of its creators. Both have similar roots in education and research with an eye to the public good. And it certainly wasn't due to a lack of critical theory, deconstruction, and so on. Both PLATO and the Web happily sprang into being without needing to consult the canons of postmodern thought.

Ultimately it came down to mundane economics. The perceived value of PLATO was out of whack with its cost. The Web persisted and grew because the cost of the technology had finally ducked under the value users placed on it.

Those commenting on the growth of the internet should keep at least one foot on the ground. It's not that I think there is nothing new under the sun, although there is some of that. It's mostly that the breathless hyperbole I so often hear from those in the humanities regarding the network experience sounds awfully naive. It will not wear well over time. For some of us it doesn't even wear well now.

Networked art is possible now mostly because of economic reasons, not because social structures have been overturned, hierarchies questioned, authorship problematized, and so on. Certainly such critical issues are worth consideration, but I would argue they are mostly orthogonal to the growth of the network.

P.S. For more info about PLATO the Wikipedia entry is actually pretty good:

http://en.wikipedia.org/wiki/PLATO_(computer_system)

Sunday, September 28, 2008

About possible generative storytelling systems

For a long time I've had a gut feeling that the emphasis on storytelling, aka narrative, is somewhat overblown in a number of disciplines. It seems obvious that any complex adaptive system is going to have to deal with "story" in the following sense. Everything exists in time and space. Spatial objects change over a period of time. And there's your story.

As trivial as that might sound, I think there is a kernel of critique there, because in many contemporary schools of thought a quick step is made from the unfolding of physical events in time to the (attempted) capture and representation of those events in and as language. And then we are off to the races with all manner of linguistic concerns, often to the point where the world is forgotten and language is posed as a first principle.

Language is of course really important. And humans are the best language processing systems we know of. But events precede language, not the other way around. (Precede in almost every way: certainly logically, ontologically, and temporally.) Not that you would know that by listening to many of those in the humanities and even some social scientists.

Anyway, it seems to me that if you want to build a generative storytelling system you shouldn't start with text processing or really any kind of linguistic concerns at all. You need to begin by simulating the world. Or a piece of it. And only after you can create (virtual) physical events that take place over time do you need to think about how to present them. And you can do that in language, as video, in Second Life, or any number of other ways.

Generative storytelling has to begin with a world simulation not because language is difficult, but rather because of language's lesser ontological status.
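To make that ordering concrete, here's a deliberately tiny sketch in Python. Everything in it (Event, simulate, narrate) is invented for illustration, not an existing system; the only point is the architecture: generate events first, and treat language as one optional presentation layer afterwards.

```python
import random
from collections import namedtuple

# Stage 1 produces raw events in time; stage 2 renders them. All names
# here (Event, simulate, narrate) are illustrative inventions, not an
# existing system or library.
Event = namedtuple("Event", ["time", "actor", "action", "target"])

def simulate(actors, steps):
    """Generate (virtual) physical events over time -- no linguistic
    concerns at all at this stage."""
    events = []
    for t in range(steps):
        actor = random.choice(actors)
        target = random.choice([a for a in actors if a != actor])
        action = random.choice(["approaches", "avoids", "calls out to"])
        events.append(Event(t, actor, action, target))
    return events

def narrate(events):
    """One possible presentation layer: plain text. A video or Second
    Life renderer could consume the very same event log."""
    return " ".join(f"At time {e.time}, {e.actor} {e.action} {e.target}."
                    for e in events)

print(narrate(simulate(["Ada", "Brom"], steps=4)))
```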

Wednesday, September 24, 2008

Free will in a nutshell

The debate about free will is something of an old chestnut. And so here is my position in a nutshell. (Previously posted on the eugene mailing list.)

My best guess is that behavior bubbles up from pre-conscious neural processes. Consciousness is mostly an observer of behavior already well on its way. But it doesn't feel that way. That's perhaps because observing our own observing would lead to an impossible infinite regress. So when we feel our own behaviors they feel uncaused. And that creates the inner sense of free will.

At the same time...

We are each individually a jumble of incredibly complex cross-connected chaotic processes. This makes our behavior, beyond a certain point, fundamentally unpredictable. Our own feeling of free will, combined with the observed unpredictability of others, leads us to posit free will in others as well. And others are happy to agree.
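The unpredictability claim doesn't need anything exotic behind it. Here's a standard toy example, the logistic map, standing in for any chaotic process (the analogy to neural dynamics is mine): two starting states that differ only in the sixth decimal place become completely unrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map at
# r = 4.0, a standard chaotic regime. The analogy to brains is loose;
# the point is only how fast tiny differences get amplified.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # initial states differ in the sixth decimal
for _ in range(50):
    a, b = logistic(a), logistic(b)

print(f"after 50 steps: {a:.6f} vs {b:.6f}")  # typically wildly different
```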

Consciousness as the first person experience of qualia is a whole other matter. A much larger mystery, in my book, than free will.

Kurzweil & Gelernter debate the limits of intelligent machines

The following is a post I made on the "eugene" generative art mailing list. It is in response to the debate that can be viewed (or listened to) by clicking here:

Public Debate on the Limits of Intelligent Machines

David Gelernter
Ray Kurzweil, inventor, writer
Rodney Brooks, moderator

The debate wasn't about the limits of intelligence in intelligent machines. Turing machine issues, as interesting as they may be, didn't really enter into it at all. The real question was:

Will technological advances in computing result in the creation of computers with consciousness, or merely highly intelligent zombies?

The assumption that consciousness scales up with intelligence shouldn't go uninspected. There are certainly more exciting and speculative ways to question this, but one simple issue is whether the two scale in the same way. It may be that while intelligence scales linearly, consciousness increases like a sigmoid function. I.e., suddenly some kind of "critical mass" is reached and the smoothly learning intelligent agent suddenly "pops" into consciousness. I don't think this is unlikely. My daily routine of waking up from sleep sure feels like a sigmoid function. (To be sure, throughout the day we are "more awake" at some times than others, but it still feels bifurcated. I'm reminded here of catastrophe theory.)
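Here's a trivial numerical sketch of those two regimes (the logistic curve and all the parameters are arbitrary choices of mine, purely for illustration): intelligence grows smoothly with some underlying resource, while "consciousness" hugs zero and then pops past a critical mass.

```python
import math

# Linear scaling vs. a sigmoid "pop". Midpoint and steepness are
# arbitrary illustrative parameters, not claims about real brains
# or machines.
def sigmoid(x, midpoint=5.0, steepness=2.0):
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

for resource in range(11):
    intelligence = resource / 10.0       # grows smoothly and linearly
    consciousness = sigmoid(resource)    # near 0, then a sudden transition
    print(f"resource={resource:2d}  intelligence={intelligence:.2f}  "
          f"consciousness={consciousness:.4f}")
```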

In other words, it may well be that my cat is nearly as conscious as I am. I just don't know. It seems like I can't possibly know. And this is the key point.

Kurzweil at the very outset allows that the question can be taken in two senses. First there is apparent consciousness. This is indicated by behavior that to an outside observer appears to imply or require a conscious agent. And then there is (Searle alert!) first-person consciousness. This references, for example, the experience of qualia by a conscious agent.

Kurzweil is, at least initially, quite fair in this regard. He recognizes the issue of first-person consciousness and immediately opines that it is one of those areas that can't be explored via the scientific method. (Recognizing that the scientific method is a fuzzy set of techniques.) He also observes that apparent consciousness *can* be explored by science.

I think he also notes that some people believe that entities that cannot be explored by science either don't exist or should be treated as if they don't exist. He, quite fairly I think, doesn't claim to be one of these people.

Having laid that out he then spends almost every minute of the remaining video talking about how apparent consciousness can be created and how remarkably close in time we are to that happening.

Kurzweil gets a little slippery when he says (in effect) that in time people will come to *believe* these machines with apparent consciousness *also* have first-person consciousness. But I think the careful listener will note that that is not a statement about scientific knowledge, or even a claim to truth about the matter.

Meanwhile Gelernter is fighting a battle on the (punk rock alert!) agnostic front. I find his argumentation a bit less straightforward. Maybe even a bit dishonest. I say that because he seems to intermix unrelated points as if to imply that in combination they strongly suggest, perhaps even prove, that extensions to current computing technology cannot and will not lead to conscious entities.

He notes the inaccessibility of first-person consciousness to others. I.e., one can't even prove other people are conscious, so how are we going to prove computers are conscious?

An entirely different point is that there may be some unknown aspect of brain chemistry that gives rise to consciousness, and that aspect is unique to chemistry and lacking in electronic circuitry.

He cites the "where is it?" objection, asking where in the Chinese Room the consciousness of the Chinese translator is located. It's already stipulated that it isn't in the clerk mechanically looking up and transcribing Chinese characters. Is it floating in the air? (Kurzweil retorts that such an argument could be used to "prove" that *we* are not conscious. But you are conscious...aren't you?)

He cites an objection that simulating the brain isn't enough. E.g., a simulation of photosynthesis can be highly accurate, but there is no plant, no ATP, no storage of energy, etc. Similarly, even if you could simulate the brain, there would be no mind and no consciousness.

He cites another kind of objection that simulating the brain isn't enough. A simulation of the brain alone leaves out all manner of processes in other parts of the body that impact what we (perhaps unfortunately) think of as brain function.

In short, Gelernter doesn't assert a unified point of view. He takes a sort of shotgun approach in spreading doubt, attempting to undermine Kurzweil's displayed certainty.

Note that very little, perhaps none, of the above is about intelligence per se. It is really about the question of consciousness. What is it? Where does it come from? Can you make it happen? (Reference Gelernter's amusing aside that it can be created, and he will explain how off camera.)

My takeaway is that if the question is:

Will technological advances in computing result in the creation of computers with consciousness, or merely highly intelligent zombies?

then both speakers spend most of their time not directly answering.

And when they do answer the question in the most direct and honest way they more or less agree on the same response.

"We probably won't know. We probably can't know."

A debate with a happy ending! Sort of...

Sunday, January 6, 2008

Read this paper for an introduction to Complexism

As noted on the main page, this blog is a work in progress towards the development of a new paradigm called "Complexism."  More entries will appear later.

For now I'd encourage you to visit my website at:


And to read this for an introduction to Complexism: