Complexism involves the extension of the world-view suggested by complexity science into the problem space of the arts and humanities. In doing so complexism provides a higher synthesis that subsumes both modern and postmodern concerns, attitudes, and activities. Complexism provides an intellectual meeting ground where 20th century conflicts between science and the humanities can be reconciled.

For a provisional overview see Complexism and evolutionary art.

While I'll try my best to make entries here of value, please understand that I'm using this blog as a sort of scratch pad. I'm going to feel free at times to speculate wildly, change my mind, contradict myself, not include citations, and otherwise brainstorm.

Sunday, September 28, 2008

About possible generative storytelling systems

For a long time I've had a gut feeling that the emphasis on storytelling, aka narrative, is somewhat overblown in a number of disciplines. It seems obvious that any complex adaptive system is going to have to deal with "story" in the following sense. Everything exists in time and space. Spatial objects change over a period of time. And there's your story.

As trivial as that might sound, I think there is a kernel of critique there. Because in many contemporary schools of thought a quick step is made from the unfolding of physical events in time to the (attempted) capture and representation of events in and as language. And we are off to the races with all manner of linguistic concerns, often to the point where the world is forgotten and language is posed as a first principle.

Language is of course really important. And humans are the best language processing systems we know of. But events precede language, not the other way around. (Precede in almost every way: certainly logically, ontologically, and temporally.) Not that you would know that by listening to many of those in the humanities and even some social scientists.

Anyway, it seems to me that if you want to build a generative storytelling system you shouldn't start with text processing, or really any kind of linguistic concern at all. You need to begin by simulating the world, or a piece of it. And only after you can create (virtual) physical events that take place over time do you need to think about how to present them. And you can do that in language, as video, in Second Life, or any number of ways.
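As a sketch of what that might look like (a toy example of my own, not any existing system; all the names are hypothetical), the simulation layer generates raw events, and language only enters at the final rendering step:

    import random

    # World-first generative storytelling: simulate physical events,
    # then (and only then) render them as language.

    class Agent:
        def __init__(self, name, x, y):
            self.name, self.x, self.y = name, x, y

    def simulate(agents, steps, seed=0):
        """Produce a list of (time, actor, action) events -- no language yet."""
        rng = random.Random(seed)
        events = []
        for t in range(steps):
            for a in agents:
                dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
                a.x, a.y = a.x + dx, a.y + dy
                events.append((t, a.name, f"moved to ({a.x}, {a.y})"))
                for b in agents:
                    if b is not a and (a.x, a.y) == (b.x, b.y):
                        events.append((t, a.name, f"crossed paths with {b.name}"))
        return events

    def narrate(events):
        """The presentation layer: the same events could render as video instead."""
        return "\n".join(f"At time {t}, {who} {what}." for t, who, what in events)

    world = [Agent("Ada", 0, 0), Agent("Ben", 2, 0)]
    print(narrate(simulate(world, steps=5)))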

Generative storytelling has to begin with a world simulation not because language is difficult, but rather because of language's lesser ontological status.

Wednesday, September 24, 2008

Free will in a nutshell

The debate about free will is something of an old chestnut. And so here is my position in a nutshell. (Previously posted on the eugene mailing list.)

My best guess is that behavior bubbles up from pre-conscious neural processes. Consciousness is mostly an observer of behavior already well on its way. But it doesn't feel that way. That's perhaps because observing our own observing would lead to an impossible infinite regress. So when we feel our own behaviors they feel uncaused. And that creates the inner sense of free will.

At the same time...

We are each individually a jumble of incredibly complex cross-connected chaotic processes. This makes our behavior, beyond a certain point, fundamentally unpredictable. Our own feeling of free will, combined with the observed unpredictability of others, leads us to posit free will in others as well. And others are happy to agree.
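A toy illustration of that kind of unpredictability (the standard logistic map from chaos theory, not any model of neural processes): two trajectories that differ by one part in ten billion at the start become completely uncorrelated within a few dozen steps.

    # Sensitive dependence on initial conditions via the chaotic
    # logistic map (r = 4). A sketch of why tiny, unmeasurable
    # differences make long-range prediction of behavior hopeless.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-10  # two nearly identical starting states
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")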

Consciousness as the first person experience of qualia is a whole other matter. A much larger mystery, in my book, than free will.

Kurzweil & Gelernter debate the limits of intelligent machines

The following is a post I made on the "eugene" generative art mailing list. It is in response to the debate that can be viewed (or listened to) by clicking here:

Public Debate on the Limits of Intelligent Machines

David Gelernter
Ray Kurzweil, inventor, writer
Rodney Brooks, moderator

The debate wasn't about the limits of intelligence in intelligent machines. Turing machine issues, as interesting as they may be, didn't really enter into it at all. The real question was:

Will technological advances in computing result in the creation of computers with consciousness, or merely highly intelligent zombies?

The assumption that consciousness scales up with intelligence shouldn't go unexamined. There are certainly more exciting and speculative ways to question this, but one simple issue is whether the two scale together linearly. It may be that while intelligence scales linearly, consciousness increases like a sigmoid function. I.e. some kind of "critical mass" is reached and the smoothly learning intelligent agent suddenly "pops" into consciousness. I don't think this is unlikely. My daily routine of waking up from sleep sure feels like a sigmoid function. (To be sure, throughout the day we are "more awake" at some times than others, but it still feels bifurcated. I'm reminded here of catastrophe theory.)
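To make that intuition concrete, here's a small numerical sketch (the steepness and scale are arbitrary choices for illustration, not claims about brains): one quantity grows linearly with some underlying capability, while a steep logistic curve sits near zero and then abruptly "pops."

    import math

    # Linear growth vs. a steep sigmoid. The steepness k and the
    # threshold x0 are arbitrary illustrative values.

    def sigmoid(x, k=12.0, x0=0.5):
        """Near 0 below the threshold x0, near 1 above it."""
        return 1.0 / (1.0 + math.exp(-k * (x - x0)))

    for i in range(11):
        x = i / 10.0
        print(f"capability={x:.1f}  linear={x:.2f}  sigmoid={sigmoid(x):.3f}")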

In other words, it may well be that my cat is nearly as conscious as I am. I just don't know. It seems like I can't possibly know. And this is the key point.

Kurzweil at the very outset allows that the question can be taken in two senses. First there is apparent consciousness. This is indicated by behavior that to an outside observer appears to imply or require a conscious agent. And then there is (Searle alert!) first-person consciousness. This references, for example, the experience of qualia by a conscious agent.

Kurzweil is, at least initially, quite fair in this regard. He recognizes the issue of first-person consciousness and immediately opines that it is one of those areas that can't be explored via the scientific method. (Recognizing that the scientific method is a fuzzy set of techniques.) He also observes that apparent consciousness *can* be explored by science.

I think he also notes that some people believe that entities that cannot be explored by science either don't exist or should be treated as if they don't exist. He, quite fairly I think, doesn't claim to be one of these people.

Having laid that out, he then spends almost every minute of the remaining video talking about how apparent consciousness can be created and how remarkably close in time we are to that happening.

Kurzweil gets a little slippery when he says (in effect) that in time people will come to *believe* these machines with apparent consciousness *also* have first-person consciousness. But I think the careful listener will note that that is not a statement about scientific knowledge, or even a claim to truth about the matter.

Meanwhile Gelernter is fighting a battle on the (punk rock alert!) agnostic front. I find his argumentation a bit less straightforward. Maybe even a bit dishonest. I say that because he seems to intermix unrelated points as if to imply that in combination they strongly suggest, perhaps even prove, that extensions to current computing technology cannot and will not lead to conscious entities.

He notes the inaccessibility of first-person consciousness to others. I.e. one can't even prove that other people are conscious, so how are we going to prove that computers are?

An entirely different point is that there may be some unknown aspect of brain chemistry that gives rise to consciousness, and that aspect is unique to chemistry and lacking in electronic circuitry.

He cites the "where is it?" objection by asking where, in the Chinese Room, the consciousness of the Chinese translator is located. It's already stipulated that it isn't in the clerk mechanically looking up and transcribing Chinese characters. Is it floating in the air? (Kurzweil retorts that such an argument could be used to "prove" that *we* are not conscious. But you are conscious...aren't you?)

He cites an objection that simulating the brain isn't enough. For example, a simulation of photosynthesis can be highly accurate, but there is no plant, no ATP, no storage of energy, etc. Similarly, even if you could simulate the brain, there would be no mind and no consciousness.

He cites another kind of objection that simulating the brain isn't enough. A simulation of the brain alone leaves out all manner of processes in other parts of the body that impact what we (perhaps unfortunately) think of as brain function.

In short, Gelernter doesn't assert a unified point of view. He takes a sort of shotgun approach in spreading doubt, attempting to undermine Kurzweil's displayed certainty.

Note that very little, perhaps none, of the above is about intelligence per se. It is really about the question of consciousness. What is it? Where does it come from? Can you make it happen? (Reference Gelernter's amusing aside that it can be created, and he will explain how off camera.)

My takeaway is that if the question is:

Will technological advances in computing result in the creation of computers with consciousness, or merely highly intelligent zombies?

then both speakers spend most of their time not directly answering.

And when they do answer the question in the most direct and honest way they more or less agree on the same response.

"We probably won't know. We probably can't know."

A debate with a happy ending! Sort of...