Complexism involves the extension of the world-view suggested by complexity science into the problem space of the arts and humanities. In doing so complexism provides a higher synthesis that subsumes both modern and postmodern concerns, attitudes, and activities. Complexism provides an intellectual meeting ground where 20th century conflicts between science and the humanities can be reconciled.

For a provisional overview see Complexism and evolutionary art.

While I'll try my best to make entries here of value, please understand that I'm using this blog as a sort of scratch pad. I'm going to feel free at times to speculate wildly, change my mind, contradict myself, not include citations, and otherwise brainstorm.

Monday, January 2, 2012

Generative Art, Formalism, and Intentionality

It's been a very long while since I've posted here. I've been putting most of my efforts into publications that "matter" in terms of gaining tenure and academic recognition. Here is a response to an online discussion that most won't see otherwise. It makes a couple of points about generative art, relating to formalism and intentionality, that are worth capturing here.

---

First, it concerns me greatly when, especially in the context of generative art, there is an implicit point made that art without political content is somehow lacking, or that formal art is somehow not enough in itself, or is merely a phase to be passed through and left behind. (Not that anyone here is saying precisely that...)

I'm fond of saying that art is too important to be wasted on politics, and politics is too important to be trusted to artists. This is, of course, intentionally provocative in its glibness. But perhaps the following will add some meat.

Form matters. Form isn't just a concern for artists; it also has to do with science and philosophy and religion. Artists in the modern period made would-be heroic claims to a privileged understanding of form as the expression of their inner psyche and the channeling of primordial forces. Artists in the postmodern period rejected those claims to privilege and to high art, claims also attacked in part by the identity politics promoted in postmodern critical studies, and in the process they rejected formalism along with them. Beauty came to be thought of as, at best, a naive and useless notion, and at worst a destructive tool of ideology and political oppression.

Generative art, and especially generative art that harnesses what we are learning from complexity science, is a unique opportunity to rehabilitate formalism in art. It presents form as anything but arbitrary. It presents beauty as the result of an understandable universe, one that is neutral toward human social construction and plays no favorites.

Formalism in art can now be thought of as neither a claim to privilege nor meaningless beauty. Form can be appreciated as a real, meaningful, publicly understandable process available to all. Relative to a postmodern era that is tired and played out, this new conception of form is revolutionary and well worth exploring in its own right.

Second point, regarding the claim that "generative processes are used to negate intentionality."

They certainly can be, but they also certainly don't have to be. A trivial example would be generative techniques used in Hollywood animated filmmaking. They might, for example, use L-systems and so on to create a forest scene. There is no negation of intentionality. The art director gets the look he or she wants. It's a purely pragmatic decision.
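
To make that concrete, here is a minimal sketch of the kind of stochastic L-system such a pipeline might use. It's written in Python, and every name and rule in it is hypothetical, not taken from any actual studio tool. The point is simply that the rules, iteration count, and random seeds are all knobs the art director turns until the forest looks right, so intentionality is firmly in place.

    # Hypothetical sketch of a stochastic bracketed L-system for "growing" trees.
    # Rules, seeds, and counts are illustrative only; tune them to taste.
    import random

    RULES = {"F": ["F[+F]F[-F]F", "F[+F]F", "F[-F]F"]}  # branching rewrite rules

    def expand(axiom, rules, iterations, seed=0):
        """Rewrite the axiom string a fixed number of times."""
        rng = random.Random(seed)
        s = axiom
        for _ in range(iterations):
            s = "".join(rng.choice(rules[c]) if c in rules else c for c in s)
        return s

    def grow_forest(n_trees, iterations=3):
        """Give each tree its own seed so the forest varies but stays repeatable."""
        return [expand("F", RULES, iterations, seed=i) for i in range(n_trees)]

    if __name__ == "__main__":
        for i, tree in enumerate(grow_forest(5)):
            print("tree %d: %d symbols" % (i, len(tree)))

A turtle-graphics interpreter would then draw each symbol string as branches, but even this much shows where the control sits: change the rules or the seed and you deliberately change the look.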

Frankly I see the term "generative art" as having very little content. It's a starting point in that it is a name for a subset of art made in a certain way. But it says nothing about that art in terms of content, meaning, value, criticism (other than categorization), and so on.

It's a lot like the term "painting." Painting refers to work made by applying pigment to a surface. But any statement like "painting is about revealing the soul" or "painting is about mimesis" or whatever is bound to be wrong. Wrong because painting can be about these things, but also so much more.

The one thing all generative art has in common, by definition, is the use of generative systems. That's why, in my take on it, the next step is to ask "what can we say about systems?" I try to put that question in the context of complexity science because I view that as the current best universal take on systems. And indeed it yields a way to sort out subsets of generative art, and it turns out those subsets came into practice in a historical order.

But beyond that I find statements that generative art is this or that wrong in that they are overly exclusionary. What *could* be said is something like "at this point in art history the most useful generative art addresses the issue of intentionality." That would be a debatable point, but it doesn't deny the category of "generative art" to art that really should be included.

Personally I am not very interested in the issues around intentionality, and I'm very much less interested in the intersection of art and politics. What is interesting to me is how complex generative systems give us a way to explore the very nature of the universe.

Monday, March 1, 2010

Art and science are not the same

I frequently hear opinions about the dissolution of boundaries between various disciplines. For example, the notion that there is no difference between art and science. Or that mathematics is a form of science.

To be sure, interdisciplinary work is in vogue. And frankly it's something that I've always taken for granted and practiced, both on my own and with others. But it seems to me there are important and useful differences between the various disciplines, and something is lost when that is forgotten.

A place to start might be to ask why there are multiple disciplines to begin with. At the time of Leonardo da Vinci it was possible to practice both art and science at the highest level of the day. Not that many did, but it was within the realm of possibility. That no longer seems to be the case. Each field and subfield is so competitive and richly populated that success requires focused attention. Mastery of all is beyond anyone's grasp.

Some see the formation of hyper-competitive distinct disciplines as something peculiar to western capitalist society in the modern age. On the other hand even in the most "primitive" societies there are often distinct roles for the shaman, the warrior, and the farmer.

My own view is that while there are economic realities that encourage division of labor and specialization, the differences among the disciplines are real and inescapable. And that is because those differences are rooted in universal modes of experience.

When the self encounters the other there are two significant groupings of experience. First there is the relationship between the self and the apparent world. And second there is the relationship between the self and other people. Some might wonder whether the second is really a subset of the first. I would argue that humans are so fundamentally social and inevitably dependent on others that human relations deserve separate consideration. And there are good reasons to think that we are hardwired to treat other humans as a special case.

When we encounter the world this happens in two relatively distinct experiential modes. There is an outward mode where the senses are alive with input, our bodies manipulate external objects as output, and our minds are engaged with the processing of both. But there is also an inward mode where sensory and bodily activity is diminished, and the mind is occupied with abstractions, concepts, emotions, memories, and other mental objects.

In normal life, of course, we are constantly shifting between the inward and the outward. This may happen very quickly. But there are also times of concentrated effort where we sustain an outward (e.g. sports) or inward (e.g. contemplation) stance. I trust that most people would agree that sometimes they look outward and sometimes they look inward.

When we encounter other people there are also two kinds of relatively distinct experiential modes in play. There are those experiences that can be confidently communicated and independently experienced. And then there are experiences that resist communication and verification.

The former we can call public. This doesn't mean the encounter must be publicized, it just means that in principle the experience can be communicated to others fully and reliably. And most of all they are experiences where we can invite others to "see for themselves" and ascertain whether or not their experience matches ours.

The latter we can call private. Again, this doesn't refer to experiences that we keep secret. Rather it refers to experiences with significant aspects that are in principle ineffable. Try as we may, any description falls qualitatively short of the mark. And most of all they are experiences that others may or may not have, and we can never know if their experience matches ours.

The public versus private distinction may at first sound obscure, but in fact it's something we deal with daily. For example, we can have a public experience of measuring the wavelength of red light with a spectrometer. We can ask another person to use their spectrometer and verify the wavelength we measured. What we can't ask another person to do is verify that our aesthetic perception of red is the same as their perception of red. (This brings us to the notion of qualia and to some extent what John Searle has called "first person ontology").

Combining these two polarities we are left with four modes of encounter between the self and the other. This is illustrated below:

[Figure: a 2x2 grid crossing the outward/inward polarity with the public/private polarity, one mode of encounter per quadrant.]

Each quadrant represents one of these four modes. For example, there are times when we are facing outward having experiences that others can, in principle, confidently duplicate. Measuring the distance between two points would be an example. There are other times when we look inward and have experiences that others can never, even in principle, confidently experience for themselves. Grief due to the passing of a loved one is an example. While it seems certain that all healthy humans are capable of something called grief, we can never be sure of what that feels like for another person.

There are, however, inward experiences that can be duplicated and verified by others. An example would be proving the Pythagorean theorem. There are also outward experiences that have significantly private aspects. We can't really know whether my experience of a beautiful sunset is the same as your experience of that same sunset.

These four modes are experienced by all humans at various times and places. We move fluidly between them, often very quickly and typically without making any special note of it.

These four modes of encounter correspond to what I consider the four major discipline areas, each of which can be further subdivided to include many others. This is illustrated below:

[Figure: the four major discipline areas mapped onto the 2x2 grid of the four modes of encounter.]

In the next post I'll explain how each of the four discipline areas relates to the others. For now it's enough to say that there are real differences between the disciplines, and that those differences correspond to the four inevitable modes of encounter that make up human experience.

Saturday, October 11, 2008

Networked art - new and more of the same

In places where discourse about the internet exercises now-familiar postmodern moves, one finds some of the most peculiar notions about the underlying technology. It's not uncommon to hear technical protocols like TCP/IP described as mechanisms of political oppression and social control. But most of all one hears gushing descriptions of unprecedented revolution and radical paradigm shifts. And network-based art is posited as nothing less than the ultimate requiem for the author a la Barthes.

It seems to me it's time for some careful, and even skeptical, thought about how revolutionary the network experience really is. And I mean this in both a technological and intellectual sense.

In the early 70's I was able to create software that ran on a networked system called PLATO. It already had, albeit with less power, essentially all of the networking features many think of as new and world-changing. These included bit-mapped graphics, linking and hypertext, sound, an easy-to-use scripting/programming language, and user-definable fonts.

But most of all PLATO had an online community. It allowed multi-user real-time interaction across an international network. PLATO included what we now call bulletin boards, email, chat rooms, instant messaging, screen sharing, and even emoticons. And, of course, the killer app - interactive multi-user games. Again, all of this in the early 70's and some even a bit before.

I've always suspected that if PLATO had been born on a coast rather than in the Midwest (University of Illinois) on Midwestern hardware (Control Data Corp.) it would be better remembered. But that's not why I'm posting about it here.

Many of those with non-trivial exposure to PLATO came away with the same lessons being rediscovered today. First, people are much more interested in interacting with other people than with machines. (Perhaps that will change if we ever build Turing test-worthy machines.) Second, modes of human communication in and out of real-time are different, and each has its own virtues. Third, computer mediated experiences work best when they are small, distributed, independently developed, and interlinked rather than monolithic and controlled from the top down.

But how much of this new wisdom is really new, and how much of it is about networks per se? People have put interaction with other humans first both before and after networks. We've known that real-time and time-delayed modes of human communication are different and yet equally useful both before and after networks. And it's a basic insight from complexity science that human society is best understood as a bottom up process, whether the medium of communication is digital, analog, or shouts and smoke signals.

If so little of this is truly revolutionary, why did PLATO ultimately fail while the Web succeeded beyond its inventors' wildest dreams? It wasn't due to a lack of vision on the part of its creators. Both have similar roots in education and research with an eye to the public good. And it certainly wasn't due to a lack of critical theory, deconstruction, and so on. Both PLATO and the Web happily sprang into being without needing to consult the canons of postmodern thought.

Ultimately it came down to mundane economics. The perceived value of PLATO was out of whack with its cost. The Web persisted and grew because the cost of the technology had finally dropped below the threshold set by its perceived value.

Those commenting on the growth of the internet should keep at least one foot on the ground. It's not that I think there is nothing new under the sun, although there is some of that. It's mostly that the breathless hyperbole I so often hear from those in the humanities regarding the network experience sounds awfully naive. It will not wear well over time. For some of us it doesn't even wear well now.

Networked art is possible now mostly because of economic reasons, not because social structures have been overturned, hierarchies questioned, authorship problematized, and so on. Certainly such critical issues are worth consideration, but I would argue they are mostly orthogonal to the growth of the network.

P.S. For more info about PLATO, the Wikipedia entry is actually pretty good:

http://en.wikipedia.org/wiki/PLATO_(computer_system)

Sunday, September 28, 2008

About possible generative story telling systems

For a long time I've had a gut feeling that the emphasis on story telling, aka narrative, is somewhat overblown in a number of disciplines. It seems obvious that any complex adaptive system is going to have to deal with "story" in the following sense. Everything exists in time and space. Spatial objects change over a period of time. And there's your story.

As trivial as that might sound, I think there is a kernel of critique there. Because in many contemporary schools of thought a quick step is made from the unfolding of physical events in time to the (attempted) capture and representation of events in and as language. And we are off to the races with all manner of linguistic concerns, often to the point where the world is forgotten and language is posed as a first principle.

Language is of course really important. And humans are the best language processing systems we know of. But events precede language, not the other way around. (Precede in most every way, certainly logically, ontologically, and temporally). Not that you would know that by listening to many of those in the humanities and even some social scientists.

Anyway, it seems to me that if you want to build a generative story telling system you shouldn't start with text processing or really any kind of linguistic concerns at all. You need to begin by simulating the world. Or a piece of it. And only after you can create (virtual) physical events that take place over time do you need to think about how to present them. And you can do that in language, as video, in Second Life, or any number of ways.

Generative story telling has to begin with a world simulation not because language is difficult, but rather because of language's lesser ontological status.
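
As a toy illustration of what I mean, here is a sketch in Python of that architecture: a tiny world simulation produces events over time, and a completely separate presentation layer renders those events as text afterwards. Everything in it (the agents, the events, the narration) is invented for illustration, not a proposal for a real system.

    # Toy sketch: simulate a world first; treat language as one optional rendering.
    # All classes and names here are hypothetical illustrations.
    import random

    class Agent:
        def __init__(self, name, x):
            self.name, self.x, self.energy = name, x, 3

    class World:
        def __init__(self, agents, seed=0):
            self.agents = agents
            self.rng = random.Random(seed)
            self.events = []  # the raw, pre-linguistic record of what happened

        def step(self, t):
            for a in self.agents:
                a.x += self.rng.choice([-1, 0, 1])
                a.energy -= 1
                self.events.append((t, a.name, "wandered to", a.x))
                if a.energy == 0:
                    self.events.append((t, a.name, "collapsed at", a.x))

    def narrate(events):
        # A separate presentation layer; the same events could drive video instead.
        return "\n".join("At time %d, %s %s position %d." % e for e in events)

    if __name__ == "__main__":
        world = World([Agent("Ada", 0), Agent("Ben", 5)])
        for t in range(3):
            world.step(t)
        print(narrate(world.events))

The same event list could just as easily be handed to a renderer for video or Second Life. The story lives in the simulated events; the telling comes after.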

Wednesday, September 24, 2008

Free will in a nutshell

The debate about free will is something of an old chestnut. And so here is my position in a nutshell. (Previously posted on the eugene mailing list.)

My best guess is that behavior bubbles up from pre-conscious neural processes. Consciousness is mostly an observer of behavior already well on its way. But it doesn't feel that way. That's perhaps because observing our own observing would lead to an impossible infinite regress. So when we feel our own behaviors they feel uncaused. And that creates the inner sense of free will.

At the same time...

We are each individually a jumble of incredibly complex cross-connected chaotic processes. This makes our behavior, beyond a certain point, fundamentally unpredictable. Our own feeling of free will, combined with the observed unpredictability of others, leads us to posit free will in others as well. And others are happy to agree.
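
That unpredictability claim is easy to illustrate with the most standard toy chaotic system, the logistic map: two trajectories that start almost identically end up completely different after a few dozen steps. This is only an analogy sketched in Python, not a model of anything neural.

    # Sensitive dependence on initial conditions in the logistic map.
    # An analogy for chaotic processes in general, nothing more.

    def logistic_trajectory(x0, r=4.0, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.4)
    b = logistic_trajectory(0.4 + 1e-10)  # a nearly identical starting point

    for t in (0, 10, 25, 50):
        print("step %2d: difference = %.10f" % (t, abs(a[t] - b[t])))

By step 50 the two runs bear no useful resemblance to one another, even though they differed at the start by only a ten-billionth.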

Consciousness as the first person experience of qualia is a whole other matter. A much larger mystery, in my book, than free will.

Kurzweil & Gelernter debate the limits of intelligent machines

The following is a post I made on the "eugene" generative art mailing list. It is in response to the debate that can be viewed (or listened to) by clicking here:

Public Debate on the Limits of Intelligent Machines

David Gelernter
Ray Kurzweil, inventor, writer
Rodney Brooks, moderator

The debate wasn't about the limits of intelligence in intelligent machines. Turing machine issues, as interesting as they may be, didn't really enter into it at all. The real question was:

Will technological advances in computing result in the creation of computers with consciousness, or merely highly intelligent zombies?

The assumption that consciousness scales up with intelligence shouldn't go uninspected. There are certainly more exciting and speculative ways to question this, but one simple issue is whether the two scale linearly together. It may be that while intelligence scales linearly, consciousness increases like a sigmoid function. That is, some kind of "critical mass" is reached and the smoothly learning intelligent agent suddenly "pops" into consciousness. I don't think this is unlikely. My daily routine of waking up from sleep sure feels like a sigmoid function. (To be sure, throughout the day we are "more awake" at some times than others, but it still feels bifurcated. I'm reminded here of catastrophe theory.)
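
To make the shape of that speculation concrete, here is a trivial numerical sketch in Python comparing linear growth with a logistic (sigmoid) curve. The numbers are arbitrary; only the difference in shape, gradual versus a sharp "critical mass" transition, is the point.

    # Arbitrary numbers; only the shapes matter: linear vs. sigmoid scaling.
    import math

    def sigmoid(x, midpoint=5.0, steepness=2.0):
        return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

    print("capability   linear   sigmoid")
    for x in range(11):
        print("%10d   %6.2f   %7.3f" % (x, x / 10.0, sigmoid(x)))

The linear column climbs steadily, while the sigmoid column sits near zero and then jumps to near one over a narrow band around the midpoint, which is what the "pops into consciousness" intuition amounts to.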

In other words, it may well be that my cat is nearly as conscious as I am. I just don't know. It seems like I can't possibly know. And this is the key point.

Kurzweil at the very outset allows that the question can be taken in two senses. First there is apparent consciousness. This is indicated by behavior that to an outside observer appears to imply or require a conscious agent. And then there is (Searle alert!) first-person consciousness. This references, for example, the experience of qualia by a conscious agent.

Kurzweil is, at least initially, quite fair in this regard. He recognizes the issue of first-person consciousness and immediately opines that it is one of those areas that can't be explored via the scientific method. (Recognizing that the scientific method is a fuzzy set of techniques.) He also observes that apparent consciousness *can* be explored by science.

I think he also notes that some people believe that entities that cannot be explored by science either don't exist or should be treated as if they don't exist. He, quite fairly I think, doesn't claim to be one of these people.

Having laid that out he then spends almost every minute of the remaining video talking about how apparent consciousness can be created and how remarkably close in time we are to that happening.

Kurzweil gets a little slippery when he says (in effect) that in time people will come to *believe* these machines with apparent consciousness *also* have first-person consciousness. But I think the careful listener will note that that is not a statement about scientific knowledge, or even a claim to truth about the matter.

Meanwhile Gelernter is fighting a battle on the (punk rock alert!) agnostic front. I find his argumentation a bit less straightforward. Maybe even a bit dishonest. I say that because he seems to intermix unrelated points as if to imply that in combination they strongly suggest, perhaps even prove, that extensions to current computing technology cannot and will not lead to conscious entities.

He notes the inaccessibility of first-person consciousness to others. I.e. One can't even prove other people are conscious, so how are we going to prove computers are conscious?

An entirely different point is that there may be some unknown aspect of brain chemistry that gives rise to consciousness, and that aspect is unique to chemistry and lacking in electronic circuitry.

He cites the "where is it?" objection, asking where in the Chinese Room the consciousness of the Chinese translator is located. It's already stipulated that it isn't in the clerk mechanically looking up and transcribing Chinese characters. Is it floating in the air? (Kurzweil retorts that such an argument could be used to "prove" that *we* are not conscious. But you are conscious...aren't you?)

He cites an objection that simulating the brain isn't enough. e.g. A simulation of photosynthesis can be highly accurate, but there is no plant, no ATP, no storage of energy, etc. Similarly, even if you could simulate the brain, there would be no mind and no consciousness.

He cites another kind of objection that simulating the brain isn't enough. A simulation of the brain alone leaves out all manner of processes in other parts of the body that impact what we (perhaps unfortunately) think of as brain function.

In short Gelernter doesn't assert a unified point of view. He takes a sort of shotgun approach in spreading doubt, attempting to undermine Kurzweil's displayed certainty.

Note that very little, perhaps none, of the above is about intelligence per se. It is really about the question of consciousness. What is it? Where does it come from? Can you make it happen? (Reference Gelernter's amusing aside that it can be created, and he will explain how off camera.)

My takeaway is that if the question is:

Will technological advances in computing result in the creation of computers with consciousness, or merely highly intelligent zombies?

then both speakers spend most of their time not directly answering.

And when they do answer the question in the most direct and honest way they more or less agree on the same response.

"We probably won't know. We probably can't know."

A debate with a happy ending! Sort of...

Sunday, January 6, 2008

Read this paper for an introduction to Complexism

As noted on the main page, this blog is a work in progress towards the development of a new paradigm called "Complexism."  More entries will appear later.

For now I'd encourage you to visit my website at:


And to read this for an introduction to Complexism: