Sunday, January 06, 2008

Central vs. Distributed Processing in Contemporary Computer Science and Emotion Theory

Entry for 3 January 2008:

The conversation that Conal and I began but weren't able to finish (not that finishing it was ever really possible) had to do with what Conal calls the crisis in contemporary computer science over parallel processing. He says that in order to move forward, computer scientists and programmers now need to learn how to work effectively with distributed parallel processor networks, including massive distributed networks. However, computer scientists and programmers have gotten stuck in a paradigm of central, linear processing, originally adopted simply because it was simpler and more manageable. Now the limits of individual processors in terms of miniaturization and speed are being approached, which means that the way forward is parallel processors working together. But how do these processors work together? Does the network need a master processor? Conal says that current practice treats different processes/processors as separate things that then need top-down governing processes, pipelines, and protocols to manage their interactions; however, this soon becomes unmanageably complex, like the many-body problem in Newtonian physics: as soon as you have three interacting bodies there is no general closed-form solution, so their interactions become difficult or impossible to calculate exactly and therefore to predict. Instead, like the physicists, programmers fall back on treating things two at a time and on heuristic shortcuts that only approximate the full answer.
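To make this concrete for myself, here is a small sketch (my own illustration, not Conal's code) in Haskell, the language Conal works in. In a pure functional program the pieces share no mutable state, so independent computations can be handed out to parallel processors without a master process, locks, or a coordination protocol; the programmer just marks what may run in parallel and lets the runtime schedule it.

```haskell
-- A minimal sketch, not Conal's code: coordination-free parallelism.
-- Build with: ghc -O2 -threaded Par.hs    Run with: ./Par +RTS -N
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Deliberately expensive, pure, per-element work.
slowFib :: Int -> Integer
slowFib n
  | n < 2     = fromIntegral n
  | otherwise = slowFib (n - 1) + slowFib (n - 2)

main :: IO ()
main = print (sum (parMap rdeepseq slowFib [25 .. 34]))
  -- parMap sparks one evaluation per list element; the runtime spreads
  -- the sparks across however many cores it is given. No process tells
  -- the others what to do or when.
```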

To Conal this is the wrong approach and makes everything difficult and kludgy. Conal loves elegance and autonomy; instead of centralized, linear processes, he points to the cellular automata and self-organizing systems of complexity theory, and to the simple, elegant underlying processes of chaotic phenomena. After all, it's pretty clear that the human brain is a massively distributed parallel processing system, with lots of competing, semi-independent but interacting subprocesses. There isn't a central processor (although there are after-the-fact meta-reflective processes that process the process), and it certainly isn't linear. And all this is capable of working together with tremendous efficiency and grace, although (it's clear to me) it is just as capable of tying itself into chaos, conflict and confusion. Computers should be able to do this, but can't as long as programmers and designers think in outmoded terms of centralized control and linearity.
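The cellular automata he mentions make the point vividly. Here is a tiny sketch of one (again mine, not his), Wolfram's elementary Rule 110: each cell updates from purely local information, its own state and its two neighbours, yet intricate global patterns emerge with no central processor and nothing resembling top-down control.

```haskell
-- A minimal cellular-automaton sketch: Wolfram's Rule 110.
import Data.Bits (testBit)

type Row = [Bool]

-- Apply a rule (numbered 0-255) to one neighbourhood (left, self, right).
step1 :: Int -> Bool -> Bool -> Bool -> Bool
step1 rule l c r = testBit rule (4 * fromEnum l + 2 * fromEnum c + fromEnum r)

-- Advance a whole row one generation; cells beyond the edges count as dead.
step :: Int -> Row -> Row
step rule row = zipWith3 (step1 rule) (False : row) row (tail row ++ [False])

render :: Row -> String
render = map (\b -> if b then '#' else '.')

main :: IO ()
main = mapM_ (putStrLn . render) (take 20 (iterate (step 110) start))
  where start = replicate 30 False ++ [True] ++ replicate 30 False
```

Remarkably, Rule 110 is known to be Turing-complete: in principle, anything computable can emerge from this three-cell local rule.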

Conal points out that natural systems manage to integrate multiple processes in elegant ways that don't require complicated protocols governing their interactions. (I would add: that's because natural systems that don't work like this don't succeed and are weeded out by natural selection.) He argues that it would be more productive to observe how these successful systems work than to design sets of commands or instructions for controlling the processes.
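Firefly synchronization is a textbook case of the kind of natural system he means, and it is easy to imitate in a few lines. In this toy model of my own (loosely following the Kuramoto oscillator idea, not anything Conal showed me), each firefly nudges its flash phase toward its two ring neighbours only, and the whole swarm falls into step with no conductor.

```haskell
-- A toy model of firefly synchronization: local nudges, global synchrony.
import Data.Complex (magnitude, mkPolar)

-- One step: every oscillator is nudged only by its two ring neighbours
-- (sine coupling, as in the Kuramoto model). No global view, no conductor.
step :: Double -> [Double] -> [Double]
step k ps = zipWith3 nudge lefts ps rights
  where
    lefts  = last ps : init ps
    rights = tail ps ++ [head ps]
    nudge l p r = p + k * (sin (l - p) + sin (r - p))

-- Order parameter: 1.0 means all phases have locked together.
coherence :: [Double] -> Double
coherence ps =
  magnitude (sum [mkPolar 1 p | p <- ps]) / fromIntegral (length ps)

main :: IO ()
main = mapM_ (print . coherence) (take 25 (iterate (step 0.4) start))
  where start = [0.1, 2.9, 1.3, 0.6, 2.2, 1.8, 0.4, 2.6]  -- scattered phases
```

Running it prints the coherence at each step, which climbs toward 1.0 as the flashes pull together, even though no firefly ever sees more than its two neighbours.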

What is so interesting about this is the resonance with person-centred/experiential ideas of encouraging natural, organic, emergent processes in people, of honouring and supporting multivocality/multiple self-aspects, and of observing natural change processes via close process observation (e.g., task analysis). I'm sure that exploring these ideas further with Conal will prove to be interesting and useful, and I look forward to doing so on future visits. In the meantime, some of his creative work on what he calls tangible functional programming, as well as his whimsical graphics programs, can be found on his website at: http://conal.net/
