The imbalanced augmentation of the human intellect.


In this article I will reexamine two important technological developments that prefigure the personal computer and the Web. I will also describe, from the perspective of creative research in the visual arts, some of the mounting problems that come with ubiquitous connectivity to a seemingly infinite amount of information.

“A sponge reduced by a pounding movement to a dust of individual cells, the living dust formed by a multitude of isolated beings is lost in the new sponge which it reconstitutes. A fragment of siphonophore is on its own an autonomous being; however, the whole siphonophore, in which the fragment participates, is itself not very different from a being possessing its unity. It is only beginning with linear animals (worms, insects, fish, reptiles, birds, or mammals) that living individuals lose definitively the faculty of constituting, severally, groups which are linked in a single body. Non-linear animals (like the siphonophore, like coral) unite in colonies whose elements are cemented together–but they don’t form societies. On the contrary, superior animals gather without having bodily links between them: bees and man have, without a single doubt, an autonomous body–but are they for all that autonomous beings?” 

–Georges Bataille 1


The Web has continued to expand to ever greater boundaries since its inception, and there are still no indications of when the expansion will subside. The Web contains documents on seemingly any conceivable topic from an endless range of sources, many of them unaccountable. There is no standard protocol for recording a document’s authorship or date of creation, and while this information is often available, it is just as often not. As a source of information for research, the Web has both obvious problems and benefits. As a source of research for the visual arts, the benefits seem to outweigh the problems, to the extent that for many it is the only source in their research. The Web is an excellent source of visual stimulus. Much of the information that exists on the Web today is in the form of unstructured data 2, which generally means data that is not machine-readable, or images of some sort (either moving or still). These images include practically anything on earth, including almost all historically recorded works of art. If there is an image in the artist’s memory or imagination, it can often be found somewhere on the Web with enough persistence and search skill. This can create fundamental conflicts with how many contemporary artists perceive their working practice.

Originality has traditionally been held as one of the highest attributes of a work of art, and much of the energy in contemporary art practice is spent dealing with that aspect (see Warhol, Duchamp, Prince, etc.). One of the problems with originality being regarded as so important is that as more art gets made and cataloged, it becomes increasingly difficult to create original art. This problem has many arguments and standpoints, and there exist many rationalizations as to how originality can be upheld indefinitely or how it doesn’t really matter; but for the individual artist, depending on their disposition, the concern for originality can become a paralyzing preoccupation. On the Web this situation is amplified, because now anyone with access (computer, cell phone, etc.) can post images and videos that are accessible from anywhere in the world, by anyone, at any time. 

The first section of this paper explains this originality complex in more detail and connects it to Kenneth Werbin’s concept of Superconnected Entropy. A potential solution to both lies in how the Web is perceived as a resource. 

The following two sections look at two separate cultural developments of the 1960s, unconnected at the time, that are useful to correlate when looking for alternative perspectives on how we understand the Web and the personal computer as technological resources. The first took place in Silicon Valley with the work of Ted Nelson and Douglas Engelbart. These two figures took the first concrete steps toward implementing what has become the Web and the personal computer, and their personal philosophies have shaped our understanding of these technologies. The second took place at the Beat Hotel in Paris, initiated by Brion Gysin and Ian Sommerville and further developed by William Burroughs. Silicon Valley and the Beat Hotel are not normally associated with each other, but they share a common link: both were efforts to augment the human mind through technological mediation. It is in this aspect that I think there are insights that can help us find new ways to perceive the Web and the personal computer as technological resources. I suspect that problems with attention span, information overload, and confusions of meaning may derive from how we conceptualize information technology, insofar as the expectations and demands we make of it. 

The Effects of Superconnected Entropy on the Production of Meaning

The reason that originality in creative thought is a primary concern for artists and designers is that in contemporary visual arts, a primary objective is for specific meaning to be embedded in or excluded from the work. Meaning in an artwork is derived through the combination of form, content, and context. If the author of a work does not have a clear idea of its context, then he cannot have a full understanding of the meaning that will be interpreted from it. 

“True” originality in a visual artwork is impossible to assess because the relationship between form, content, and context is dynamic and complex. But it is significantly easier to determine the meaning of a work of art when less context is available to it. There seems to be a window of ideal context within which a work of visual art can have meaning safely ascribed to it. With no context, a work’s meaning cannot be understood, but with too much context the meaning becomes undecidable as well. This is because part of a visual artwork’s context has to do with other works that have been made in a similar vein. If and when other artworks similar to the work in question in form, content, or context come into existence, this changes how the work is to be understood. For instance, if a work has a totally unique form and content with absolutely no known historical precedent, it is quite possible that it would be ignored in the context of art altogether! Now suppose that ten years later that form and content get repeated a considerable amount, and suddenly the work that was denied status as a work of art becomes an important historical precedent (see, for example, A. Kaprow, M. Duchamp). 

To make matters more complicated, the post-modern art movement made great strides in denying that there need be any recognizable signature in the combination of form, content, and context. Essentially, anything that is called an artwork is an artwork. This led to a hyper-fragmentation of genres in artistic production, a legacy still pervasive today that makes it difficult even to describe exactly what constitutes post-modernism 3. One of the easiest characteristics to agree on about whatever post-modernism might be is that its multiple meanings and interpretations create a cacophony. With the ubiquitous availability of the means of mechanical reproduction and the explosion in electronic mass media, there are simply too many voices for there ever to be an authoritative consensus on what constitutes art. The truth is that humanity has always created a cacophony of ideas about what art is, but until now, having a say in the matter was systematically suppressed by cost-prohibitive systems of communication (i.e., paper publishing industries vs. electronic media). Post-modernism was a symptom of mass media’s effect on the production of meaning rather than a movement.

As one example of how the production of meaning has changed since the proliferation of digital communication networks, we can look at Barthes’ work in semiotics. In his 1972 essay, Myth Today 4, Barthes defines a myth as the result of a signifier that has been culturally encoded to contain not only its natural signification but also its culturally specific signification. He describes three different types of reading in the process of understanding a myth. He states: 

How is myth received? We must once more come back to the duplicity of its signifier, which is at once meaning and form. I can produce three different types of meaning by focusing on the one or the other or both at the same time. 5

He goes on to explain the three types of reading: 

To focus on an empty signifier and let the concept fill the form is the reading of the producer of myth, who starts with a concept and seeks a form for it. This could be thought of as the perspective of the artist. 

If the focus is on a full signifier, with both meaning and form, and on the distortion that one imposes on the other, this is the reading of the mythologist. This could also be thought of as the role of the art critic. 

Lastly, if the focus is on the mythical signifier as an intrinsic whole, one will receive an ambiguous signification. This is the reading of the myth consumer. This could be thought of as the museum goer, or the general public.

In a problematic footnote, he says,

The freedom in choosing what one focuses on is a problem which does not belong to the province of semiology: it depends on the concrete situation of the subject. 6

Barthes’ three types of interpretation seem designed as discrete practices that could perform stably only so long as they reside within an analogue environment, because in digital communication networks there is no concrete situation of the subject. How does this system hold up when there is no longer any polarization in the distribution of information? We live in a time where the deciphering of myth has become naturalized. Every individual perpetually undertakes the production of myth on a momentary basis, and this individual production of myth is the prerequisite for the consumption or deciphering of other myths. In Barthes’ system, even the consumption of myth is static, in the sense that there are some for whom it is appropriate at a particular time to consume myth and, conversely, others for whom at that same moment it is appropriate to produce myth for consumption. Such a system can only hold true under circumstances where media distribution is strictly controlled and sanctioned. In the conditions of our time, the flow of myth is completely uncontrolled and diffused. The existence of reality depends on the abstract and chaotic network of myth, a network that has now fully matured into its own being, complete with products and actions that are totally unpredictable.

Barthes addresses the dynamics of a myth using symbols as needed and forgetting them when they are no longer needed; it is a simple extension to imagine multiple myths working with the same sets of symbols at the same time. But what happens when every myth is working with every symbol all of the time? There is no meaning when there is omnipresent meaning. There is no sense to be made of a symbol or a myth beyond case-by-case individual perception. There is only a completely subjective and private perception of reality. A myth may only function in the moment it is perceived; until then it sits in a pool of infinite myth waiting for use. The moment of perception, therefore, is also effectively the moment of manufacture.

Kenneth Werbin 7 writes about the effects of social networking sites as isolated systems and the obfuscation they can inflict on the inherently wide-open nature of the everyday world. He notes that in a hyper-connected information system, according to cybernetic theory, meaning, like life, will always tend towards entropy in accordance with the second law of thermodynamics:

In many ways, the Internet, the blogosphere, and social networking superconnector sites like facebook and myspace are all nothing more than reflections of the grandest of all isolated systems, the universe; and like all isolated systems they are all tending towards maximum disorder–entropy. With so many versions of so many stories, and so many highly intertwined people, tales and hyperlinked positions, how are we ever to see the humanity through the entropy? How are we ever to be on the same page as other human beings living in the here and now? 8

One website that I see as particularly emblematic of this confusion of signification is VVORK, an art blog that posts extensively on contemporary visual art from galleries and museums all over the world. It could be described as an art blog because it deals only with contemporary art, or as a photo blog because it posts mostly photos of art, usually installed in galleries or museums, and not much else. Two things make VVORK so interesting in relation to the previously described confusion of signification. 

First is the sheer volume of content that it posts. It seems impossible that anyone could keep track of so much new artwork every day, every month, continuously and without any end in sight, but VVORK gives the impression that it can. 

Second, of all the new artwork that it manages to find and post every day, there is almost never anything written about the image being posted; at most there is sometimes a short description, but no opinion is ever expressed beyond the simple fact that they chose to post the image at all. This is a very unorthodox practice in the visual arts. The voice of the critic has historically been the single most important mechanism in determining how a visual artwork relates to the contemporary corpus and its historical lineage. It seems that VVORK aims to disrupt that paradigm by posting everything as flattened, equal, atomic, or decontextualized.

Regardless of any particular stance on the issue of originality in contemporary visual arts, it is still possible to recognize that the paralysis an artist might experience upon prematurely seeing the very work of art they were planning to execute already displayed in a museum or gallery is something that should be avoided. Whether or not the work the artist had in mind was original, there is no telling how it might turn out until it has actually been executed. From the perspective of art making as a whole this may or may not be a problem, and if anything maybe less work would be better right now; but from the perspective of the individual, it seems a problem worth working on. If there were a method or a strategy that I could employ to get myself unstuck from the total paralysis of information overload, I would surely appreciate its availability.

Dream Machines

In 1962 Douglas Engelbart prepared a commissioned report for the United States Air Force Office of Scientific Research titled Augmenting Human Intellect. In it he describes the program he had been conducting with the Augmentation Research Center at Stanford Research Institute. The report presents a system for increasing human intellectual effectiveness and summarizes the results of the first phase of research. Subsequent development by the Augmentation Research Center led to the completion of his On-Line System, demonstrated in December 1968 at the Fall Joint Computer Conference in San Francisco. This was the first time that hypertext, email, and the computer mouse were ever publicly demonstrated.

In 1974 Ted Nelson published Computer Lib / Dream Machines. From ideas he had been developing and writing about throughout the 1960s, he put together a manifesto of sorts that calls the general public into action to fight for the liberation of computer technology, which had been kept out of the hands of the people up to that point. The publication predates the first home computer, the Altair, which became available in 1975. Computer Lib / Dream Machines was revolutionary not just because it envisioned and described, for a non-technical audience, many of the innovations that would take place in the home computer market and the emergence of the Web, but also because it took the stance that computers, when they did become available to the general public, should not be considered cold calculating machines. Throughout the book the emphasis is on how important it is for everyone to learn what computers are and what they can do. His central argument is that if people are unaware of the role that computers could play in the evolution of humanity, an opportunity would be lost: personal computers could be understood as many things, but they will ultimately be understood however the public receives them. In this way Computer Lib / Dream Machines was designed to be a public service announcement for the impending digital era. 

Both Engelbart and Nelson were directly influenced and inspired by the earlier work of Vannevar Bush, a pioneer in what was known at the time as the Library Problem, which became the foundation of information science. The Library Problem was grounded in the observation that the quantity of information the human race was producing (in the 1940s) was growing at an unprecedented rate, and that without new technologies for managing all this information there would be dire consequences, such as the inability to produce knowledge efficiently. The solution Bush proposed was that all the mechanical drudgery of managing information should be mediated, or outsourced, to machines, so that humans might have more time to spend on more meaningful occupations like deep contemplation. This solution was taken up directly by Engelbart in his Augmenting Human Intellect. Nelson comments that both Bush and Engelbart are key figures in the computer revolution that was yet to come when he published Computer Lib / Dream Machines.

Engelbart’s 1962 report begins as follows:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive  solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of defining solutions to problems that before seemed insoluble. And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human “feel for a situation” usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids. 

Man’s population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity. Augmenting man’s intellect, in the sense defined above would warrant full pursuit by an enlightened society if there could be shown a reasonable approach and some plausible benefits. 

This report covers the first phase of a program aimed at developing means to augment the human intellect. These “means” can include many things–all of which appear to be but extensions of means developed and used in the past to help man apply his native sensory, mental and motor capabilities–and we consider the whole system of a human and his augmentation means as a proper field of search for practical possibilities. It is a very important system to our society, and like most systems its performance can best be improved by considering the whole as a set of interacting components rather than by considering the components in isolation. 9

The first paragraph starts by defining what “augmenting human intellect” is supposed to mean. It gives the impression that it should be taken to mean improving the efficiency with which one is able to tackle a problem. But towards the bottom of the first paragraph he moves into more complicated territory, using language like “hunches,” “cut-and-try,” and “feel for a situation,” and including these seamlessly alongside the more straightforward project defined just before. It would be one thing to build a machine to automate the storage and retrieval of documents or to assist with arithmetic calculation, but creating a machine that would facilitate intuitive thought is an entirely more ambitious and exciting project, one whose solution is not entirely feasible with the technologies that have evolved out of his project.

He then introduces the problem of the dramatic increase in the production of information that was being experienced in 1962, underlining the words complexity and urgency.

Of particular interest to my argument is the third paragraph, where he specifies that a program to augment human intellect would employ all the means used by man in the past to help apply his native sensory, mental, and motor capabilities, and where he adds that it is a very important system to our society whose performance can best be improved by considering the whole as a set of interacting components rather than by considering the components in isolation. What is so interesting to me in this third paragraph is, first, that he acknowledges that the system would include means for improving man’s sensory capabilities, and second, that he emphasizes the importance of considering all the components, including the human operator, as a complex and dynamic system rather than looking at the components in isolation.

It is clear from this introduction that the project Engelbart was conducting at the Stanford Research Institute aimed to be a holistic approach to a problem he felt was impending and necessary to deal with. The problem was the overproduction of information; the solution, in his eyes, was to augment all of the faculties of human intellect. 

Before the publication of Augmenting Human Intellect, Engelbart wrote a letter to Bush requesting permission to reprint Bush’s essay As We May Think in whole, as an included reference in his report. Towards the end of this letter, referring to Bush’s article, he says:

I might add that this article of yours has probably influenced me quite basically. I remember finding it and avidly reading it in a Red Cross library on the edge of the jungle on Leyte […] I re-discovered your article about three years ago, and was rather startled to realize how much I had aligned my sights along the vector you had described. I wouldn’t be surprised at all if the reading of this article sixteen and a half years ago hadn’t had a real influence upon the course of my thoughts and actions. 10

What is important to note here is how much admiration Engelbart had for Bush’s perspective on what was the most important problem facing society and the best way of handling it. Ted Nelson had a similar level of admiration for Bush. In a paper published nine years before Computer Lib / Dream Machines, Nelson defines the problem that Bush first wrote about in 1945 and describes his own project that aims to solve it:

This work was begun in 1960 – without any assistance. Its purpose was to create techniques for handling personal file systems and manuscripts in progress. These two purposes are closely related and not sharply distinct. Many writers and research professionals have files or collections of notes which are tied to manuscripts in progress. Indeed, often personal files shade into manuscripts, and the assembly of textual notes becomes the writing of text without a sharp break.

I knew from my own experiment what can be done for these purposes with card file, notebook, index tabs, edge-punching, file folders, scissors and paste, graphic boards, index-strip frames, Xerox machine, and the roll-top desk. My intent was not merely to computerize these tasks but to  think out (and eventually program) the dream file: the file system that would have every feature a novelist or absent-minded professor could want, holding everything he wanted in just the complicated way he wanted it held, and handling notes and manuscripts in as subtle and complex ways as he wanted them handled. 

Only a few obstacles impede our using computer based systems for these purposes. These having been high cost, little sense of need, and uncertainty about system design. 

The costs are now down considerably. A small computer with mass memory and video-type display now costs $37,000; amortized over time this would cost less than a secretary, and several people could use it around the clock. A larger installation servicing an editorial office or a newspaper morgue, or a dozen scientists or scholars, could cost proportionally less and give more time to each user. 

The second obstacle, sense of need, is a matter of fashion. Despite changing economies, it is fashionable to believe that computers are possessed only by huge organizations to be used only for vast corporate tasks or intricate scientific calculations. As long as people think that, machines will be brutes and not friends, bureaucrats and not helpmates. But since (as I will indicate) computers could do the dirty work of personal file and text handling, and do it with richness and subtlety beyond anything we know, there ought to be a sense of need. Unfortunately, there are no ascertainable statistics on the amount of time we waste fussing among papers and mislaying things. Surely half the time spent in writing is spent physically rearranging words and paper and trying to find things already written; if 95% of this time could be saved, it would only take half as long to write something.

The third obstacle, design, is the only substantive one, the one to which this paper speaks. 11

He notes that automating the management of information with computers would allow humans to work in ways we had never thought possible, and that up to that point (1965) there had been three main obstacles preventing the realization of this project. The first was that the cost was prohibitive. At the time of his writing, computers were known only as monstrous machines that filled entire rooms, taking in complex sets of instructions described in cryptic punch-card formats. It was hard to imagine the vision that Nelson had in mind because nothing like it had ever existed. Even people with regular access to the computers of the time could only imagine what it would be like to have constant access to a personal computer. Such a thing simply didn’t exist. 

In the following paragraphs Nelson quotes several passages of As We May Think, in which Bush describes the imaginary invention of the Memex machine, his proposed solution to the Library Problem, which bears many similarities to the kinds of solutions Nelson is describing. In recalling the Memex, Nelson wants to remind the reader of the 1960s that the technology was now ready to realize a working model of Bush’s original proposal.

When Nelson first published Computer Lib / Dream Machines in 1974, it had been fourteen years since he had begun thinking and dreaming of what it would be like for society to live with computers in the home, and the dream was still one year from beginning to be realized. In Computer Lib / Dream Machines he repeats many of the ideas outlined in his 1965 paper, but this time in a voice designed to appeal to the masses. He lays out his ideas in a casual magazine format and separates the information into two distinct categories, divided in half by a Janus-style binding. One side of the volume is called Computer Lib; this is a crash course on what computers are capable of and is considered the technical material in the book. His idea with this section is to address the second obstacle described in his 1965 paper, the sense of need: the fashionable perception that computers are only good for big business and complicated mathematical problems. In the introduction to the Computer Lib side he describes the intended audience as everyday people and breaks off in the third paragraph to write in all caps: EVERYBODY SHOULD UNDERSTAND COMPUTERS. Also in the introduction he writes:

Computers are simply a necessary and enjoyable part of life, like food and books. Computers are not everything, they are just an aspect of everything, and not to know this is computer illiteracy, a silly and dangerous ignorance.

To understand the visionary nature of such statements, I feel compelled to reassert that this introduction was written two years before the first home computer became available. For Nelson to say that computer illiteracy was a dangerous ignorance seems reasonable now, but at the time he said it, most of the people on earth were computer illiterate. The reason I am trying to emphasize the visionary aspect of these projects is that they were unprecedented, based so much in anticipation and imagination. Many of their shared dreams have since been realized and incorporated into global culture, to the point where it now seems natural to say that computer illiteracy is a dangerous ignorance. But these statements were based on assumptions and expectations of events that had not yet transpired. They find their foundation in the personal philosophies of their authors, who were scientists and academics, and no other technological innovation since the printed word has shared the same scope and ambition. 

The framework and foundation of the home computer revolution and the emergence of the Web were built from the consensus of a handful of thinkers from a very homogeneous background. Many of their expectations and desires for the role that computers would play in society at large have come to pass, and in many respects these visionary thinkers played a crucial role in ushering in the digital era of human knowledge. But there is still utility in questioning their original expectations and assumptions, now that computers truly are a part of life just the same as food and books. For instance, now that the Memex system has for the most part been realized through various technologies and utilities (PC, laptop, BlackBerry, email, Web, etc.), have we really solved the Library Problem? Do we really have the ability to process all of the information being created by the human race? We are now producing information at a rate that makes the Library Problem that Bush, and even Engelbart and Nelson, were talking about seem laughable. Contrary to their fundamental expectation that once computers could handle all the dirty work of document processing and symbol manipulation we should see more time for deep contemplation and creative thought, we live in an alternate reality in which it is common to hear people complain that there is not enough time in the day, that there are too many distractions, or that the average attention span is decreasing. 

The shared philosophy of Bush, Nelson, and Engelbart was so influential and pervasive that it can be thought of as the very foundation of the home computer revolution. At the heart of this philosophy is Bush's original proposition, which essentially frames the machine as suited to the role of mindless automation, while the human is left to do what it does best: creative thought. That understanding still persists today and is essential to our understanding of how personal computers and the Web relate to human society. It is crucial to our understanding of the human-computer relationship that we maintain the belief that the personal computer works as a sort of universal calculating, information storage, and retrieval machine; crossing that boundary causes major confusion and skepticism. This is because of the way the home computer and the Web were introduced to society and have since evolved. Bush, Engelbart, and Nelson all saw that the Library Problem was at heart a philosophical problem, but one that, somehow, they all seemed to agree was fit for an inherently one-sided technical solution. If we are producing too much information, the problem must be that we need to find a way to process more information in less time. This turns out not to be a complete solution, because as we gain the ability to access more information, we also gain the ability to create more information, producing a never-ending cycle of ever-growing quantities of information. We need to develop a deeper understanding of how we relate to computers; the project of computer literacy is incomplete until we learn to control and modulate the flow of information in our lives.

The Dreamachine

According to Brion Gysin’s biographer, John Geiger, the Dreamachine was a collaborative invention between the artist Brion Gysin and the scientist Ian Sommerville. The Dreamachine consists of a circular tube with perforations cut into it, rotating at 78 rpm around a light source at its center. The ‘viewer’ uses it by positioning their face close to the device and closing their eyes. If everything works as intended, the ‘viewer’ will see images (for example: fractal patterns, geometric symbols, vivid fields of color) come into their mind, and if their mind is relaxed and open enough, these images can elaborate into full imaginary scenes, such as you would expect to see in a dream. This experience is said to be distinct from dreaming or hypnotic trance because the ‘viewer’ is fully lucid and conscious the whole time; all that is required to terminate the effect is to open one’s eyes. According to Geiger, the inspiration for the Dreamachine came from an experience that Gysin had:

On December 21, 1958, Gysin was traveling by bus to La Ciotat, an artists’ colony on the Mediterranean, near Marseilles, for the Christmas and New Year holidays. As the bus passed through an avenue of trees, Gysin closed his eyes against the setting sun. He recorded the experience in his journal: “An overwhelming flood of intensely bright patterns in supernatural colors exploded behind my eyelids a multidimensional kaleidoscope whirling out through space. The vision stopped abruptly when we left the trees. Was that a vision? What happened to me?” He immediately wrote Burroughs with an account of his fall out of rational space. Burroughs replied knowingly: “We must storm the citadels of enlightenment. The means are at hand.” In his letter to Sommerville, Gysin asked, “How can we make it at home? I mean, this is the problem. How can we do it with just what we’ve got?” 12

It was Ian Sommerville, working from Cambridge in 1960, who first came up with the working plans, and Gysin who came up with the name, Dreamachine. Geiger notes that the idea for the Dreamachine originates from the research of W. G. Walter in the 1940s. “Burroughs introduced Gysin to Walter’s 1953 book, The Living Brain, which featured a chapter entitled ‘Revelation by Flicker.’” 13
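The flicker rate of such a device follows directly from its rotation speed and the number of slits the light passes through per revolution. As a rough illustration (the slit count here is hypothetical, not taken from Gysin and Sommerville's actual plans), a cylinder spinning at the 78 rpm of a record turntable can be tuned to flicker within the 8–13 Hz alpha band that Walter's flicker research concerned itself with:

```python
def flicker_hz(rpm, slits):
    """Flashes per second seen through a spinning slotted cylinder:
    revolutions per second multiplied by slits per revolution."""
    return rpm / 60.0 * slits

# A hypothetical 8-slit cylinder at 78 rpm flickers at roughly
# 10.4 flashes per second, inside the 8-13 Hz alpha band.
print(flicker_hz(78, 8))
```

The same arithmetic explains why the turntable was such a convenient base: its fixed speeds let the slit count alone determine where in the alpha band the flicker lands.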

Gysin saw the Dreamachine as a way of artificially producing an inventory of perception. In his book The Process 14 he says:

I have seen in it practically everything that I have ever seen–that is, all imagery. For example, all the images of established religions appear: crosses appear, to begin with; eyes of Isis float by, and many other symbols…

The Dreamachine was a way of using technology to augment the human senses in a way that was not readily accessible without it.

Two years before the Dreamachine was developed, Gysin had made another discovery: the Cut-Up.

While cutting a mount for a drawing in room No. 15, I sliced through a pile of newspapers with my Stanley blade and thought of what I had said to Burroughs some six months earlier about the necessity for turning painters’ techniques directly into writing. I picked up the raw words and began to piece together texts that later appeared as “First Cut-Ups” in Minutes to Go. 15

Gysin introduced this idea to Burroughs, who took it to be a serious technology of language and set out on an in-depth exploration of the technique that resulted in the final edits of many of his novels, including The Soft Machine, The Ticket That Exploded, and Nova Express. In a lecture delivered at Naropa University, Burroughs explains the use of tape recordings in the Cut-Up technique:

When you experiment with Cut-Ups over a period of time you find that some of the Cut-Ups and rearranged text seem to refer to future events. I cut up an article written by John Paul Getty and got: “It’s a bad thing to sue your own father.” This was a rearrangement and wasn’t in the original text. And a year later one of his sons did sue him. I mean it’s just purely extraneous information and it meant nothing to me. I had nothing to gain on either side. We had no explanation for this at the time, I was just suggesting that perhaps when you cut into the present the future leaks out. Well we simply accepted it and continued the experiment. The next step was Cut-Ups on the tape recorder and Brion was the first to take this obvious step. The first tape recorder Cut-Ups were a simple extension of Cut-Ups on paper. There’s many ways of doing these but here’s one way: You record say 10 minutes on the recorder, then you spin the reel backwards or forwards without recording, stop at random and cut in a phrase. Now of course when you cut in a phrase you’ve wiped out whatever’s there and you have a new juxtaposition. Now how random is random? We know so much that we don’t consciously know that we know, that perhaps the cut-up that we put in was not random. The operator on some level knew just where he was cutting in, as you know exactly where you were and what you were doing exactly 10 years ago at this particular time. Most of you couldn’t, although there are a few freaks who can, make that knowledge consciously available. And the same way, while you’re doing the tape, on some level, you know just exactly where your words are. So Cut-Ups put you in touch with what you know and do not know that you know. Now of course this procedure on the tape recorder produces new words by order of juxtaposition just as new words are produced by Cut-Ups on paper.
Well we went on to exploit the potentials of the tape recorder; cut up, slow down, speed up, run backwards, inch the tape (that means work it back and forth across the tape pad), play several tracks at once, and cut back and forth between two recorders. 16

Notice where he says, “when you cut into the present the future leaks out.” The Cut-Up technique was an extension of the technology of language that enabled Burroughs to go beyond what would be possible without it. Using the same sources as before, the Cut-Up is able to synthesize new material that comes from somewhere inaccessible to the writer on his own. Burroughs, in talking about the use of the tape recorder, begins to describe the power of the Cut-Up as a device that enables the user to unlock deeply buried associations in his subconscious. The Cut-Up was a way of creating and documenting associative links between a text of some sort and the operator’s subconscious. In all of their comments, both Burroughs and Gysin were adamant that the Cut-Up and the Dreamachine, as technologies, were merely devices that unlock deeply buried abilities of the human mind. These were technologies designed to augment the human intellect. Even the technology of the Cut-Up was itself only a rediscovery rather than an invention, and both Burroughs and Gysin made comments to that effect. 17
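The mechanics of the paper procedure Gysin describes, slicing a page into pieces and recombining them at random, can be sketched as a short program. This is a loose illustration of the principle, not anything Gysin or Sommerville wrote; the piece size and seed are arbitrary choices:

```python
import random

def cut_up(text, piece_words=4, seed=None):
    """Slice a text into fixed-size runs of words and shuffle them,
    loosely imitating the paper Cut-Up: same raw words, new order."""
    rng = random.Random(seed)
    words = text.split()
    pieces = [words[i:i + piece_words]
              for i in range(0, len(words), piece_words)]
    rng.shuffle(pieces)
    return " ".join(w for piece in pieces for w in piece)

source = ("the future leaks out when you cut into the present "
          "word by word the page rearranges itself")
print(cut_up(source, piece_words=3, seed=1))
```

The point of the exercise, as with Burroughs's tape experiments, is that nothing new enters the system: every word of the output was already in the source, and only the juxtapositions are new.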

Through experimentation with the Cut-Up technique used in conjunction with tape recordings, Gysin made a third discovery: the permutation poem, in which a single phrase is iterated several times, and in each iteration the order of the words is rearranged. Together with Ian Sommerville, he created a computer program to automatically generate permutations of word combinations. In 1960 the BBC commissioned Gysin to record material for broadcast, which resulted in, among other things, “Pistol Poems,” a cut-up of a gun firing at different ranges with Gysin reciting a permutation poem over it. The permutation was a way for the individual to get closer to the absolute meaning inherent in the word, in a cabalistic sense.

Language is an abominable misunderstanding which makes up a part of matter. The painters and the physicists have treated matter pretty well. The poets have hardly touched it. In March 1958, when I was living at the Beat Hotel, I proposed to Burroughs to at least make available to literature the means that painters have been using for fifty years. Cut words into pieces and scramble them. You’ll hear someone draw a bow-string. Who runs may read, to read better, practice your running. Speed is entirely up to us, since machines have delivered us from the horse. Henceforth the question is to deliver us from that other so-called superior animal, man. It’s not worth it to chase out the merchants: their temple is dedicated to the unsuitable lie of the value of the Unique. The crime of separation gave birth to the idea of the Unique which would not be separate. In painting, matter has seen everything: from sand to stuffed goats. Disfigured more and more, the image has been geometrically multiplied to a dizzying degree. A snow of advertising could fall from the sky, and only collector babies and the chimpanzees who make abstract paintings would bother to pick one up.

–Brion Gysin, 1963 18

He means here that there is no reason that language as a technology should be so far behind all other human technologies in terms of exploration. In his permutation poems he sought the flattening out of language in an absolute sense: that each word can take on all meaning through sheer iteration and permutation. One of the logical strategies that occurred to Gysin and Sommerville was to let the computer handle the monotonous business of the permutation itself.
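The monotonous business in question is easy to state: enumerate every ordering of the words in a phrase. A minimal sketch of such a generator, using a modern library routine rather than anything resembling Sommerville's original program, might look like this (the phrase is chosen only as an example):

```python
from itertools import permutations

def permutation_poem(phrase):
    """Return every ordering of the words in a phrase,
    one line of the poem per permutation."""
    words = phrase.split()
    return [" ".join(p) for p in permutations(words)]

poem = permutation_poem("kick that habit man")
print(len(poem))  # a 4-word phrase yields 4! = 24 lines
print(poem[0])    # the first line is the phrase in its original order
```

The factorial growth is the point: even a short phrase exhausts a human reciter long before it exhausts its orderings, which is precisely the kind of labor Gysin and Sommerville handed to the machine.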


This paper started out by surveying how personal computers and the Web have impacted the process of creative thought. I have tried to point out in general how the quantity of information creates a fundamental contradiction in how we produce meaning and, by extension, understand originality. In the second and third sections I introduced two simultaneous but independent cultural developments: the Dreamachine and its related innovations from the Beat Hotel in Paris, and the Dream Machines of Silicon Valley. Both of these developments took place during the same time period; both share some related heritage through their connections to cybernetics; and, most of all, both are connected through a common goal of augmenting the human intellect through the mediation of technology. But while information technologies and gadgetry are so pervasive in our society that they have become naturalized, the kind of technological exploration that was happening in the Beat Hotel has largely gone by the wayside.

In the world today it can be said that the single most important technological innovation to impact the human intellect since the printed word has been the personal computer and the Web. Both of these technologies were born in large part out of the contributions of Engelbart and Nelson, who were both directly inspired by the personal philosophies of Bush. A crucial part of the original vision that all three of these men shared was the augmentation of the human mind in its totality. In the process of building the technological infrastructure that our society has come to rely on today, some major aspects of that original vision were forgotten.

When we look at the state of our society as it relates to the information universe that more and more of us become connected to every day, there is much cause for concern. It is becoming clear that our society is experiencing side effects that may be attributed directly to our access to seemingly limitless amounts of information. These include depression, loss of attention, the fragmentation of social groups into incoherent subgroups resulting in increased isolation of the individual, overspecialization, and the loss of meaning through the disambiguation and overlap of specialized subgroups.

The original mission of Engelbart, Nelson, and Bush was to alleviate many of these symptoms, which we are only feeling more strongly today. I would argue that this is because we have failed to fulfill the original vision in whole. We have successfully implemented the first half of the plan, which is to integrate the use of machines for all of the storage, retrieval, and calculation of information that may be automated. But we have failed to continue to explore how technology can mediate the other, more elusive aspects of the human mind that Burroughs, Gysin, and Sommerville were trying to explore. That is not to say that people have not continued to work on these problems; it is just that we have not, to this point, invested the time and money in any such program on the same scale as the personal computer or the Web, so that we might expect to see the same kind of progress that we have seen in these other areas of technology. This is obviously because a computer as a calculator or a filing system is something that can easily be understood in terms of generating a monetary profit, whereas a computer that facilitates a transcendental experience is not so easy to reconcile with a capitalist market. But I would argue that our symptoms are caused by an imbalanced augmentation of the intellect, and that it is necessary for us to seek out ways to augment our more esoteric faculties, if only so that we may balance out the total augmentation of our mind.

  1. Georges Bataille, Inner Experience (1988), SUNY Press, p. 83
  2. J. F. Gantz, et al., The Diverse and Exploding Digital Universe (2008), IDC
  3. http://en.wikipedia.org/wiki/Post_modern
  4. R. Barthes, Mythologies (1972), Hill and Wang, p. 109
  5. ibid., p. 128
  6. ibid.
  7. K. Werbin, Superconnected Entropy: Social Networking Sites as Isolated Systems (2007), New Network Theory
  8. ibid., p. 6
  9. D. C. Engelbart, Augmenting Human Intellect (1962), Stanford Research Institute
  10. D. C. Engelbart, Letter to Vannevar Bush (1962), in From Memex to Hypertext: Vannevar Bush and the Mind’s Machine, J. M. Nyce and P. Kahn, eds., Academic Press, 1991
  11. T. Nelson, A File Structure for the Complex, the Changing and the Indeterminate (1965), ACM 20th National Conference
  12. J. Geiger, Nothing Is True – Everything Is Permitted: The Life of Brion Gysin (2005), The Disinformation Company, p. 160
  13. ibid., p. 161
  14. Brion Gysin, The Process (1967), Overlook Press
  15. Brion Gysin, “Cut-Ups: A Project for Disastrous Success,” in A William S. Burroughs Reader, ed. John Calder (London: Picador, 1982), p. 272
  16. From a lecture given by William S. Burroughs at the Jack Kerouac School of Disembodied Poetics, Naropa Institute, April 20, 1976
  17. Conrad Knickerbocker and William S. Burroughs, “The Paris Review Interview with William S. Burroughs,” in A William S. Burroughs Reader, ed. John Calder (London: Picador, 1982), p. 263 and p. 272
  18. http://www.ubu.com/sound/gysin.html