
Friday, August 28, 2009

Review: Haraway, "Cyborg Manifesto" (1985)

I'm coming a little late to this text, but I found it to be a fascinating read. Originally published in 1985 in Socialist Review, "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century" initially sprang from a debate in feminist studies, but it quickly became the catalyst—at least in the humanities—for a new way of thinking about how the individual and society interact with machines. Noah Wardrip-Fruin and Nick Montfort write in their introduction to the essay in The New Media Reader (n.b. page references below are from this version of the text), "Haraway's cyborg preference has led some readers into uninteresting interpretations, in which it is assumed that Haraway's project is an attack on radical feminists such as Mary Daly" (515). I'm not so sure that such interpretations would be "uninteresting" to feminist studies scholars, but their larger point—that the influence of Haraway's essay has outgrown its feminist roots "and may indeed be the starting point for current progressive scholarship on science and technology" (515)—is well taken.

In the essay, Haraway argues that the focus on dualisms—between "mind and body, animal and machine, idealism and materialism" (519)—as the basis for progressive resistance to injustice was no longer useful. According to Haraway, "a slightly perverse shift of perspective might better enable us to contest for meanings, as well as for other forms of power and pleasure in technologically mediated societies" (519). That shift was to establish the cyborg as a mythos for this resistance. In the remainder of the essay, Haraway argues that since the "cyborg world" was free of these dualisms, it would be open to the possibility of being "about lived social and bodily realities in which people are not afraid of their joint kinship with animals and machines, not afraid of permanently partial identities and contradictory standpoints" (519).

What was most interesting to me about the essay was that Haraway defined the cyborg as not merely the combination of human and machine, although this is the most common popular use of the term. Instead, she claims that

a cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction. (516)

While Haraway frequently refers to the combination of person and machine as defining the mythology of the cyborg, the crucial move she makes in the essay is to demonstrate how the processes of language have already made the cyborg a social reality. Of course, she writes, "modern medicine is…full of cyborgs, of couplings between organism and machine" (516), yet machines aren't only fashioned from cogs and gears, or circuits and switches. Society is a machine, as is language, and Haraway argues that social theory must take into account the degree to which our humanity is intertwined with physical and social tools, using the cyborg as the metaphor for understanding the connection.

A tree visualization of Haraway's use of "cyborg" in "Cyborg Manifesto"

One particular way in which we can see this connection between language, machine, and body is the trend in the sciences to translate everything into readable code. Haraway notes that "biology and evolutionary theory over the last two centuries have simultaneously produced modern organisms as objects of knowledge" (517) and that the technologies of communication and biological manipulation "are the crucial tools recrafting our bodies," for "communications sciences and modern biologies are constructed by a common move—the translation of the world into a problem of coding" (524). In other words, not only are we literally colonizing our bodies with machines, we compose them as texts as well, thereby rendering them more susceptible to refashioning through language.

The Human Genome Project and self-administered DNA tests are just a few of the examples of the ways in which "reading" the code of our bodies is changing the ways in which we think about ourselves. According to Haraway, language has played a crucial role in the way in which

late twentieth-century machines have made thoroughly ambiguous the difference between natural and artificial, mind and body, self-developing and externally designed, and many other distinctions that used to apply to organisms and machines (518)

while language—or "communications breakdown"—is the key to stress, the "privileged pathology" of the cyborg (524).

Haraway insists more than once that the cyborg isn't interested in history or looking backward. However, if we accept her conclusions about the role of language and other technologies in creating cyborgs, then we have to admit that we have always been cyborgs. Convincing evidence in the study of distributed cognition suggests that our cognitive functions are not contained solely in ourselves, but are rather spread throughout our environment, particularly in our tools. Language is one such tool, and Haraway's work suggests that language is always embodied. She writes that our "bodies are maps of power and identity" (534), and language has played a crucial role in the ways in which that power and identity are enacted.

Tuesday, August 05, 2008

Review: Lessig, Free Culture (2004)

In Free Culture: The Nature and Future of Creativity, Lawrence Lessig argues that U.S. copyright laws have traditionally functioned to protect the freedom of culture to flourish. In its most common form before the previous century, copyright protected creators’ ability to control the publication—or its copying, not derivative uses—of their works for a limited period of time so as to ensure that there would be an incentive for the production of creative work. To secure copyright, copyright holders had to mark their works with either the copyright symbol (©) or the word “copyright,” deposit a copy of the work with the government to ensure that it would remain available after the copyright term expired, and either register the work with the government or renew the copyright if it was considered valuable enough to do so. Whatever works failed to comply with these provisions were considered part of the public domain and could be freely copied, distributed, or modified by anyone.

Copyright legislation in the twentieth century largely eliminated these provisions, however. Culminating with the Copyright Term Extension Act (CTEA), the U.S. government has continually increased the term of copyright such that very few cultural products have entered the public domain since the 1930s and completely eliminated the requirements that works be identified as being under copyright or registered with the government. Lessig attempts to show how these changes in copyright laws, coupled with the new behaviors for creating and disseminating creative materials brought about by new media technologies, have created a situation where the cultural capital of our society is increasingly controlled by the few elite individuals who have the power, resources, or inclination to navigate copyright regulations. According to Lessig, the basic structure of the internet makes practically all media usage “copying,” thereby making many kinds of usage that were previously out of the range of copyright law—such as sharing mixtapes or clips of videos—now subject to it.

The most refreshing aspect of the book is that Lessig avoids the extremes of this debate: he is neither for blanket immunity from copyright restrictions nor restrictive legislation. He manages to avoid these extremes by not moralizing on copyright or taking only the damages that can be associated with copyright into account in his analysis. Rather, he focuses on copyright’s connection to the production and availability of cultural artifacts like music, movies, and books, arguing that while copyright legislation should punish uses that inhibit creativity—uses like downloading MP3s as a substitute for purchasing music—it shouldn’t limit the ability of culture to flourish through uses that don’t affect current copyright laws or which cause little damage to copyright holders. For example, he argues that VCRs were originally opposed by the movie and television industries because those industries assumed that the machine would hurt them financially. However, as VCRs became ubiquitous, it was discovered that their use promoted more sales of movies and television shows, and that copying off of television had very little economic impact on industry. Similarly, while Lessig argues against using P2P sharing to pirate music, he argues that the technology shouldn’t be crippled to avoid this problem because it can be used for other legal, culturally beneficial purposes.

As a rhetorician, I was interested in Lessig’s method in this book, particularly his examination of the ways in which “piracy” is defined in the copyright debate, as well as the history of the word’s usage to refer to practically any new media development that upset old ways of doing business. However, I was most interested in the middle section of the book where Lessig describes his fight in the Supreme Court to have CTEA declared unconstitutional. In this section, Lessig dwells for an extended period of time on the type of argument he made before the court—one that was overly logical, in his later opinion—versus the one he should have made. He claims that the reason he lost is that he failed to argue passionately to the court for why the CTEA caused damage to culture; instead he focused only on the constitutional issues involved in the case.

I found this section fascinating for a number of reasons. In the most general way, I was struck by how it demonstrated that even in what would presumably be an environment where reason alone determines the outcome of an attempt at persuasion—a Supreme Court case—the logos of his argument wasn’t enough to be convincing, and he needed to include some pathos as well. More personally, I was able to empathize with his dejection after losing the case. I, too, have been in a situation where I realized belatedly that I had completely blown an argument by ignoring some simple, basic principle of rhetoric.

I also found this passage interesting for another reason: its complete lack of reference to the art of argumentation and persuasion. After his description of his argumentative failure, Lessig refers to rhetoric in the pejorative sense, equating it merely with style, flash without substance (p. 250). This passage only illustrated further how much rhetoricians have surrendered the public debate over argument and language to other disciplines.

In short, although the book is getting a bit old, Free Culture still seems fresh in its take on the copyright debate. I’m glad I finally got around to reading it.

Monday, April 14, 2008

Review: Varela, Thompson, and Rosch, The Embodied Mind: Cognitive Science and Human Experience (1991)

I first encountered The Embodied Mind when I took Peg Syverson’s class “Minds, Texts, and Technology” during my first semester at UT in 2005. I remember being somewhat overwhelmed by it then: the authors—Francisco Varela, Evan Thompson, and Eleanor Rosch—pose a radical challenge to then-current (1991) conceptions of cognitive science. Tracing the field of cognitive science through two stages, cognitivism and emergence, the authors explain that neither of these approaches takes into account the role of bodily experience in the process of perception, arguing that this experience is a necessary precondition for cognitive functions.

According to the authors, the first stage of cognitive science, cognitivism, arose in the middle of the twentieth century as an outgrowth of cybernetics. While Varela, Thompson, and Rosch felt that cybernetics was initially a rich conversation between a number of differing views of the mind and how it functions, cognitive science came to be dominated by the cognitivist paradigm. These cognitivists described cognition as merely symbol processing in the brain, processing which was enabled by the mind’s creation of representations of the outside world.

Cognitivists believe the mind processes representations of symbols.

However, cognitivism had a problem. According to Varela et al., researchers were unable to find biological examples of the mind’s symbol-processing, a lack which caused them to shift the location of this processing to the subconscious. In short, cognitivism required the separation of consciousness from cognition, a move which led cognitivists to posit the existence of an autonomous self—an ego or soul. However, Varela, Thompson, and Rosch note that when one looks for the ego or self, the only thing that can be found is experience. They therefore claim that cognitivism failed because it tried to describe experience strictly through the means of analysis, without focusing on bodily experience.

Connectionists argue that the millions of interactions between neurons lead to the organization of consciousness.

Emergence, or connectionism, attempted to deal with some of the problems posed by cognitivism by suggesting that the phenomenon of mind emerges out of the numerous simple, biological processes that make up the brain. Because connectionism is sub-symbolic—that is, it doesn’t require symbol-processing in the mind—it represented an advance over cognitivism because it was able to explain both symbolic behaviors and non-symbolic behaviors. Cognitive scientists found it attractive because it is close to human biology, produces workable models, and fits the dominant scientific paradigm, namely, that there is a real world out there that some subject can discover through cognition, a paradigm which emergence shares with cognitivism.

However, Varela, Thompson, and Rosch argue that neither cognitivism nor emergence can deal with the failure of science to find the source of the self, and that both flounder when they attempt to account for the role of the outside world in cognition. According to them, western science, except for some notable attempts by Minsky, Jackendoff, and Merleau-Ponty, has chosen to completely ignore these questions.

One result of this failure to bring these two worlds together is what the authors call the Cartesian Anxiety. The Cartesian Anxiety is the separation of mind and world—subject and object—into competing subjective realities, leaving us with the feeling that either there is a stable world or there are only representations; that is, realist and subjectivist assumptions. This is a problem for both cognitivism and connectionism because both rely on a pre-given world that is represented symbolically or sub-symbolically.

As an alternative to this approach, the authors argue that the interaction of individual perception with physical reality “brings forth a world” that is dependent on both, rather than being independent of either. One of their primary examples of this bringing forth a world is the study of color vision. The authors show that the perception of color is dependent on the physiology of an organism (pdf), demonstrating that the experience of the outside world is brought forth by the organism in concert with that outside world; the “world” that is experienced is dependent on both.

Consider, for example, these optical illusions.

color tile optical illusion
The squares that appear blue in the example on the left are actually the “same” color as the squares that appear yellow in the example on the right. They appear to be different colors in these two images because of the interaction of our three-dimensional color vision and the colors which surround them in the image. This particular illusion, which makes one color appear to be two, is brought forth by our physical bodies interacting with the physical world.

Results like this one prompt Varela, Thompson, and Rosch to suggest a third stage in the development of cognitive science: enaction. According to the authors, enaction posits that perception depends on bodies and that cognition is the result of recurrent patterns of perception. Only enaction is able to account for cognition without extracting the mind from actual experience. An enactionist model of cognition, then, would view the mind as existing as the result of these patterns of perception, rather than as a symbol processing machine or an emergent phenomenon that reproduces a stable outside world.

One final note. Up to this point, I haven’t really mentioned one of the authors’ major arguments in this text: that Buddhist models of conceptualizing the self and experience are superior to those of Western philosophy. It is this connection to Buddhism that suggested enaction theory to the authors. I didn’t spend much time discussing this connection here because, personally, I think enaction can stand by itself as a theory of mind. However, I’m sure that for many readers, especially in the cognitive science community, this connection could be a deal-killer that discredits the book’s entire argument (hat-tip to Jim Brown for noting this problem).

Monday, March 17, 2008

Review: Holland, ed. Remote Relationships in a Small World (2007)

I ran across Remote Relationships in a Small World a month or so ago in a publisher’s catalog, and I thought it looked pretty promising, seeing as my work is gravitating towards studies of social networks and mobile communication. The collection of essays is intended, according to editor Samantha Holland, to provide new research on social relationships conducted online. This, I assume, is the source of the title, which suggests that the digital age has made the world smaller by allowing for instant communication across time and space, but at the same time that time and space are real and have a real effect on the relationships conducted online.

The three chapters that jumped out at me were Holland’s chapter (with Julie Harpin) on the use of MySpace by British teenagers, Janet Finlay and Lynette Willoughby’s study of the use of WebCT forums and blogs in online learning, and Simeon J. Yates and Eleanor Lockley’s study of male and female cell phone use. (The complete TOC can be found here.)

Holland and Harpin presented the results of a pilot study of teenage British MySpace users, following the usage habits of 12 teenagers. Their results seemed to confirm the work of boyd and Ellison (2007) in that the teenagers they studied tended to use MySpace to communicate with people they already knew socially. Additionally, the authors found that, unlike the typical stereotype of the digital loner, the social network was a “hive of sociability.”

Similarly, Finlay and Willoughby’s chapter didn’t break any new ground. They found that a minority of the (mostly male) students using their course forum and individual blogs would post offensive messages, and that this behavior tended to alienate other users. After their mostly textual case study, the authors concluded that for an online learning space to be a real community of practice there needed to be scaffolded interactions with the community so that users could become socialized to it, a feat which was not possible in their 12-week course.

Finally, Yates and Lockley examined the use of cell phones by men and women in a number of different contexts: at home, on the train, and in other public places like restaurants and coffee shops. They found that the men in their study tended to send shorter messages than the women, and that the longest messages were sent in conversations between two women. Like Holland and Harpin, the authors found that the participants in their study tended to not use their phones to contact or converse with strangers, but rather to keep in touch with people who were close to them, both physically (neighbors) and emotionally (friends and relatives).

I found this collection to be a bit of a mixed bag. While the studies I mentioned here were interesting, and had interesting conclusions, I found myself wishing they were a bit more rigorous. This was particularly the case with the Holland and Harpin and Finlay and Willoughby chapters. While each was interesting, neither broke new ground, and both seemed merely to share the overall themes of the online texts they collected. Admittedly the Holland and Harpin study was a preliminary one, but, that being the case, I wonder why it was included in this collection.

The Yates and Lockley chapter suffered from the opposite problem. The authors used a large number of measures—surveys, observation, diary studies, focus groups—but the analysis and discussion of these measures seemed abrupt to me. I would have liked to have seen them choose some of the data to focus on with more depth and detail, rather than have them present this cornucopia of data.

That said, I found the book to be useful, not least for the support, however tentative, that the studies included in it lend to the thesis that social communication is used more to keep in contact with people in existing social networks than to create new contacts. Somewhat ironically, these studies seem to suggest that our online relationships aren’t so “remote” after all.

Wednesday, February 06, 2008

Review: Tapscott and Williams, Wikinomics: How Mass Collaboration Changes Everything (2006)

Wikinomics, by Don Tapscott and Anthony D. Williams, is a book written for business. As the name implies, they are interested in the economic potential of Web 2.0, providing insightful analysis of the collaborative culture that has grown on the internet over the last decade and recommendations for how businesses can adopt beneficial practices of this culture. While I think the book could have been improved by a less accepting, optimistic view of this technology, it is otherwise a helpful introduction to many of the features of Web 2.0 that are relevant to businesses.

Synopsis
According to Tapscott and Williams, there are four principles of Wikinomics: openness, peering, sharing, and acting globally. By openness, they mean that businesses must become less proprietary, arguing that in the current economic climate, businesses can’t afford to hoard all of their valuable information. Rather, they need to make it open so as to increase their ability to research and discover new products, as well as to make use of new standards for sharing information. For example, few businesses have the ability to thoroughly research and analyze all of the data available to them; by making some of that data open, Tapscott and Williams argue that businesses can harness the benefits of having an increased number of eyeballs looking for problems and new opportunities that would otherwise languish in forgotten data.

With peering the authors argue that the hierarchical relationships that dominate most large-scale enterprises should be broken down and replaced by more flexible peer-production methods, like those used in open source projects such as the Linux operating system. Tapscott and Williams argue that by taking advantage of peer-production methods, businesses can leverage the self-organizing forces to make better decisions and create better products without expending more capital.

While the idea of sharing that Tapscott and Williams advocate is likely to be as counter-intuitive as their other Wikinomics ideas, it is less radical and less dependent on new technologies than peering. The authors argue that, instead of keeping intellectual property like patents proprietary, businesses should open up their IP holdings for licensing by outside entities. By doing so, businesses can have access to more revenue through licensing fees, and also create new product lines through collaborative relationships with other businesses. Like peer-production, sharing allows for new relationships to form that would likely be ignored by businesses focusing on core products and competencies. These new relationships would benefit from self-organizing, emergent effects that lead to new opportunities without demanding heavy investment in research and development or new personnel.

Finally, the increased reach of the web, coupled with faster connections and greater computing horsepower, has enabled what were previously global interactions—a conversation with a supplier in another country, for example—to feel local and instantaneous. For this reason, the authors argue that businesses need to act globally, taking advantage of their ability to coordinate with suppliers and employees across the world.

Open cultures
I found that the most compelling part of the text was its focus on the culture of open enterprises. Tapscott and Williams continually emphasize that to take advantage of the changing economic and technological landscape, businesses must be ready to change their internal cultures and adopt the cultural practices of the community—Linux, Wikipedia—they wish to emulate. According to them, open, peer-production networks have distinct cultures that must not only be respected by businesses but also leveraged if they wish to make use of the benefits of those networks. In my research of Wikipedia and other peer-produced systems, I’ve been interested in the ways that peer-produced texts depend on the creation of a community of value; that is, the creation of a culture that provides content creators with a return on their investment of time in the project. I think Tapscott and Williams correctly identify this feature of open source projects and argue for its acceptance by those who would use these methods of production in their business.

If I had to make a complaint about the book, I think it would be that the authors take for granted the benefits that “Wikinomics” will bring to business. While it is somewhat normal for those advocating new technologies to gush about their benefits and turn a blind eye to their potential drawbacks, I did get a little tired of the repeated insistence that these new economic forces were going to completely transform business practices for the better. I agree that there are tremendous benefits to the ideas that Tapscott and Williams outline here, but I would have enjoyed a more critical approach to their benefits and drawbacks as well. However, that one quibble didn’t ruin the book for me, and I think it is still a valuable introduction to the theory of open source production.

Friday, January 25, 2008

Review: John Scott, Social Network Analysis (2000)

I recently picked up a copy of John Scott’s Social Network Analysis: A Handbook when I was researching methodologies I could use in my study of Wikipedia and social networks. Scott, a sociologist, provides both a history of the development of social network analysis and an introduction to the basic terminology and concepts related to the discipline. While this second edition doesn’t seem to have been updated to account for the developments in the field since the first edition of 1991, I found it to be a more than adequate overview of the concerns and methodology of social network analysis.

(I have to provide a caveat here: I am not a sociologist and I’m completely new to social network analysis, so any criticisms—or praise—of this book on my part may be completely off-base. For example, I have no idea how social network analysis has developed over the last 15 years, so I’m not really competent to judge if Scott has left out any important additions or modifications to the theory and methodological practices used in social network analysis. My comment above is based solely on my impression in reading the text that the majority of Scott’s references in the text are pre-1990. Anyway, you should take most of my claims in this post with a grain of salt.)

Overview
Scott’s book is organized into three main sections: the first and second chapters outline the history of social network theory, while chapters 3–8 introduce basic terms and the methods of network analysis. Finally, an appendix lists software tools for conducting social network analysis with short reviews of each software package.

Example of a sociogram

In chapter 2, Scott outlines the history of social network theory. According to him, it began with the focus on societal structures in the work of anthropologist Alfred Radcliffe-Brown during the 1920s and 1930s. Later, researchers, particularly a group centered at Harvard, combined elements of gestalt theory and the mathematical tools of graph theory to analyze these structures. One of the chief developments of this research was the formalization of the theoretical principles of network analysis, which helped to determine the basic methodology of social network analysis, and the development of the sociogram, a diagram of the nodes and links in a network.

Incidence matrix for social network analysis

The remainder of the text serves as an introduction to social network analysis and the terminology and best practices used by network researchers. According to Scott, analyzing social networks is primarily a process of collecting and storing data. However, that data isn’t attributive data about a subject. Rather, it is relational data; that is, data about the connections between subjects. For this reason, when researchers collect social networking data, they should focus on these relations. The primary method of doing this is with incidence or adjacency matrices, where the former record binary information about the existence of a connection between two classes of subjects and the latter record information about the number of connections between the members of a particular class. Further, these matrices can be directional, indicating that the connections between subjects do not necessarily flow both ways.
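To make the distinction concrete, here is a minimal sketch in Python (my own hypothetical example, not one from Scott’s book) of the two kinds of matrices: an incidence matrix relating people to events, and an adjacency matrix derived from it that counts connections among the people themselves.

```python
# Hypothetical relational data: four people (one class of subjects) and two
# events (a second class). All names are made up for illustration.
people = ["Ann", "Bob", "Cat", "Dan"]
events = ["seminar", "reading group"]

# Incidence matrix: rows are people, columns are events; a 1 records the
# existence of a tie between members of the two different classes.
incidence = [
    [1, 0],  # Ann attends only the seminar
    [1, 1],  # Bob attends both
    [0, 1],  # Cat attends only the reading group
    [1, 0],  # Dan attends only the seminar
]

# Adjacency matrix: rows and columns are people; each entry counts the number
# of connections (here, co-attended events) between members of a single class.
n = len(people)
adjacency = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            adjacency[i][j] = sum(
                incidence[i][k] * incidence[j][k] for k in range(len(events))
            )

for name, row in zip(people, adjacency):
    print(name, row)
```

A directional version would simply drop the symmetry: the entry for (Ann, Bob) could differ from the entry for (Bob, Ann) when ties flow only one way.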

The analysis of social networks can be conducted using positional or reputational approaches. When using the positional approach, researchers are interested in investigating the social position that a particular subject occupies, whereas with the reputational approach, which is used primarily when there are no stable positions to investigate, researchers have their subjects suggest other subjects, and the connections between those subjects are mapped.

Final thoughts
Throughout the text, Scott emphasizes the difficulty that many researchers have with social network analysis because of its foundation in matrix algebra. However, there isn’t much math in the book, and what math is there is explained in an understandable manner. He acknowledges that most researchers will rely on software tools to do the number crunching for them, so he spends more time explaining the research rationale and grounds for using particular approaches to the study of social networks, emphasizing that individual projects will require different approaches and that researchers should understand what their tools are measuring, even if they aren’t sure how they are measured. As a rhetorician, I found refreshing Scott’s repeated insistence that researchers have a clear idea of their research goals before choosing their tools, so that those tools are the best fit for their particular design, and it was easy for me to see how this type of analysis would be a good fit for ecological studies of social phenomena.

Overall, I found the book to be an accessible and readable introduction to the techniques of social network analysis.

Wednesday, January 16, 2008

Review: Adam Greenfield, Everyware: The Dawning Age of Ubiquitous Computing (2006)

I’m starting to review texts for the computers and writing course I hope to teach in the fall (RHE 312), and I recently reread Adam Greenfield’s Everyware: The Dawning Age of Ubiquitous Computing (2006). The primary innovation of Greenfield’s book is its argument that, rather than there being just one model of ubiquitous or pervasive computing (ubicomp), there are in fact many “ubiquitous computings,” and that these ubicomps will combine to form an all-encompassing paradigm of distributed computing, communication, networking, and information gathering/retrieval which he calls “everyware.”

One unique feature of this book is Greenfield’s reaction to ubiquitous technology. It seems like the typical reaction to new technology (see genetic modification and nanotechnology) is to question the ethics of using/adopting it and to call for a halt to this use/adoption so its long-term effects can be contemplated. Greenfield, however, simply declares that everyware is inevitable, and that our best efforts will be spent in working toward altering its eventual appearance, rather than trying to prevent it.

In Section 3, “What’s driving the emergence of everyware?”, Greenfield examines the reasons for this inevitability. His argument is essentially twofold: first, he claims that the moment our technology became digital, and devices were able to communicate with each other in a common language of ones and zeros, the communication and interoperability on which everyware depends became a foregone conclusion. Further, Greenfield argues that the architecture and capabilities of many current technologies—RFID tags, the continuing development of computer chips, wireless computing—all contain the latent possibility of everyware. As he puts it, a technology like RFID “wants” to be connected to everything so as to provide a bridge between atoms and bits. Second, he argues that everyware is inevitable because it is both embedded in our collective imaginations in the form of science fiction and a workable solution to many of our looming problems, such as dealing with the move of the baby-boom generation into old age and the need of corporations for ever-increasing economic expansion.

Another thing I liked about the book is that Greenfield captures the complexity and contradictions inherent in ubiquitous systems. Everyware is divided into seven sections, and each section is made up of “theses” introduced by short, declarative statements (“Thesis 05: At its most refined, everyware can be understood as information processing dissolving into behavior.”), which are then explained, expanded on, or defended over the course of a few pages. While the sections gather the theses into larger arguments, I think they allow Greenfield to not force an extreme level of coherence on a topic whose edges are still coming into view.

To me, the two most interesting sections of the book are the first two (Google Books has the complete TOC). In these sections, Greenfield defines everyware and explains how it differs from other instantiations of computing. As I noted above, he argues that everyware is not the same as ubicomp, but is rather a paradigm which contains many ubicomps. According to Greenfield, the everyware paradigm is the interconnection of various technologies and processes so that the computing experience will no longer be centered on a desktop or laptop machine, but will be a continuous experience of information retrieval (and encoding) via technology embedded in our surroundings. As part of this process, the bridging of the divide between data and the lived world will be key to the development of everyware.

One of his most striking pronouncements is that everyware will transform behavior into information, or, as he puts it, everyware represents the “colonization of everyday life by information technology.” Greenfield recognizes the emergent qualities of the combination of ubiquitous computing and society. Everyware is different from regular computing in that it is centered on the user and the environment. It is contextual and experiential, rather than a group of tasks like information retrieval. It does not have users—although it might have subjects. It is relational, and, as such, it has the possibility of yoking together a number of technologies and systems that we are already familiar with so that they become greater than the sum of their parts. Further, these connections will result in outcomes and effects that will be completely unforeseen.

In the book’s second section, Greenfield describes how everyware is unique. According to Greenfield, the combination of ubiquitous technologies on which everyware is based, fostered by digital connections and their inherent relational structures, will lead to emergent outcomes. That is, it will not be possible to completely foresee the results of this technology interacting together, and this fact virtually guarantees that the result will be unique. (Which would make pausing the development of the technologies everyware depends on so we can contemplate their use virtually pointless.)

Conclusion: Everyware is a good, non-technical introduction to the issues affecting the development of ubiquitous computing. I think it would be a good text for introducing undergrads to the opportunities and problems associated with new, distributed computing models.

Tuesday, November 27, 2007

Tim Berners-Lee on “the Graph”

Web-god Tim Berners-Lee posted a theoretical blog post on the social graph early last week.

messy cables

He argued that networked computing has gone through three stages. First there was the net, the network infrastructure of the internet. Once users were free from having to physically connect individual machines that they wanted to network, they were able to ignore the network, or the cables, and focus on the computers themselves. According to Berners-Lee, when users could focus on the more important half of this binary, they were able to get more use from the net, and their ability was further enhanced by the reuse of underused or forgotten resources.

The next level of networking was the web. Because it combined the resources of different machines and connected them together seamlessly, the world wide web allowed users to focus not on the computers, but on the documents. The document of choice, of course, was the web page, which, like the computers of the net, became the focal point of web browsing.

html graph of complex rhetoric

At the third level, Berners-Lee points out that users began to realize that it is what the documents are about, not the documents themselves, that is important. This level is similar to the semantic web, and Berners-Lee calls it “the graph.” (Actually, he calls it the “Giant Global Graph,” riffing on the WWW.) As users are able to capitalize on the graph, he argues, they will be able to derive more power from their computing tasks, just as the innovations of the net and the web made computing more powerful at those levels. However, to make optimal use of the graph, designers will have to allow the information stored in their documents to interact freely with the information on other pages.

Letting your data connect to other people's data is a bit about letting go in that sense. It is still not about giving to people data which they don't have a right to. It is about letting it be connected to data from peer sites. It is about letting it be joined to data from other applications.

It is about getting excited about connections, rather than nervous.

Berners-Lee suggests that this can happen if designers make use of available semantic web tools. He sums up his vision in this penultimate paragraph:

In the long term vision, thinking in terms of the graph rather than the web is critical to us making best use of the mobile web, the zoo of wildy differing devices which will give us access to the system. Then, when I book a flight it is the flight that interests me. Not the flight page on the travel site, or the flight page on the airline site, but the URI (issued by the airlines) of the flight itself. That's what I will bookmark. And whichever device I use to look up the bookmark, phone or office wall, it will access a situation-appropriate view of an integration of everything I know about that flight from different sources. The task of booking and taking the flight will involve many interactions. And all throughout them, that task and the flight will be primary things in my awareness, the websites involved will be secondary things, and the network and the devices tertiary.

via Read/Write Web

Tuesday, September 11, 2007

Google Books reality check

My enthusiasm for Google Books has waned somewhat since my previous post. First, I had a rocky relationship with the service over the weekend. I spent some time adding books to my library, which was for the most part good, clean fun. However, once the books were in the system, there didn’t seem to be any way to sort them for display. Books are automatically displayed in the order in which they are entered into the system; it is impossible to sort them by author or title. I guess the “organizing” in “organizing the world’s information” doesn’t mean what I think it does. The tagging feature is also a little too robust: even though the tags are separated by commas, the system automatically formats the tags when they are saved, eliminating multi-word tags. You can work around this by placing strings that contain more than one word in quotation marks, but that gets awkward. Also, the tags are case sensitive, but I don’t know why. I’m not sure that I need to differentiate between “rhetoric” and “Rhetoric,” for instance. Together, these features make the tag system a bit unwieldy; who can remember if they capitalized the second word in a string when they are tagging their 300th book?

Next, I was trying to liven up my course page for Literature and Mathematics by adding the title page from Flatland, but for some reason it won’t display. Apparently this feature still has some bugs to iron out.

Finally, today I found two less than glowing reviews of the service in First Monday. In “Inheritance and loss?: A brief survey of Google Books,” Paul Duguid argues that Google Books relies on the quality control of the libraries whose books it is scanning, but this quality isn’t necessarily inherited by the system. Second, in this podcast (transcript), Siva Vaidhyanathan discusses the ways that the Google Books project threatens the Fair Use doctrine of copyright. Overall, it seems like the folks at Google still have a lot of issues to work out with this particular offering.

Saturday, August 26, 2006

Fists and irreducible complexity

Update: This post is a partial review of Malcolm Gladwell’s Blink: The Power of Thinking Without Thinking.

A key question in rhetoric and communication studies is how people are persuaded to act. Sometimes the act in question is overt in that it is the completion of some action; other times, the action could be implicit, in that it is the acceptance of some idea or line of reasoning as being true (or false). This latter group of actions is variously referred to as decisions or making up one’s mind. (I don’t consider these categories to be all that rigid. Consider them convenient shorthand for some temporary ideas—an argumentative place to hang your hat.) In Blink: The Power of Thinking Without Thinking (2005), Malcolm Gladwell argues that many of our decisions are made without our conscious input, that they are the result of unconscious processes that occur independently of our considered, conscious thought.

Many of Gladwell’s proofs for this idea come from the ideas of complexity theory and have interesting applications to rhetoric. In the first case, Gladwell references Paul Ekman and W. V. Friesen’s Facial Action Coding System (FACS) (here is a link to a brief explanation of FACS, and a FACS chart is pictured below). Ekman and Friesen documented over 10,000 configurations of the facial muscles, of which they found about 3,000 that were meaningful. This result comes from the layering of actions in the facial muscles, actions which grow exponentially as more muscles are worked together (or in sequence). The result seems very much like a strange-attractor type problem, where out of countless meaningless results—what Gladwell calls ‘the kind of nonsense faces that children make’ (201)—a few stable, meaningful results arise.

Facial action coding system FACS
Gladwell also discusses ‘thin-slicing’, his name for the ability to find emergent (my word, not his) properties of a system in very small samples of that system—say, ten seconds of a couple’s conversation is sufficient to make a highly probable determination of that couple’s future, or a similarly small sample of Morse code is enough for a trained listener to identify the operator transmitting it. According to Gladwell, this code pattern from the second example, called a fist, ‘reveals itself in even the smallest sample of Morse code’ and ‘doesn’t change or disappear for stretches or show up only in certain words or phrases’ (29). This fact would seem to indicate that the fist is not irreducibly complex, that is, that the message is not the shortest possible way of describing the fist, for the fist shows up even in very small samples of the message.

In complexity theory, the irreducibly complex is equivalent to the random. Take the example of a random string of numbers. This string is the prototype of an irreducibly complex message because it cannot be expressed in a reduced form. The shortest method of reproducing a string of random numbers is the string itself. Language theorists like Jacques Derrida seem to argue that all symbol messages are irreducibly complex in this way, that they cannot be expressed in any shorter form than what they are, for to shorten or summarize them would be to make a different message by leaving out key information.
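As a rough, back-of-the-envelope illustration of this point (my own, not Gladwell’s or Derrida’s), a general-purpose compressor can stand in for “the shortest way of describing” a message: a highly patterned string collapses to a fraction of its length, while a random string of the same length barely shrinks at all.

```python
# A loose illustration of irreducible complexity: compressed length serves as
# a stand-in for "the shortest possible description" of a message.
import os
import zlib

patterned = b"SOS" * 1000          # a highly regular, reducible message (3,000 bytes)
random_like = os.urandom(3000)     # statistically random bytes of the same length

print(len(zlib.compress(patterned)))    # a few dozen bytes: the pattern reduces
print(len(zlib.compress(random_like)))  # roughly 3,000 bytes or more: the randomness does not
```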

The fist example seems to indicate, however, that some significant portion of symbol messages, those parts that are roughly equivalent to style, can be reduced and still maintain their identity. I’m not quite sure what the implication of this result is, but I find it interesting, especially in the context of analog and digital communication. Though Morse code is essentially a digital medium, the fist only appears as an analog aftereffect of the digital message. Similarly, the digitization of library card catalogs—whose preservation Nicholson Baker has advocated—is an example of a digital message discarding analog aftereffects that are deemed unimportant.

Now, it is obvious that the digital portion of a message is also not irreducibly complex. New methods of compression might make it possible to transmit the same message in a shorter form. The counterpart in communication theory, I suppose (and someone feel free to correct me if I’m getting all of these theories in a muddle), is that the iterable nature of symbol systems allows for shorthand communication of messages.

If messages are made up of both digital and analog components, neither of which are, by themselves, irreducibly complex, what then, in the Derridian sense, is the irreducible part of the message? I wonder if it is the interplay between these two elements, the connection between the analog and the digital, that is random and irreducible.

One of Gladwell’s arguments in Blink is that it is in our interest to discover where our unconscious decisions arise from as a means of determining whether or not they are to be trusted. On that note, a final thought: the analog portion of the message is much more difficult to counterfeit than the digital, though such imitation is not impossible. When all information is digitized, it is able to be copied—falsified—endlessly, much more easily than analog messages. This is because digital messages lack the global, emergent features of the reducibly complex, like the Morse code operator’s fist. As digital information is still carried via analog devices (analog telephone lines, for instance) it is possible that this portion of the signal can be analyzed for identifying features. One solution to our current concerns with digital security might be finding a way to reconnect the analog and the digital, making individual messages more difficult (perhaps impossible?) to counterfeit.

Thursday, August 24, 2006

Confections (s)ugare

Update: This post is a partial review of Edward Tufte’s Visual Explanations: Images and Quantities, Evidence and Narrative.

Edward R. Tufte describes a confection as a group of visual elements assembled to describe or enhance a written argument in his 1997 book Visual Explanations. A confection, he argues, is distinct from both collages, which are intended to convey messages not associated with written arguments, and diagrams, which convey messages but whose elements lack the disparate nature of a confection. A straightforward photograph is, according to Tufte, not a confection; but two photographic images superimposed upon one another to create a fantastical mélange that cannot be photographed would be a confection. In this sense, a confection is an arrangement of disparate elements so as to make an argument. (One example is the title page of Hobbes’s Leviathan, pictured below.) It is this arrangement, the confection’s fantastical placement—either in space or in time—of otherwise unrelated visual elements, that makes confections theoretically interesting.

title page of hobbes's leviathan
Let me explain why. Accepting the above definition, I ask: what does it mean to say that a confection makes—either implicitly or explicitly—an argument? That is, is the argument made by the confection independent of its fantastical arrangement, or is the argument dependent on this arrangement?

I’m thinking two things: 1) since the characteristic of confections that makes them so—the arrangement of disparate elements to make an argument—is true of all graphical arguments, be they diagrams, graphs, straight photographs, or drawings, does it not also follow that all graphical output is confectionary to some degree? And doesn’t this realization of the fantastical in all graphics lead to certain conclusions about their trustworthiness and veracity? More on this after: 2) Isn’t the very construction of a causal argument confectionary in its selection and arrangement of elements that are not adjacent in space and time?

This, I think, is a key point, for what makes a graphic a confection makes it an argument. Realizing this fact—that what argues is confectionary—provides a graphical explanation that undermines claims of objectivity or absolutism on the part of even the most clever conclusions.

Wednesday, August 02, 2006

Decentralized Systems

Update: This post is a partial review of John Holland’s Hidden Order: How Adaptation Builds Complexity and Mitchel Resnick’s Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds.

Both John Holland (Hidden Order, 1995) and Mitchel Resnick (Turtles, Termites, and Traffic Jams, 1994) argue that it is difficult to discern the behavior of a system from the behavior of its parts. Through the use of computer modeling—cellular automata in Holland’s case, the StarLogo programming environment in Resnick’s—both attempt to begin to understand the nature of these complex systems.

Holland defines complex systems as a product of “the interactions” between their relatively simple parts (3). The result of these interactions, which are often relatively simple themselves, is that the “aggregate behavior of a diverse array of agents”, or the “parts” of the system, “is much more than the sum of the individual actions” of those parts (31). That is, the ordered behavior comes as a result of the particular way in which objects interact, rather than from any kind of centralized oversight, which is presumed to be the source of most ordered behavior. Holland gives the example of a city as a complex system that “retain[s]” its “coherence despite continual disruptions and a lack of central planning” (1). Now most cities obviously have central planning architectures in the form of governments, but those centralized authorities often find it their job to combat or enforce city behavior that does not originate directly (at least in appearance) from their decisions. Where do city-level features like traffic jams, ethnically- or economically-segregated neighborhoods, and homelessness come from? Rarely can they be directly attributed to central planning. Rather, the interactions between residents—which are often dictated by central planning organizations—and other structures in the environment help to form and maintain the city and its “personality.” This aggregate behavior results in “an emergent identity” that, though continuously changing, is remarkably stable (3).

Similarly, Resnick explicitly focuses on these “decentralized interactions” and the systems that result from them (13). He provides five “Guiding Heuristics for Decentralized Thinking”: 1) “Positive Feedback Isn’t Always Negative”, that is, some kinds of positive feedback, in the economy for instance, can lead to increases, rather than decreases, in order; 2) “Randomness Can Help Create Order”; 3) “A Flock Isn’t a Big Bird”—systems do not behave like a larger version of their components; 4) “A Traffic Jam Isn’t Just a Collection of Cars”, or decentralized systems are more than the sum of their parts; and finally 5) “The Hills Are Alive”—the environment and context of a decentralized system are key components of its behavior (134).
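Resnick’s fourth heuristic is easy to see in a toy model. The sketch below is my own, written in the spirit of his StarLogo experiments rather than taken from the book: cars on a ring road follow two local rules—move forward if the next cell is open, otherwise wait—plus an occasional random pause. No individual car intends to create a jam, yet jams form and drift backward against the flow of traffic.

```python
# A tiny ring-road model (illustrative sketch, not Resnick's code): jams emerge
# from purely local rules and occasional random pauses.
import random

random.seed(0)
ROAD = 60
road = [1 if random.random() < 0.3 else 0 for _ in range(ROAD)]  # 1 marks a car

def step(road):
    new = [0] * ROAD
    for i, cell in enumerate(road):
        if cell:
            ahead = (i + 1) % ROAD
            if road[ahead] == 0 and random.random() > 0.1:  # move unless blocked or pausing
                new[ahead] = 1
            else:
                new[i] = 1  # stay put
    return new

for t in range(20):
    print("".join("#" if c else "." for c in road))  # watch clusters of # form and drift
    road = step(road)
```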

Resnick sees the decentered view of the world as necessary to changing deeply-entrenched centralized ways of thinking. According to him, this decentered view became apparent in the work of Freud and his description of the unconscious, and other decentered metaphors have been slowly gaining ground in other fields since then.

This view of the world as being primarily the product of decentered behavior has interesting applications for rhetoric, for persuasive situations are as decentered and interaction-dependent as the systems studied by Resnick and Holland. Resnick notes that as decentered thinking—in the form of chaos and complexity theory—has gained traction, “scientists have shifted metaphors, viewing things less as clocklike mechanisms and more as complex ecosystems” (13). Rhetoricians since the sophists, however, have known the complex, dependent nature of communication, where “decentralized interactions” like the complex interactions of rhetorical appeals and “feedback loops” of self- and community-reinforcement (13) are well known.

These connections imply that rhetoric is well-suited for application of decentralized thinking. Certainly there is a tendency even in rhetoric to over-emphasize centralized behaviors to the detriment of decentralized ones—see the work of Peter Ramus. As rhetoric continues to move closer to a sophistic understanding of the power of persuasion, the models of Resnick and Holland have the possibility of shedding light on rhetorical situations, providing a language to explain behaviors that, though recognized, might have been previously unexplainable.

Saturday, July 29, 2006

Top-Down

Update: This post is a partial review of Stuart Kauffman’s At Home in the Universe: The Search for Laws of Self-Organization and Complexity

In At Home in the Universe Kauffman cites the Cambrian explosion—the appearance of an abundance of new life forms during the Cambrian period—as an example of a phenomenon that is difficult to explain using only the theory of natural selection. Natural selection suggests that gradual changes over time slowly accrue, allowing for the development of fitter organisms. This idea, however, is difficult to map onto the relatively rapid appearance of many different body plans during that period. A more selection-friendly period, Kauffman notes, is the rebound from the Permian extinction, when “96 percent of all species disappeared” (13). After the Permian the divergence in body plans, or phyla, basically ended. While there were “many new families, a few new orders, [and] one new class” that appeared at that time, there were no new phyla.

Kauffman refers to the Cambrian explosion as a top-down event—the rapid appearance of many wildly divergent kinds of organisms—while he calls the rebound from the Permian extinction—where there were many changes in the makeup of different organisms, but no new body plans—a bottom-up event (13). This movement from top-down to bottom-up events is typical, according to Kauffman, and is also seen in technological innovations, where an initial period of discovery is followed by an explosion of variations that later settle down into a few distinct, usually optimum, plans. The “branchings of life” that this particular view exhibits follows what Kauffman feels to be a lawful pattern—”dramatic at first, then dwindling to twiddling with details later”, what he calls a “complexity catastrophe” (14, 194). This catastrophe explains how “the more complex an organism, the more difficult it is to make and accumulate useful drastic changes through natural selection”, for “As the number of genes increases, long-jump adaptations becomes less and less fruitful” (194-95).

This process of organization arises from the tendency of “complex chemical systems” to become autocatalytic, that is, to exhibit a “self-maintaining and self-reproducing metabolism,” and it makes possible the process of ontogeny, in which cells differentiate at division to serve different purposes (47, 50).

[image: a Boolean network]

In the first case, Kauffman demonstrates with Boolean networks how autocatalysis occurs naturally in chemical systems. The image on the right shows “a Boolean network with two inputs per node” where “colors represent the state of a node” as being on or off. Using a sparsely connected network like this to model simple chemical reactions, Kauffman shows that “when the number of different kinds of molecules in a chemical soup passes a certain threshold, a self-sustaining network of reactions,” or autocatalysis, “will suddenly appear” (47). Kauffman argues that this behavior on the part of chemical systems is completely expected, and therefore not mysterious, as many attempts to explain it imply. His Boolean networks show that “when a large enough number of reactions are catalyzed in a chemical reaction system, a vast web of catalyzed reactions will suddenly crystallize,” a property that is, again, entirely to be expected (50).
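
To make this concrete, here is a minimal sketch of a sparse random Boolean network of the kind Kauffman describes. It is my own illustration rather than code from the book, and the node count, seeds, and function names are illustrative choices: each node takes two inputs, its next state is a randomly chosen Boolean function of those inputs, and iterating the whole network from a random starting state quickly lands on a short repeating cycle, an attractor.

```python
import random

def random_boolean_network(n, k=2, seed=0):
    """Build a random Boolean network: each node gets k randomly chosen inputs
    and a random Boolean function, stored as a lookup table over 2**k patterns."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from the current states of its inputs."""
    new_state = []
    for ins, table in zip(inputs, tables):
        index = sum(state[i] << pos for pos, i in enumerate(ins))
        new_state.append(table[index])
    return tuple(new_state)

def attractor_length(n=20, seed=0, max_steps=10000):
    """Iterate from a random initial state until a state repeats; the gap
    between the two visits is the length of the attractor cycle."""
    rng = random.Random(seed)
    inputs, tables = random_boolean_network(n, seed=seed)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None

if __name__ == "__main__":
    # Sparsely connected (k = 2) networks tend to settle onto short cycles,
    # a small-scale version of the sudden order Kauffman describes.
    print([attractor_length(n=20, seed=s) for s in range(5)])
```

With twenty nodes there are over a million possible states, yet the cycles the search finds are typically only a handful of states long.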

These complex behaviors also explain ontogeny. The tendency of complex chemical systems to organize themselves autocatalytically leads them to settle into a few attractors (see Order for Free for a discussion of attractors in biological systems), a result that allows cells to differentiate, but only in limited ways. As cells branch out to become particular kinds of cells in the organism, their tendency to stay in the basin of an attractor keeps them from becoming disordered yet allows them to continue to propagate themselves. Order, then, from both the top-down and the bottom-up, is “vast and generative” and “arises naturally” out of common chemical interactions (25).

Order for free


Update: This post is a partial review of Stuart Kauffman’s At Home in the Universe: The Search for Laws of Self-Organization and Complexity

In At Home in the Universe: The Search for Laws of Self-Organization and Complexity (1995), Stuart Kauffman argues that traditional notions of how order arises are at best incomplete. Traditionally, it is assumed that Darwinian forces—“Random variation, selection-sifting”—were responsible for all the order we see in the universe (8). However, Kauffman demonstrates that random variation alone isn’t enough to explain the origin of order. By themselves, variation and selection are susceptible to two problems that counteract their organizing properties: if selection proceeds toward an evolutionary dead end, the population can become “trapped” there, and even when it avoids this fate, it is prone to “error catastrophes” (184). Kauffman derives this conclusion from statistical models called fitness landscapes, which map the relative fitness of all the genotypes a population might evolve toward. When the landscape is too rugged, the population becomes “trapped or frozen into local regions,” preventing further development. Nor is the problem solved by landscapes with fewer peaks and troughs, for “on smooth landscapes” selection “suffers the error catastrophe and melts off peaks,” a process that leaves the genotype “less fit” (184-85). When an error catastrophe occurs, “the useful genetic information built up in the population is lost as the population diffuses away from the peak” (184); that is, whatever fitness the population had achieved is lost, because selection alone cannot survey a fitness landscape and find the best possible niches for the organism. This leads to Kauffman’s realization that “there appears to be a limit on the complexity of a genome that can be assembled by mutation and natural selection” and, in turn, that natural selection is not the “singular source” of order in the universe; there must be another source as well, one that channels selection toward useful regions of fitness (185, 71).
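
Kauffman’s point about trapping is easy to reproduce in a toy simulation. The sketch below is my own rough version of an NK-style rugged landscape, not Kauffman’s code, and its parameters, seeds, and function names are illustrative assumptions: each of n loci contributes a fitness value that depends on its own allele and on k other loci, and a greedy adaptive walk accepts single-locus mutations only when they raise fitness.

```python
import random

def nk_landscape(n, k, seed=0):
    """NK-style landscape: each locus's fitness contribution depends on its own
    allele and the alleles of k other loci, via a lazily built random table."""
    rng = random.Random(seed)
    neighbors = [[i] + rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{} for _ in range(n)]

    def fitness(genotype):
        total = 0.0
        for i in range(n):
            key = tuple(genotype[j] for j in neighbors[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n

    return fitness

def adaptive_walk(n=20, k=8, seed=0):
    """Greedy one-mutant walk: flip a single locus only if fitness improves;
    stop when no flip helps, i.e. the walk is trapped on a local peak."""
    rng = random.Random(seed + 1)
    fitness = nk_landscape(n, k, seed)
    genotype = [rng.randint(0, 1) for _ in range(n)]
    while True:
        current = fitness(genotype)
        improving = []
        for i in range(n):
            genotype[i] ^= 1
            if fitness(genotype) > current:
                improving.append(i)
            genotype[i] ^= 1
        if not improving:
            return current
        genotype[rng.choice(improving)] ^= 1

if __name__ == "__main__":
    # As k grows the landscape becomes more rugged, and the local peaks that
    # greedy walks get stuck on tend to be lower on average.
    for k in (0, 2, 8):
        peaks = [adaptive_walk(n=20, k=k, seed=s) for s in range(10)]
        print(k, round(sum(peaks) / len(peaks), 3))
```

On a smooth (k = 0) landscape the walk climbs to the single global peak, while on rugged landscapes it halts on whichever local peak it happens to reach first, a small-scale picture of selection becoming “trapped or frozen into local regions.”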

Kauffman calls this other source of order “self-organization,” and he argues that because of it, “vast veins of spontaneous order lie at hand” (8). Self-organization is a product of “extremely complex webs of interacting elements [that] are sparsely coupled” (84). When enough of these interacting elements are brought together and their ability to communicate is limited—Kauffman has shown that one optimum is two connections per element—they can organize themselves into regular patterns. These patterns are analogous to the attractors of complex mathematical systems, and it is systems exhibiting the behavior of such attractors, including strange attractors, that display the complex qualities Kauffman describes.

Additionally, Kauffman notes that these attractors often occur on the border between stable and chaotic behavior, “poised between order and chaos” (26). It is during the phase transition between the stable and the unstable that systems display organized behavior, what Kauffman calls “order for free” (106). This order is what makes natural selection possible, for it restricts complex systems—which often have more possible states than could be cycled through in the lifetime of the universe—to a few attractors, making ordered behavior not improbable but expected.

Wednesday, July 12, 2006

Metaphor and reality

Update: This post is a partial review of Kenneth Boulding’s Ecodynamics and James Gleick’s Chaos: Making a New Science

Lately I’ve been thinking a lot about the metaphors that Kenneth Boulding uses to describe the natural world in his Ecodynamics (1978). One such metaphor is evident in his statement that knowledge, or know-how, is embedded in the structure of natural objects. As Boulding puts it, “in a certain sense, helium ‘knows how’ to have two electrons and hydrogen knows only how to have one” (14). This is a case where ‘structure’ has the ‘ability to “instruct”’ (13). One benefit of this way of looking at knowledge is that it limits what is determined about a subject to what can be known: a fact about an atom of helium is that it is an atom of helium, and that fact can be stated in terms of know-how. (Boulding uses this method to show how unhelpful the idea of the survival of the fittest is, for it really is just a statement about the survival of the survivors.) The metaphor is particularly powerful because it allows our understanding of communication to explain natural phenomena like the replication of DNA. Know-how is communicated from an existing structure through other materials that lend themselves to communicating that structure as well. If we accept this metaphor, statements about language, such as the realization that “communication . . . becomes a process of complex mutuality and feedback among numbers of individuals that leads to the development of organizations, institutions, and other social structures which affect” the outside world (16), come to apply to natural as well as social processes. The spread of know-how through communication—the “multiplication of information structures” (101)—leads to complex behavior and organization, in persons as well as in nature.

[image: the Lorenz attractor]

This phenomenon, communication through the propagation of order and know-how, can be seen in other natural structures. In Chaos: Making a New Science (1987), James Gleick identifies several such phenomena, like entrainment or mode locking, an example of which is several pendulums, connected by a medium like a wooden stand that can communicate relevant information like rhythms, all swinging at the same rate (293). Similarly, in the phenomenon of turbulence, “each particle does not move independently”; in their interdependent interaction, the motion of each “depends very much on the motion of its neighbors” (124). I don’t think it is much of a stretch to say that know-how is propagated through the constraints of strange attractors (the image to the left is of the Lorenz attractor) and similar phenomena. Chaotic phenomena behave in particular ways because that is what they know how to do.
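
Entrainment itself is easy to see in a toy model. The sketch below is my own illustration, not an example from Gleick’s book: two phase oscillators with slightly different natural frequencies are weakly coupled in the style of the Kuramoto model, and the coupling, the medium through which each communicates its rhythm to the other, pulls their phase difference to a constant value; without coupling, the difference simply grows. The frequencies, coupling strength, and function name are illustrative choices.

```python
import math

def phase_difference_history(freqs=(1.00, 1.15), coupling=0.5, dt=0.01, steps=5000):
    """Two phase oscillators with different natural frequencies, weakly coupled
    in the style of the Kuramoto model. Returns the phase difference over time."""
    theta = [0.0, 1.5]                      # arbitrary starting phases
    history = []
    for _ in range(steps):
        d0 = freqs[0] + coupling * math.sin(theta[1] - theta[0])
        d1 = freqs[1] + coupling * math.sin(theta[0] - theta[1])
        theta[0] += d0 * dt
        theta[1] += d1 * dt
        history.append(theta[1] - theta[0])
    return history

if __name__ == "__main__":
    # With coupling, the phase difference settles to a constant (mode locking);
    # without coupling, the faster oscillator simply pulls away.
    for k in (0.5, 0.0):
        h = phase_difference_history(coupling=k)
        print(f"coupling={k}: final difference {h[-1]:.2f}, "
              f"drift over last half of the run {h[-1] - h[len(h) // 2]:.2f}")
```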

If we can accept this metaphor for behavior in nature, that it is a kind of communication in which what is communicated is knowledge, then it seems reasonable to use the language of rhetoric to describe natural behavior. The sensitive interdependence of the parts of a system suggests a rhetorical way of looking at nature. It is recognized 1) by Boulding in animal development, where “the history of a cell in an embryo depends on its position relative to others rather than its past history, because its position determines the messages”—or information—“that it gets” (107), and 2) by the physicist Doyne Farmer, who, in describing mathematical equations, notes that “the evolution of [a variable] must be influenced by whatever other variables it’s interacting with,” for “their values must somehow be contained in the history of that” variable (266). As Boulding acknowledges, everything depends on everything else, a point that rhetoricians have been making about the persuasive situation since the discipline was formed. This connection opens up the exciting possibility of rhetorical analysis of natural systems, in which the tools for tracking persuasion in language could be used to follow the movement of know-how through nature.

Thursday, July 06, 2006

Complex Systems, hypnotism, magic

Update: This post is a review of Kenneth Boulding’s Ecodynamics: A New Theory of Societal Evolution

Bateson’s idea that communication is a replication of structure from one person to the next is also found elsewhere. In Ecodynamics (1978), Kenneth Boulding argues that the power of human communication comes from the ability of our brains—an ability for which he uses the metaphor ‘know-how’—to replicate structure across other brains (128). Boulding finds this tendency in structures like DNA, whose structure attracts ‘a similar structure from its material environment’ so that those new entities are able to ‘form themselves, as it were, into a mirror image of the original molecule’ (101). Similarly, communication works by structures in one person’s ‘head’ ‘replicating’ themselves in the head of another. Or, to avoid the complications involved in going inside heads, it is the propagation of know-how through the various structures for which it is coded.

This explanation of communication provides a partial understanding of hypnosis. In effect, hypnosis works by physiologically suppressing the faculties that would otherwise block this propagation of the code. Bateson describes this process in a circus animal, which he believes ‘abrogate[s] the use of certain higher levels’ of thinking, and he argues that the same mechanism underlies hypnotism (369). If the higher levels of intelligence are circumvented, whether through the conscious will or through suggestive or physiological means, then there is no interference to prevent the code from replicating.

This realization leads to a biological explanation for effective communication. First, the code must be receivable without interference. Second, it must have what Boulding calls the sufficient ‘material’ means to propagate (101): in the replication of DNA, this means the correct molecules and nutrients; in human communication, it means effective channels by which the code can spread. Third, the code must be correctly encoded for the material through which it is to move. Both Bateson and Boulding suggest that magic represents an ineffective communication of this kind. If a person can be persuaded to do something by words or actions, the practitioner of magic reasons, so can nature. However, nature is not equipped to receive such communicative codes and is therefore not influenced by most of them, which typically take the form of words or ritual behavior. Changing the code so that it can be received by natural bodies, however, such as fertilizing a plant or seeding a cloud, does result in effective code propagation from humans to nature.

The similarities between code propagation across persons and in biological structures like DNA prompt both Bateson and Boulding to draw a connection between minds and nature. Bateson points out that the basic unit of evolution—the interconnected system—is also the unit of the mind, which is not necessarily limited by the skull. Similarly, Boulding suggests that, like environments, individual minds are connected through ‘writing, sculpture, painting, photography and recordings’ into a ‘single mind,’ in that each individual mind ‘participates in the experience of other minds through the intermediary of communication’ (128). This leads to the interesting conception of the individual not as an isolated unit but as a part of a larger whole. Since most theories of communication are based on the first model, appropriating the second should have interesting effects on communication theory.

Tuesday, June 27, 2006

Pattern and truthfulness

Update: This post is a partial review of Gregory Bateson’s Steps to an Ecology of Mind

[cover image: Bateson, Steps to an Ecology of Mind]

On the heels of yesterday’s post, I should note that the popular model of communication—centered on content—that I opposed to Bateson’s model in Steps to an Ecology of Mind is not one that all rhetoricians would adhere to. A valid response to what I wrote would be that rhetoric has always paid a great deal of attention to the form and structure of speech, and that that form is universally regarded among rhetoricians as very important to the reception of the speech. However, I feel that speech is still widely considered to be primarily about some content, and that the form itself is considered successful insofar as it serves that content. As I read Bateson, he is arguing for a different model: though communication is about something, it is also about itself and about the relationships between the people who are communicating.

In this light, this post comments on Bateson’s explanation of these relationships and how they are communicated, as well as on the ways in which certain kinds of communication are perceived as more truthful than others.

In the first case, Bateson notes that symbol systems must be redundant—what Derrida referred to as iterable—in order for those systems to be understood by others. However, Bateson recognizes that this redundancy is often the primary reason for the communication. As he puts it, ‘The essence and raison d’être of communication is the creation of redundancy, meaning, pattern, predictability, information, and/or the reduction of the random by “restraint”’ (131-32). Although he notes that communication has ‘meaning’ and carries ‘information,’ it is far more often a means of conveying or emphasizing ‘pattern’ and ‘restraint.’ Whatever falls outside this pattern is noise: ‘All that is not information, not redundancy, not form and not restraints—is noise,’ which is also ‘the only possible source of new patterns’ (416). Again, meaning is only possible in relation to the overall structure established by previous communications. If new communication falls outside that structure, it is either ignored as ‘noise’ or adopted within the structure as something new. Here Bateson provides a developmental model for the evolution of language, not as the creation of new information but as the creation of new structures with which to pattern that information.
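
Bateson’s point that communication works by creating redundancy and ‘restraint’ can be given a crude quantitative illustration. The sketch below is my own Shannon-style toy, not anything from Bateson: it compares the first-order character entropy of a patterned, repetitive string with that of characters drawn at random, and the patterned string turns out to be measurably more predictable.

```python
import math
import random
import string
from collections import Counter

def char_entropy(text):
    """Shannon entropy, in bits per character, of the character distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    patterned = "the essence of communication is the creation of redundancy " * 20
    rng = random.Random(0)
    alphabet = string.ascii_lowercase + " "
    unpatterned = "".join(rng.choice(alphabet) for _ in range(len(patterned)))
    # The patterned (redundant) string has a skewed letter distribution and hence
    # lower entropy per character than the uniformly random string.
    print("patterned:  ", round(char_entropy(patterned), 2), "bits/char")
    print("unpatterned:", round(char_entropy(unpatterned), 2), "bits/char")
```

This only measures the skew of the letter distribution; the repetition of whole phrases is even more redundant than the number suggests, which is in the spirit of Bateson’s claim that pattern and restraint do much of the communicative work.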

Secondly, in keeping with this focus on how messages are communicated, Bateson pays a good deal of attention to the format of speech as accompanied by body language and other ‘kinesic and paralinguistic signals’ (370). These signals become important when the communicator realizes the rhetorical nature of signs, that they ‘are only signals, which can be trusted, distrusted, falsified, denied, amplified, corrected, and so forth’ (178). Because the individual recognizes that the sign is falsifiable, the importance of the paralinguistic skyrockets. The reason Bateson gives for this phenomenon is that these paralinguistic elements are often involuntary and outside the control of the untrustworthy other, and are therefore taken to be more truthful. However, in ancient rhetoric (I’m thinking specifically of Longinus) it is generally accepted that once any particular behavior, linguistic or not, is accepted as conveying a kind of truthfulness, that truthfulness can be falsified. Bateson recognizes this fact (203), but his discussion of the matter raises an important point for communication theory. Since the content of messages is often what is in doubt in communication, the only way to verify that content is through reference to structures that are perceived to be truthful, whether bodily signals or modes of speech that are assumed to carry such truthfulness. An interesting example is the technologizing of communication: when communication is divorced from the often untrustworthy individual, it is frequently perceived to be more truthful. Bateson notes this phenomenon with newspapers, and it is similarly seen in certain kinds of government or scientific documents, which are designed to efface their authors and project the authority of some abstract entity.

Monday, June 26, 2006

Communication as relationship

Update: This post is a partial review of Gregory Bateson’s Steps to an Ecology of Mind

[cover image: Bateson, Steps to an Ecology of Mind]

In his references to communication, Bateson provides an intriguing alternative to the prevailing model of the persuasive process. That process is largely considered to consist of an author/speaker presenting some superior content, and that content, through its effective presentation, persuading an audience of the author’s thesis. (This is certainly not the only model for communication, but I would think it is the one that most people—especially non-academics—would adhere to.) Rhetoricians would focus on the importance of the presentation, that is, all of the choices the author/speaker makes in presenting the content, but primacy would be given to the content. This formulation privileging content would not only be considered the best one, but possibly the most ethical one as well.

Bateson presents an alternative. Although he does not ignore the importance of content, his model of communication focuses on the presentation of that communication (what he refers to variously as its “structure” or “patterns”) rather than on what is being presented, and these structures carry information about relationships. Referring to his own speech at an academic conference, Bateson remarks that his real topic “is a discussion of the patterns of [his] relationship” to the audience, even though, as he tells them, that discussion takes the form of his trying to “convince you, try to get you to see things my way, try to earn your respect, try to indicate my respect for you, challenge you” (372). In other words, the acceptance of the content depends upon how well he as a speaker is able to affirm that relationship. In another example, he points out that if one were to make a comment about the rain to another person, the hearer would be inclined to look out the window to verify the statement. Though he notes that such behavior can seem confusing because it is redundant, the redundancy is not a question of the first speaker’s truthfulness but rather an example of an attempt “to test or verify the correctness of our view of our relationship to others” (132).

This patterning or structure, the need to comment on the relationship between speaker and audience by 1) conforming to arbitrary rules (another kind of structure) like those set up in an academic conference or 2) reminding ourselves that a person is a trustworthy source of information, is, to Bateson, representative of “the necessarily hierarchic structure of all communicational systems,” which manifests itself in communication about communication (132). For Bateson, this realization has at least two outcomes. First, with regard to learning, it means that what is learned by the subject is often not the topic under study per se, but how “the subject is learning to orient himself to certain types of contexts, or is acquiring ‘insight’ into the contexts of problem solving,” that is, “acquir[ing] a habit of looking for contexts and sequences of one type rather than another, a habit of ‘punctuating’ the stream of events to give repetitions of a certain type a meaningful sequence” (166). Second, it means that these structures of “contexts and sequences” replicate themselves in learners, who then become audiences, and those audiences respond to the recognition (perhaps the wrong word) of that structure. As Bateson says, “in all communication, there must be a relevance between the contextual structure of the message and some structuring of the recipient,” for “people will respond most energetically when the context is structured to appeal to their habitual patterns of reaction” (154, 104). In these situations, what appeals to or persuades the audience is not the content but the recognition of a familiar structure in the communication in question.

I think this focus on the structure of communication and the way in which that structure replicates itself across individuals highlights the technological dimension of speech—the way in which “meaning” is dependent upon the form of language—and has some interesting applications for the question of ethics and truthfulness in rhetoric, a topic I will comment on in my next post.