Manifesto for Teaching Online – Aphorism No. 9 – “Visual and hypertextual representations allow argument to emerge, rather than be stated” – Part 4.

It’s not crazy to worry that, with millions of people connected through a medium that sometimes brings out their worst tendencies, massive, fascist-style mobs could rise up suddenly. I worry about the next generation of young people around the world growing up with internet-based technology that emphasizes crowd aggregation, as is the current fad. Will they be more likely to succumb to pack dynamics when they come of age? (Lanier, J. (2011) You Are Not a Gadget: A Manifesto, Penguin, London, pp. 63–64.)

“Each autopoietic social system proceeds via its specific operating distinction, whereby distinctions are hooked upon distinctions. The resulting network of distinctions evolves and reproduces the conceptual structure of the respective autopoietic system. This conceptual structure is organized around primary distinctions which Luhmann calls guiding distinctions or lead-distinctions, which are always presupposed and recursively brought to bear within the ongoing process of communication. For example, the lead-distinction of the legal system is the distinction between norms and facts. The lead-distinction of science is the distinction between theory and evidence. The lead-distinction of the economy is the difference between price and value… a fundamental distinction of Luhmann’s social systems theory [is] the distinction of self-reference (internal reference) and world-reference [external reference] … Each system’s unity rests upon a primary binary opposition which Luhmann calls the code of the system… everything within the domain of application must take on one or the other value: what is not true must therefore be false, what is not legal must therefore be illegal, and what is not making a profit is therefore making a loss, etc.” Patrik Schumacher (2011) The Autopoiesis of Architecture: A New Framework for Architecture, pp. 205–217.

“Empowered by the personal computer, liberated by virtual reality, the individual becomes the God of his or her own universe. The sight of someone wearing a virtual reality headset is the ultimate image of solipsistic self absorption, their movements and gestures meaningless to those outside (Woolley 1992:9).”

When I look at the glossy coffee-table web pages of today, I am often drawn to reflect on my own nascent and messy crash landing, which placed me somewhere in the worlds of virtual reality, hypertext and the WWW. How basic and clumsy the pre-web browser interfaces seemed. How old they look now, yet they are still intimidating. They are still capable of making you feel excluded, that you are not one of the hacker elite.

It’s easy to be critical in hindsight, of course. The visible and tangible effects of any human-made device, product or intervention are merely the tips of icebergs. Underlying all forms of construction – statements, devices, software, applications, services, pictures, icons, hypertext links, arguments, messages, exam questions, laws and rules – lie entire worlds of premises, precedents, work, innovations, meanings, people, rationales, decisions and actions dictating why this, in particular, is what comes to light, what emerges, to be experienced by the citizen, student, consumer and user.

“Critical theory argues that technology is not a thing in the ordinary sense of the term, but an “ambivalent” process of development suspended between different possibilities. This “ambivalence” of technology is distinguished from neutrality by the role it attributes to social values in the design, and not merely the use, of technical systems. On this view, technology is not a destiny but a scene of struggle. It is a social battlefield, or perhaps a better metaphor would be a parliament of things on which civilizational alternatives are debated and decided.” (Feenberg, 1991, p.14)

The multiplicity starts to make itself familiar when an object like a phone is seen as a complex prospect and undertaking. If washed up on a desert island, it is not something you could bootstrap together from a few local resources – coconuts, twine and so on. Even if there were a supply of iridium and gold and all the other chemicals that make up a mobile phone, you would still need to build the machines that fabricate them, and you would still need the knowledge to operate those machines. Then you would need the broadcasting equipment. And if that were not enough, who would you call on your island, whose only other inhabitants are a few cannibals who have no need and no wish to pay to speak to you? A network without subscribers is useless. Without ‘the social’, telecommunications and all media are redundant.

The multiplicity presents a different view from the linear development of, say, technologies: one that includes dynamic, non-linear histories in which emphasis may shift, at any given time in a product’s life cycle, from one particular group in its biography, or one set of precedent technologies, to another. As Igor Kopytoff suggests:

“what is significant about the adoption of alien objects – as of alien ideas – is not the fact that they are adopted, but the way they are culturally redefined and put to use.” (Kopytoff, 1986, p.67)

Different and sometimes competing groups and influences shape a product from idea or impetus through design, experimentation, production, publicity, dissemination, advertising, distribution, appropriation, consumption, use and finally disposal. The scope and scale of this influence will vary depending on the nature of the product, whether it comes to be a success, and whether it ever sees the light of day at all.

More recent social studies of science and technology highlight that no identifiable human intervention comes about or exists in a vacuum; there are always bilateral influences, as well as ramifications and implications, both when devices are designed and built and again as they roll out and are implemented. They emerge through the agency of many people apart from investors, entrepreneurs or inventors, drawing on a diverse range of social and technical precedents; these include science and technology feature writers and journalists, and many other ‘users’ who can determine just how much struggle there will be in design. Bruno Latour’s Aramis, or the Love of Technology (1996) highlights how the dynamic interrelatedness of physical, material and social conditions influenced the development of a public transport system in France, a project that was sustained for twenty years before finally being abandoned. The case, presented by Latour as a kind of murder mystery – “who killed Aramis?” – highlights how a number of technical, socio-political and economic factors shifted, leading to many redesigns and the loss of direction and political will for the project. In addition, in order for a project to succeed, an engineer has to stimulate interest and convince the public. They must not just make but justify and market innovation and technology.

All of which leads to the question: is technological reality rational? Consumers, like technology, are invented, displaced, and translated through chains of interest. Latour argues that the technology failed not because any particular actor killed it, but because the actors failed to sustain it through negotiation and adaptation to a changing social situation. In an earlier work, Laboratory Life (1979), Latour and Woolgar take the view that a laboratory is like a manufactory. The raw materials used are chemicals, glassware, small animals, Petri dishes, software and so forth, and the output or product is journal articles and scientific ‘facts’. What happens is a process of conversion, where the ‘raw’ materials are converted into knowledge and information. Michel Callon, another thinker from the actor-network school, draws attention to the fact that during episodes of controversy in science and technology, the identities and roles of the relevant actors are not fixed and stable, but uncertain and problematic.

Between the first idea or prototype of, say, the television, and the reality of you collapsing into your seat at night and switching it on, lies a veritable legion of actors and agencies making it possible, including making it desirable in the first place. Think of the vast socio-technical and creative system, such as that captured in the credits of a single movie, that has made it all possible. With television we can distil two main groups: broadcasters and, a close second, advertisers and other investors such as television licence holders, although many others, such as media buyers and graphic designers, are needed to complete the circuit that creates the conditions for each to thrive. And this can be picked apart further: who is financing this system? How does the money circulate? What is the experience? How is it provided? Who provides access, to what, where and when?

Who uses Facebook? Who uses Google? It depends on what part of the data you interrogate, and how. Is it those who sit there inputting their personal data so as to broadcast to friends and colleagues what they like, which, by extension in a consumer society, also means who they are? Are we defined by our searches, our likes or our actions? There are apps which cover each of these areas. Or is it those that pay for the company to exist, such as the advertisers and those that use it as a platform for their games?

Essentially your ‘virtual self’, which for many has existed since birth in medical records, social security, credit files and so forth, is joined by social media as a means of identifying who you are. Who you are on these sites will be relevant to companies’ interests and to those of government departments. There is more to this than the ‘you’ that you reveal publicly or to many of your online friends: it is how they, and corporations, even governments, will make sense of you. The advertising model of most mass media, online and offline, is built upon the foundational idea that you are a perpetual consumer, or at least a perpetual consumer locked inside the hive, and that you will use the internet partly for socialising, partly for games, and partly to look for things to buy. Otherwise you are a social security benefit fraudster or a tax fraudster. If they can target you more precisely and accurately based upon your likes, then there is an enhanced chance that you will buy, be happy, and buy even more things from that site. That’s as virtual as we have got in twenty years, and somehow it seems a real let-down. It is more Tamagotchi or Sony AIBO robot dog than alternative reality.

With Facebook’s Timeline feature, there is an attempt not only to show where you are with your likes now but also where you were. This presents a dynamic view of that rational/irrational beast – the eager, zealous and persistent consumer: today a panama hat, tomorrow a new mobile, the next day a cool set of sunglasses, a bracelet-making kit as a child’s gift… and so on.

Fifteen years ago it was a Power Rangers toy; today it is the latest hip-hop tracks courtesy of MTV and Cartoon Network. It is conceivable that powerful correlations may be built from such longitudinal data, aggregated and analysed, showing different levels of predictability based on evolving ‘likes’. And just like life, you can add to it but you cannot take it with you when you go. You cannot [yet?] download your consciousness into another body, and you can’t grab all your Facebook data and export it into Google+. There is no negotiation about ‘you’ unless someone spots you consciously making false claims and comments. In some sense this is like peer review in the scientific journals, which ought to sort out anything that is obviously wrong, while the actual publication of anything untrue is swiftly followed by rebuttal supported by experimental evidence. But where things are truly anonymous there is no accountability, no social convention, no need for a reasoned response based on reflection; we are open to whatever mood we may be in at the time the stimulus is received. This could range between mild irritation, disagreement and argument, and sheer bloody outrage, feelings of extreme frustration, anger and murderousness.

I will report on this in this final section and give at least one example of how argument can arise in controversies regarding science and technology, especially when lots of traditional media add spice by stepping in to help, hinder or ignore the science and technology in question. Slade and Lee, in Beyond the Two Cultures (a title that plays on C.P. Snow), remark:

“First language, the material of literature, is man’s primal technology… Second, both science and technology generate their own texts in print, plastic, and electronic forms. Third, literature reflects and shapes the psychological, social, political and economic ramifications of science and technology. The relationship between science, technology and literature is thus endlessly reciprocal, frequently aesthetic, and profoundly cultural.” (Slade and Lee, Beyond the Two Cultures: Essays on Science, Technology, and Literature, 1990, p. ix)

In the promotion of technologies and their potentials there have been a few recurrent rhetorical devices which help place the technology where its producers would like it. Many of these have become standard narratives which are very potent, if not persuasive.

In science there have been claims and counterclaims regarding the veracity of some observed phenomenon. A famous case, and one we were given in a graduate school class in the Sociology of Scientific Knowledge, was Blondlot’s ‘N-rays’, which, ‘discovered’ at a time when a vast proliferation of rays were being discovered, were at first considered real and factual. However, an American investigated the claims, which had been backed by the French Academy of Sciences, and found that the observations were indeed false. It was not a conscious hoax but rather wishful thinking on behalf of the investigator and those who supported him in putting France’s name on the map and helping it compete with the German ‘discovery’ of the X-ray. In technology, however, products, software and services either work or they don’t. They work as claimed or they do not. They cannot be disproved, but they can be shunned, passed over or ignored. Especially where there are choices and options, people will be let down by difficult-to-use devices, services or applications, or by too many restrictions on use, such as a phone with poor reception, low battery life or poor mechanical functioning. How a device, application or service is reported in publicity is expected to relate to the product as bought. To do otherwise not only opens you up to charges of false advertising, which may carry legal penalties, but also runs the risk of spoiling word of mouth, which translated into today’s language means memes and viral messages warning other people about your poor products. Your brand gets damaged. If the false claims are about yourself, then your reputation may get damaged. Of course, you may be in a country where consumer protection is low and where powerful and affluent companies get away with blue murder.

IQ tests, developed by Alfred Binet around the same time as Blondlot’s N-rays and adapted for use in America by H. H. Goddard, were incorporated into the administration of immigrants at Ellis Island who spoke next to no English. Unsurprisingly, they showed that (for example) 87 per cent of Russian applicants for immigration were “feeble-minded”, lending an excuse for them to be deported. (‘The science of getting it wrong’, THES, 4 July 1997)

New products follow narrowly defined discourses in advertising: mothers and children with soap, happiness and glamour with beer, success and confidence in life with a car, and so forth. In technology, they may highlight the technical prowess of the device, or they may highlight something of the use contexts and conditions which connote who will be using the device, and why, what for, where and when. A pervasive argument for the personal and home computer goes like this: ‘A computer, and particularly a personal home computer, is good for your child’s education’. Another is: ‘The computer is the new hearth of the middle-class home’. Both of these ideas, unpacked, are very powerful persuasions in terms of fuelling arguments to buy a personal or home computer in the first place; they suggest the ‘new normal’. Both are pivotal concepts which featured in my Ph.D. research [which I will share with you later]. The initial focus of this work was on the development of domestic virtual reality, but later it shifted to the more fruitful study of the sociology of innovation and knowledge management in interactive television systems development. What is interesting to note is that the digitisation of a range of domestic devices – phone, camera, fridge and so on – as well as increasing interest in ecological and green issues, has moved the general public towards being more au fait with technical specifications in their knowledge of technical products. Consider some of the discussions regarding the ‘battle’ between manufacturers of 3-D televisions.

The computer as a device has a long and well-documented history, going back at least to Pascal in the 17th century with his calculator and Babbage in the 19th century with his Difference Engine. I am not really doing it justice here, nor the full and arcane world of advertising and PR, but that’s for another place. What is relevant is that Babbage’s device was born of an age of gears, cranks, levers and pulleys which dominated the visceral world of industry, and so his machine looks like that world. But he was also addressing a common problem, as he put it in a communication in 1822 to Sir Humphry Davy, then President of the Royal Society in the U.K.:

“The intolerable labour and fatiguing monotony of a continued repetition of similar arithmetical calculations, first excited the desire, and afterwards suggested the idea, of a machine, which, by the aid of gravity or any other moving power, should become a substitute for one of the lowest operations of human intellect.”
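
Babbage’s Difference Engine mechanised the method of finite differences: seed the machine with an initial value of a polynomial and its successive differences, and every further table entry can then be produced by addition alone, with no multiplication. A minimal sketch of the idea, in Python and using an arbitrary example polynomial rather than any table Babbage actually computed, might look like this:

```python
# Method of finite differences: tabulate a polynomial by repeated addition only.
# The polynomial p(x) = 41 + x + x^2 is an arbitrary illustration.

def difference_table(coeffs, start, steps):
    """Tabulate p(x) for x = start, start+1, ... using only additions after seeding."""
    degree = len(coeffs) - 1

    def p(x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    # Seed: the first value, first difference, second difference, ...
    row = [p(start + i) for i in range(degree + 1)]
    cols = [row[0]]
    for _ in range(degree):
        row = [b - a for a, b in zip(row, row[1:])]
        cols.append(row[0])

    # 'Turn the crank': each new entry needs only a handful of additions.
    values = []
    for _ in range(steps):
        values.append(cols[0])
        for i in range(degree):
            cols[i] += cols[i + 1]
    return values

print(difference_table([41, 1, 1], 0, 10))
# [41, 43, 47, 53, 61, 71, 83, 97, 113, 131]
```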

The promise was to free engineers and scientists to focus on theory and advanced mathematics. In some sense this is analogous to the start of the creative age, in that:

‘Our sons and daughters will not hew, forge, mine, plough or weld. They will serve, design, advise, create, compose, analyse, judge and write.’ – Charles Leadbeater Britain’s Creativity Challenge (2004)

This buoyant, optimistic vision joins that which emerged in the 1950s in discussions of automation. There were of course fears that employment would be negatively affected, fears that trace back to the Luddites. It links to other ideas, such as the elusive ‘leisure society’: the idea that we will produce more but work less, all enabled by machines. A 1966 article in TIME magazine, looking ahead to what the rise of automation would mean for average Americans, concluded:

By 2000, the machines will be producing so much that everyone in the U.S. will, in effect, be independently wealthy. With Government benefits, even nonworking families will have, by one estimate, an annual income of $30,000-$40,000.

Adjusted from 1966 dollars, that would be something like $100,000–$120,000 a year now. How to use leisure meaningfully would be a major problem. The premise was simple: with better technology, companies would become more efficient; they’d be able to make more things in less time. The thesis had earlier support from John Maynard Keynes, who in a 1930 essay, “Economic Possibilities for our Grandchildren”, predicted that capitalism could deliver a 15-hour work week for the masses.

The positive spirit had it that automation, in the home through labour-saving devices (the electric dishwasher, the washing machine and many others), and in the factory through computer-controlled machines, would free an elevated workforce to pursue creative and educational interests.

The reality is that ‘creative destruction’, the way in which industry evolves under the effects of innovation, means that new jobs arise from new industries, techniques and processes, and old jobs metamorphose into new kinds of work.

Whereas the wooden and brass marvels of steam churned out cheap consumer products for a growing domestic market, Babbage’s machine would churn out numbers and data. His letter whetted the interest of the Royal Society and he was awarded £1,500 from the public purse. He spent ten years modifying, redesigning and enhancing, and after a further £37,000 of public money was no nearer to completing a working design. His next project was the Analytical Engine, the first ‘general purpose’ computer concept.

The Analytical Engine would have been roughly the size of a train engine, composed of an incredibly complex intermeshing of thousands of clockwork parts – the smallest imbalance in any of which would have caused the machine, at the very best, to shake itself to pieces. Since it was not dedicated to producing calculation tables, the device appeared to lack a use. Although it is widely cited and influential as a key part of the development of the idea of computing, a working machine never materialised. Charles Babbage watched as the Scheutz Difference Engine took a gold medal at the Exhibition of Paris and, a few years later, was commissioned for the Registrar-General’s Department of the same government that had abandoned his original research.

Calculating machines really came into their own in the 1920s, in an age of business where technology, communications and commerce were driving the genesis of the information age. Industrialisation, mechanisation, urbanisation and standardisation were largely settled, or settling, prior to World War One, and were now becoming a way of life in which people were reaping the benefits of their implementation and diffusion. Sherwood Anderson remarked that “Minds began to be standardised as were the clothes men wore.”*

*Quoted in Simon, Z. (2001), as quoted in Clifton, D. (ed.) (1989) Chronicle of the 20th Century, Mount Kisco, New York: Chronicle Publications, p.631.

The war lent an imperative to computation in service of the war machine, and here we have the early histories of the electronic computer: ENIAC, used to calculate artillery firing tables, and machines built to break enemy communications codes. Largely due to cost and size, computers remained in the industrial-military-university-business realm for the next thirty or so years, with some notable exceptions. If not working on scientific calculation, they were conducting clerical work such as calculating payrolls. Going by these uses, it raises the question of precisely what use such devices could be at the personal level or within the domestic space. The notion of ease of use, or usability, and the perception of usefulness is a spectre which has haunted computing from its earliest days until now.

The period following the Second World War saw a seamless political move to concerns regarding communism. In the U.S., the late 1940s to the mid-to-late 1950s was the period of McCarthyism, now synonymous with the term ‘witch-hunt’, both referring to mass hysteria and moral panic regarding ‘infiltration’.

This had many unfortunate impacts on businesses, which were targeted for having hired ‘communist’ engineers, Hollywood screenwriters and so forth. It impacted greatly upon nascent American computer development.

Such sentiment not only laid the foundations for further military action in Korea and later Vietnam, but also, crucially, for the space race. This was a period of unprecedented growth in the economy, in lifestyles, and in American R&D, science and technology.

During the 1950s, stretching into the 1960s, there was a sharp rise in the number of social studies looking at the prominence of science and technology in shaping modern society. In these accounts, “social analysts generally aimed at understanding, explaining and effectively reinforcing the success of the sciences, rather than questioning their basis” (Collins and Evans, 2002).

Such thinking pervaded society at all levels and in many forms, culminating in 1958, when the U.S. government concluded that its schools had failed to provide enough good scientists to compete with its Cold War enemies. This concern led to the National Defense Education Act, passed in 1958, which greatly decreased the emphasis placed on art education in schools. But this must also be understood and weighed against broader innovations in American society at the time. For instance, 1950s ‘new classical’ music had been taken apart by the experimental processes of the likes of John Cage, and jazz was entering new phases with the free jazz style of the likes of Ornette Coleman. Free jazz uses jazz idioms and, like jazz, places an aesthetic premium on expressing the “voice” or “sound” of the musician, as opposed to the classical tradition in which the performer is seen more as expressing the thoughts of the composer. A further irony lies in the fact that the 1950s also gave rise to an identifiable youth culture with a distinctive dress code, behaviour and music in rock and roll. Meanwhile, the art movement termed Abstract Expressionism dominated the 1950s American art world. Jackson Pollock paralleled the free-style expressionism of this music with his abstract work. Mark Rothko had laid down an overarching philosophy of the style in 1949 – in a strange parody of recent attempts to realise ‘invisible computing’ – when in an article he spoke of:

“The progression of a painter’s work, as it travels in time from point to point, will be toward clarity: toward elimination of all obstacles between painter and idea. Between idea and observer.”

Such a view is interesting as it speaks of where computers have been moving in one strand of their development. First there was an imperative to miniaturise computers and decrease their costs. Later, there has been an interest in developing them both as tools and as devices of experience, and in making them at the same time invisible, almost transparent in terms of their mediating role in creativity and tasks. There has also been an interest in harnessing their power to perform useful tasks embedded in everyday objects and technologies. People have always been hindered by interfaces – by having to learn how to type in order to write on a computer, how to manipulate photos, create edited video or make music – hence the drive to eliminate mediation and the interface completely.

Mark Rothko, Orange and Yellow (1956)

But crashing computers, lack of memory and broken keys still get in the way. They get in your face and remind you that you are dealing with machines. Short-circuiting the acquisition of the idea with the person carries implications of which we may not yet be aware. For example, is there emergence of argument when all queries end with Wikipedia? Is it not the intention, the hidden curriculum, of such a device to kill argument and critical thought, making every search we ever make bow under the hegemony of a crowd of faceless, nameless experts providing palatable, crisp, clear, authoritative answers? Is there any defence of misfit, of mixing facts or getting it wrong? Is there any creative merit in pursuing this? What is emergence but a conglomerate of statements brought together to press a point?

Civil liberty was also a major theme of the 1950s, influencing a youth which ranged across an entire spectrum from conservatism – the rise of the suburbs and middle-class households – to radical anti-establishment stances among those who took on more nomadic lifestyles ‘on the road’. The original “Beat Generation” writers met in New York. Later, in the mid-1950s, the central figures (with the exception of Burroughs) ended up together in San Francisco, where they met and became friends with figures associated with the San Francisco Renaissance. In the 1960s, elements of the expanding Beat movement were incorporated into the hippy counter-culture. It is summed up in the 1997 advertisement from Apple:

“Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. . . . While some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world are the ones who do.”

How would such a person be recognised for thinking outside the box in the middle of a field in Africa? For such an individual it is hard enough to persuade others that they can think inside the box and exhibit the makings of ‘intelligence’ in the first place.

It is interesting to note that it was only later that counter-culture ideologies came to be associated with computing, certainly influencing Steve Jobs. In academia, it should be noted that the year 1956 was also a turning point in what came to be known as the Cognitive Revolution, when disparate fields – psychology, anthropology and linguistics were redefining themselves, and computer science and neuroscience were coming into existence as disciplines – came together to develop the cognitive sciences. The advent of the computer was influencing cognitive psychology in several ways. The very idea of an intelligent mechanism helped to frame new questions about the human mind’s information-processing capacities. An ambivalent stance was adopted as to whether one was dealing with human-biological or machine-digital information processing, with the concomitant idea of the computer as a cognitive enhancer of the human mind.

By the 1950s, popular interest in both technology and science was being generated through the arrival of networked television and the proliferation of science fiction into mainstream society (cars that looked like spacecraft and boasted increasing amounts of automation). Robots and computers also began to seep into the wider public consciousness, along with questions of what they could do and how they worked. Futuristic visions had been at play before this, most noticeably culminating at the World’s Fair in 1939. The turn of the century and the second industrial revolution driven by electricity had spawned a number of highly significant innovations and many attempts by others to explore the potentials of new technology, thinking and materials. This is what David Nye calls the technological sublime, a kind of awe and religious belief in technology; he argues that this experience has acted as a thread throughout American history, stitching together the otherwise [still] diverse and [still] divided elements of American society.

A number of companies explored ways to capitalise on this by bringing electronics and computers into the home in kit form. This was a critical development in the history of computing, especially when it is considered that the kits built by hobbyist amateurs, both individually and within the social context of clubs, pre-empted the interest and the market in personal computing.

The importance of this social group of hobbyists in paving the way to personal computing, and to where we are today, cannot be emphasised strongly enough. They are not unique in this: much earlier in the century, radio amateurs made a very significant contribution to the uptake of that technology. These early adopters act to scaffold obscure functions into more general or palatable uses for the wider population, rather than it being a top-down push from large corporations (which are needed in the long run to lower the costs of unit production and thereby engender mass take-up and innovation in content and peripherals).

According to Steven Levy in Hackers (1985), the most famous such club, the Homebrew Computer Club, formed the backbone of some twenty personal computer companies, including, most famously, Apple. The club started to gain popularity after the Altair 8800 personal computer kit came out in 1975 and, of course, after the Apple I in 1976.

There was a highly significant jump between 1976 and 1977, when the Apple II was marketed. By the time the Apple II was released, the mission of domesticating the personal computer was in full swing, with the advertisement proclaiming, “The home computer that’s ready to work, play and grow with you.” However, this claim came with a mass of arcane computer specifications which were clearly still aimed at enamouring the sardonic pure-play computer geek. Education was not missed out, although the first ads were aimed at the school and college markets. If anyone was in any doubt as to what a computer could do, they were answered in Apple’s 1981 ad, which used neat design and graphics to portray its diversity of uses.

The effort to make it more like a cooker than what was clearly an experimental hobbyist’s and enthusiast’s machine was in full swing. Video and print advertisements clearly showed the computer comfortably embedded in everyday life.

Over the next 30 years these amateurs pioneered the educational and, later, through teenage games designers, the commercial benefits of home computing. [Leslie Haddon gives an overview of such amateurs in his 1988 Ph.D. research, captured in a couple of interesting papers here and here.]

Edmund Berkeley first described ‘Simon’ in his 1949 book, “Giant Brains, or Machines That Think” and went on to publish plans to build Simon in a series of Radio Electronics issues in 1950 and 1951. The idea of Simon touched such pioneering computer scientists as Ivan Sutherland, who went on to influence the development of interactive graphical interfaces. Berkeley concluded his article anticipating the future:

“Some day we may even have small computers in our homes, drawing their energy from electric-power lines like refrigerators or radios … They may recall facts for us that we would have trouble remembering. They may calculate accounts and income taxes. Schoolboys with homework may seek their help. They may even run through and list combinations of possibilities that we need to consider in making important decisions. We may find the future full of mechanical brains working about us.”

In 1955, Berkeley teamed up with Oliver Garfield and began producing the first Geniacs, “one of the most remarkable kits ever released to the public” according to Popular Mechanics (Oct. 1958, p.27). The Geniac was one of the first kits of its kind, retailing for less than $20 in 1955 (approx. $167 in today’s dollars). It provided:

“…everything necessary for building an astonishing variety of computers that reason, calculate, solve codes and puzzles, forecast the weather, compose music etc… So simple to construct that a twelve-year-old can build what will fascinate a Ph.D.”

Later advertisements had it that it was: “fun to use and play with, and teaches you something new about electrical computing and reasoning circuits.”

The use of ‘etc.’ after such a diverse list of functions, ending with composing music, seems something of an understatement. The Geniac kit shipped with a wooden frame and a set of six pre-drilled Masonite discs that served as rotary switches. The user programmed the computer by wiring the switches in a certain way, and then gave the computer input by positioning the discs. Assuming that the program was set up correctly, the user would see the result flash on a series of miniature light bulbs. David Deming reminisces about his personal encounter with one of the kits:

“The very existence of the complicated technical materials notified us that there were vast worlds of information and learning to explore. The first task in climbing a mountain is to take note of its existence. In 1960, children were expected to rise up and meet standards set by adults. Self-esteem was something you attained by achievement.” (David Deming 2012)

Another recipient, George Johnstone, was less enamoured with his kit.

“I’d been had. There were no vacuum tubes, no transistors, or capacitors, or resistors-the colorful components I’d found from eviscerating dead radios and TV sets. All I’d gotten for Christmas was a handy-dandy kit for stringing together mindlessly simple circuits of switches and bulbs. The nuts and bolts were to be placed in various holes on the square wooden panel and connected one to the other by wires running underneath. The little metal jumpers were to be inserted into holes in the Masonite disks, the ends bent over to keep them in place. When the disks were attached to the panel, with more bolts and washers, they could be turned to and fro so that the jumpers touched the heads of the bolts, forming connections that caused the lightbulbs to flash on and off. It was all just switches-simple enough for a child to understand.” George Johnstone, 2003
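
Johnstone’s ‘all just switches’ is, in essence, a description of combinational logic: the disc positions select contacts, the wiring underneath composes them, and the bulbs report the result. A toy sketch of that idea in Python (illustrative only, not the actual circuits or wiring from the Geniac manuals) might look like this:

```python
# Toy model of a switches-and-bulbs kit: disc positions are the inputs,
# the 'wiring' is a boolean function, and a bulb lights when a circuit closes.
# These example circuits are illustrative, not taken from the Geniac manuals.

from itertools import product

def staircase_light(disc_a, disc_b):
    """Two-way switch wiring: the bulb lights when exactly one disc is 'on'."""
    return disc_a != disc_b

def majority_vote(disc_a, disc_b, disc_c):
    """The bulb lights when at least two of the three discs are 'on'."""
    return sum([disc_a, disc_b, disc_c]) >= 2

# 'Turning the discs' is just stepping through input positions and reading the bulbs.
for a, b in product([False, True], repeat=2):
    print(f"discs {a, b}: staircase bulb {'ON' if staircase_light(a, b) else 'off'}")

for a, b, c in product([False, True], repeat=3):
    print(f"discs {a, b, c}: majority bulb {'ON' if majority_vote(a, b, c) else 'off'}")
```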

Edmund analog computer

The partnership between Berkeley and Garfield did not last long. The two had disputes which culminated in a lawsuit. The outcome led to the Brainiac (brain-imitating almost-automatic computer) kits, conceived, designed and marketed by Berkeley. The Geniac and the Brainiac (very similar designs, selling for the same price) are highly significant as a link to digital creativity as it is being realised today. They began a long line of successive products brought to market under the auspices of inspiring and educating the young. The cost of a Geniac placed it firmly as a ‘toy’ which could only be afforded by the middle class (the average salary was approximately $2,200). The complexity of its construction also militated against exploration by those who believed in its potential and had the propensity to focus on it.

From the 1960s until today there has been a succession of kits, components and whole systems which have been portrayed as ‘good’ for educating children and aimed at controlling the home. The list includes the Digi-Comp I (1963) (made originally entirely out of plastic, though a cardboard version is available here; a video which explicates its use is here), which retailed for less than $5. Honeywell tried to enter the domestic market with the ‘Kitchen Computer’ (1969), which carried a $10,000 (in 1969 dollars) price tag; its aim was to store and calculate recipes. Alcosser et al. contributed their book ‘How to Build a Digital Computer’ in 1969, and there was the Science Fair Digital Computer Kit (1977). Multiple kit computers were released, but one stands out as pivotal: the Altair 8800. Introduced in January 1975, it was the first such computer to be produced in fairly high quantity, and it was the technology which galvanised groups like the Homebrew Computer Club.

Stan Resnicoff, a children’s book writer, interactive designer and director of New Concepts for Mattel Toys, details his contribution to Mattel’s Children’s Discovery System (1981), one of the first fully electronic computers aimed directly at kids (the 6–11 age group). It loaded software from interchangeable ROM cartridges and showed output on a fancy 16-by-48-pixel LCD screen. A total of 18 expansion cartridges were available, on such traditional school topics as math, science, language and history. Each cartridge came with its own overlay that fitted over the Discovery System’s keyboard to customise each experience. The Discovery System was well received by the press in the early 1980s, and its success triggered the first wave of all-in-one electronic educational computers – ones that didn’t require programming or assembly.

Like many people in the early 80s I had friends who’d spend hours programming the famous Sinclair ZX80. The ZX80 was sold in kit form for £79.95 or ready-built for £99.95. It was a home computer produced by Sinclair Research and launched in the United Kingdom in 1980. In its advertisements (Byte, March 1981) it was billed as a ‘family learning aid’, with an accompanying picture of a father and son having ‘fun’ while learning.

The most amazing things it seemed to do were to draw a white circle on the TV screen, offer a floor, wall and volume calculator, or play Hangperson, and the avid programmer would argue, well, that it was fun. Why would you pay £99.95 for that? I was aware that many better-off pupils at school had managed to cajole their parents into buying them calculators, or borrowed their father’s, and passed arithmetic with flying colours [before they were banned]. Indeed, you seemed to have to trade ‘fun’ for ‘learning’, so typical of the unspoken idea of what you do at school. I fiddled with it some but got bored very quickly. It lacked any form of engagement and seemed very limited, with uninteresting, predictable results to anything you might do, although it did get some favourable technical reviews. Perhaps the problem was that I was a ‘born user’ rather than a programmer at heart. Also, I had already had fair exposure to analogue synthesisers, which represented a much more gratifying synthesis of art and electronics. Inexpensive models became available in the late 1970s. A tiny turn of a knob in the complex configuration of switches, dials, meters, patch leads and circuits could instantly present a completely different sound or texture. There was an immediacy about this technology which appealed, in which ‘the [creative or musical] argument emerged rather than had to be stated’.

Certainly that was how it was in using the materials of the visual and plastic arts (mixing paint on a palette and seeing what happens when it is applied to the canvas, rather than having to input a strange configuration of pegs or codewords in order to see something obvious). Incidentally, synthesisers also came with large technical manuals, usually outlining the basics of synthesis, and templates which showed pre-defined sounds.
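
For a concrete flavour of the ‘floor, wall and volume calculator’ mentioned above: the whole program amounts to a few lines of arithmetic. A minimal sketch, in Python rather than ZX80 BASIC and with made-up room dimensions, would be something like:

```python
# The sort of trivial 'calculator' program early home computers were used for.
# The room dimensions below are made-up example inputs.

def room_calculator(length_m, width_m, height_m):
    floor_area = length_m * width_m                   # carpet or floorboards
    wall_area = 2 * (length_m + width_m) * height_m   # four walls, ignoring doors and windows
    volume = length_m * width_m * height_m            # air to heat
    return floor_area, wall_area, volume

floor_area, wall_area, volume = room_calculator(4.0, 3.0, 2.4)
print(f"Floor: {floor_area:.1f} m^2, walls: {wall_area:.1f} m^2, volume: {volume:.1f} m^3")
```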

Returning to the story, the launch of the successor, the ZX81, was catalysed in part by the British Broadcasting Corporation’s plan to produce a TV series, to be broadcast in 1982, aimed at popularising computing and programming. Sinclair lost a significant battle with Acorn Computers Ltd to supply the machine to accompany it.

I had been using computers such as the Apple IIe since the early 1980s; the School for Independent Study had an Apple computer lab called the ‘Orchard’, which also had a Primos mainframe terminal. And so it was only later, in mid-1985, that I really began to spend a massive amount of time gazing into the screen of an Atari 520ST running Steinberg Pro-24. It is probably this I can blame for the forward head syndrome and the square eyes that I now boast. The early computers were built in the 1950s in large institutions – the military, universities and corporations – but at least since the 1960s they had already diffused into science fiction, where they were shown to be capable of doing much more than simply calculating. The drama is based on a central character, the ideal computer user, who has to overcome a series of barriers to find perfect interaction with the computer.

My relationship with the internet really begins with the use of Lynx v2.0 in 1993, on a PC with access to one of the Edinburgh University UNIX mainframes. Between 1985 and 1993 the screens sadly still looked pretty much the same, although I am sure computer scientists would argue otherwise, pointing to WYSIWYG. In all, you still felt like you were in an alien world to which you didn’t belong. This gives you an idea of what Google looks like on Lynx today, bearing in mind that Google didn’t exist in 1993. Everything is on there, and I even ran a search for this site.

Lynx was, and is, a non-graphical hypertext browser. It now looks ancient and outdated, and I am sure conjures up some nostalgia for a select few. But if you consider that Lynx weighs in at a very slight 2 MB, any criticism also needs to be weighed against the massive resources and memory which more recent applications command. This is also coming from a person who is fortunate enough to be pretty able-bodied. A non-graphical browser like Lynx really comes into its own for people with motor-skill disabilities who use a Braille display or mouth-operated input. It also serves others who may simply not have sufficient computer power to view a graphically rich web, and it lends itself to forms of interface other than the mouse.
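
To give a sense of what ‘non-graphical’ means in practice, here is a toy sketch of text-only rendering: fetch a page, keep the readable text and the link targets, and discard everything else. This is not how Lynx itself is implemented, just a minimal illustration using the Python standard library; the URL is an example placeholder.

```python
# Toy text-only rendering: keep text and link targets, drop layout and images.
# An illustration of the idea, not Lynx's actual implementation.

from html.parser import HTMLParser
from urllib.request import urlopen

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []   # visible text fragments
        self.links = []    # href targets, numbered like a text browser's footnotes

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
page = TextOnly()
page.feed(html)
print(" ".join(page.chunks))
for n, link in enumerate(page.links, 1):
    print(f"[{n}] {link}")
```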

The really thrilling thing about access to the early internet was access to authorities and experts online, which allowed for much argument and debate through what was called Usenet. In those early days you could still email Bill Gates and get a reply. Usenet offered newsgroups, online communities of individuals discussing thousands of topics. By the autumn of 1992 there were 207 multi-user games, based on 13 different kinds of software, on the Internet. MUDs provide worlds of social interaction in a virtual space, worlds in which you can play a role as close to or as far away from your real self as you choose.

All technical and mechanical details aside, it enabled me to interface with academics and practitioners working in the field, to talk and argue with them, and to develop my thinking, especially with respect to my study.

Marc Andreessen’s realisation of Mosaic, based on the work of Berners-Lee and the hypertext theorists before him, is generally recognised as the beginning of the web as it is now known. Mosaic, the first web browser to win over the Net masses, was released in 1993 and made freely accessible to the public. The adjective phenomenal, so often overused in this industry, is genuinely applicable to the ‘explosion’ in the growth of the web after Mosaic appeared on the scene. Starting with next to nothing, the rates of web growth quoted in the press, hovering around tens of thousands of percent over ridiculously short periods of time, were no real surprise (p.42). Mosaic Version 1.0 was released on April 22, 1993, followed by two maintenance releases during summer 1993.

The original research proposal for my Ph.D. looked at virtual reality (VR), in particular how it would come to be accepted and used as it diffused into homes as a mass-market device. It was built on claims that VR could supersede television as a major domestic technology: “My idea of virtual reality is that, in the long run, it’s going to shut down television” (Lanier, 1990, pp. 44–54).

Many wild claims circulated regarding this technology, but this is hardly surprising. We come from a culture where, after a long period of successful industrialisation of society (normal science work, to use a Kuhnian metaphor), the public and industry consciousness has been punctuated by the occasional radical or discontinuous innovation (new paradigms). Such claims come on the back of depictions of new ways of life, or new styles of operating in daily life. Industrialisation drew people to the city, and its social, technical and economic organisation gave rise to new ways of doing things, living and working. Electricity promoted many innovations in the period leading up to and shortly after the turn of the century, and within this culture new technology innovations and heroic inventors have made exceedingly good copy. The likes of Bell, Edison, Ford and Marconi were good publicists for their work and projects; they understood full well the role of publicity and media in generating interest in their products from public, businesses and investors alike. They were in every way as adept as Barnum-style showmen as they were at business or engineering solutions. They knew how to present and project their ideas, or build what would now be couched as their ‘personal brand’.

And so we are all familiar with the weaving of stories by developers that foster a perception of their technology as the early stage of a potentially vast industry. They need to do this to create a public ‘buzz’ and also to attract funders, and often managerial support within their own organisations, for projects. The promise of mass markets will always seduce early adopters and possible project funders when words and names like television, radio, telephone, car and aeroplane are cited. Not only had these spawned successful technologies, but also extensive network infrastructures and a multitude of optional and necessary ‘add-ons’ [such as fuel and service stations, airports and shopping, broadcasting services and advertising industries], which themselves became catalysts for environmental, societal and economic change. And, of course, some of the mess we are in now.

At their inception, these major devices were produced in low unit numbers and were fragile, frail, vulnerable and poorly operating in their nascent stages of development; we should perhaps think of John Logie Baird’s televisor. In this sense they were pioneering frontiersmen, leading the way to the new world, and ultimately to the moon (planting the American flag as the crowning moment) and beyond in a rampant colonisation of space, at least in science fiction.

The rhetoric of VR was placed firmly into the discourse, paradigm, epoch and context of such illustrious mass success stories, whilst at the same time promising a liberation of the human imagination and senses far exceeding any of its technology or media predecessors. It also came about through the very real, tangible and phenomenal rise of devices which had augmented the television viewing experience and become mass-market products themselves – the video recorder, the games console and the satellite decoder. Also, by the late 1980s the personal computers of Apple and IBM were dropping in price and diffusing fast into homes, and more and more people (though still predominantly male and techie-orientated) were using modems and dial-up connections to access online facilities. VR, regardless of the bravado of the claims, was a real technology with many serious and commercial developers trying to develop it. University departments were involved and, with respect to my own study, beyond the rhetoric all the leading games companies were busily trying to develop price-sensitive devices as add-ons to their games machines. Amongst these was Sega’s Virtua Sega, a 32-bit colour 3D virtual reality helmet for video games ($150–200). Sega were also producing the Activator, a full-body interactive controller for the Genesis ($80). Nintendo was also aiming to produce virtual reality games for the home market in the near future. From an August 1993 press release:

Nintendo, the world leader in video games, and Silicon Graphics, Inc., the world leader in visual computing, announced in August 1993 an agreement that will transform video entertainment by developing a truly three-dimensional, 64-bit Nintendo machine for home use. Nintendo’s Project Reality is the first application of Reality Immersion Technology, a new generation of video entertainment that enables players to step inside real-time, three-dimensional worlds….The product, which will be developed specifically for Nintendo, will be unveiled in arcades in 1994, and will be available for home use by late 1995. The target U.S. price for the home system is below $250.

A Sony press release dated 6th Jan 1993 outlined its intention to develop VR.

The proposition was clear. With such giants involved and driving development, VR certainly seemed set to enter the public domain, and my research project was on. Melding the network with this novel, sensorily immersive interface would allow for the Lamarckian jump in the evolution of computer technology that would fulfil the fiction of William Gibson’s matrix, and offer us the opportunity of a ‘consensual hallucination’, an extra layer of interaction with people and institutions whose implications would traverse both worlds. But it was “inclusion and unconstrained realities” which really fuelled the imagination and the hype. Allucquère Rosanne Stone remarked that “The inhabitants of these virtual communities thoroughly internalize the Homo ludens mode of sociality – working from narrow-bandwidth cues, acting as if they inhabited common social territory” (Stone 1992: 620–621). She was convinced that what was happening marked a major change in “mental geography”. And it drew much debate regarding what the future in the new millennium would hold: “Maybe it is too early to say precisely what cyberspace will look like in the future; the outcomes may include new forms of democratic, totalitarian, and hybrid governments. Optimism about the information revolution should be tempered by a constant, anticipatory awareness of its potential dark side” (Ronfeldt 1992: 243). William Gibson’s Neuromancer (1984) can be read in a number of conflicting ways, for instance as a harbinger of new realities or as pointing to the rise of corporate and global capitalism.

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts…A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding… (Gibson, Neuromancer, 1986, p. 67)

The interpolation of meaning and activity, the cultural circuit (cf. the last post), is critical to the present debate, which suggests that arguments emerge through hypertext and images. Total immersive virtual reality was, and is, perhaps the ultimate interface linking humans to data, information and knowledge, excepting direct neural implants or the downloading of consciousness into a computer. Either way, the manner in which it can be depicted and experienced is limitless, bound only by a person’s ability to comprehend and understand what they are experiencing and having the cognitive and physical skill to manipulate it to suit their purposes. Like many others interested in science, technology and the future, I had first read of virtual reality in a magazine in the summer of 1990: Mondo 2000, a precursor to the more famous Wired. The editors had announced on the cover: “The rush is on. Colonizing Cyberspace.” A bold statement, and of course we now understand it in terms of the present population of the world wide web, but then it included VR as an interface to data, knowledge and services online. Suffice to say at this juncture that Michel Foucault used the term colonisation to mean the coming to dominance of certain ways of viewing the world.

John Walker of the company Autodesk launched the “Cyberpunk Initiative.” He invoked Gibson in a white paper called “Through the Looking Glass: Beyond User Interfaces,” in which he proposed a project to produce a “doorway into cyberspace.” It would happen in sixteen months. The project’s motto was “Reality Isn’t Enough Any More.” And so the meme of virtual reality had really been let loose, first within the company, and shortly afterwards into the public domain when, on June 7, 1989, the computer-aided design software company Autodesk and the eclectic computer company VPL announced a partnership and a working system.

It was almost a year before the BBC’s flagship science programme “Horizon” devoted an episode to more or less the same title, delivering the subject from a marginal group of fringe enthusiasts to a wider public. I experienced the technology first hand when units were brought to the UK and shown at Virtual Reality 91: Impacts and Applications. This was an early conference on VR in the UK, held in Aldwych, London, in June 1991. I attended on behalf of my company, whose interests were audio-visual communications and consciousness. On arrival and at coffee breaks I was struck by the eclectic demographic of the visitors as much as by the subjects that were discussed. There were people from diverse sectors: finance, IT, media and education. All had been drawn to the event by the rhetoric and publicity that was following the technology. They were looking to invest, were curious, or had ideas of how to employ it.

However, the actual experience of virtual reality was rather flat. Everyone queued up for some time as if at Disneyland and, after an exceedingly long wait, a head-mounted display was donned, providing stereo vision of a rather bleak, faceless and disconcerting experience: an alien, sparse, unpopulated world with a heavily pixelated black-and-white tiled platform against an endless pink background. A data glove covered one hand to provide some control. Then you entered cyberspace. It looked and felt like you had been inserted into an early 8-bit computer game, and a broken-down one at that. There were some box-like shapes on the platform and I was invited by the attendant to go down to the platform by pointing the glove. I shot right through it and became immersed in an infinite world of pink. I was lost, and it required the attendant’s help to get back to seeing anything at all. In reality, the experience of virtual reality was a let-down. The prohibitively high cost of the technology (and of the conference, I should say) was off-putting. One of the other units had already broken down and no one could repair it on site, suggesting how vulnerable it was. But these were early days, and 1991 also saw the exposure of VR to a much wider audience, particularly with Howard Rheingold’s book Virtual Reality (1991).

This did not stop the rhetoric. During the conference itself, William Bricken reiterated a thought-provoking idea which he had apparently brandished at the 1990 SIGGRAPH conference: “Psychology is the physics of virtual reality.”

The argument, or proposition, that we are bounded only by what we can imagine, and that what we can imagine is bound only by what we experience and have experienced, was a fascinating prospect. But compared with what I had experienced in the brave new world in the foyer outside the lecture hall, it was going to require a lot of imagination, and possibly vastly improved computing power and programming, to make it compelling.

On reflection, the conference highlighted clearly to me that there were in fact two distinct virtual realities. The first was a computer interface technology aimed at partially enveloping the senses and lending access to a rather primitive experience desperately crying out for development. Then there was a second, which could be described as ‘virtual-virtual reality’: this largely consisted of musing on the potentials of the technology, what it could be used for, and what impacts it could have on various spheres of life, work and existence. This second dimension was subject to scholarly interest and generative of a new kind of discourse, mostly borrowed from key ontological and epistemological ideas and from other fields such as psychology and cognitive science. People were tying ‘the virtual’ to avenues of philosophical thought such as Lyotard’s postmodernism, Baudrillard’s Simulations, and the work of phenomenologists such as Heidegger or Merleau-Ponty.

There was also a growing, sophisticated and abundant literature massing around the topic as academics and intellectuals from across disciplines scrambled, Klondike fashion, to it, and articles and features in print and press were common. This idea of a virtual or augmented reality certainly presented a grand narrative and a compelling prospect. It easily led us to ask yet again just what is ‘real’ in the first place, what makes it so, and what is the nature of embodiment and of our experience of presence and ‘being there’.

These are questions as old as antiquity, and difficult to argue and resolve. I think here of the work of Michael Heim, and of Michael Benedikt’s Cyberspace: First Steps (1992). The book also includes chapters by another architect, a philosopher, a sociologist and an anthropologist, as well as a number of computer people. Aukstakalnis and Blatner’s late-1992 book Silicon Mirage opens by examining how human senses operate, and moves on to how virtual reality should be designed to be sensitive to these issues. Meanwhile the press continued with its plaudits, citing new uses and ideas for the technology and general utopian ideas that ‘Almost Anything is Possible – Virtually’ (Susan Watts, The Independent, Mar. 29, 1992). “Just as the alphabet and the printing press changed the way people thought, virtual reality will shape our notions of community, self, space and time” (Pimentel & Teixeira, 1993, p.24). It was interesting to note how such ideas in the press, and even movies (The Lawnmower Man had been released), could be influencing [or not] the technical development of virtual reality technology (the hardware) and virtual world design (the software), and at the same time shaping the public perception of it as it diffused into the market (the first arcade VR system had also been released). For most people it was still more virtual than actual. However, this was set to change, as the games company Sega was intent on producing a head-mounted display and data glove for the Genesis games console, and it was widely held that this would spearhead adoption and domestication. It would take its place as a ubiquitous home device, and this would in turn generate imitators, competition and further developments.

So the argument for virtual reality as a technology destined for the domestic space was built upon three main premises: 1) a novel interface which would enhance the gaming experience; 2) a price which would support mass-market sales; and 3) the additional functionality of connecting to other gamers and worlds using internet access – the sharing of worlds, making virtual worlds social.

It should be noted that the internet, as I have mentioned, was only just emerging as a mass force, but it was gathering momentum with increasing dial-up connections; and the interface which would link to it, at this juncture, was imagined to be VR, not merely a graphical web browser with its pictures and text.

I had joined a few of the discussion threads, particularly on sci.virtual-worlds, a newsgroup discussing all manner of things to do with virtual reality. It was easy to spot polarisations between those concerned with hardware development, those concerned with software development, and those chasing VR as a phenomenon. It was easy to see a kind of ‘objectivist’, ‘realist’, hard-technology perspective set against a ‘speculative’, ‘futurist’, philosophical and social ‘soft’ view. I put my hands up that I was part of the latter.

But there was also a third group. While claiming inclusivity in being technologically versed – they had acquired HMDs and high-end computers from grant money and had a dedicated programmer as part of a full research team – they were researching virtual displays with reference to how they affected the human perceptual apparatus. Their results were showing that there were issues regarding any sort of protracted use of HMDs. The worst of it was that they were colleagues in the Department of Psychology at Edinburgh, where I was based. Through discussion, I got the feeling that at least some members of this group were very sceptical, not only of those in the ‘soft-social’ camp [me], but also of those who were struggling to develop the technology in the fledgling VR industry in the US and UK. Now there are those who would argue that scientific scepticism is very healthy, and especially so when others appear carried along in the allure of a trend.

But at the same time there was an impression that they were doing ‘real’ science in the public interest, compared to the ‘gung-ho’ and ‘reckless’ commercial interests of the industry developers, and the nonsensical, fuzzy, woolly, interpretative stuff of the ‘phenomenon’ brigade. In pursuing their interest in producing the technology, the developers were, consciously or unconsciously, ignoring detrimental side-effects of its use. It was the researchers’ mission not only to reveal this to those developers, but also to warn the public at large, via journalists.

The following article was printed on the front page (!) of the Independent on Sunday, accompanied by a 6″x9″ photo of someone using a VIRTUALITY HMD.

“Sega game could cause eye damage – By Steve Connor and Susan Watts

A new toy that allows children to play computer video games in ‘virtual reality’ could permanently damage their eyesight.

The equipment – a headset which beams stereoscopic images on to both eyes – is already in use in such hi-tech amusement arcades as London’s Trocadero. Sega, the Japanese computer games company, intends to launch a home version in the United States later this year, and in Europe next year.

Tests of virtual reality headsets on adults produced visual problems which scientists believe could be far more serious in young children. One fear is that the toys could lead to permanent squints. Two groups of researchers, one at Edinburgh University and one in the Ministry of Defense, have detected side-effects in adult volunteers who wear the ‘head-mounted displays’, which produce an illusion of reality by giving all-round, three-dimensional vision of moving objects. Such equipment is already used by the military and by commercial designers who want to see their work in three dimensions.

The Health and Safety Executive has set up an investigation of the headsets because of fears raised by a research team led by John Wann, a lecturer in human perception at Edinburgh. ‘Our results suggests it seems particularly unwise to introduce them as a toy for children,’ Dr Wann said. ‘If they are spending more than a few minutes with these headsets, there are serious considerations for their eyesight.’

Mark Mon-Williams, an optometrist, said that people who used the headsets for 10 minutes showed similar visual disturbances to those who spend eight hours at a computer screen. ‘It’s amazing what you are asking your eyes to do inside the headset,’ he said.

Of 20 young adults who took part in a 10-minutes test, 12 experienced side-effects such as headaches, nausea and blurred vision. Mr Mon-Williams said that a particular concern is that the headset puts a lot of strain on binocular vision, which is fully developed in adults but is more liable to break down under stress in children under 12 years, causing squints.

The Edinburgh findings are supported by researchers at the Army Personnel Research Establishment at Farnborough, Hampshire. In a test, 61 per cent of 150 volunteers reported symptoms such as dizziness, headaches, eyestrain, light-headedness and severe nausea.

Mr Mon-Williams said the main problem stems from the headsets severely straining the eye muscles, leading to slightly cross-eyed vision. A slight misalignment of the two images in each eye produces a visual disparity that the muscles try to correct.

Andrew Wright, software product manager for Sega in Britain, said that the new product would be tested extensively before coming on the market.

Other health problems associated with virtual reality are beginning to emerge: a form of travel sickness is affecting people who spend too long in virtual environments. Symptoms such as nausea and disorientation are brought on by the slight time-lag between people moving their head and the scene they are immersed in ‘catching up’.”
_______________________________
I would say that Sega and the rest of the VR games manufacturers have some PR work to do. I will watch the Independent for any responses to this article.
J. Hill
Note: My affiliations with the University of Edinburgh are totally unrelated to those of the research team cited in the article.

What followed was a rather intense exchange between the protagonists and antagonists. It makes an interesting read, which I will leave to you to follow here if you wish.

7th Sept. – SCI: Independent on Sunday report on VR

14th Sept. – INDUSTRY: Two worlds in VR?

14th Sept. – MISC: VR & Human Factors

It is interesting to contrast this case with that of the Newson report (reviewed in an earlier post). While the Newson report rested upon the learned opinion or views of ‘authority’, this case very firmly rested upon ‘rigorous empirical scientific investigation’. Both fed moral panics. The question is how to raise concerns without making a statement which may be exaggerated or misinterpreted by the media. In both cases the academic researchers and health-care professionals certainly appear to be working in the ‘public interest’, and it would be difficult to level a charge at them that they were acting in self-interest, raising their profiles in the public and professional domains. The similarity of the two cases lies in the manner in which they came to public attention through mainstream media: their opinions and findings made the front pages of the newspapers, where they joined tropes and narratives that were already being woven and were ongoing.

The first case, that of Newson, concerned the terrible case of a small boy abducted, tortured and killed by two older boys, which had put the fear of death into the public, and especially anyone who had children. The public were looking for answers, and were only too willing to turn to authorities on children and their development to provide them. You cannot help but get the feeling from this case that these experts were only too willing to oblige – so much so that they forewent the orthodox conventions of science: framing precisely what they were saying as hypotheses, or drawing upon an extensive and well-founded review of the literature to build compelling arguments to back up their claims. Instead they relied on conjecture and perpetuated the myth that media influence violent behaviour, and that new media offering interaction and immersion could only exacerbate matters.

In the second case, researchers who were involved in exploring virtual reality came to focus on problems to do with the perceptual apparatus arising from immersion in VR. It is assumed that proper tests and test methodology were followed and that significant results appeared. Then there is a jump: somehow the results of the experiment appear on the front page of a Sunday broadsheet. The article was contributed to by a journalist who had been largely responsible for buoyant optimism regarding virtual reality up to this time. In the ensuing confabulation the researchers were attacked for the generalisations which characterised the piece. The main rebuke from academic and industry researchers was that the specifics were not mentioned – that they had used this particular model of HMD and this kind of virtual world – and that the research therefore pertained only to that configuration of technology. However, the argument from Edinburgh seemed to back up the idea that it was in fact a general gloss: that these effects would be experienced with any other HMD, and by extension certainly with the cheap HMDs being developed by the likes of Sega. Now, would this research be useful for those developing the technology? Did it make any recommendations for alterations, or even alternative systems which could alleviate these problems? No, it was a fairly damning indictment which said that this technology was bad for your health. Full stop. The colonisation of cyberspace via VR would be stemmed; the ship was leaking. More than this, the ‘child’ card was played again: not only were children more susceptible to having their perceptual apparatus compromised, but the revelation came to the fore amid general concerns that they were spending too much of their waking hours playing games anyway.

The controversial nature of this research needs to be contextualised against the fact that much of the writing on the subject up to this point had a tone of highly charged optimism about what you could do with the technology. Its reputation preceded it, as those involved in developing the hardware and software struggled to attract funding, as did university researchers; they were also struggling against the limitations of the technology on a number of fronts. From a public-interest perspective there most certainly was a need to draw attention to controversial findings such as these.

What was clear, and what I am trying to present here, is the fact that these arguments were useful. They could have happened offline in a conference setting or panel discussion, but they happened online, and they happened in the press – new media and old media. They did not come about from links, and they did not come about from internet research, but they were enabled by the internet.

In fact, whether VR develops or not depends on funding. How it develops will be affected by which projects get support, and how much support. This, in turn, relies on the attitude of investors (and markets) to the technology. If these narratives are accepted both by investors and markets, VR could become a self-fulfilling prophecy.

It was not the only aspect casting a shadow over the technology. The Lawnmower Man, which echoed the counter-culture themes of life extension, brain enhancement and drug taking, was hardly a utopian vision. This view would be hard to shake off, due to Autodesk’s original enrolment of Timothy Leary in their early promotional video and the Wall Street Journal article which portrayed VR as electronic LSD. Then there was the unconventional appearance of Jaron Lanier, and the fact that VR came to public attention in a publication like Mondo 2000, which did nothing to shake off its fringe image. None of this suggested domestication – the positive argument that this device is good, or could be good, and, with the right software, could be good for your kids.

Bob Jacobson in 1992 was still keen to counter the perceptions about VR’s association with drugs:

“Externally, we still have to deal with a popular press that is lazy, inaccurate, and sensation-seeking, which two years after the Wall Street Journal coined the genre, cannot resist the “electronic LSD” story. This week, our own Seattle Times, usually the most stuck-in-the-rut paper in the Pacific Northwest, chose the e-LSD theme to try and break out of its Sunday-morning literary straitjacket. Only the readers didn’t know how hackneyed were the writer’s breathless warnings about virtual addiction; everyone else, from writer to publisher, had been informed. But the article ran anyway.” -Robert Jacobson, “Where In The (Virtual) World Are We? Building A Virtual Worlds Industry” at “Virtual Reality 92,” September 24, 1992

The aftermath of this study put the brakes on an entire fledgling industry which, in 1992, was growing fast. By 1992 there were 5 companies marketing complete VR systems and 62 companies working on related technologies, and there were some mild commercial successes. The British company W Industries was quick to release the first commercial VR entertainment system, ‘Virtuality’, and formed a partnership with Horizon Entertainment to market it in the U.S. A game released in January 1992 linked multiple users in London, Copenhagen, Stockholm and Oslo through four networked Virtuality units; in this new game, Legend Quest, located in Nottingham, the players fought as a team against villains created by the computer. By 1993, at the time of the Edinburgh controversy, more than 350 units had been installed in 17 countries (Brill 1993). The virtual reality games had simple 3D graphics and were experienced by means of a headset; in spite of the simple graphics, these new games were a potent expansion of the old arcade machines of the amusement centres.

The result of this scientific controversy was a kind of slowing down of the development of virtual reality technology, and certainly a downgrading of the idea in the public mind. News regarding the development or release of the Sega HMD went quiet. Of course work didn’t end on the development of immersive VR, which is still ongoing today. But I still do not see USB HMDs in the local computer shops here in Cambodia, nor do I believe they are in PC World back home in the UK; nor are datagloves available. No, it is relatively cheap PCs and laptops, certainly powerful enough to run such programs, but still with QWERTY keyboards and mice, and some touch-screen interfaces on iPads and Android tablets and phones. Much more exciting developments did happen in what was referred to as ‘desktop VR’, in particular the highly innovative game Doom (1993), the precursor to all first-person shooters today. This game was innovative because it brought together a unique business model (it was released as freeware for downloading); it let one move through a virtual world from the first-person perspective; it allowed different players to enter the same world, enabling collaborative and competitive modes of play; tools were made available so that players could build their own levels; and players could chat to each other while playing. All of these features were radical and discontinuous innovations in the world of gaming and in online communication.

Another development, which perhaps arose out of the rise of the internet and world wide web and the residues of virtual reality and virtual worlds, was the idea of making the physical world more virtual by linking objects to the internet, making them trackable, or placing sensors, screens and computers in the environment – Mark Weiser’s ubiquitous computing. One more aspect that needs to be considered is that we are currently living and working within the Internet of Things and locative media. For some time now, perhaps starting with the Cambridge coffee machine cam in 1991, the number of devices connected to the internet has been increasing exponentially. Homes, streets and offices are being networked. The reasons and purposes of this connection have been a bone of contention for some time, but it is supposed to be of benefit to people. When a digital multimedia fridge-freezer was announced by the electronics manufacturer LG over six years ago, people wondered why a fridge needed an internet connection. At the recent Consumer Electronics Show in Las Vegas, over 50 per cent of the gadgets on display had some kind of internet connectivity. In the image of the computer connected to the network, the surrounding universe and our bodies themselves become monitoring screens. The digital society develops the drifting self in a private imaginary time of parallel worlds. Each individual sees himself promoted to the control of a machine, isolated in a position of perfect sovereignty, at an infinite distance from his original universe (Baudrillard, 1993). We no longer exist as playwrights or actors, but as terminals of multiple networks (Baudrillard, 1988, p.16). The Baudrillardian theorising of the drifting, fractal subjectivity in the global virtual process describes the same phenomenon as cyberpunk literature.


In some sense, as was predicted by research we did with the Design Council of the UK over 15 years ago now, humans are playing a smaller and smaller role in the exchange of information online. We asked the question ‘Why make things smart?’ and explored the management of design issues with various firms interested in developing devices, buildings and objects which made novel use of IT, RFID and new reactive materials. In such a world, the need for explicit interfaces diminishes, replaced by sensitive, automatic and hopefully intelligent responses issuing without our conscious awareness. With the number of connected devices predicted to reach 50 billion by 2020, and such ubiquitous deployment, one wonders who will sift the data looking for positive outcomes, making our world more user-friendly, productive, peaceful or fulfilling. I mean, my telephone sending an automatic tweet to the supermarket trolley with my grocery needs for the week may conflict with [argue with] the checkout, which indicates that my dietary requirements for keeping my insurance premiums low are being compromised.
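To make that last scenario concrete, here is a minimal, purely hypothetical sketch of such a machine-to-machine ‘argument’. Everything in it – the grocery request, the dietary policy, the reconcile function – is invented for illustration; it only shows how two automated systems might trade conflicting claims without a person in the loop.

```python
# A purely hypothetical sketch: a phone auto-submits a weekly grocery list,
# and a checkout policy flags items that conflict with an (imagined)
# insurance-linked diet profile. No real devices or APIs are modelled here.

GROCERY_REQUEST = {"butter": 2, "bacon": 1, "apples": 6, "oats": 1}        # sent by the phone
DIET_POLICY = {"butter": "high saturated fat", "bacon": "processed meat"}  # held by the checkout

def reconcile(request, policy):
    """Return the items the checkout would contest, i.e. the 'argument'."""
    contested = {}
    for item, quantity in request.items():
        if item in policy:
            contested[item] = policy[item]
    return contested

if __name__ == "__main__":
    disputes = reconcile(GROCERY_REQUEST, DIET_POLICY)
    if disputes:
        # In the scenario above, this message need never reach a person at all;
        # it simply feeds back into premiums, offers or product rankings.
        print("Checkout contests:", disputes)
    else:
        print("Order accepted without dispute.")
```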

Free wi-fi, even enforced by law [as in the case of Kuala Lumpur bars and restaurants], is aimed at taking commerce and information everywhere, and at generating information from anywhere and everywhere. This, when put against the building of the new NSA facility, carries serious implications which were already touched upon in a previous post regarding surveillance, but it also increases the prospects for some interesting social science projects. In a sense such scenarios suggest how reality is becoming virtual, while at the same time the virtual is becoming reality. It is a two-way, mutually shaping process between technology and its owners, in which the human condition and individual persons are sandwiched. But all this is automatic: any arguments built from the big data produced, if usable, will leverage arguments in the boardroom and will lead to changes in products, services, policies and advertising.

The fact of the matter is that such links never go much deeper than that, in a similar fashion to students who go no deeper into the literature than Wikipedia. It is a far cry from the visions of early hypertext pundits who viewed it as a radical new way of weaving together narratives. Links are not taking one on a semantic journey from point A to point B, to point K, to point S, and back to point A. Rather, the use of search engines takes you from point A, defined by you in a keyword, to their selection of point B, from which you may return to point A, redefine your search, go to a new point B, and so on. You get much the same deal with Twitter and tinyurls. We make familiar most things that we regularly use; they integrate or assimilate into our lives and lifestyles, so much so that it is only when they are removed or break down that they are felt and become obvious again, and to return to homeostasis we need to find a replacement or get them fixed. What was radical yesterday – PCs, hypertext, spreadsheets, email, even virtual reality – becomes a common idea or technology today, or passé and even redundant, or a continuing challenge [fully immersive and satisfactory VR has yet to materialise].
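A toy sketch of that point-A-to-point-B loop may help. The tiny index and the refinement step below are invented purely for illustration; no real search engine works from a two-entry dictionary, but the shape of the loop – query, the engine’s selection, reader-driven refinement, new query – is the point.

```python
# A toy model of keyword-driven searching: the reader supplies a keyword
# (point A), the engine returns its selection (point B), and the reader
# refines the keyword and tries again. The index below is invented.

TOY_INDEX = {
    "virtual reality": ["VR hardware review", "Rheingold, Virtual Reality (1991)"],
    "virtual reality health": ["HMD eyestrain study", "simulator sickness overview"],
}

def search(keyword):
    """Return the engine's selection for a keyword (empty if unknown)."""
    return TOY_INDEX.get(keyword, [])

def refine(keyword, extra_term):
    # The reader, not the engine, decides how the query evolves.
    return f"{keyword} {extra_term}"

query = "virtual reality"            # point A, defined by the reader
print(query, "->", search(query))    # the engine's selection of point B

query = refine(query, "health")      # back to point A, redefined
print(query, "->", search(query))    # a new point B, and so on
```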

http://www-psych.nmsu.edu/~pfoltz/reprints/Ht-Cognition.html

Some have recently criticised good old paper books for their lack of connectivity. On a recent news report, one Facebook user reported that she used to read the New York Times from cover to cover but has no time now, as she is hooked on playing a Facebook game. Maybe I can understand this. Being interested in various fields of scholarly work, I get tired when I come across papers which seem to overdo the citations. I am not exactly sure what is being said by the author. Is it to show me [and of course the editorial team] that a thorough trawl has been made of the relevant literature? Is it to suggest that I must source all these papers in order to grasp the unique significance of what is being offered here? Is it an invitation to get up to speed on a unique knowledge pathway which requires all these sources as foundational knowledge? It is rarely clear. At times, when I have read the citations, I have found the source has been lifted out of context, or only barely relates to the subject matter presented, or that many other citations could have gone in its place or served the argument better. In many cases this form of communication only serves to obfuscate and make troublesome any attempt to get to the deeper meanings, interpretations and relevance of what is being presented. Often, too, you can’t access the papers, as they are held in a publisher’s database, perhaps one to which your institution and its library have not subscribed. This makes it difficult to build perspective and let argument emerge. This way of working is different to that of the likes of Raymond Williams.

Nicholas Carr tells us that the internet is reconfiguring our neural systems. David Weinberger tells us that the internet is changing our concept of knowledge: instead of relying solely on static books and experts, knowledge has become networked. Weinberger’s Too Big to Know (2011) embraces the notion that learning is a social enterprise and that traditional barriers to scholarship are being challenged by the Net.

[T]he system of knowledge that we had developed for ourselves is, in many ways, a system of stopping points because the medium of that knowledge was paper and books, and for all of their glory, the links in ‘em don’t work; the footnotes are broken. When you go to click on them, they — you don’t actually get taken to the next book. So books are a very disconnected medium.

Whether we want to know this or not, and whether it is relevant or not, is defined by the seekers themselves and the ready-to-hand availability of such information. Nicholas Carr in The Shallows sees that books dwelt on subjects, offering whole universes of how a subject was treated by authors, rather than simply providing sound bites or fragments of opinion, even learned opinion. This universe is context, but how far does context go? How far should it go? It will go as far as access is granted and recorded data is indexed. Locative media, where GPS links place and its history through web 2.0 apps, is increasing the amount of knowledge in, and of, the world. So is the digitisation effort of Google and other agencies who want, as Microsoft did with Encarta, and now with Wikipedia, to claim knowledge for their own brand, or for the commons.

There is a pub quiz night where I live; random questions are produced – guess where they are produced from? The object is to test the residual knowledge of the participants in that age-old game-show fashion. Armed with an iPhone you can answer everything you do not know… unless invigilation is taking place. But then this is a closed system, which may be used for plagiarism, or simply for taking the fun out of games – like when my son, frustrated at not being able to solve a game’s puzzle, learned to use the internet to find cheat codes. He would afford himself superpowers or invincibility, and then the whole object of the game was lost and he would stop playing it.

The OED states that imagination involves “forming a mental concept of what is not actually present to the senses.” This is critical; it implies that knowledge concerns itself with what is present to the senses. But the mind is more than that. Wilhelm Dilthey argued that the individual’s “lived experience” – both empirical and imaginary experience – is necessarily tied to its social-historical contexts, and that meaning emerges from the nexus of relationships. Experience for Dilthey was not isolated and enclosed within the individual’s mental realm but was linked to socio-historical horizons or contexts. We have biographies, and we have knowledge and experiences which we have made sense of and drawn moral judgements from, and we call upon such memories to make sense of and interpret the relevance of new ideas. These can be based upon direct empirical experiences, imagined experiences and media experiences. After a protracted period working on construction projects, I remember staring out the window and noting the manner in which a roof on the opposite tenement building was put together. This was something which would have been outside my understanding before I had taken a roof apart, working with tradesmen who explained their craft. I am still in awe of the way, and the complexity with which, traditional dwellings are put together in Scotland.

The notion of alternative realities is captured in movies where someone daydreaming lives out the part. We can always imagine preferred realities, but our embodiment and our socio-historically lived experience will always bring us back to earth. Or will they? “[E]ven virtual-reality systems deny the importance of engaging the senses in the physical world. One of the more extreme aims of virtual reality is to present sense data ‘directly to the brain,’ circumventing the body’s normal engagement in the physical world” (Coyne 1997, p.29). But does virtual reality offer us this, over 20 years after it was introduced? No. Does the web offer us this? No. Does social networking? No.

Hypertext can serve, or should serve, as a form of punctuation for the author and reader: in the same manner as a full stop, comma or colon lends us space to emphasise, reflect and breathe, hypertext lends us the capacity to understand more about context and where the author is coming from, his premises and his inspirations. Anything more than this is mystification. I am not so sure that arguments for or against a proposition can emerge, as representations are statements, just as the word “love” is slippery but still holds an integral coherency, one which becomes more coherent the more context is provided: “love of food”, “puppy love”, “love-lorn”, “lovely” and so on. Pictures are the same. While there may be a profusion of images coming onto the web now, I am not sitting there going through them learning or arguing. In this sense the link to the contextual or supporting material, or to the photo, is a dead end for open-ended interpretation, not at all fostering argument to emerge or further action to be taken.

What we know is the departure point for what we can imagine, and what we imagine mutually shapes what it is we want to know about. There is linearity and accumulation in such a process, but there is always room for serendipity. I remember going to the library and, searching for a physical book, coming across a few others that I would probably never have seen, let alone consciously looked for. They could well turn out to be more rewarding and open up new vistas. The web, using search-engine technology, can do that as well – if you see the progression not as ‘link’ to ‘link’, but as ‘keyword’ to ‘keyword’, ‘idea’ to ‘idea’, ‘body of knowledge’ to ‘body of knowledge’, and all the permutations and interpolations in between.

There is a kind of Popperian, or Kolbian, progression in which you have to formulate what it is you want to know and correct it in the light of experience. The early web was about finding links offline, such as in magazines or recommendations from friends and colleagues, and through online lists. In a sense little has changed, with friends and those you follow sending out items of note and interest with accompanying URLs in hyperlinks. To hypertext or not to hypertext, that is the question for an author. What is or is not relevant to contextualise is at the discretion of the author, and there is an aesthetic consideration too: with too many links the point of writing at all seems redundant, and you might as well call yourself a search engine. Indeed, the likes of Yahoo [short for “Yet Another Hierarchical Officious Oracle”] began as lists of favourite links collected and curated by Filo and Yang. As their lists grew, they realised they had to separate them into categories, which became more and more complex as more and more information became available.

Mosaic browser running early Yahoo

Only later did Yahoo! join the others and use search-engine technology as the centrepiece of finding information. The ideas of earlier attempts to use hypertext as a new, non-linear means of presenting factual and fictional material have given way to one-click technology, such as when you follow tinyurls on Twitter; these augment Twitter’s 140-character limit, lending a few paragraphs and pictures on a topic. The idea of departing from one point and going on some odyssey of links which has you returning, victorious, wise and knowledgeable, from whence you came, like Joseph Campbell’s hero, simply doesn’t happen.
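For contrast with that search-engine loop, here is a minimal sketch of the kind of hand-curated, hierarchical link directory that Yahoo began as, in the spirit described above. The categories, helper functions and example URLs are all invented; the point is only that the structure is a tree of categories maintained by people rather than an index computed by an engine.

```python
# A toy hierarchical link directory: categories nested inside categories,
# each holding a list of favourite URLs under a "_links" key.

directory = {}  # nested sub-categories, with links stored under "_links"

def add_link(path, url):
    """Add a URL under a category path such as ('Science', 'Virtual Reality')."""
    node = directory
    for category in path:
        node = node.setdefault(category, {})
    node.setdefault("_links", []).append(url)

def list_links(node, trail=()):
    """Walk the tree and print every link with its category trail."""
    for key, value in sorted(node.items()):
        if key == "_links":
            for url in value:
                print(" > ".join(trail), "->", url)
        else:
            list_links(value, trail + (key,))

add_link(("Science", "Virtual Reality"), "http://example.org/vr-conference")
add_link(("Recreation", "Games"), "http://example.org/doom-levels")
list_links(directory)
```

As the curated lists grow, every new link forces a human decision about where it belongs in the tree, which is exactly the maintenance burden that pushed directories toward search.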


It may be, as Kevin Kelly puts forward, that one day there will be one book to which everything, every conceivable piece of knowledge, links.

But it is still a Machiavellian mess in which technology and followers are used to weave egos and narratives of the self more than to teach and learn. The very fact that there are personalities, gurus and celebrities whose ideas have resonated with the individuated masses – the ‘alone together’, as Sherry Turkle says – does not mean that anything different is going on from the traditional film and pop idols showing us round their mansions in Hello magazine.

The first real ‘graphical’ web page my Taiwanese officemate and I downloaded on Mosaic v.1 was that of a business studies lecturer who had mastered HTML and had pictures of his slippers and stories about his marriage break-up. I asked Tony, my officemate, “Is this available all over the world?” “Yes, if they can find the URL.” It makes me tired to think of all the hype about web 2.0, or whatever is new online, when I think back to those pages. They were great; they represented this guy’s personality, and he opened up things that we would never have known of him because of normal social conventions. And the message was clear: I, or you, could do the same. I set about doing just that and had my first web presence up very early. I chose not to go so personal, but it was a great calling card for international communication. The point I am making is that great websites are not, for now, going to be coming out of developing nations, largely due to a lack of cultural capital, which is much harder to develop than web design or programming skills. This is not to say it won’t happen, but it takes imagination and an appreciation of the value of these things. Leisure, wealth and a degree of political stability are prerequisites for the freedom essential to creativity, and for the use of artistic products as indicators of social status. Until then they will only be seen as icing on the cake.

I use links in my posts in a typical manner, to introduce references or to provide ancillary information regarding a subject: http://www.google.com/url?sa=t&rct=j&q=taxonomy+of+hypertext+links&source=web&cd=10&ved=0CFwQFjAJ&url=https%3A%2F%2Ftspace.library.utoronto.ca%2Fbitstream%2F1807%2F14406%2F1%2FNQ49889.pdf&ei=4c1zT7yHBcmtiQfig5DkDw&usg=AFQjCNEuKktumlKaNdXZe7BmttONB97p2Q&cad=rja

“You are entitled to your own opinion, but you are not entitled to your own facts.” There are three parts to the traditional argument: the premises, the inferences, and the conclusion. The premises are the basis on which the proposition rests, carrying the supporting evidence and reasoning for the inferences that follow. There are two traditional types of argument, deductive and inductive. In a deductive argument, if the premises are true, the conclusion must also be true. An inductive argument is one where the premises provide some evidence for, but do not guarantee, the truth of the conclusion.

Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three.
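To keep that distinction concrete, here is a small illustrative sketch. It is not a logic engine; the example premises merely echo the HMD findings discussed earlier, and the numeric ‘support’ values are invented labels, with 1.0 standing in for a valid deduction and anything lower for the partial support an inductive argument provides.

```python
# Labelling the parts of an argument and contrasting deduction with induction.
from dataclasses import dataclass

@dataclass
class Argument:
    premises: list
    inference: str          # "deductive" or "inductive"
    conclusion: str
    support: float = 1.0    # 1.0 for a valid deduction; < 1.0 for induction

deductive = Argument(
    premises=["All prolonged HMD use strains binocular vision",
              "This session is prolonged HMD use"],
    inference="deductive",
    conclusion="This session strains binocular vision",
    support=1.0,            # if the premises are true, the conclusion must be
)

inductive = Argument(
    premises=["12 of 20 volunteers reported side-effects after 10 minutes"],
    inference="inductive",
    conclusion="HMD use tends to produce side-effects",
    support=0.6,            # the premises lend evidence, not certainty
)

for arg in (deductive, inductive):
    print(f"{arg.inference}: {arg.conclusion} (support={arg.support})")
```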

But we don’t need memory any more, now that we have Google and Wikipedia. Do we need argument?

