Manifesto for Teaching Online – Aphorism No. 4 – "'Best practice' is a totalizing term blind to context – there are many ways to get it right."

I couldn't agree more here. Paraphrasing the late Tyrell Burgess, who pre-empted the intensity and stance of the consumer-user focus in millennium design, I hold that education, as far as possible, should start with the student, their needs and their requirements, and that learning must always work from there. The student should, could and would ultimately define best practice. The student would be the centre of a kaleidoscopic social and technical constituency of support, knowledge and communication opportunities. In essence the student serves as both subject and object of, and in context and relation to, whatever an online course purports to offer in the way of a learning experience. The emphasis should be to improve the 'subject' aspect – i.e. bring the student's expectations and needs to the fore, and weigh these against the expectations of their sponsors and of society, including communities – professional and general – potential industry clients and prospective employers. It should also bury or reduce administration as much as possible.

This is the Platonic ideal, but we know all human-made structures – physical, institutional and mental – are subject to entropy. There is no deus ex machina: all are flawed, compromised and in need of constant maintenance, perpetual updating, and unending innovation, motivation and energy. There seems to me a difference between established 'best practice', 'universal practice' and 'evolving practice' which I wish to dwell upon here. The idea of best practice comes largely from management [social] 'science', where it has a more accepted following, arising from the limits of organizational research. It has spread to medicine – another imprecise science – and to education, another one still. Case studies across firms practising certain approaches can highlight which practices work better than others. Instances of activities have been shown, or at least argued, to possess a set of transferable rules of thumb which can be applied to effect desired outcomes. They differ from algorithmic thinking and recipes, which, if ruthlessly adhered to and rigorously followed to the 'T', will almost guarantee the expected result… a decent cake, experimental result, or state of mind.

[There is a debate on whether algorithmic thinking should be taught as a fourth 'R' alongside reading, writing and arithmetic – and on why it should not dominate management thinking, or even science education.]
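To make the distinction above concrete, here is a minimal, hypothetical sketch – the function names and numbers are my own illustrative assumptions, not drawn from any study cited here – contrasting a recipe-like algorithm, which guarantees its result if followed exactly, with a rule-of-thumb 'best practice', which merely tends to improve the odds:

```python
# A minimal, hypothetical sketch (illustrative names and numbers only)
# contrasting an algorithm/recipe, which guarantees its result when
# followed to the 'T', with a rule of thumb, which merely tends to help.

def bake_cake(flour_g: int, eggs: int, minutes: int) -> str:
    """Recipe as algorithm: exact inputs, guaranteed outcome."""
    if flour_g == 250 and eggs == 2 and minutes == 40:
        return "decent cake"
    return "something else entirely"


def suggested_team_size(task_complexity: int) -> int:
    """'Best practice' as heuristic: a transferable rule of thumb
    (keep teams small), plausible in many contexts, guaranteed in none."""
    return min(8, max(3, task_complexity // 2))


if __name__ == "__main__":
    print(bake_cake(250, 2, 40))    # -> decent cake (the recipe delivers)
    print(suggested_team_size(10))  # -> 5 (a best guess, not a law)
```

The point is only the contrast in guarantees: the first fails visibly the moment its conditions are not met, while the second degrades gracefully and depends on context – which is exactly why it travels as 'best practice' rather than as a recipe.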

Just because a set of practices is suggested, adopted or followed – because it becomes, via tradition or institutional culture, 'the way things are done round here' – does not make it 'best practice'. In design, the nature of addressing complexity can be characterized by Christopher Alexander's idea of 'building over time'. This includes the embodiment of design knowledge in pattern languages [http://www.patternlanguage.com/leveltwo/ca.htm] over time and through iterative and repetitious social use, reified by a kind of Popperian problem-solving. Can these precipitations, these outcomes, be termed best practice?

More often than not they are probably the hegemonic received view, seen as the only way of doing things [like the teachers' attitudes to using new technologies in the first post], but perhaps they are techniques and manners of practice which concord with local materials, management and governance, attitudes to work, meteorological conditions, geography, etc. What is not clear in Alexander is the method by which patterns and patterned behaviours and activities [and their possibilities] arise and occur (Alexander, 1979), but we can imagine it is through a kind of community or collective reflection, showing, apprenticeship and doing. Richard Coyne acknowledged the need to make operational the alternative methods to rational models, such as Donald Schön's reflexive practice (Coyne, 1995, p. 226). There has to be a basic social science approach to looking at instances of best practice: is it a habit or is it performance-based? Is it locked up in process and method, or in a final product – or in proportions of all three? There is a clear need to move the theory of best practice in design beyond critique and into practice, where it can be evaluated and considered fairly in relation to other approaches. It will never be found in a lab, but on the job.

To start with the student, their needs and their requirements and work from there, one must recognize the student as a product of their exposure to society and culture, and of their own interpretation of this through shared reflection on the personal biography leading up to, not only commencing the course, but also what laid the foundation for choosing this course in the first place. We also have to understand something of where they want to go in the future and how they see the course as a roadmap with milestones helping them to get there. Taken to its logical conclusion this future, most idealistically, means a truly meritocratic and democratic society, itself within an integrated and unified global village, which exists to serve the individual, providing him or her with the means to life, from easily available clean water to sustainable power and, ultimately, a lifestyle which is, according to Ivan Illich, 'convivial'. That is, it should be lived in joyous collaboration with friends and colleagues. Conviviality would rank high on Maslow's hierarchy. In such a utopia both learning and work would be fun and fulfilling, just as was promised by the pundits of the knowledge and information age, many of them coming from that era of counterculture and alternative political realities. Some would scoff that this is a romantic idea, dated like 'peace and love' [which today translates into oil, occupy and bilateral agreements].

Oh, 'convivial' almost sounds cruel dangled in front of us, especially against a backdrop of so many people locked into monotonous factory jobs or low-paid service positions or 'McJobs', but it still seems attractive and desirable as an outcome and surely should be on the wish list come the revolution. But should we hold accountable those commentators and digital-age prophets who cast machines, machine intelligence, virtual realities and automation as the means of freeing up time for people – the paragon of the magic loom that would bring improved effectiveness and efficiency to industry and more time for us to be human? Those 1930s science-fair robots serving us tea, like our home helps, and cleaning the house have more recently become nanobots helping our immune systems fight disease. No reason to have home helps and cleaners; machines will cater for this while we get on with higher-order work. It brings to mind a quote I used in my Ph.D. thesis. Marion May Dilts (in The Telephone in a Changing World, 1941) tells us that, earlier, in 1879, Sir William Preece, the then chief engineer of the Post Office, was guarded regarding the potential of telephony to change existing practices:

“I fancy descriptions we get of its use in America are a little exaggerated, though there are conditions in America which necessitate the use of such instruments more than here. Here we have a superabundance of messengers, errand boys and things of that kind . . . the absence of servants have compelled Americans to adopt communication systems for domestic purposes. Few have worked at the telephone much more than I have. I have a telephone in my office, but more for show. If I want to send a message – I use a sounder or employ a boy to take it.”

But unfortunately, like many society-and-technology predictions, this was not really how things panned out. Indeed the telephone did replace sounders and boys [who no doubt went on to be successful captains of industry, or merely shifted into another dead-end service job running around at someone else's will – home health-care aides or landscaping; these jobs pay half of what a manufacturing job pays and a third of what a professional job pays, and more than 40 percent of Americans are toiling in them], but the promised freeing-up of leisure time has not happened and most likely will not happen anytime soon. The visceral, blood-and-guts, embodied, situated, actual and tangible – human – aspects of the likes of Amazon.com's operation, the warehousing and provision which exists at the end of their renowned 1-click technology, between the button and the customer receiving their discounts and packages, are enabled by low-skill, low-wage workers on conveyor belts, with targets to reach which make them nervous even to take toilet breaks. Convenience and conviviality are all in the interface, but behind that mask and the back office it is business as usual: rats running mazes, dogs salivating and people performing monotonous, repetitive tasks, machine-like, for little reward.

Then there are those working in quirky offices designing websites for big corporate clients – people who exist, like financial advisers, stockbrokers and bankers, to cut through the hyperbole, translate it into simple terms we can understand and show us what we are missing out on. Life also goes on in co-location: essentially, people in a walled garden who grow and tightly maintain power through and by networking. [I can still remember the hatred and murder in the man's eyes on the nightshift at the Matchbox toy factory where I and my friend worked as undergrads – the reply to his 3 a.m. 'how's it going?' was, 'If you speak to me again I will knife you.' Everyone, needless to say, sat eating their sandwiches in silence.]

But the cool design brigade – they, at least on record, look as if their work is not differentiated at all from play, fun, conviviality and recreation. It's lovely to see pictures of the creative environment of IDEO's offices, with bikes hung on walls, dudes on electric scooters, shelves of inspirational toys and plastic tubing, moving themselves and ultra-creative ideas from office to office… and making money. They are the 1% in job terms; although highly visible, the workplace for most others globally is much grimmer. According to Illich and his followers, a society that does not create convivial learning and living is not living up to, nor fulfilling, the potential of humanity, whatever that may be or entail. The people may have money, but essentially they may not feel fulfilled and happy further up Maslow's hierarchy. Moreover, the promise of a more innovative, creative class – the idea that a degree will guarantee you a job in the creative industries – is in reality becoming an untruth. The design studio is more likely to be just another sweatshop, just another McJob.

The sobering reality of this comes home in a report from the Economic and Social Research Council. Here, Ian Diamond indicates that many of the assumptions about the knowledge economy are now being brought into question. Their study demonstrates 'the need for a new "great debate" about the future of education and skills and their relationship to careers, prosperity and social justice in a global economy'. While it may be true that companies are increasingly building capacity for high-end activities including research, design and product development, they are also doing this in the emerging economies. The idea that new products and services are to be innovated in the Global North and West and merely made by people in the Global South and East may not stand the test of time.

"Moreover, the role of higher education will be subject to intensive political and educational debate as the returns to knowledge decline for many, and when income inequalities are increasingly seen to be divorced from 'meritocratic' achievement. This will lead to claims that education is failing to meet the needs of industry… If 'permission to think' is limited to a relatively small proportion of the European workforce, this raises fundamental issues about the role and content of mass higher education." (p. 18)

The origin of the myth that Asians are only good at plagiarising, copying and following algorithmic thinking, at the expense of critical, creative and divergent thinking, rests largely upon the differences I cited between education in the West and what passes for education in the South and East [in post one of this series]. But this is changing as more and more quality institutions open campuses and schools for the rising elites in places like Beijing and Shanghai, coupled with the fact that of the 691,000 foreign students who enrolled in American universities in the 2009–2010 academic year, nearly 128,000 – or 18 percent – were Chinese. These educational adventures are as much ethnography for the Chinese students as they are about learning subjects (I really do wonder what they make of all the sick students fighting on a Saturday night in the UK?). Not only will they learn their subjects but also something about the lives of those their country manufactures for. They will see all the way down the value and distribution chains and learn from this.

This capacity for creativity and innovation is certainly not genetically unique to Europeans and Americans. Nor is it unique to their institutions and companies, no matter how old and established they are. Best practice as product can be taken apart, re-engineered and reproduced, and so, to an extent, can best practice as process and method. But leapfrogging development to the stage of making product, and developing efficiencies in method and process, will eventually provide the learning necessary to move things along and, moreover, to move in quite different trajectories that could reap all the benefits of this learning. Taking this into account, and melding it with the fact that many transnational corporations move their headquarters and manufacturing divisions to wherever best suits them at any time in terms of cost, talent, market, tax incentives, etc., it would be very wrong in this flat world to think in terms of geography beyond notions of strange and exotic fruit and vegetables, infrastructures, wood, ore and energy.

We have opened up and scattered 'best practices' under the auspices of our utopian liberal and egalitarian ideas of development and helping the underdog, only to have assets which came at vast expense in time and resources selected, taken off the shelf and copied. Being first to market [bearing in mind that many innovations never even make it there] is risky and expensive: you shoulder all the risks until the bugs are fixed and the product or service is shown to be a market success, and then your first-mover advantage is mired when cheaper [and sometimes more effective] alternatives come along.

Our cultures and education systems, with their fostering of critical and creative thinking and a smattering of the American idea of the 'liberal arts' – although not in all institutions equally – have produced students who can hatch opinions and ideas. With our rich social, institutional and cultural heritage, and the more recently established youth and pop cultures, we have within the streets and social networking a rich cultural Petri dish from which to garner, pick up and share ideas. We have been ahead in connectivity, in technology and in R&D, in new materials and components, and in understanding people. But this is not to last; the tipping point can change with respect to population size. The biggest growth economies this year are predicted to be mainly African. Although the UK still provides aid to India, it will be superseded by it economically by 2022. By 2035 the UK looks set to be lagging behind economies such as Nigeria and Pakistan.

Already this is affecting jobs. What is the point when a graphic designer who completes a three-year degree gets paid the same as a factory worker and is worked the same way, as part of, or as, a machine? What happens when ideas are treated as piecemeal piece-work? This is what happens already in the East. Yes, there has been a rise in the need for a creative class to furnish the look and feel of, say, Chinese products, shopping displays, packaging and advertising bunting. These all lend themselves well to plagiarism and copying; indeed, walk into many of the large bookshops in Thailand and Malaysia and you will see massively well-stocked art and design sections with large coffee-table books showing prime examples from London, New York, Tokyo and Paris. It's a delight for a design pundit like me, but there is a message therein. There has been a rise in consumer and market researchers, even user-experience 'experts' – but the salaries of these creative workers are anything but privileged, and so are their working conditions and hours. And such work can be handled in every way, apart from printing, by designers based anywhere. It is not convivial in their workplace when unprecedented demands in terms of quantity, quality, effectiveness and efficiency rule the day, and where those able to work only on second-order levels of creativity (i.e. the use of Photoshop and templates – not the creation of Photoshop and WebCT plug-ins and templates) work themselves to an early grave – or get replaced.

How would this translate to social networked learning? I can think of more than a few ways. It arises, for instance, in the continuities and discontinuities of individual and group efforts in projects. What individual and original contribution, skills and experience does a person bring to a project? Can they develop or curate the right set of skills or knowledge to make a contribution in an unfamiliar role or task, within or without a group setting? How agile, fluid or flexible are they? Can this be learned? Can new strengths and talents be developed by stretching oneself or going deeper into practice or study? How can all this be enhanced online?

The use of a common vocabulary and language between people is hardly – or perhaps totally – totalizing, but it is not usually considered 'best practice', rather just put up with. Language has been put forward as a living text: English, for instance, will continue to develop a diversity of patois, accents and dialects. It is typically viewed as enabling and empowering for those who use it; it reinforces their social net. The massive online populations of Chinese are blocked from Facebook and Google, and not only has this encouraged the development of their correlates, such as Renren, but language barriers have also mitigated any future colonization by the American sites and by many other sites that they can access. Google's efforts to offer translation and local services in local scripts have opened them up to many users who would not previously have been encouraged to use them, albeit that the main driver for these innovations is targeted and localized marketing. Facebook, Google+ and other social networking sites would like to reduce people to events and button-pressings, system-logged and codified and then entered onto a timeline, mosaic or patchwork showing an evolution of interests, places, people or likes – or whatever else, to paraphrase Kevin Kelly, 'the machine, or product or service, wants'.

This begs a very deep and profound question I have been confronting for the last 20 years: what is the limit of user-centered design? It raised its head during my undergraduate research in London, later in my Ph.D., and again in a two-and-a-half-year project at the Edinburgh School of Management and the Design Council of the UK entitled 'Increasing Information Intensity: Towards Smart Products'.

It followed and haunted me into further work on innovation clusters in high-tech, and later work done at Imperial College on interactive television and digital rights management. In the Design Council project we looked at various products which were attempts by designers to reach just that little bit deeper into the workings and vicissitudes of people's everyday lives. Design, as it has become more reactive and responsive – largely due to developments in materials and ICTs and their incorporation in a variety of everyday items and devices – extends its reach into the realms of the personal and individual.

These relate to what have also been termed cognitive technologies and ambient computing, terms used in a loose sense to describe systems that "understand" user behaviour, user intentions and personal contexts. Strictly speaking, they are systems that perceive the environment and take actions which maximize their chances of success. For instance, semantic processing of the text messages sent by a user would allow the recipient to identify whether the sender could use voice communications at that very moment – whether he or she is in a professional situation, with friends, with family, planning to go to the cinema or to dinner, etc. An increasing number of "point and find" solutions have also been proposed. For instance, the camera on a mobile device could take a picture, carry out an audiovisual search ("cloud computing"), match available information with the physical object and provide different types of information ("reality mining", "augmented reality") linked with the physical object.
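To make the "semantic processing" idea concrete, here is a minimal, hypothetical sketch – the function names, cue-word lists and context labels are my own illustrative assumptions, not any system described above – of how a message might be coarsely classified to infer whether the sender is free to take a voice call:

```python
# A minimal, hypothetical sketch of "semantic processing" of a text
# message to infer the sender's context. Cue words and labels are
# illustrative assumptions only; a real system would fuse trained
# language models, location data, calendars and other sensors.

# Map coarse context labels to cue words that might appear in a message.
CONTEXT_CUES = {
    "professional": {"meeting", "deadline", "client", "report", "office"},
    "social": {"pub", "cinema", "dinner", "party", "friends"},
    "family": {"kids", "school", "mum", "dad", "home"},
}

# Contexts in which we guess a voice call would be unwelcome.
CALL_UNFRIENDLY = {"professional"}


def infer_context(message: str) -> str:
    """Return the context label whose cue words best match the message."""
    words = set(message.lower().split())
    scores = {label: len(words & cues) for label, cues in CONTEXT_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"


def can_take_voice_call(message: str) -> bool:
    """Guess whether the sender could switch to voice right now."""
    return infer_context(message) not in CALL_UNFRIENDLY


if __name__ == "__main__":
    msg = "In a client meeting until 3, report due after"
    print(infer_context(msg))        # -> professional
    print(can_take_voice_call(msg))  # -> False
```

The point of the sketch is only that such inference is probabilistic and context-bound: the system guesses at a person's situation from traces, which is precisely where the question of its limits begins.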

Catering for this, whilst also fulfilling the criteria of being non-intrusive and, to a large part, invisible, is what best characterizes this kind of product or outcome. Ubicomp, ambient devices, RFID tags, sensors, 'jelly bean' microprocessors, antennae technology and so forth are representative technologies. The whole realm of data analysis, recording, registering and controlling – the machinations in the background – must perform useful but invisible benefits or positive outcomes for the 'user'. 'User' is in inverted commas deliberately here because they are no longer users: they are simply people going about their everyday lives; they are restored, at least semiotically, to their everyday selves and lives again.

This pointed the way to technologies and devices that were not 'in your face' in the way this QWERTY keyboard and screen are in my face, manipulated by my fingers, right now. For engineers and designers to build such systems relied on a more intense understanding of the people they were building them for. Rather like the craftsmen of yore who built custom furniture to suit an overweight client, they have to have insight into habits, routines, attitudes, thoughts and other intangibles, including what makes things recede perceptually into the background. This is what I understand as a ramification of Ivan Illich's idea of deschooling society.

I mean, you can start with an idea, a complete intangible, then work through successive steps of tangibility – napkin drawings, paper mock-ups, various resolutions of prototype – until you get the final mass-produced product: the tangible equivalent of 'best practice'. But the success of this kind of product came when it evaporated experientially, as when the hand merges with the hammer in the act of hammering [Heidegger's commonly quoted example in discussions of this phenomenon].

It seems to me there is a limit on just how far we can go, say, in taking a participative approach with students, certainly with respect to syllabus, content and curriculum design. A course needs 'in your face' aspects: obvious, obdurate, wicked problems to be confronted bravely head-on. Sometimes we do not wish to compromise, even in cultures where humility and compromise seem to be the only way to get on [the next stage is accepting corrupt practices]. Learning needs to be obdurate, not so easily acceptable – not necessarily in respect of adopting a 'dignity in labour' attitude to assessments, but in terms of stretching us, interrogating our sense of knowledge, and challenging us in a way that makes us strong, tough I suppose, resilient and yet flexible. We need to invoke our experience, knowledge and expertise – the elements which, by the implications mentioned above, create distance between 'them' and 'us'. The question is who, what, where and when defines it as 'right'. Most of us accept today the notion that the divisions we impose upon human knowledge are a form of social construction, sold as 'best practice' not only to protect the standing and hierarchy of (to others) arcane expertise (i.e. academic writing), but also to permit readier comprehension, by the feeble human mind, of the vast diversity of science in relation to nature and of art in relation to nature and humanity.
