Regenerational learning

Ivan Illich wrote Deschooling Society forty-six years ago. Illich was convinced that the practice of education was ultimately limited as a force for freeing the minds of humanity to make a better future for all (an outcome he did ultimately hope for) because of the character of educational institutions. As Illich put it:

“Universal education through schooling is not feasible. It would be no more feasible if it were attempted by means of alternative institutions built on the style of present schools. Neither new attitudes of teachers toward their pupils nor the proliferation of educational hardware or software (in classroom or bedroom), nor finally the attempt to expand the pedagogue’s responsibility until it engulfs his pupils’ lifetimes will deliver universal education. The current search for new educational funnels must be reversed into the search for their institutional inverse: educational webs which heighten the opportunity for each one to transform each moment of his living into one of learning, sharing, and caring.”

(He had similar concerns generally about the effectiveness of the Western social institutions that arose from modernity.)

The end of Illich’s pronouncement emphasises the association of learning with sharing and caring, which is what this post touches on.

Illich writes of ‘educational webs’ replacing existing educational institutions. An analysis of Illich’s ideas on education from a primarily technocratic perspective might conclude that deregulation and eventual dissolution of compulsory education would in itself be a sufficient condition for the kind of change Illich imagined. An ‘educational web’ could refer to something like a Self Organised Learning Environment (SOLE) as proposed by Sugata Mitra. The SOLE concept began as nothing more than a single free-to-access, web-enabled terminal mounted in a wall in a public area of a city in India, in a neighbourhood where the great majority of residents could not have been expected to have the financial means to access such technology, nor to be offered traditional educational opportunities likely to provide comparable learning (the concept has since grown into larger-scale initiatives such as School in the Cloud). The original SOLE gave otherwise neglected would-be learners the chance to engage in learning, with some notably impressive results. Mitra’s experiment was an example of sharing (giving, actually, if temporarily) which was in itself at least indirectly caring, but it was also symbolically something of a challenge to the assumption that education is desirable as an institutional public good, and hence implicitly an argument for allowing a free market in education to replace mandatory state-funded education. (Many arguments against this conclusion, and detailed commentary on how it is being applied in practice, are presented in the Hack Education blog.)

Arguments about the effectiveness of educational provision are complicated by an important fact: education is a service provided not only to those who can appropriately be discussed as autonomous economic agents (rational maximisers or not, depending on which economic orthodoxy one accepts) but also to those who do not necessarily make choices about the educational services they consume: children. Children’s families are (hopefully) the primary source of those children’s sharing and caring needs and, whether consciously or otherwise, families are the primary source of their children’s earliest learning. For a minority of children (currently somewhere around two million in the USA, for example) this arrangement continues for many years of their education through homeschooling. Rejection of institutional educational provision in general for younger learners is a growing (yet still fringe) tendency that some educational researchers, such as Alison Gopnik, have argued is based on sound principles.

What families decide to teach their children themselves varies between families. Teaching of early literacy within families is a long-established and well-recognised practice. Less typical, but also quite well recognised, is the stereotype of the extremely high-achieving child whose very well-educated parents have taken personal responsibility for delivering large parts of their child’s education (examples are easily found on certain reality TV shows). A high level of parental education is not necessarily a requirement for a high level of parental involvement in teaching: Richard Feynman remembered his not highly educated father (a uniform salesman) being a key role model in his intellectual development through incidents such as the one described below.

The next Monday, when the fathers were all back at work, we kids were playing in a field. One kid says to me, “See that bird? What kind of bird is that?” I said, “I haven’t the slightest idea what kind of a bird it is.” He says, “It’s a brown-throated thrush. Your father doesn’t teach you anything!” But it was the opposite. He had already taught me: “See that bird?” he says. “It’s a Spencer’s warbler.” (I knew he didn’t know the real name.) “Well, in Italian, it’s a Chutto Lapittida. In Portuguese, it’s a Bom da Peida. In Chinese, it’s a Chung-long-tah, and in Japanese, it’s a Katano Tekeda. You can know the name of that bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird. You’ll only know about humans in different places, and what they call the bird. So let’s look at the bird and see what it’s doing—that’s what counts.” (I learned very early the difference between knowing the name of something and knowing something.)   

I have no specific data about which topics most parents do and do not attempt to teach their children, but I strongly suspect that mathematical and scientific teaching is considerably less common than the teaching of reading. A plausible reason for that difference is that many families are probably less confident of their mathematical and scientific knowledge than of their literacy, and so are less confident teaching it. (If their confidence is problematically low then this reluctance is justifiable, since a lack of confidence could result in ‘iatrogenic’ exacerbation of learning difficulties in mathematics and science, either through misleading and/or incorrect knowledge being transferred or through children noticing increased anxiety in family members attempting to teach them such content.) Educational (possibly pseudo-educational, in some cases) tablet/smartphone apps for preschool and early-school-age children exist in abundance, and these include plenty of mathematical apps (science-related apps are less common). Many of these apps seem to have been designed with an emphasis on ‘child friendliness’ in the sense of providing stimuli that children are likely to find engaging (whether cognitively engaging or merely instrumentally engaging is not necessarily clear). What I have not found evidence of is educational apps obviously designed for children to use with their parents. (It may well be that one selling point for some apps is that they occupy a child’s attention as a substitute for input from a busy parent.)

I am currently reading (with much interest) Catherine Sophian’s The Origins of Mathematical Knowledge in Childhood. I started reading this book as part of an ongoing interest in mathematical EdTech for learners who do not necessarily respond positively to traditional mathematics teaching. In my opinion almost all learners fall into this category, and those who seem not to are usually those who have not yet encountered traditionally taught mathematics content of sufficient abstruseness, because traditional mathematics curricula have been designed to keep that content at arm’s length based on received wisdom about what learners are capable of understanding at different ages. (For an extraordinary challenge to conventional assumptions of this kind I strongly recommend investigating Don Cohen, ‘The Mathman’, who demonstrates convincingly that six- and seven-year-old children can understand pre-calculus if it is explained to them in age-appropriate formats.) My ideas for an ensemble of related visual and game-based mathematics learning apps are based on the concept that there could in principle be a surprisingly high level of continuity and consistency, in both user-experience and cognitive terms, between the earliest stages of mathematics learning and the stages that even atypically successful mathematics learners are not expected to have advanced beyond by the culmination of their compulsory education. If I am correct, and a programme of study with this range can be connected by a unifying underlying approach, then I imagine (actually I didn’t imagine it: it was suggested to me) that both parents and children could engage with that programme comfortably enough for it to become a fairly accessible and approachable vehicle for inter-generational mathematics learning, comparable with the parental practice of helping children learn to read.

Contemporary educational institutional structures and procedures are to a considerable extent legacy systems of the mass education model created for the age of industrial growth. Mass education designed to produce assembly-line workers prioritised individual competence at specific tasks. In a factory, on an assembly line, each worker has a specific task which they need to be able to repeat correctly and reliably. Management of workplaces was for a long time not formally taught but was learned by observing more experienced managers, who formed connections in a chain that ultimately linked back to the founders of factories and the designers of the assembly lines. Along that chain, wise managers had recognised that their workers sometimes thought of efficiency improvements that the managers had not, and so had given financial rewards to workers for such suggestions, in some cases making them junior managers or supervisors in order to incorporate them into the chain of management tradition.

Education historically served the factory production model by providing two components. The first was a basic competency component, ensuring that learners could at least become workers able to perform assembly-line functions reliably. Most learners were required to become assembly-line workers in order to operate the lines at high capacity, while only a small minority were required to become administrators. Because relatively few administrators were needed, and because an individual administrator had far greater leverage than an individual worker, it was rational to select potential administrators from a cadre of the highest-performing learners. This created a second component, based on competitive ranking, intended to identify which learners might be most capable of contributing positively to management traditions.

Workplace culture has moved on considerably since an education system was developed to supply it. Change has taken three key forms.

  • Routine repetitive tasks are increasingly automatable.
  • Production/service industries need to be capable of adapting to rapid technology and market driven changes to remain viable. New businesses may need to get up and running at short notice for which existing management traditions may not be applicable.
  • Changes to production/services driven by better knowledge about consumer preferences are increasing in importance compared with changes driven by production-efficiency increases. Producers can better produce what consumers prefer, as opposed to just producing a greater abundance of generically preferable items at lower cost.

These changes can be seen in negative terms:

Industries are replacing the jobs that require basic competence with automated systems.

The management institutions that the education system competitively selects a minority of learners to join are failing to manage society effectively by not succeeding in recognising or responding to consumer preferences.

Put more starkly, and in combination: the goods and services that people want are increasingly being delivered by a shrinking pool of the most successfully adaptive businesses, which are employing fewer workers. This tends towards a situation in which consumers eventually have neither realistic employment prospects to fund their consumption nor realistic opportunities to earn income by becoming producers themselves, since they will be unable to compete with the ultra-successful minority of highly adaptive businesses that have accumulated most of the existing wealth. The production process will become amazingly effective technically but will collapse financially, as consumers will not be able to pay for goods and services. This problem threatens to drastically alter what will be required of workers in future, and so to drastically alter what preparation their education needs to provide.

One effect of these changes is obvious and easily understood: education aimed at instilling the ability to follow simple, repetitious procedures is likely to decrease steadily in applicability to the working world. Basic competence education, centred on meeting various minimal standards in technical processes, will become increasingly irrelevant. For example, literacy will eventually become unnecessary for many practical purposes as control and display systems get better at using spoken language, imagery and gesture. Basic mathematical ability is unlikely to retain any greater vocational utility, and will if anything become obsolete more quickly, since it is easier to automate than natural language processing.

The effect of the three key changes in workplace culture on competitively selective education is less simple. What can be said simply is that education for management will be increasingly ineffective the more it remains based on producing learning that merely reproduces the ability to follow the processes inherent in existing management models, since these models are not working well. Education for successful management will require learners not merely to adapt to existing management processes but to improve those processes so that they are more efficiently adaptable and more responsive to consumer preferences. Education for management will need somehow to achieve effects similar to those obtained in traditional industrial management by recognising and implementing the innovations that members of the workforce suggested; only now it is not only workers who need to be consulted, but also consumers and other businesses.

I stress that successful managerial education will need to be a matter not only of producing individual innovators but of producing people capable of finding, recognising and communicating effectively with multiple innovators, and of evaluating the potential impacts of their ideas and the interactions between those impacts. The crucial aptitudes required for these tasks will be an understanding of emergent complexity and the ability to collaborate.

How can an education system teach people to understand complexity and to collaborate well, and how would such a system assess learners’ abilities in this regard?

The collaboration issue is the one that most directly challenges the existing educational paradigm. The existing paradigm is based on competitive selection of individuals, and this basis neither encourages collaboration nor makes assessment of it practical to achieve. Neither does this paradigm encourage innovation: for selection to be based on competition, assessment must be standardised, and teaching compliance with standards does not teach innovation.

The competitive selection paradigm is inherent in the structuring of many EdTech systems. SCORM (Sharable Content Object Reference Model) is the standard on which many learning management systems and content authoring tools are based, and SCORM is structured around tracking individual learners’ achievement of various learning tasks. SCORM can accommodate some degree of group assessment, but only in simple forms. SCORM is being superseded by the Tin Can/Experience API (xAPI), which differs from SCORM in that it permits learners to include records of activities which they themselves have decided constitute learning tasks, and includes more options for learners to assess each other’s learning. Tin Can has the potential to operate more like a social network for learning than a traditional online school administration system.
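To make the contrast concrete, here is a minimal sketch of the actor-verb-object structure that a Tin Can/xAPI statement takes, built as a plain Python dictionary. The learner details and activity URL are invented for illustration; only the verb URI reflects the standard ADL xAPI verb vocabulary.

```python
import json

# A minimal Tin Can/xAPI statement: "actor" did "verb" to "object".
# Learner name/email and the activity URL are made up for illustration.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.org/activities/fractions-game-level-3",
        "definition": {"name": {"en-US": "Fractions game, level 3"}},
    },
}

print(json.dumps(statement, indent=2))
```

Because the object can be any activity the learner cares to identify, statements of this form can describe self-defined learning tasks and peer assessments, not just completions of institutionally prescribed content.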

Some interesting experimental elaborations of collaborative learning approaches do exist, such as MOE and QuestToLearn. Outside formal EdTech and education practice there is an extremely clear example of organised collaborative practice: GitHub. GitHub is an open source software development community in which developers share projects and assist each other in completing them. The community provides a system for ranking its users based on how many projects they have shared and on how much they have collaborated with other users in ways those users have favourably reviewed. This ranking system is biased towards quantity over quality, but it relies on quantity to arrive at averages that are useful in judging quality: a contributor who has contributed world-beating code once, and absolutely nothing since, may well be brilliant, but it is really not possible to know with any useful degree of reliability whether or not they are.

Something like GitHub has yet to really take off in formal education. Part of the reason is that collaboration is incommensurate with competitive assessment methods, but another problem is that GitHub projects deal with problems that do not necessarily have well-defined optimal solutions, whereas problems in educational settings usually have basically complete model answers, so learners can simply look up relevant answers and seem to know more than they do. This problem is hard to avoid: making progress in learning can be very difficult without reference answers to provide corrective feedback, so a significant amount of learning needs to be learning that can easily be imitated from model answers. The practice of taking exams exists to prevent such fake learning. Exams cannot easily be incorporated into collaborative learning processes themselves, but it would not be too difficult to devise exams based on a learner’s collaboration history that were sufficiently conceptually dissimilar to the literal content of the collaborations to discern between learners who understood what their collaborative content meant and learners who were merely pretending to. Who would compile such exams, though, when so many different bespoke exams would need to be generated for a diverse set of collaborating learners? The exams would need to be devised by educators who were members of the collaborative communities to whose content the exams referred. In other words, those who can should teach, although most likely in association with professional educators possessing adequate practitioner knowledge in the domain being assessed, and ideally assisted by assessment-generating tools based loosely on existing automated testing technologies.

In the early days of mass education, it was common practice for teachers to reproduce in the classroom the division of labour characteristic of factory work. Teachers would teach a procedure to a select few learners who could most readily learn it; those learners would then act as assistant teachers who would teach it to sub-assistants, and so on, in an educational cascade. In principle this arrangement meant that each assistant was teaching at the current upper limit of what they knew how to do. I find it interesting that this approach implied that, as well as procedures being learned by various class members, the class as a whole was learning to collaborate (albeit in a very top-down style). The formally designated teacher (the schoolmaster) was of course teaching well below the upper limit of what they knew how to do, and the gap between the teacher’s ability and what they were teaching others to do was inconsistent with the spirit of the cascade principle. If the logistical possibility had existed at the time of substituting schoolmasters with a selection of learner-assistant teachers with a suitable range of competencies, then the cascade model might have been seen as the most appropriate way of efficiently delivering learning to multiple learners. Online collaborative communities offer the logistical possibility of implementing something not unlike this kind of learning cascade, if these communities were to incorporate into their reputation/ranking scores a metric for how much, and how successfully, community members had taught other members. Ideally, unsuccessful teaching would not be rewarded, but some sort of commission system would reward community members who appropriately referred potential learners to other members better suited to teach them.
This kind of practice might allow the organic growth of distributed continuous assessment that could go some way towards reducing the need for formal examinations, perhaps even to the extent that formal exams would only need to be used sparingly, to verify the reliability of the assessment pattern that collaborative communities ascribed to their members (similarly to the way that educational centres delivering coursework-based vocational qualifications assign grades to learners, with those grading decisions then subject to a statistical-sampling-based verification process carried out by the qualifications’ designers).
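As a rough illustration of how a teaching metric might be folded into a community reputation score, here is a hypothetical Python sketch. The function, its inputs and all the weights are my own invention for illustration, not a description of any existing system.

```python
# Hypothetical reputation score for a collaborative learning community.
# All names and weights here are invented; a real community would need
# to tune (and defend) such weights empirically.

def reputation(projects_shared, reviewed_collaborations,
               successful_teachings, successful_referrals):
    return (1.0 * projects_shared
            + 2.0 * reviewed_collaborations   # favourably reviewed work
            + 3.0 * successful_teachings      # direct teaching weighted highest
            + 1.5 * successful_referrals)     # "commission" for a good referral

# A member who teaches successfully outranks one who only shares projects.
teacher = reputation(projects_shared=2, reviewed_collaborations=3,
                     successful_teachings=5, successful_referrals=1)
sharer = reputation(projects_shared=10, reviewed_collaborations=1,
                    successful_teachings=0, successful_referrals=0)
```

The design point is simply that teaching and referral activity become first-class inputs to standing in the community, rather than side effects invisible to the ranking.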

Assessment (whether distributed or not) of learning for which ready-made answers can be found, rather than independently generated, relies on taking the content that has ostensibly been learned and interrogating learners under some sort of controlled conditions (such as requiring synchronous responses) about a range of related content that should also be understood if the ostensibly learned content has in fact been understood. If learning has not in fact occurred, this should be evidenced by the learner failing to recognise what constitute near and far degrees of similarity in related content. Another way of putting this: if a learner has learned some content, they should have learned the boundaries of its applicability, and should recognise how much variation in that content would result in predictable or unpredictable consequences arising from boundary violations.

Nassim Taleb’s description of the four quadrants that apply to decision making explains this principle rather well.



[Diagram: Taleb’s four quadrants, mapping decision complexity (simple/complex) against the type of randomness and strength of interaction.]

This diagram deserves some explanation.

Decisions range from simple to complex. Learned content is about a relationship between some set of entities and this relationship has some degree of complexity. For some purposes learned content can be considered to only refer to this dimension and a learner need only be able to demonstrate that they can correctly assess the approximate complexity of the relationships inherent in some test content based on that content’s apparent similarity to learned content.

The entities involved in the learned content do not exist in isolation, though, and there are cases where this needs to be taken into account. Part of assessment may involve testing whether a learner recognises the importance of these entities being connected to other entities not explicitly referred to by the learned content. Recognition of the potential significance of factors that are not explicitly referred to is an indication of what Daniel Kahneman has termed ‘System 2’ thinking.

Comprehensive assessment of knowledge of learned content by varying that content is equivalent to testing whether a learner recognises which quadrant the learned content best fits into, whether the variations being introduced would cause a different quadrant to apply, and what the implications of the change of quadrant would be.
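This quadrant judgement can be caricatured as a tiny lookup. In the Python sketch below, the shorthand labels are my own paraphrase of the quadrants, not Taleb’s exact wording.

```python
def quadrant(decision_complex: bool, randomness_wild: bool) -> str:
    """Classify a situation into one of Taleb's four quadrants.
    The labels are my own shorthand, not Taleb's wording."""
    if not decision_complex and not randomness_wild:
        return "Q1: simple decisions, mild randomness"
    if not decision_complex and randomness_wild:
        return "Q2: simple decisions, wild randomness"
    if decision_complex and not randomness_wild:
        return "Q3: complex decisions, mild randomness"
    return "Q4: complex decisions, wild randomness (fragile)"

# Assessment as described above amounts to asking: does this variation
# of the learned content move it into a different quadrant?
original = quadrant(decision_complex=False, randomness_wild=False)
varied = quadrant(decision_complex=False, randomness_wild=True)
```

A learner who has genuinely understood some content should, in effect, be able to run this classification on variations of it, and notice when a variation pushes the content across a quadrant boundary.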

This understanding is an important part of the second ability that education systems will need to be able to teach effectively- complexity modelling. It is not necessary and almost certainly not realistic for most learners to become familiar with the details of the mathematics of nonlinear systems. It is sufficient that learners gain an implicit understanding of complexity.

For example:


[Images: successive states of a two-colour cellular automaton, in which large uninterrupted single-colour regions emerge from an initially random arrangement of dots.]

These images show the development of a pattern formed by a cellular automaton. In such a system each dot changes colour depending on the colours of its neighbouring dots. This is an example of low decision complexity. On the type-of-randomness/interaction-strength dimension, this example sits at roughly fractal randomness and high interaction. The appearance of large uninterrupted single-colour regions is not obviously predictable from the dot-colouring rules or from the initial colours of any particular dots.
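A system of this kind is easy to experiment with in code. The sketch below implements a simple majority-rule, two-colour cellular automaton on a wrap-around grid; the majority rule is one plausible choice of dot-colouring rule for illustration, not necessarily the rule used to generate the images.

```python
import random

def step(grid):
    """One update of a majority-rule, two-colour cellular automaton:
    each cell takes the colour held by the majority of its 3x3
    neighbourhood (itself plus eight neighbours), with the grid
    edges wrapping around."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            total = sum(grid[(r + dr) % n][(c + dc) % n]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            new[r][c] = 1 if total >= 5 else 0  # majority of the 9 cells
    return new

# Start from a random arrangement of black (1) and white (0) dots.
random.seed(1)
n = 20
grid = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
for _ in range(10):
    grid = step(grid)
# The random speckle typically coarsens into larger single-colour
# regions: the emergent behaviour described in the text.
```

Varying the starting arrangement, or replacing the simple majority threshold with a different colouring rule, and watching which large-scale patterns result is exactly the kind of exploratory tinkering through which an implicit understanding of the system can be built up.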

Gaining an implicit understanding of the processes that this cellular automaton models would involve a learner starting the system with many different possible starting arrangements of black and white dots and watching to see what kinds of large scale patterns resulted. Various knowledge could potentially be gained, such as:

  • whether any particular starting arrangements led to particular kinds of patterns emerging
  • whether any particular kinds of patterns unusually often led to any other particular kinds of patterns
  • what any particular changes to the cell-colouring rules resulted in (in terms of which kinds of patterns became more or less commonly generated)

A particular kind of pattern would not need to be exactly defined, and indeed might only be implicitly recognised by a learner. What would matter would be not the learner’s ability to explicate their understanding of the model, but their having sufficient familiarity with the model to assess which quadrant it makes sense to consider it to be in at different times, and how altering sections of the pattern and/or its generation rules would change which quadrant it occupied.

The point of asking learners to become familiar with models of complex systems is that collaborative communities (and also competitive populations, and blends of collaborative and competitive populations- such as human economies) are complex systems. Recognising the sensitivity of some system to small changes in some part of it or to the rules affecting it would hopefully be analogous to a learner recognising the potential effects of the actions of some member of a real community on the development of that community and of other communities. Poetically speaking, the dots could represent the activities of individual people and the patterns of dots could represent larger patterns of people’s activities recognisable as social trends.

A curriculum of collaborative, community-based projects dealing with complex phenomena, assessed by communities and verified by teacher-practitioners: that would seem to be a plausible model of education for producing learning useful in the twenty-first century.

One rule to ring them all…

Standardised assessment has immense power to make educational processes extremely efficient, but that power serves a hidden purpose. Standardised assessment acts to constrain the desirable actions of both learners and educators to those that fall within a set of quantifiably comparable measures. Educators may insist that they define these measures but they are able to define them only within the intrinsic limitations imposed by the requirements of quantifiable comparability.

The dilemma of the condition of having access to great power but at the price of individual freedom of choice is reminiscent of the plight recognised to varying degrees by characters in The Lord of the Rings when they considered the twist of fate that had put Sauron’s one ring of power in their hands. The ring can be used to wield great power by those who know how, but even those with such wisdom cannot use the ring to carry out their will. They cannot truly use the ring because at a more fundamental level the ring would use them. The ring would use anyone who bore it to carry out the ends for which it was made- to rule.

There are a few other (pretty loose) analogies between Sauron’s ring and standardised assessment. The way the ring renders its wearer invisible evokes the anonymising and depersonalising effects of modelling an individual in terms of quantitative comparison measures. The physical nature of the ring’s master (whose will the ring channels), a great roving eye before whose gaze the whole world is exposed, is somewhat suggestive of the underlying motivations for the use of standardised assessment. Also, the robust and enduring pervasiveness of standardised assessment in educational practice, and the reluctance of educational policy makers to relinquish dependence on it (“My precious standards!”), to the extent that eliminating standardised assessment has come to feel tantamount to eliminating education provision itself: these too mirror certain characteristics of the ring of power.

I do not deny that standardised assessment has been tremendously useful in the building of great bastions of educational provision. The positives that have come alongside the negatives introduce another ring-esque comparison: works of wonderful beauty and goodness were wrought using the three rings of the elves, rings whose fates were bound to the fate of the one ring. All that was made with the elf rings depended on the one ring and was doomed to fade to nothing with its destruction. It is perhaps also significant that the chief glory of what the elves made with their rings was that the rings’ magic protected it from the ravages of time.

But is standardised assessment really some sort of terrible and insidious evil? If those who set standards are those who understand best that which is being learned, then doesn’t it make sense to conform to the standards that they set in order to acquire an equivalent understanding?

Objections exist to this argument. The objections are strongly related to each other and are really aspects of a single objection.

  • Those who set the standards may not understand best that which is being learned, particularly if that which is being learned tends to change faster than standards change.
  • Those who set the standards may not best understand how what is to be learned should best be learned by all learners, particularly if some learners’ learning processes tend to change faster than standards change.
  • It may not be possible to define standards which accommodate the different ways that all learners best learn without ceasing to be effectively able to compare the learning of different learners.
  • Making standards the educational bottom line may result in learners adapting to the standards such that these learners do not do what ultimately makes them learn best but rather what makes them adapt to these standards best.
  • Using standards in ways that are not completely prescriptive requires varieties of interpretation that introduce subjectivity, which standardisation ostensibly rejects.

Clear and detailed elucidations of these objections are presented by Derek Rowntree in How shall we know them?, and they have not become any less relevant in the forty years since they were first published.

An interestingly different way of thinking about assessment has been discussed recently in Measuring what matters most. The discussion relates to the inadequacy of the construct of ‘knowledge’ (in the academic sense, in terms of what is measured by standardised assessment) as a measure of learning. Knowledge is criticised for three key reasons:

  • Assessed knowledge is isolated from the context in which it was learned and in which it is supposed to be applied, as well as being isolated from learners’ broader identities. This isolation renders knowledge artificial in a way that reduces its accessibility to learners and so makes knowledge an unreliable guide to learners’ capabilities.
  • Much knowledge depreciates in value with time. What learners know at any given time may well be of considerably less importance than what learners would be capable of learning in their immediate future. The capability of acquiring new knowledge can at best be unreliably inferred from existing knowledge.
  • Learners may have tacit or inchoate knowledge that they do not recognise and which cannot be reliably measured using comparative quantitative standards.

The replacement of knowledge with the construct of ‘choice’ is proposed. Assessment based on choice examines not what learners can show that they know but rather what choices they make when a range of options is available to them. Choice based assessments would ideally be implemented as online game or gamified scenario based applications that automatically record learners’ choices. Choice based assessment is a specific area of educational practice where EdTech seems clearly to offer the potential for methodologies that would be very difficult to employ without technology, and for which the technological methods are arguably significantly superior to other methods.
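What “automatically recording learner choices” might mean in practice can be sketched very simply. Everything below is hypothetical- the names (`Scenario`, `choose`) are illustrative and not drawn from any real assessment tool; the point is only that the full sequence of choices, not just a final answer, becomes the assessed artefact.

```python
# Hypothetical sketch of a choice-based assessment recorder: a scenario
# presents options and every choice a learner makes is logged for later
# review, rather than a single final answer being marked right or wrong.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    prompt: str
    options: dict[str, str]          # option id -> description
    log: list[str] = field(default_factory=list)

    def choose(self, option_id: str) -> None:
        if option_id not in self.options:
            raise ValueError(f"unknown option: {option_id}")
        self.log.append(option_id)   # the whole sequence is kept

s = Scenario(
    prompt="Your experiment gives an unexpected reading. What do you do?",
    options={"repeat": "Repeat the measurement",
             "discard": "Discard the reading",
             "recalibrate": "Recalibrate the instrument"},
)
s.choose("repeat")
s.choose("recalibrate")
print(s.log)  # ['repeat', 'recalibrate']
```

An assessor (human or automated) would then judge the appropriateness of the recorded sequence in context- which is exactly where the design difficulty lies.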

Choice based assessment might seem to have similarities to psychometric measurements of the so-called g factor (general intelligence). Such tests are designed, as far as possible, to require a subject to devise (implicitly or explicitly) solutions to the test questions as and when they are introduced, without being able to rely on specific prior learning- this is a nice example. Independence from prior learning is actually only true of fluid intelligence and not of crystallised intelligence, both of which are components of general intelligence. Crystallised intelligence is what has been learned and stably remembered. Crystallisation of learning involves recognising and recalling concepts repeatedly, in varying forms and contexts which mutually reinforce each other.

Psychometric intelligence tests are very much standardised assessments, but unlike conventional knowledge assessments they do not measure learning; they measure some facility that is correlated with the general ability to learn, aiming as far as possible to measure fluid intelligence. Choice based assessments do not attempt to measure knowledge, but neither do they attempt to measure some generic choice-making ability; rather they attempt to measure the appropriateness of the choices learners make among those available in some particular context. A learner may make much better choices about some things than others, and the results of diverse choice based assessments would not be expected to correlate with each other in the way that psychometric intelligence measurements necessarily must in order to even be regarded as valid measures of fluid intelligence.

Choice based assessment is concerned partly with assessing fluid intelligence but also with developing crystallised intelligence. Combining assessment with learning reduces the problematic isolation related issues of knowledge based assessment, while the specific design decisions involved in generating choice based assessments can be tailored to the contexts for which crystallised intelligence needs to be developed.

To badly paraphrase the opening monologue from Trainspotting- Choose learning!

Model students

In a previous post I wrote enthusiastically about how Maker Culture has huge potential for transforming education (presumably via Project Based Learning methods).

Initially I imagined this transformation in terms of making acting as a gateway to learning science; learners would be intrinsically motivated to take part in making and would become interested in science as a side effect of that motivation. These learners would be more likely to become applied scientists rather than theorists, and the effects of a maker-led science education revolution would be expected to be more like those seen in late 18th/early 19th century Britain (where the industrial revolution got going) than in France at the same time (where there was no comparable industrial revolution despite great progress in theoretical/mathematical physics). Undoubtedly the industrial revolution did vastly more to increase human productivity than the Académie des Sciences did, but the theoretical advances academia made have had many unexpected applications, and it would be concerning to think that that style of enquiry could be neglected by a maker-inspired educational upheaval.

Reflecting on this possibility though I think that it neglects the issue of human intrinsic curiosity about the world in general rather than specifically the world of human interaction. If human cognition did not adequately motivate humans to learn about some objective features of their environment then no matter how well humans learned to cooperate with each other they would not be likely to achieve much in terms of manipulating their environment to better suit themselves.

Understanding what makes people curious is quite problematic. The explanations that psychologists have developed sort themselves into either curiosity drive or optimal arousal concepts.

Curiosity drive based explanations basically maintain that people are intrinsically motivated to reduce uncertainties in their understanding of their world and this leads them to seek information that eliminates such uncertainties.

Optimal arousal based explanations suggest that humans are driven by boredom to seek new information when they have too little stimulus (this has similarities to flow theory).

Both types of explanations are problematic considered in isolation.

A curiosity drive can explain why people would want to find information that reduced their uncertainties, but if people have an instinctive aversion to contemplating uncertainties then it is hard to understand why they would ever want to ask questions that they could not immediately answer and which had no pressing need to be answered- such people would be rather more like Eloi than contemporary humans.

An optimal arousal instinct would imply that to some extent people would enjoy asking themselves questions which they were unable to answer. The motivation for answering such questions is less easy to understand. An answered question ceases to provide much stimulation so people would be expected to continually ask themselves new questions rather than demonstrate persistence in answering any given question. Humans whose curiosity worked like this would presumably be impractically itinerant mystics.

Some sort of synthesis of curiosity drive and optimal arousal concepts might well cancel out the difficulties of both of them.

Jürgen Schmidhuber has devised the term compressionism as part of an attempted explanation of curiosity. The abstract of an article that he authored states-

I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems.   

Compressionism is very technically elegant but I would guess that it is more applicable to artificial minds than human minds- humans’ demonstrable preoccupation with other humans implies that human minds are likely to have plentiful biases about what they find interesting that do not conform to a maximally general principle of compressibility of information (unless of course the functionality of the human mind is based on that very principle- but perhaps this is the sort of question that would only occur to an impractically itinerant mystic).
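Schmidhuber's idea can be made concrete with a toy sketch. Here zlib stands in for the observer's predictive model, and a preset dictionary stands in for a regularity the observer has "learned"; none of this is from Schmidhuber's own work, it is just one crude way to see compression progress as a number.

```python
# A rough sketch of "compression progress": once the observer has learned
# a regularity (modelled as a preset zlib dictionary containing the
# repeated pattern), the same data compresses to fewer bytes. That
# improvement is the "progress" that compressionism says makes data
# interesting.
import zlib

def compressed_size(data: bytes, learned: bytes = b"") -> int:
    # `learned` models what the observer already knows about the data.
    if learned:
        co = zlib.compressobj(level=9, zdict=learned)
    else:
        co = zlib.compressobj(level=9)
    return len(co.compress(data) + co.flush())

pattern = b"the rain in spain falls mainly on the plain; "
data = pattern * 4

before = compressed_size(data)                  # naive observer
after = compressed_size(data, learned=pattern)  # observer knows the regularity
print(before - after > 0)  # True: learning the pattern yields compression progress
```

In Schmidhuber's terms the interesting moment is the drop from `before` to `after`; once the regularity is fully absorbed, further exposure to the same data yields no progress and the data becomes boring.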

But ANYWAY!- people are not exclusively curious about other people (although they are disproportionately so) and interest in learning science for its own sake is not impossible. I think that it can also be comfortably incorporated into a maker-science movement.

I think that the key to successful teaching of general science through making is models.

A model is a representation of something. In science education a model is currently something pre-made that needs to be learned. Models are often mistaken by learners (and sometimes by professional scientists) for reality itself. Attempts are made to stress to science learners that what they are being taught is not reality but models of reality, and that all models are incomplete simplifications. As a learner there is something off-putting in being told that what you are learning is incorrect and that you are being taught it because you are incapable of understanding something else that is closer to the truth. Many learners are unlikely to readily see the distinction between not being immediately capable of understanding a better approximation of the truth and not ever being capable of the same.

I suggest that for models to become in practice the powerful tools for learning about reality that they can ideally be, learners should see how models are made. Actually I am saying that learners should try to make the models themselves.

Models could be made of organisms to attempt to replicate some of the functions of the organisms. This would lead to learners/makers hypothesising how the structures of the models that they made relate to the behaviours of the organisms that the models were supposed to mimic. The proper understanding of a model is that it is an embodied hypothesis. The model that a learner makes may well not be identical to their hypothetical model because of the limitations of the materials that they build the model from (which may be less of an issue if the model is virtual rather than physical, and so is more specifically a simulation), but the model can help to consolidate a learner’s hypothetical model.

Less empathically familiar items than organisms can be modelled. Materials have behaviours (in response to stimuli at least) and so different materials can be modelled. In this way learners can experimentally hypothesise about the structure of matter. If this process is virtualised then it could be conducted as a type of construction game. The ways in which models fail to mimic what they are modelling could be almost as useful to learners as the ways in which the models succeed in their mimicry. Failures could demonstrate where something not yet recognised needed to be incorporated into a model.

Yet more abstract entities could be modelled provided they have characteristic behaviours. Einsteinian gravity is quite often modelled using a rubber sheet. Some sort of model of Newtonian gravity could perhaps be made using springs. It might look something like this: [earthspring.jpg] Or it might not, but either way it could be fun finding out.




The opposite of not even wrong

It is an unfortunate reality that most people do not like to find out that they are wrong about something. This (and more) is written about brilliantly in Kathryn Schulz’s Being Wrong.

People’s dislike of being demonstrably wrong is a foundational challenge for the process of formal education. When formal education commences, informal learning has already shaped the minds of learners to the extent that they come to formal education with various pre-existing ideas that are likely to be incorrect- not least among these are ideas about how learners themselves do and do not learn well.

If education were simply a matter of conveying information then teaching would be a lot more like computer programming (or at least like the training of machine learning based classifiers), which it decidedly is not. Education is more problematic than programming because a key aspect of education involves attempting to produce ‘un-learning’ of previously learned ideas which tend to limit further learning. The most striking examples of what requires un-learning are the various cognitive biases which seem to be intrinsic to human cognition, and which Daniel Kahneman superbly documented in Thinking, Fast and Slow.

Kahneman’s basic discovery is that human cognition appears to operate in two contrasting modes (System One and System Two).

System One cognition is fast and easily amenable to automaticity. System One cognition is inferential and associative and it operates by assuming that whatever information is most immediately available is sufficient for drawing correct inferences.

System Two cognition proceeds slowly and it precludes other substantial cognitive activity from occurring simultaneously. System Two cognition is analytical and rule-based and recognises that immediate information may be of low relevance while information that is not available may be of much greater relevance.

Kahneman’s key insight is that most people most of the time use System One cognition but at almost all times think that they are using System Two cognition.

People’s general inability to recognise their own cognitive shortcuts has a profound implication for formal education: attempts to teach ideas which, to be understood, require learners to suspend their conviction in the correctness of their models of the world will be subject to garbling translations into forms that allow the new ideas to remain consistent with learners’ existing models. Constructivism generally has recognised that learners translate what they are taught in terms of models that they construct, but Kahneman’s findings imply more specifically how such models will tend to develop. Models will tend to develop around fixations.

The phenomenon of fixation can be illustrated in problems, such as this one:

Your aim is to make a necklace that costs no more than 15 cents using the four chains below. It costs two cents to open a link and three cents to close it:  



HINT: This problem defeats many people partly because the way that it is presented creates an unhelpful implicit fixation- that four short chains must be connected.

Perhaps knowing that this fixation is unhelpful will enable a learner to ignore it, perhaps not; you may want to try and solve it. The solution is shown at the end of the post.

My experience of educational practice leads me to believe that formal education as an institution has for the most part quietly admitted defeat in attempting to undo limiting fixations (as well as unintentionally producing a host of limiting fixations of its own). Instead it has taken the approach of trying to defeat ingrained misconceptions by supplying overwhelming numbers of correct examples of how to solve problems, in the hope that repeating correct examples enough times will eventually inhibit learners’ propensity to recall their misconceptions. This is analogous to a medic treating the symptoms of an illness when they can see no way to effectively treat its cause.

John Holt critiqued the relentlessly unreflective pursuit of ‘rightness’ in How Children Fail:

Sometimes we try to track down a number with Twenty Questions. One day I said I was thinking of a number between 1 and 10,000. Children who use a good narrowing-down strategy to find a number between 1 and 100, or 1 and 500, go all to pieces when the number is between 1 and 10,000. Many start guessing from the very beginning. Even when I say that the number is very large, they will try things like 65, 113, 92. Other kids will narrow down until they find that the number is in the 8,000’s; then they start guessing, as if there were now so few numbers to choose from that guessing became worthwhile. Their confidence in these shots in the dark is astonishing. They say, “We’ve got it this time!” They are always incredulous when they find they have not got it. They still cling stubbornly to the idea that the only good answer is a yes answer. This, of course, is the result of the miseducation in which “right answers” are the only ones that pay off. They have not learned how to learn from a mistake, or even that learning from mistakes is possible. If they say, “Is the number between 5,000 and 10,000?” and I say yes, they cheer; if I say no, they groan, even though they get exactly the same amount of information in either case. The more anxious ones will, over and over again, ask questions that have already been answered, just for the satisfaction of hearing a yes.

What Holt described is a kind of perfect storm of uncorrected System One thinking (unrealistic guessing) made worse by learners successfully incorporating into their System One built schemas the implied lesson that education means giving the right answer.
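Holt's point that yes and no answers carry exactly the same amount of information can be shown directly: each answer to a well-chosen question halves the remaining range, so a systematic narrowing-down strategy finds any number between 1 and 10,000 in at most 14 questions, however the answers fall. A minimal sketch:

```python
# Binary search over 1..10,000: every question "is it <= mid?" halves the
# range whether the answer is yes or no, so at most ceil(log2(10000)) = 14
# questions are ever needed -- guessing gains nothing.
def questions_needed(secret: int, lo: int = 1, hi: int = 10_000) -> int:
    count = 0
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if secret <= mid:   # a "yes" shrinks the range...
            hi = mid
        else:               # ...and a "no" shrinks it just as much
            lo = mid + 1
    return count

print(max(questions_needed(n) for n in (1, 65, 8_472, 10_000)))  # 14
```

The learners Holt describes who cheer at a yes and groan at a no are, in effect, valuing the emotional payoff of an answer over its informational payoff- which for this strategy is identical either way.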

Nassim Taleb observed something not too dissimilar in his analyses of the differences between how financial decisions were made by those who were and were not formally educated in mathematics and statistics. Taleb worked as a Wall Street hedge fund manager and derivatives trader during which time he both made and saw others making many high-risk trading decisions involving very large sums of money.

Taleb came to recognise that traders basically came in two types; there were traders who were formally educated about financial mathematical theories and there were heuristic using traders who had learned trading as a practical art through a combination of experiment and imitation of trading techniques practised by more experienced traders that had been observed to be successful.

Taleb’s observations of these two types of traders ultimately led him to conclude that there was one very important difference between their effectiveness as traders. The difference was not that theory based trading worked better than heuristics based trading, but that heuristic traders were much better at recognising when their heuristics could not be relied on to produce profitable trades than formally educated traders were at recognising when their theories were not reliable guides to what was happening in trading markets. Formally educated traders were significantly worse at recognising when they were wrong than traders who had not been formally educated (this meant that they sometimes lost LOTS of money- more than the sum of all the money that they had ever previously made). Note that this shows how very analytical, theory based thinkers could make the classic System One error of assuming that they had all the information that they needed and that whatever they did not know they did not need to know.

I have to wonder if this has got something to do with formal education in mathematics being consistent with the approach used in formal education in general of making learners care primarily about being right and not making them care so much about how wrong they were if they were wrong.

The assessment schemes used in formal education typically assume a baseline of no knowledge, for which no reward is given. Correct knowledge is rewarded in approximate proportion to the correctness of answers- some answers are more correct than others. Very rarely do assessment schemes try to accommodate the idea that some answers are more incorrect than others, and even more rarely the idea that some incorrect knowledge may be significantly worse to have than no knowledge.

The physicist Wolfgang Pauli is said to have responded to some unappealing ideas suggested to him by other physicists by declaring that those ideas were ‘not even wrong’, apparently meaning that they were contradictory or unfalsifiable (or both, and perhaps inseparably so). An idea that was right because it could not be wrong was of no use.

I have a rough idea for an educational mathematics game (which would probably be called ‘Wronguns!’) in which learners try to generate the most incorrect mathematical statements that they can devise. Working out the scoring system might be tricky though.
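For what it is worth, here is one entirely speculative stab at that scoring problem, restricted to wrong arithmetic claims: score the wrongness of a claimed sum by the size of its error, log-scaled so that being off by a million beats being off by one. The function name and scoring rule are my inventions, not part of any existing game.

```python
# Hypothetical "Wronguns!" scoring rule for claims of the form a + b = c:
# wrongness grows with the absolute error, log-scaled so wildly wrong
# claims score much higher than near misses, and a correct claim scores 0.
import math

def wrongness(a: float, b: float, claimed: float) -> float:
    error = abs((a + b) - claimed)
    return 0.0 if error == 0 else math.log10(1 + error)

print(round(wrongness(3, 4, 7), 3))          # 0.0  -- correct, scores nothing
print(round(wrongness(3, 4, 8), 3))          # 0.301 -- a near miss
print(round(wrongness(3, 4, 1_000_000), 3))  # 6.0  -- a proper wrongun
```

Even this toy version shows why the design is tricky: a log scale rewards absurdity, but an unbounded claim ("3 + 4 = infinity") would break it, and statements that are not even wrong would need a category of their own.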




The solution to the chain problem

The solution is actually to first completely open all three links of one of the chains and use the resulting three links to join the three remaining chains.
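The arithmetic checks out against the budget (assuming, as in the standard version of this puzzle, four chains of three links each):

```python
# Cost check for the chain solution: open all three links of one chain
# (2 cents each) and close each of them around the remaining three chains
# (3 cents each).
OPEN_COST, CLOSE_COST = 2, 3
links_used = 3
total = links_used * OPEN_COST + links_used * CLOSE_COST
print(total)  # 15 -- exactly the budget
```

The fixated approach of joining the four chains end to end needs three joins too, but leaves you opening and closing a link from the end of each chain- which works, costs the same 15 cents, and yet many solvers never find either route because they fixate on keeping all four chains intact.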







I just attempted the online Wason selection task, which is a simple test of understanding the application of conditional rules. Although it is simple, most people get it wrong. It seems that cognitive bias makes most people misinterpret the meaning of the conditional rules.

Even though I had heard of this task some time before (and even heard the solution and explanation of it), I still made the same basic mistakes attempting it that 75-80% of people do, probably because of the same cognitive bias.

Also like most people, I got the correct answer when the selection task was presented in a familiar social interaction scenario. Much as I sometimes imagine that I am an atypically abstract thinker, there is a part of my thought process that exhibits a very typical ability to understand how a rule applies when what the rule applies to is whether human beings are or aren’t complying with social rules.
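The abstract version of the task has a mechanical answer that the code below makes explicit (using the classic card set E, K, 4, 7 and the rule "if a card has a vowel on one side, it has an even number on the other"; other presentations vary the symbols but not the logic). To test "if P then Q" you must turn exactly the cards that could falsify it: those showing P and those showing not-Q.

```python
# Wason selection task: which cards must be turned to test "if P then Q"?
# Only cards showing P (the hidden side might violate Q) and cards showing
# not-Q (the hidden side might show P) can falsify the rule. Cards showing
# not-P or Q are irrelevant, yet Q is the card most people wrongly pick.
def cards_to_turn(cards, shows_p, shows_not_q):
    return [c for c in cards if shows_p(c) or shows_not_q(c)]

cards = ["E", "K", "4", "7"]
is_vowel = lambda c: c in "AEIOU"
is_odd_number = lambda c: c.isdigit() and int(c) % 2 == 1

print(cards_to_turn(cards, is_vowel, is_odd_number))  # ['E', '7']
```

The common error- turning the "4" to look for a vowel- checks a card that cannot falsify the rule, which is exactly the System One shortcut of reaching for confirming rather than disconfirming evidence.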

Why is this true of most people?

One reason could be that the social contexts of problems occur more regularly than other contexts. That is not necessarily true in terms of how frequently these sorts of problems actually happen, but it may well be true in terms of how often people even recognise these problems as problems that they could solve. Another way of putting this is to say that people are more predisposed to think about other people’s thoughts and actions than about the relations between the states of inanimate objects. This implies that a person’s readiness to seek to learn about something may depend less on the nature of the thing to be learned about than on whether other people are interested in learning about it- people want to know about what other people want to know about.

I am currently reading (and enjoying) Free to Make which is about maker culture. Maker culture is all about people who like making things, and not simply as a way to possess things but for the enjoyment of making them. Wikipedia states that “Maker culture emphasises informal, networked, peer-led, and shared learning motivated by fun and self-fulfilment”. The key terms in the quote seem to me to be learning, peer-led, shared and fun. Amateur manufacturing is not (currently anyway) a realistic basis for meeting the material needs of the people of the world, but it is a powerful means of teaching people very valuable skills and attributes- familiarity with a wide range of technologies, problem solving, resilience, inventiveness and collaboration.

I can conceive of maker culture as superseding the culture of school and academia in the business of producing the dynamic and creative knowledge workers that will supposedly characterise the workforce of the successful early 21st century economy.

I think that basically most people need to find human-feeling meaning in whatever they are willing to invest themselves in. Academic teaching of science and technology does not tend to provide this well. That is hard to avoid, considering that understanding the physical universe requires abandoning a teleological way of modelling the world and accepting one based on abstract rules and objects purposelessly following those rules. It does not surprise me much that studying such phenomena would alienate a lot of learners. If, though, the inhuman worldview of science is engaged with initially as a means to a much more human end- that of making something that a human chose to make (and so invested their own purpose in), better yet making it with other people, and better still showing it to yet other people- I can see how people who would otherwise never see STEM subjects as part of their interests could easily become enthused.

I have strong hopes that the best part of the future of EdTech is in its potential to turn learning into something that people make rather than something that people do.