Playforming

Although I have not done any game development for some time, I have been reflecting on game-based STEM learning. One of my work colleagues demonstrated a chemistry game that they had been involved in developing, and I had also vaguely considered PhD study (as if I had time!) on game-learning themes. While making some visual notes I stumbled across some interesting thoughts.

I’ve been thinking about the performance aspect of game-playing. In STEM learning, there are some schools of thought in which performance is a dirty word. These pedagogical perspectives propose that STEM subject learning can all too easily be perceived as a battery of performance challenges, and that this is a recipe for cascades of inhibitory anxiety in western educational cultures in which performance ability tends to be perceived as related to ostensibly intrinsic, largely immutable, individual characteristics.

It is often noted by enthusiasts of game-based learning that learners tend to have much less performance anxiety in game contexts than in formal learning contexts, and demonstrate comparably greater resilience. When playing games, it seems that people are quite open to tackling challenges and overcoming them.

One fairly straightforward interpretation of this attitude contrast is that game-playing is a basically low-stakes activity; being a mediocre performer at some game is not generally perceived as a value judgement on a person in the way that a formal learning challenge might be. With nothing of great import to lose, why should there be anxiety? This hypothesis suggests a potentially major problem for the future of game-based learning. If it becomes an established and accepted truth that formal-like learning is possible through game-play, what was seen as harmless fun may become a serious measure of a learner’s sense of self-worth, with all the concomitant anxiety that such a measure implies. Play becomes work.

Another aspect of performance in game playing is the notion of the importance of unpredictability and the role of what I term high-pressure flow. The flow I allude to here is the flow state made famous by Mihaly Csikszentmihalyi, which is basically a state of optimal stimulation balanced somewhere between low-stimulus boredom and high-stimulus anxiety. The thought of such a flow state suggests to me two markedly different conditions. One kind of flow is a kind of detached relaxation in which one feels able to do what one wishes, but that activity may not be, well, very active (at least outwardly). The word reverie perhaps invokes what I aim to convey, in the sense of a feeling of deep understanding, of seeing connections between things whose connections had not before seemed apparent. This kind of flow (the low-pressure kind) could be thought of as the preparatory basis of all sorts of creative (and synthesising) activity that would follow from the insights stemming from immersion in the flow state.

High pressure flow corresponds more to performance and pushing at one’s limits while focusing narrowly on some specific activity. This strikes me as being associable with the phenomenon of hormesis. Hormesis is the ability of organisms (and perhaps other dynamic systems) to ramp up their capacity to perform the functions that they normally perform; as a matter of course in workaday life these functions are not performed to anywhere near their peak capacity (as is only prudent, since otherwise, should these functions be even somewhat compromised, the organism would be unable to function adequately under typical circumstances). High pressure flow activities are characterised by high stakes error avoidance, with even quite minor errors tending to be unrecoverable once made, requiring a return to some remote starting point and a repeat attempt to progress from there.

It seems plausible to me that high pressure flow is a feature of a lot of game playing because games that can be easily defined (and hence remembered and shared) are more likely than not to be based around some sort of fairly prescriptive goals or conditions rather than being conducive to free-form associations of ideas. When games are designed for learning, especially in STEM contexts, the specificity of the purpose of play is likely to be all the more pronounced (although games that are hugely free-form, such as Minecraft, have been influential in the STEM learning game world).

In STEM subjects, the concepts of methods, rules, and deterministic processes are of great significance in developing learners’ understanding of the subject content. Methods and rules are expected to be well defined enough that the processes that they are used to model can be specified and predicted. Prediction and control are arguably the raison d’être of applied science. The precision and accuracy requirements of STEM learning have a commonality with the high stakes error avoidance outlook of high pressure flow, but without the imposition of the time constraints and hormesis-prompting difficulty spikes typical of high pressure flow; the very factors that most strongly characterise that flow state (taking exams could be a notable exception). I found myself trying to put this across to a colleague of mine recently by saying that when the way to win a game involves planning and specifying a procedure beforehand and then executing the procedure, the game is not really being played any more so much as it is being witnessed. Designing a procedure that could have been executed by an appropriately instructed machine is not so much playing a game as telling a machine how to play the game on your behalf. The ‘live’ experience of being the game’s player fades, and along with it, I suspect, does the much vaunted intrinsic motivation that game playing is supposed to offer.

There is in principle a role for low pressure flow in STEM learning in that the low pressure flow state has affinities with more experimental activities in which curiosity is more of a driver than the intent to perform well. Learning activities based around weakly directed experimentation are notoriously ineffective at translating into measurable learning, at least one reason for which may be that what learners learn (and when they learn it) when they undertake such activities cannot be known in advance, so anyone wanting to measure the results of such learning has a difficult job to do. Some sort of precarious balance probably exists wherein the scope of activities is narrow enough to make the meaning of them clear, but also wide enough to make the ways in which these meanings are discovered conducive to a sense of free exploration (medium pressure flow?); STEM learning game design looks like it has a long way to go to find that balance.

Specialist Arguments

I have recently finished reading The Enigma of Reason, which I first mentioned ages ago in this post. The Enigma of Reason (hereafter EoR) provides an attempted explanation for the ways that human thinking operates, including its various shortcomings. In summary, EoR makes two main arguments. The first argument is that when an individual person thinks about how something works or why something happens as it does, in evolutionary terms it is advantageous to that individual to be satisfied with the first explanation available to them that is not so obviously incorrect that its incorrectness is unmistakable. Favouring availability over rigour can be seen as useful when it is considered that once an explanation is available to an individual, any time and effort expended on finding better explanations may be a waste if the first explanation found is adequate and the possible benefits of the better alternatives are largely superfluous (assuming that better explanations even exist, and are possible for the individual to recognise). Another way of saying this is that people have evolved to be effective ‘satisficers’ (within the environments in which they evolved, at least). The second main argument made in EoR is that people have evolved to scrutinise information gleaned from each other for how such information may be deceptive in various ways. Put simply, we very easily convince ourselves of almost anything that we happen to think of (even though much of what we think is deduced implicitly and subject to a host of biases), but cannot anywhere near so easily convince others, nor be so easily convinced by others ourselves.
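
As a toy illustration of the satisficing idea (my own sketch, not anything taken from the book), the contrast between accepting the first adequate explanation and exhaustively searching for the best one might look like this; the plausibility scores and threshold are entirely invented:

```python
# A toy contrast between satisficing and optimising over candidate explanations.
# The 'plausibility' scores and the threshold are made up purely for illustration.

def satisfice(explanations, plausibility, threshold=0.6):
    """Return the first explanation that seems good enough."""
    for e in explanations:
        if plausibility(e) >= threshold:
            return e
    return None  # nothing adequate was found

def optimise(explanations, plausibility):
    """Return the best explanation, at the cost of evaluating every candidate."""
    return max(explanations, key=plausibility)

# Example usage with an invented plausibility function.
scores = {"the wind blew it over": 0.7, "a cat knocked it off": 0.8, "a poltergeist": 0.1}
explanations = list(scores)
print(satisfice(explanations, scores.get))  # stops at "the wind blew it over"
print(optimise(explanations, scores.get))   # inspects all and picks "a cat knocked it off"
```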

According to EoR, it is a mistake to think that humans have evolved any generalisable explicit problem solving facility. Rather, people’s thinking is basically implicit and contextual. This argument goes further than dual process models of cognition (most famously Kahneman’s System 1 and System 2 thinking) by contending that not only is most human thinking not what might be termed rational and objective, but that actually none of it is, and that it does not need to be so to generate behaviour that approximates to what rationally minded humans would produce.

According to EoR, the interaction of solo satisficing and distrust of others allows groups of people to be quite effective at collectively making inferences about how many aspects of the world are ordered (assuming that their implicit knowledge about the world is at least vaguely helpful). The way that this effectiveness is made possible is that when some other person tells us that some x is in some way y, our instincts are to ask ourselves why we would or would not believe what that other is telling us about the relationship of x and y, and via that to reflect on what we already think about the relationship of x to y, and to what extent we continue to be satisfied by our prior assumptions about x and y, and to perhaps discard assumptions that previously seemed good enough to be accepted in favour of new reasons that now seem better. It has definitely been my personal experience that having other people to discuss my thoughts with helps me to detect the less helpful thoughts, and I have also often found that such discussions prompt me to some very interesting thoughts that I do not think would have occurred to me in isolation.

An assumption that EoR makes (I personally think it could be made clearer) is that in the environments in which humanity evolved reasoning abilities, individuals less inclined to update their assumptions about the world based on discussions of those assumptions with others would tend to be disadvantaged and less likely to survive and reproduce. Those who learned whom to believe, and about what, such that they became recipients of optimal survival information would tend to be the fittest. By the same logic, groups which optimised reciprocal exchange of optimally useful information would tend to survive better than groups that either allowed their members to accept too many unhelpful beliefs or restricted useful beliefs to fewer of the groups’ members. The groups that learned the most useful information about the world were those that generated the most beliefs, shared those beliefs most widely, and scrutinised them most while sharing them. These groups learned to survive better than other groups and their learning practices became the human norm (so says EoR as I understand it).

The human world of today has in many respects become enormously different to the world in which human reasoning evolved. I find it interesting to consider how the differences between today’s human world and that of our distant ancestors might interact with the mechanism of human learning proposed by EoR.  

What seems to me to be the crucial difference between now and long ago in terms of human learning is that in the world now the connection between knowledge and utility has been greatly weakened, or at least occluded. In some remote hunter-gatherer past, argumentative discussion (accepting the EoR hypothesis) had enormous survival value. The kinds of things that people might have argued with each other about would be matters of immediate consequence, like how to best obtain healthy food, find shelter, and avoid dangers. Having wrong ideas about eminently practical considerations would have clear and severe consequences. Those people who could not see the importance of adapting their beliefs about the world when those beliefs were demonstrably and expensively wrong would be naturally selected against. Our ancestors had a lot of ‘skin in the game’ when it came to their being right or wrong about things. 

Fast-forward to today, and much of what people know and discuss with each other (for the social mainstream in the developed world at least) is quite dissociated from what will immediately affect their survival opportunities. People retain their instincts for argumentative debate, but if being shown to be wrong in such arguments does not obviously and immediately impinge on survival, then the loser of an argument does not change their knowledge of the world as a result of losing an argument, but rather changes what they choose to argue about, and/or who they choose to argue with. The argumentative instinct exists separately from the survival instinct, and nothing prevents the argumentative instinct from prompting actions that run counter to the survival instinct if threats to survival are rare and unexpected. Consequently, people can fail to learn a plethora of apparently very elementary behaviours while becoming ever more skilled at constructing arguments about why they do not in fact need to learn such things. For as long as these arguments remain untested in practice (as long as the people making them continue not to have their survival, or at least comfort, jeopardised) then people can continue to be convinced that they have nothing to be concerned about.

It (hopefully) goes without saying that if the premises of EoR are to be believed, the implications for education, and indeed for culture, are pretty huge. The central implication seems to me to be that in general people will only learn behaviours related to ensuring survival rather than to winning arguments if they have been convinced to do so by some combination of argument and brute necessity. It seems plausible too that arguments that have been historically effective at prompting survival ensuring learning have tended to be effective in significant part due to their associations and correlations with brute necessity, in the sense that they have been the arguments wielded by various social authorities that have ultimately punished non-compliance to the extent of negatively affecting survival opportunities.

A highly unbalanced social situation such as that described above can be maintained (although not indefinitely, I suspect) to the extent that the incentive for people to win arguments results in some critical minimum part of the population being interested in arguing about how certain technologies work, technologies that can be leveraged to fantastic degrees in productivity terms. To put this crudely, if a small group of extremely geeky people compete with each other to show that they know more than each other about certain technologies that allow much of the labour necessary for the material comfort of the population to be automated, the technological fruits of their arguments can be exploited by an administrative/political wing of society sufficiently well to make up for the productivity losses incurred by the majority of the population becoming ever less concerned with learning things that would be useful for their survival.

I think it is a reasonably fair characterisation of the way that the world economy has developed to say that it has become based on ever increasing the productive leverage that can be exerted by the technological output of a geek-elite. Seeing things this way, I think it is a reasonably fair characterisation of the way that the world education system has developed to say that it has become based on ever increasing the supply of net geekiness, whether this be by increasing the number of geeky people or by increasing the geekiness of the geek-elite. I increasingly find myself thinking that this approach is unsustainable, in that it is increasing the non-geeky majority’s dependence on technology to eliminate the need for their learning survival-oriented knowledge faster than it is increasing the global net geekiness required to produce the rate of technological innovation that can compensate for the growing dependence of the majority. Additionally, it is my suspicion that the world’s education systems have been set up to have no alternative response to this unsustainable disequilibrium than to further exacerbate it by squeezing the population ever harder to extract more geekiness from them, adding more and more requirements for formal and theoretical education, and in doing so further distancing the population from incentives to learn about more immediate and practical considerations, increasing the rate at which they become dependent on technocratic solutions to their problems. I suppose I am not saying anything here that Ivan Illich did not say in Deschooling Society.

Perhaps surprisingly though, I am inspired to optimism by the challenge of a humanity that is ill-adapted to the world that it has made for itself, as this points a way to how humanity could re-make a world to which it would be better adapted. This post has gone on long enough, so a discussion of this hypothetical way ahead will appear in a future post, but suffice it to say that it will build on thoughts expressed in some previous posts.

The Core Assumption

When I was still teaching in the traditional way (by which I mean the way that predates digital networked media technology), the scope of what I imagined the effects of my teaching might be on the world was largely constrained by the operational principles of the educational system in which I worked. One teacher in one place can be assumed to make no real difference to the way that the world goes. The inspirational teacher story of Freedom Writers, based on real events (I am not sure how closely), is a pretty striking counterexample to this fatalism, but this story very starkly highlights just how much a teacher who wants to really redress the failures of the system they work in must be prepared to sacrifice to do so, and for a victory that may be tremendous in terms of the lives affected by it first hand, but which is in the end a drop in the ocean. I chose to work in EdTech because of the possibility of building functionally superior education systems that could offer people hopeful possibilities that the world’s failing educational systems and institutions increasingly could not.

In vastly more concrete and functional terms, I had ideas about how technology could help make learning mathematics and related subjects much more accessible, so that learning such subjects successfully could be a realistic aim for a majority rather than merely for a minority. This goal, and how it might be achieved, has two appealing qualities: firstly, it is tremendously rich and deep, and secondly, it seems obvious that it is a worthwhile endeavour. So obvious in fact does the worthiness of the undertaking seem that until recently it had never occurred to me in any properly conscious way to even wonder if it actually was as important as I supposed. What reasons do I have then for thinking it good that mathematical understanding become much more widespread?

Giving some conscious deliberation to this question fairly rapidly made it clear to me that I was considering mathematical understanding as a proxy for the better use of a more general kind of intelligence, something like Kahneman’s System Two thinking. A world of people in the grip of System One thinking is a world where everyone makes only unreflective decisions based on being convinced that what they know is by definition the only thing that they need to know. This is a world in which people only learn things the hard way, and anything too hard to learn is never learned. A pessimist might say that this is the world we actually live in, and there is some truth to that, but it is hardly the whole truth.

Accepting a healthy dose of pessimism, I must concede that learning mathematics (and physics, statistics, logic, etc…) does not ultimately drive out the demons of System One thinking even a little distance beyond the context in which these things are learned. As learning is detached from life in general, so are the fruits of that learning. One of the enduring themes in (some) EdTech dreaming is the notion of embedding learning in the broader pattern of living. Quite recently, this was a fairly prominent theme. M-Learning (both mobile and micro), lifelong learning, and learning on demand were popular buzz-terms. More recently, educational institutions (and the funds that they represent) have become more EdTech enthusiastic. Project based learning, based around hubs that map conveniently to existing educational infrastructure, has taken more of a foreground position since then. PBL could be a way of merging the world of work with the world of education; schools becoming makerspaces promoting design thinking and STEM skills.

When I was teaching, I would stress to my tutees preparing for higher education or employment that they needed to give serious consideration to what they could learn to do that would not be easily automatable in the near future. Jack Ma took this idea to something of an extreme in the speech where he argued that students should not be learning anything that a machine could do, because machines will outsmart them soon enough. Ma’s extreme statement does not perhaps go very much farther than Ken Robinson’s ideas on individuality and creativity. Robinson is critiqued rather well in this blog post, and I concur that Ma and Robinson are wrong to say that we can avoid being outsmarted by machines purely by doing different things than they do, because if we do not understand what machines do, then how do we know what would be different to that? I should have said to my tutees (not that they believed my warnings) that they should anticipate that in their future they would be competing for jobs with a computer, and that, as in any other competition, they should want to know anything about their competitor that they could find out in case something in that would help them to win the contest.

The question that bubbles up from these insights (hopefully) is the question of what kind of skill and/or intelligence is really going to be helpful for most people, now and looking ahead. In the twentieth century there was almost no question that what basically mattered was IQ. The dominance of IQ has been such that it has become uncontroversial for people to see IQ as a synonym for intelligence. I would argue that IQ is more properly a particular kind of intelligence, but the only kind that can effectively be systematically measured, and the kind that generally mattered most in a world overwhelmingly organised along systematic approaches to solving problems. Human evolution, and the ways of learning that come most easily to most humans, are not particularly systematic; they are far more heuristic. Heuristics can be very effective in many situations, but they do not tend to travel well or to scale well. The effectiveness of heuristics in the particular niches in which they apply can easily act as a disincentive to painstakingly learning systematic alternatives to the heuristics, as the heuristics so often work better (up until the point where they don’t).
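
The change-making problem gives a compact (and admittedly artificial) illustration of a heuristic working beautifully in its home niche and then failing to travel: greedy “take the biggest coin that fits” happens to be optimal for familiar coin systems but not for all of them. A quick sketch:

```python
# Greedy change-making: a heuristic that is optimal for some coin systems
# and quietly suboptimal for others. A minimal illustration of heuristics
# that work well in their niche but do not travel.

def greedy_change(amount, coins):
    """Repeatedly take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(30, [1, 5, 10, 25]))  # [25, 5] -> optimal (2 coins)
print(greedy_change(30, [1, 10, 25]))     # [25, 1, 1, 1, 1, 1] -> 6 coins,
                                          # but [10, 10, 10] would use only 3
```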

The clash between the short-term gains and the long-term opportunity costs of relying on heuristics is something that I think could be ameliorated in two main ways.

The first way would be to teach the heuristic and systematic approaches to learning things in parallel, only making connections when learners are interested in making them. There exists a qualification called “Use of Maths”. The name is misleading: the course content is barely any different from applied maths. But what if “Use of Maths” did exactly what it said on the tin? What if it were a subject that taught people heuristics that would be helpful in their lives, things that were mathematically derived but did not need to be understood systematically? People do not need to know how to calculate compound interest to be taught and to remember that a 25% APR on a loan is not a particularly good thing to be offered. The collection of useful facts that people can know about dealing with money, and more generally with risk, as well as how to better recognise manipulation and deception relating to quantifiable events, are things that can be learned pretty much in isolation from their systematic underpinnings, by whatever heuristic methods people find help them to remember the key facts.
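
To make the 25% APR point concrete, here is the systematic calculation that the heuristic quietly stands in for (a rough sketch that assumes annual compounding, no repayments and no fees):

```python
# Compound interest at 25% APR, compounded annually: the systematic fact
# behind the heuristic "a 25% APR loan is not a good thing to be offered".
# Figures assume no repayments and no fees, purely for illustration.

def balance_after(principal, annual_rate, years):
    """Balance after `years` of annual compounding at `annual_rate`."""
    return principal * (1 + annual_rate) ** years

for years in range(1, 5):
    owed = balance_after(1000, 0.25, years)
    print(f"After {years} year(s): {owed:.2f}")
# After 1 year(s): 1250.00
# After 2 year(s): 1562.50
# After 3 year(s): 1953.12  <- an unpaid balance roughly doubles in about three years
# After 4 year(s): 2441.41
```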

The second way of aligning heuristic learning with the painstaking effort of systematic learning would be to make systematic learning less painstaking. One way of doing this would be through interacting with artificial environments in which the heuristics that applied taught things that corresponded with, rather than clashed with, what systematic approaches to learning would teach, such as games constructed to teach mathematical procedures while also being enjoyably playable; your brain would like them even if you didn’t.

What I’ve more and more come to realise is that no one really knows with much surety what the important things are for people to learn to prepare them for the future. I do still think though that whatever approaches reduce the insight lag between people’s System Two and System One thinking ought to be worth trying.

Writing

It has been quite a while since I last posted. I have had a lot less time to post since I returned to full time work. I am now wholeheartedly working in EdTech, and hope to continue to do so. It is sometimes strange to think that I am perhaps unlikely to teach in a classroom again, but life can be strange. 

Working in EdTech has turned out somewhat differently than I imagined it would. Rather than the kind of technical role I thought I would find, I am working as a Subject Matter Expert. My work is primarily concerned with writing, and I mean writing subject content rather than writing code.

When I was teaching for a living, it was very obvious how unwilling most of the learners I encountered were to read even a very tiny amount, and the reading that these learners were willing to do was not what I recognised as reading so much as it was copying (in the sense of the action that immediately precedes pasting). Learners would hold what they read (or thought that they had read) in their mental copy buffers just long enough to transcribe it (often severely mangled) to some repository external to their biological memory. Communication of information in such a reading (or more accurately, such an anti-reading) culture as this necessarily had the attributes that I think can be aptly characterised as ‘high noise, high redundancy’. The assumption had to be made that what was communicated would not be understood until it was repeated several times (if then) and that the content being communicated would not be stably retained, or even received, and so the forms suitable for communication had to be those which were as robust and self-correcting as could be achieved. This requirement meant that broad, general principles of the subjects being communicated were far more amenable to reception than precise details; this shift of focus over a number of years led me to think about such principles in ways that I might not have otherwise considered. The inevitable variability in communication approaches necessitated by ubiquitous noise and the need for repetition naturally led to the formation of multiple representations of subject content. Broadly speaking, these conditions promoted the development of a meta-cognitive approach to the communication of subject content. The approaches necessary for dealing with non-intrinsically motivated learners seemed to me to be potentially highly cross-applicable to much more engaged learners, a small number of whom I was fortunate enough to teach and who seemed to enjoy not just learning the content that I was charged with teaching them but reflecting on how it was that they were or were not succeeding in understanding the content.

I cannot stress enough how great was the gulf between the intrinsic motivation of some learners I taught and what was implied by the expectations implicit in the official documentation of the qualifications that these learners were nominally engaged with. I recall one particular learner, a tutee of mine, who was really extraordinarily, staggeringly unmotivated. This person seemed to me to have something of the same feeling for apathy that Schopenhauer had for pessimism. Schopenhauer maintained that we would be objectively better off if we had never existed, and equivalently this learner seemed to regard any sort of deliberate intention to do anything that was not a route towards ever further disengagement as the most futile kind of masochism. In a tutorial that involved applying mental imagery to the recall of surprisingly long lists of items, this person caught me off guard by extremely uncharacteristically asking an unprompted question. He apparently found it so disconcertingly bizarre and counter-intuitive that anyone would go to the bother of generating their own mental imagery when an effectively inexhaustible supply of imagery could be obtained online that he could not help actually being curious enough to ask me why I would want to waste my time auto-generating images when (as he put it) someone else could do that for me. When I told him that compared to what I could imagine, the input from YouTube etc… was generally cloying and numbing, he appeared to interpret this as my outrageous bragging, at which point he went back to his usual not caring.

This anecdote may or may not have anything useful to say about education, but I have come to think that I have explicitly said plenty up till now about educational theory and suchlike. Colleagues of mine in my new job have suggested that I should explore writing more narrative (and less theory presumably), maybe to see what ideas develop from that, or maybe just to practise better writing. Writing is now my job after all.

Starting with something new

In a previous post I wrote about what I called Open Content Learning Webs (OCLW). These are hypothetical community-run (and ideally community-owned) trans-disciplinary knowledge-sharing websites that assess the ability of their users based on the community’s responses to the content contributions that users make to the OCLW.

It might be supposed that requiring learners to contribute new content in order to be assessed would be a serious limitation in terms of assessment; how can inexperienced new learners be expected to generate new content? Educational taxonomies tend to rank creation as a very high level mode of learning activity, appropriate for learners who have achieved fluency in various precursory activities (like remembering and applying).

What this apparent difficulty misses, I believe, is that new learners’ contributions need not be content that is in any sense foundational to the subjects to which they contribute, but can instead be novel applications of existing content: not necessarily remarkable or ingenious applications, but simply applications not already specified. New learners need not contribute new knowledge but only new uses for knowledge. Actually, a contribution need not even be a new use for knowledge. A contribution could be concerned with already specified applications if it provides information to the community that facilitates community members’ location and/or recognition of uses of existing knowledge. A vast amount of knowledge exists today that could have a multitude of transformative effects on the lives of a multitude of people if only those people were aware that it existed and how it could benefit them.

This kind of novel application and consolidation of knowledge has started to impinge on mass awareness, such as in the form of the phenomenon of ‘life hacks’. Life hacks, however, are so far not generally seen by educators as having much at all to do with education or learning. Education as a formal process tends to characterise life hacks (if it acknowledges them at all) as laziness-inducing distractions from learning that insinuate to learners that many of the problems they perceive as relevant to them can be satisfactorily addressed without very much recourse to the formal process of education. That educators deride the usefulness of life hacks even though life-hack-curious learners might in fact be correct about their efficacy is indicative of the implicit need for education as a formal process to hold the exclusive right to determine what is and what is not useful for a learner.

Open Content Learning Webs would be communities where those who did not have specialist subject knowledge to contribute would acquire merit within their community by devising and sharing life hacks and by recognising how the knowledge existing in these learning webs could be used to make more and better life hacks.         
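
To make the assessment mechanism a little more concrete, here is a minimal sketch (entirely hypothetical, not drawn from any existing OCLW design; all field names and weightings are invented) of how community responses to contributions might be folded into a contributor’s merit score:

```python
# Hypothetical sketch of merit scoring in an Open Content Learning Web.
# Field names and weightings are invented purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Contribution:
    author: str
    description: str       # e.g. a life hack, or a new use for existing knowledge
    endorsements: int = 0  # community members who found it useful
    reuses: int = 0        # community members who built something on top of it

@dataclass
class Community:
    contributions: list = field(default_factory=list)

    def merit(self, author: str) -> int:
        """A contributor's merit: reuse is weighted more heavily than endorsement."""
        return sum(c.endorsements + 3 * c.reuses
                   for c in self.contributions if c.author == author)

web = Community()
web.contributions.append(Contribution("ada", "spreadsheet conditional formatting to track medication times", 12, 2))
web.contributions.append(Contribution("ada", "a checklist for comparing mobile phone contracts", 5, 0))
print(web.merit("ada"))  # 12 + 3*2 + 5 = 23
```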

Disclaimer: This post has nothing to do with Audrey Watters’s Hack Education blog.

Keep stirring the concrete

To have learned something implies that one remembers it; a person must have ready access to their learning for it to be learning worthy of the name. The most direct forms of assessment are essentially concerned with what learners can recall when prompted to do so. By calling these forms of assessment direct, I mean to say that they are most directly concerned with what a learner has supposedly learned. The concern is primarily with the content: how stably it has been stored and replicated. The question of what that content may or may not mean to a given learner is of lesser consideration. If the learner can produce the content when prompted, they are assessed as successful (and as unsuccessful if they cannot).

Obviously, very little can realistically be learned about any subject without access to a set of referents that are foundational to that subject. It is very reasonable for a teacher to want learners to have ready access to various facts that act as highly efficient short-cuts in solving problems in a subject. If a learner’s understanding of a subject is in a state of perpetual, unpredictable, turbulent flux then they may easily forget or unlearn what they had learned, or learn some alternative incompatible concepts that preclude the attainment of the understanding that their teacher hoped to bring them towards. This is a very obvious problem.

Another problem which seems to be less readily recognised is that a learner’s path towards understanding may become impassable not because a learner loses sight of that path, but rather because their view of it gets blocked by their existing knowledge; possibly the very knowledge that they were taught in order to keep them on their path.

I saw a striking example of this kind of phenomenon in Skemp’s The Psychology of Learning Mathematics, which illustrates a fatally flawed understanding of the Pythagorean theorem.

[Image: learners’ flawed Pythagorean theorem diagrams]

Seemingly, the mathematics learners who produced diagrams such as these had acquired the belief that the orientation of a square relative to the page it was displayed on was part of the definition of a square. Making one side of a square parallel to the hypotenuse of a right triangle apparently disqualified it from being considered a square. I suspect that learners who had made this error would probably also not recognise the shape below as a hexagon; they would assume that hexagon exclusively meant regular hexagon.

[Image: an irregular hexagon]

It is not hard to imagine (for me anyway) the well-intentioned efforts of teachers to inculcate stable knowledge of certain common shapes having the effect that those shapes are only recognised as the shapes they in fact are when they appear in ways substantially similar to how they were shown when their recognition was taught. The effect of this would be that the concepts taught to learners became not only concepts but fixations.

I chanced across a nice example of the opposite of such a fixation-based misunderstanding in this online puzzle.

[Image: a matchstick equation puzzle showing 4 + 9 = 1, to be corrected by moving a single matchstick]

One popular solution to this was to change 4 + 9 = 1 to 4 + 3 = 7, but I preferred a different solution that one person had suggested.

[Image: the puzzle solved as -4 + 5 = 1]

 I liked the solution -4 + 5 = 1 more than 4 + 3 = 7 because of the inventiveness of recognising that the 4 has an implicit sign, which can be changed just as well as the explicitly shown sign can.
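
Out of curiosity, the search for such solutions can be automated. Below is a rough brute-force sketch (my own, with the simplifying assumptions that only the three digits and an optional leading minus sign may change, while the ‘+’ and ‘=’ stay untouched); it finds exactly the two solutions mentioned above:

```python
# Brute-force search for single-matchstick moves in the puzzle 4 + 9 = 1.
# Digits are modelled as sets of lit seven-segment segments; a legal move
# removes exactly one segment somewhere and adds exactly one somewhere else.
# Simplifying assumptions: only the three digits and an optional leading
# minus sign may change; the '+' and '=' signs are left untouched.

SEGMENTS = {
    0: set("abcdef"),  1: set("bc"),     2: set("abdeg"),  3: set("abcdg"),
    4: set("bcfg"),    5: set("acdfg"),  6: set("acdefg"), 7: set("abc"),
    8: set("abcdefg"), 9: set("abcdfg"),
}

def one_stick_move(old_slots, new_slots):
    """True if new_slots differs from old_slots by moving exactly one stick."""
    added = sum(len(new - old) for old, new in zip(old_slots, new_slots))
    removed = sum(len(old - new) for old, new in zip(old_slots, new_slots))
    return added == 1 and removed == 1

# Original layout: no leading minus, then the digits 4, 9, 1.
original = [set(), SEGMENTS[4], SEGMENTS[9], SEGMENTS[1]]

solutions = []
for negate in (False, True):
    minus = set("g") if negate else set()  # a minus sign costs one stick
    for a in range(10):
        for b in range(10):
            for c in range(10):
                candidate = [minus, SEGMENTS[a], SEGMENTS[b], SEGMENTS[c]]
                value_a = -a if negate else a
                if value_a + b == c and one_stick_move(original, candidate):
                    solutions.append(f"{'-' if negate else ''}{a} + {b} = {c}")

print(solutions)  # ['4 + 3 = 7', '-4 + 5 = 1']
```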

The learning of physics has a particular susceptibility to being made more difficult by learners’ reification of nonessential, circumstantial aspects of how physical concepts are presented to them during their learning. I strongly suspect that this susceptibility has at least some of its origins in the method of attempting to explain physical phenomena through exposure to various models that aim to represent successive stages of approximation of said physical phenomena.

The problematic aspect of making use of initially highly oversimplified models for explaining physical phenomena is that the oversimplifications of those models can become sufficiently familiar to learners to be perceived as the final explanations of these phenomena, not mere stepping stones to more subtle understanding. 

Take for example a model of the molecular structure of a solid that might be used in teaching the atomic theory of matter.

[Image: a ball model of atoms stacked in a regular lattice, representing a solid]

Presenting a model like this can reinforce a naïve concept of an atomic solid as consisting of stacked atoms that remain in position because nothing pulls or pushes them in directions perpendicular to gravity, while gravity holds them down. That is indeed what would be happening to a macroscopic model made in this form, which is what a naïve learner is primarily aware of. Crucially, such a learner would probably have only a vague intuitive concept that there are attractive (or repulsive) forces between the atoms. The ability of the atoms in the model to stay in their positions when the model was manipulated would need to be understood in some way or another. The learner would look at the model rotated, and ask themselves…

[Image: the same lattice model rotated]

The naïve answer is probably that some sort of friction wedges the atoms together, so that the atoms collectively inhibit each other’s motion.

This misconception could lead to the further misconception that if single lines of atoms were removed from the atomic lattice then friction would not hold them together, and they would fall apart into individual atoms.

[Image: a single line of atoms removed from the lattice, falling apart into individual atoms]

A learner might conceive of this as a phenomenon that explained the characteristics of powdery materials, and then go on to conflate this with the change of matter from the solid to the gas phase, influenced by the recognition that some powders can, when disturbed, form clouds that superficially resemble the diagrams they would have been shown of atomic models of gases as freely moving particles.

The overall point is that learners have abundant motivation and opportunity to interpret models that they are shown in terms of what is already something that for them is concrete in an experiential sense.
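
What actually holds the atoms in place, whatever the model’s orientation, is the interatomic interaction itself: repulsive when atoms are pushed together, attractive when they are pulled apart, with an equilibrium spacing in between. As an illustration, here is the Lennard-Jones pair potential, a standard idealisation offered for the shape of its curve rather than as anything a learner at this stage would be shown:

```python
# The Lennard-Jones pair potential: repulsive at short range, attractive at
# long range, with an energy minimum at an equilibrium separation r = 2^(1/6) * sigma.
# Used here purely to illustrate that atoms in a solid are held in place by
# interatomic forces, not by friction or gravity.

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Potential energy of a pair of atoms separated by distance r."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

equilibrium = 2 ** (1 / 6)  # separation at which the potential is minimised
for r in (0.95, 1.0, equilibrium, 1.3, 1.6, 2.0):
    print(f"r = {r:.3f}  V(r) = {lennard_jones(r):+.3f}")
# Pushing closer than the equilibrium spacing costs energy (repulsion), and so
# does pulling further apart (attraction pulls the atoms back), regardless of
# which way the lattice happens to be oriented.
```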

Personally, I greatly favour finding ways to base conceptual learning on concrete operations, through manipulative methods like interactive simulations and the designing and making of models. I have a very strong interest in the representation of numbers and mathematical operations in concrete forms in ways that continue their use drastically beyond the pedagogical stage at which symbolic representations of numbers and number manipulations usually replace concrete representations (this is a splendid example of what I mean). While a learner still has access to concrete versions of what they are learning, they retain a degree of ability to use them as they see fit; to play with them. Exchanging concrete representations for symbolic versions is for many learners effectively an act of faith, or at least of compliance. The symbolic operations are a code that they are assured is how things really work. Learners may be reassured that this code will come to be meaningful to them, but ultimately this change may mean abandoning what they once meant by meaning and understanding for something that is basically aspirational: meaningfulness as an ever-unfulfilled promise, and a void where imminent understanding once resided as a condition that one simply learns to live with; understanding conceived of as becoming resigned to not really understanding but carrying on regardless.

The (In)Visible College

I posted a while ago about the possibility that people’s knowledge is at least partially generated by the process of asking them questions about their knowledge, perhaps even to the extent that such knowledge may not even exist in any well defined sense before the questioning process occurs (this post). The first book I read that seriously and persuasively proposed the notion that how knowledge actually forms in learners’ minds is an essentially opaque and impenetrable phenomenon was The Learning Spy, by David Didau (which I discussed in this post). Didau didn’t seem at all concerned with ontology, just with epistemology. Fair enough I suppose; it probably doesn’t make any difference whether knowledge actually exists before it is measured in some way if the only way to show that knowledge exists is by measuring the interactions of knowledge possessors with the world. Perhaps ways to infer the results of interactions between different items of unmeasured knowledge could be devised, but presumably with great difficulty.

If knowledge is in some sense created by the attempts made to apply it, then I think what kinds of attempts to apply knowledge are involved in learners’ education would potentially make a great deal of difference to the character of the resulting knowledge. 

Two particular contemporary educational approaches have names that explicitly refer to the premise that the nature of learning involves something not directly observable: Visible Learning and Invisible Learning.

John Hattie has made the term Visible Learning (somewhat) famous. To be succinct, Hattie’s approach has been to consistently and (as much as possible) comprehensively replace anecdotal, heuristic, and superstition-based theories of effective teaching practice with effect-size measurements drawn from multitudes of studies. Hattie has gone on to use the results obtained to justify the argument that there ought to be greater emphasis on modelling metacognition to learners, building on learner fluency with subject matter acquired through directed instruction. The dependence of effective learner metacognition on effective learner motivation, and an analysis of the factors that most affect learner motivation, is another key area that Hattie has stressed. Visible Learning is so named not because learning is intrinsically visible, but precisely because it is not so, and must be made visible to learners (and teachers) for it to occur effectively. Hattie’s body of research is certainly impressive. I cannot help but wonder, though, to what extent the results of the studies from which he obtains effect sizes are measuring learning in the sense in which he uses the term, rather than in the sense of learners learning how to give the right answers to the right questions at the right time without necessarily being able to shed much inner light on how they themselves were able to do that.
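
For readers unfamiliar with the term, an effect size is just a standardised difference between groups; Cohen’s d (a common choice, and roughly the kind of quantity such syntheses aggregate, though I am simplifying) is the difference in mean scores divided by a pooled standard deviation. A minimal sketch with invented data:

```python
# Cohen's d: a standardised difference between two groups' mean scores.
# A simplified sketch of the kind of quantity aggregated in effect-size research;
# Hattie's well-known "hinge point" for worthwhile interventions is d = 0.4.

from statistics import mean, stdev

def cohens_d(treatment, control):
    """(difference in means) / (pooled standard deviation)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented test scores for two small groups, purely for illustration.
taught_with_feedback = [62, 71, 68, 75, 66, 73]
taught_without = [58, 64, 61, 69, 60, 65]
print(round(cohens_d(taught_with_feedback, taught_without), 2))
```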

Invisible Learning is associated with the Education Futures organisation. Unlike Visible Learning, Invisible Learning is not much concerned with educational studies and their effect sizes but rather with the importance of informal learning that is not measured by education systems (although what data Google et al may be mining is another question). According to Invisible Learning, informal and serendipitous learning is the dominant factor in determining learners’ overall motivational proclivities and metacognitive abilities. Learners whose experience of schooling does not result in the successful elicitation of motivation to engage with and achieve mastery of academics may well be effective learners in various other areas of their lives, which unfortunately go unrecognised and unrewarded. John Moravec, one of the principal drivers of Invisible Learning, has published Manifesto 15, a guide for educators to a future of education that he sees as increasingly based on invisible, informal learning that is becoming ever further separated from what is measured by educational institutions. Manifesto 15 states that

The best innovations are often killed the moment we start worrying about measurement. We need to put an end to compulsory testing and reinvest these resources into educational initiatives that create authentic value and opportunities for growth…Black boards have been replaced by whiteboards and SMART Boards. Books have been replaced by iPads. This is like building a nuclear plant to power a horse cart. Yet, nothing has changed, and we still focus tremendous resources on these tools, and squander our opportunities to exploit their potential to transform what we learn and how we do it. By recreating practices of the past with technologies, schools focus more on managing hardware and software rather than developing students’ mindware and the purposive use of these tools. 

Such inflammatory words would not I suspect really be Hattie’s style, yet Hattie would surely agree that educational systems are on the whole set up so as to focus efforts on obtaining favourable knowledge measures more than they are on ensuring that those knowledge measures are compatible with what the world beyond the educational system considers to be useful knowledge.

Visible Learning strikes me as being more focused on the epistemology of learning. It looks in detail at formal methodologies. It extensively uses quantitative measures. It does not actively question the continued viability of the contemporary educational paradigm. Invisible Learning seems to be more interested in redefining learning, and to that extent is asking what learning is and whether it is even possible to distinguish learning from the ways in which learners determine what they do or do not know, implying that such determinations are perhaps inherently relative: what does a learner know compared to another learner? Who or what can a given learner learn from? Knowledge is supposed to arise from information exchanges that flow unpredictably. Learning Chaos, by Mac Bogert, very clearly argues for the embracing of unpredictability as a way to revitalise education. Chaos theory may actually be more useful than quantum mechanics as a basis for comparison in understanding the learning process.