If you can’t say it and you can’t show it then you don’t know it

This was one of the two stock phrases that I used in teaching (the other one was ‘No one ever looked good making someone else look bad’, used in behaviour management). The point of ‘If you can’t say it and you can’t show it then you don’t know it’ was that I encountered a fair few learners who insisted that they knew one thing or another despite being unable to answer questions or perform actions that demonstrated that knowledge. The knowledge that they had was some sort of private revelation that was not amenable to external scrutiny.

I have chanced to learn some intriguing things since my time in the classroom that make me think that there could after all be some truth to learners’ claims of private knowledge (although I think that the claim of such knowledge is generally an empty one). 

As an educator, my specialism is physics. Physics offers a variety of interesting ways to think about things, including about thinking.

The particular example of this that I am discussing here is quantum cognition.

Quantum cognition is the phenomenon of human decision making being inconsistent with classical logic while being consistent with the mathematics of quantum mechanics. A well-known example of this phenomenon involved asking students whether they would buy a ticket for a Hawaiian holiday in each of three scenarios:

  • they had passed a big test.
  • they had failed the test.
  • they didn’t yet know whether they had passed or failed.

More than half said they would buy the ticket if they had passed. Even more than that said they would buy the ticket if they had failed. Strangely though, 30 percent said they wouldn’t buy a ticket until they found out whether they had passed or failed.

This defies classical logic (specifically, the ‘sure thing’ principle): if you would have some preference P if some condition C were true, and you would have the same preference if C were false, then you should have preference P whatever your current knowledge about the truth value of C. In quantum mechanics this does not lead to a contradiction, however, because quantum mechanics is built around non-commutative operators, meaning that in general AB ≠ BA; the order of the operations must be taken into account to find their result. It is therefore possible for a preference to fail to exist simply because a question has not yet been asked, even though that preference would exist whatever the answer to the question turned out to be.
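To make the non-commutativity point concrete, here is a small numerical sketch (my own construction, using arbitrary illustrative angles rather than anything fitted to the Hawaii experiment) showing how a ‘buy the ticket’ projector and a ‘passed the test’ projector that do not commute can break the classical law of total probability:

```python
# A minimal sketch of non-commuting projectors violating the classical
# law of total probability. The 2-dimensional state and the angles below
# are purely illustrative.
import numpy as np

def projector(theta):
    """Projector onto the unit vector at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])            # initial mental state (undecided)
P_pass = projector(np.radians(40))    # "I passed the test"
P_fail = np.eye(2) - P_pass           # "I failed the test"
P_buy  = projector(np.radians(80))    # "I would buy the ticket"

# Direct probability of preferring to buy, with the exam outcome unresolved
p_buy_direct = psi @ P_buy @ psi

def buy_after(P_outcome):
    """Resolve the exam question first, then ask about buying."""
    collapsed = P_outcome @ psi
    p_outcome = collapsed @ collapsed
    collapsed = collapsed / np.sqrt(p_outcome)
    return p_outcome, collapsed @ P_buy @ collapsed

p_pass, p_buy_given_pass = buy_after(P_pass)
p_fail, p_buy_given_fail = buy_after(P_fail)
p_buy_via_outcomes = p_pass * p_buy_given_pass + p_fail * p_buy_given_fail

print(p_buy_direct, p_buy_via_outcomes)              # the two values differ
print(np.allclose(P_pass @ P_buy, P_buy @ P_pass))   # False: non-commuting
```

With these made-up angles the probability of preferring to buy while the exam result is unresolved is nowhere near the weighted average of the probabilities after the result is known either way, which is exactly the kind of gap classical probability forbids.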

Quantum mechanics has been used to analyse the results of surveys in which the order of pairs of ‘yes/no’ questions is reversed, to see how switching the order of the questions affects survey respondents’ answers. QM predicts not only that switching the order of question pairs should change the answers respondents give, but also that the number of respondents who switch from answering both questions with ‘yes’ to answering both questions with ‘no’ when the order is switched should balance the number of respondents who do the opposite (switch from answering ‘no’ to both questions to answering ‘yes’ to both questions when the question order is reversed). Weirdly enough, this balancing is observed in survey results. No one yet understands why this is so.
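As an illustration of why this balancing is not an accident within the quantum formalism itself, the following sketch (my own, using made-up projectors and a random state rather than the original survey data) checks numerically that the ‘yes-yes plus no-no’ total is the same in either question order, even though the individual answer probabilities shift:

```python
# Numerical check of the order-effect balancing described above: for any
# state and any pair of projective yes/no questions, the probability of
# answering yes-yes plus the probability of answering no-no is the same
# in either question order.
import numpy as np

rng = np.random.default_rng(0)
dim = 4

def random_projector(dim, rank):
    """Projector onto the span of `rank` random orthonormal vectors."""
    m = rng.normal(size=(dim, rank))
    q, _ = np.linalg.qr(m)
    return q @ q.T

def random_state(dim):
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

P_A, P_B = random_projector(dim, 2), random_projector(dim, 1)
Q_A, Q_B = np.eye(dim) - P_A, np.eye(dim) - P_B   # the corresponding 'no' answers
psi = random_state(dim)

def p_seq(first, second, state):
    """Probability of 'yes' to `first` and then 'yes' to `second`."""
    return np.linalg.norm(second @ first @ state) ** 2

ab = p_seq(P_A, P_B, psi) + p_seq(Q_A, Q_B, psi)   # yes-yes + no-no, A asked first
ba = p_seq(P_B, P_A, psi) + p_seq(Q_B, Q_A, psi)   # yes-yes + no-no, B asked first
print(ab, ba, np.isclose(ab, ba))                  # equal despite order effects
```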

What this implies (but falls far short of concluding) is that preferences are produced by the act of being asked questions about those preferences. The preferences apparently did not really exist before they were asked about. If we consider that preferences ought to be at least partly based on memories (we base our future expectations on our memories of the past) then the implication arises that memories are at least partially generated by the act of remembering!

To conjecture a bit further: preferences have dependencies with other preferences (the same is true of memories). Being asked a question that sets a preference/memory into a certain state therefore has knock-on effects on other preferences/memories. If these dependencies also happen to work similarly to how QM does, then some sets of preferences/memories could have complementary relationships with each other, meaning that determining the state of one of the complementary items would result in the state of the other complementary items becoming undetermined; knowing one thing clearly would mean that there would be something else that you couldn’t simultaneously know clearly.

It seems to me that a complementarity principle of some sort (or something quantum-like at any rate) can be discerned in the process of learning. This phenomenon arises (I think) around the relationship between some knowledge (referred to for convenience as ‘k’) that someone (referred to for convenience as ‘p’) knows and how it is known (by p or by someone else) whether or not p knows k.

The basic idea is that whether or not p knows k is affected by asking p whether or not they know k. More specifically, asking p if they know k may reduce the extent to which they do know k. This may sound strange and far-fetched, but I am arguing for it on the basis that asking p whether or not they know k involves some sort of assessment process, and that the act of participating in that process alters how p perceives k. This argument rests on the principle (which I invoke here!) that just about any example of learning that a human can possess is decomposable in a variety of ways.

There are (in general) lots of different ways of knowing any particular thing, and the way that a learner knows a thing may be different to the way that someone assessing that learner’s knowledge knows that same thing (let’s again call the knowledge k). If an assessor asks a learner questions (even indirectly) concerning k then those questions cannot help but influence the learner’s state of determination of some memories/preferences related to k. If the assessor learned k in a different way than the learner did then the assessor’s questions could disrupt the learner’s state of determination of k.

QM terminology could be useful in explaining this situation. In QM, any measurement projects the state of the measured system onto a state defined by the measuring apparatus (this is a more technical way of stating the oft-repeated principle of QM that the act of observing a system changes that system). Using this sort of terminology, it would no longer be valid to speak of k in itself but only of the projection of k relative to the assessor or the learner. These projections can be denoted k_assessor and k_learner. When an assessor attempts to measure a learner’s knowledge of k, this results in a projection of k_assessor onto k_learner, which can be denoted k_assessor·k_learner. This projection is different to k_learner and may represent a less determined state of knowledge than k_learner does.
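Continuing the (admittedly loose) vector-space analogy, here is a toy sketch of the idea that the overlap between k_assessor and k_learner determines how much of the learner’s knowledge survives the assessor’s ‘measurement’; all of the vectors below are invented purely for illustration:

```python
# A toy illustration of the projection analogy above: the projection of
# k_learner onto the assessor's framing has a smaller norm the more the
# two ways of knowing k differ.
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

k_learner  = unit([1.0, 0.2, 0.0])   # the learner's way of holding k (invented)
k_assessor = unit([0.6, 0.8, 0.3])   # the assessor's way of holding k (invented)

# Projection of the learner's state onto the assessor's framing
overlap = k_assessor @ k_learner
k_assessor_on_learner = overlap * k_assessor

# 1.0 would mean the two framings coincide; smaller values mean the
# assessment captures (and leaves determined) less of what the learner knew.
print(np.linalg.norm(k_assessor_on_learner))
```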

Thinking in this sort of way, I speculate that two entities exist (they could be called variables, but that doesn’t do justice to how abstract and complex they are) that can have a complementary relationship to each other. These entities are ‘Whether learner knows k’ and ‘What k is’. Strange as it may sound, I am suggesting that it might be possible to say clearly whether a learner knew something but not be able to say clearly what it was they knew. Conversely, it might be possible to say clearly what a learner knew but not be able to say clearly whether or not they knew it.

In practical terms, I would argue that these two entities need to be measured in distinctly different ways to try and minimise how complementary they are.

‘Whether learner knows k’ should be measured in terms of some sort of minimally ambiguous outcome (perhaps a rather artificial one), and is more accurately expressed as ‘Whether learner can produce some outcome that has been assumed to be connected with knowledge of k’. Only the outcome would be measured; the process by which the outcome was achieved would be a black box to the assessor. Assessors could of course observe the process, but their observations could not influence the determination of whether the outcome was or was not achieved.

‘What k is’ would be measured not in terms of some outcome brought about by the learner but by how consistently such outcomes were achieved by people other than that learner who had been instructed by that learner in how to achieve the outcome. If a learner can consistently induce outcome-achieving actions in others then some sort of shared construct must exist, common to that learner and those whom they have instructed. Scrutinising different interpretations of this construct would go some way towards establishing the characteristics of k, especially in terms of how those constructs correlated with individuals’ ability to achieve outcomes.

My speculations on this are, well, speculative. Intuitively I think that I am onto something and that learning will one day be understood to be more quantum than classical, and that what we learn is not our learning, but our learning exposed to our way of asking questions about what our learning is.

 


The curse of motivation

The curse of knowledge is a cognitive bias that educators are particularly vulnerable to. When you spend your working life explaining certain things, it becomes progressively harder to remember what it was like not to understand them. Your knowledge becomes ever more implicit and your learners’ failure to grasp what to you is simply ‘obvious’ becomes a barrier to effective communication of that knowledge.

The problems arising from the curse of knowledge are perhaps less severe than those arising from what should be called the ‘curse of motivation’.

Educators are inclined towards believing that formally learning things is desirable. If educators did not believe that formal learning was an inherently worthwhile undertaking then it would be hard to account for their choice to work as educators. This default attitude for educators is by no means necessarily closely related to the attitude of typical learners.

Design considerations pertaining to the learner engagement potential of learning content and learning objects are not likely to be appropriately handled if the designers have specific empathy deficits regarding learners’ motivation to engage with what is being designed.

A crucial factor in learners’ motivation to learn is their belief about whether or not they can learn. A lot of awareness currently exists regarding fixed and growth mindsets. A fixed mindset is often described in the mindset literature as being associated with regarding intelligence as static rather than capable of development. Focusing mindset discussions on attitudes towards intelligence is not necessarily very helpful for promoting the cause of the growth mindset: intelligence in the sense of general intelligence (g, as measured in IQ tests) is not necessarily a highly mutable characteristic. What is far more clearly amenable to development is learners’ crystallised intelligence, which many learners might better recognise as experience.

Experience is something that is difficult to acquire quickly (other than by taking big risks). Anything that only occurs gradually has a precarious status in the minds of a lot of contemporary young learners, for whom ‘long’ is a pejorative. Network technology has made very high speed communication so commonplace that anything that involves a significant degree of waiting carries with it a distinct whiff of obsolescence that relegates its perceived importance to near negligible levels. Furthermore, what is learned through experience can in many cases be learned fairly implicitly. When learners learn implicitly, they do not necessarily explicitly recognise that they have learned.

Learners tend to have unrepresentative views of what they do and do not know. This in turn affects what they tend to be curious about. People tend to be most curious about what is not too detached from what they already know. It follows that if learners were able to become more aware of what they actually did know, they should become more curious about what else they might not have realised that they knew, or wanted to know. This has given me the idea for a learning tool that I would like to call a curiosity integrator (CI).

A CI works similarly to a decision engine (the part of a search engine that recommends things to you, such as when it auto-completes your search term inputs). The CI is an app that syncs with your social media and search engines to collect information about what you are interested in and hence what you probably know about. Either by data mining or by modelling human learners (or both), the CI can extrapolate from your interests (and the knowledge that these imply you have) what you might know implicitly without knowing that you know it. The CI also tracks what you have (probably) learned from experience.

The CI uses what it knows (and can infer) about you to deliver an unpredictably timed series of informal reflections on what you have learned and when, and on how these things connect with each other in ways that had probably not occurred to you. The CI suggests which future activities that you might want to undertake could be facilitated by particular learning experiences, and links you to appropriate resources for learning.
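For what it is worth, a very rough sketch of the core CI idea might look something like the following; the interest-to-knowledge mapping and all of the data here are entirely hypothetical placeholders rather than anything an actual CI would rely on:

```python
# A rough, entirely hypothetical sketch of the curiosity integrator idea:
# infer probable implicit knowledge from observed interests via an assumed
# interest-to-knowledge mapping, then surface an unpredictably chosen
# reflection connecting the two.
import random

# Hypothetical mapping from observed interests to knowledge they probably imply
IMPLIED_KNOWLEDGE = {
    "baking": ["ratios and proportion", "heat transfer", "gluten chemistry"],
    "cycling": ["gear ratios", "basic mechanics", "route planning"],
    "gaming": ["probability", "resource optimisation", "spatial reasoning"],
}

def infer_implicit_knowledge(interests):
    """Collect the knowledge plausibly implied by a user's interests."""
    inferred = set()
    for interest in interests:
        inferred.update(IMPLIED_KNOWLEDGE.get(interest, []))
    return inferred

def reflection(interests):
    """Return one reflection on knowledge the user may not realise they have."""
    knowledge = list(infer_implicit_knowledge(interests))
    if not knowledge:
        return None
    return (f"Your interest in {random.choice(interests)} suggests you already "
            f"know something about {random.choice(knowledge)}.")

print(reflection(["baking", "cycling"]))
```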

The point about the CI is that it has no agenda. It does not expect that you should or should not learn anything in particular; it merely attempts to help you to recognise that you know more than you think you do, in the hope that you find that information encouraging.

The accreditation crunch

Online credit recovery… it sounds like something to do with mis-sold insurance policies or a procedure for compensating victims of phishing scams. Well, there are certainly some accusations of ‘scamminess’ being directed at OCR (in terms of its pedagogical soundness).

So what is OCR then?

OCR is the process of learners retaking failed courses by doing online tests. In a way, this is hardly any different to ordinary online testing; the learners doing OCR necessarily failed tests on their first attempt- that’s the only formal difference.

In practice, OCR (as described in Slate) is online testing which has some pretty dubious qualifying characteristics- namely, testing software that

  • allows unlimited repeats at test answers.
  • repeats questions fairly predictably.
  • accepts answers pasted from the copy buffer, even when other browser windows are open.
  • is used with minimal or zero supervision. 

Testing software used like this might well be assessing learners’ knowledge about as effectively as the hypothetical Chinese Room assesses its occupant’s understanding of Chinese (for those unfamiliar with the Chinese Room argument, see below).

[Image: the Chinese Room thought experiment]

The premise of the Chinese Room argument is that the person in the room is following a combinatoric list of instructions concerning some shapes that are, to them, ostensibly meaningless. Observers outside the room who see only the inputs to and outputs from the room can nevertheless apparently legitimately conclude that the room understands Chinese.

Learners completing courses using OCR differ somewhat from the Chinese Room in that the Chinese Room involves only exact input-output functions (presumably in the book in the room there is a page that says “If you see any sequence of shapes not listed on any of the other pages then do not produce any shapes”). The OCR learner is less reliable and may give incorrect outputs before finding the correct ones by a process of elimination (they are a probabilistic-iterative Chinese Room). 
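The contrast can be made concrete with a toy sketch (my own, with invented inputs): the Chinese Room applies an exact lookup and produces nothing for unlisted inputs, while the ‘probabilistic-iterative’ OCR learner simply retries options until the testing software accepts one:

```python
# A toy contrast between the exact lookup of the Chinese Room and the
# 'probabilistic-iterative' OCR learner described above. All rules and
# questions are invented for illustration.
import random

RULE_BOOK = {"你好": "你好", "谢谢": "不客气"}   # exact input -> output rules

def chinese_room(prompt):
    """Follows the book exactly; produces nothing for unlisted inputs."""
    return RULE_BOOK.get(prompt, "")

def ocr_learner(question, options, is_correct):
    """Guesses from the options, retrying until the software accepts one."""
    attempts = 0
    remaining = list(options)
    random.shuffle(remaining)
    for guess in remaining:
        attempts += 1
        if is_correct(question, guess):
            return guess, attempts
    return None, attempts

answer, tries = ocr_learner("2 + 2 = ?", ["3", "4", "5"], lambda q, g: g == "4")
print(chinese_room("谢谢"), answer, tries)
```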

OCR honestly does not surprise me very much though. While I was teaching for a living, my job was in the further education sector. This sector is post-compulsory, and its culture of learning was something of a grey area between that of school and that of higher education, with a substantial portion of the learners re-taking courses. As part of the job I was sometimes involved in school links projects, where I experienced up close the differences between the practices of further and secondary education.

What I witnessed was a preponderance in secondary education of what I would call ‘transcription based learning’ (TBL). TBL primarily involves learners first being given hard copies of some learning content, mainly consisting of a discrete set of items of information, and then rewriting/paraphrasing those items in a different order into a largely blank container document (largely blank- some of the container might be pre-filled as a way to model the filling process). TBL can easily be used as ostensible evidence of learning because TBL materials provide very convenient ‘before’ and ‘after’ states, demonstrating that learners have translated/rearranged information (the unstated implication being that they did so independently rather than imitatively).

TBL and OCR clearly represent a kind of debasement of the currency of learning (by which I mean accreditation for courses). Such a debasement is reminiscent of concerns over grade inflation (as implied in this data source).

[Image: steadily improving UK exam results for 16+ learners]

These steadily improving results for 16+ learners in the UK cover much of the same timescale over which, according to PISA scores (taken by 15 year-old learners), the UK has had consistently falling results in mathematics, reading and science.

[Image: UK PISA scores in mathematics, reading and science]

It seems incongruous to say the least that learners’ performance at age fifteen has been dropping at the same time that it has been climbing for learners aged sixteen and over.

Educational progression is for the most part intranational, hence the debasement of educational currency at one stage of progression within one country can be accommodated at other stages in order to maintain consistency. Such debasement is therefore only clearly evident when comparisons are made against standards derived either from other countries’ education systems or from outside education systems altogether (such as from employers).

It is above all where the utility of education is measured against employability (most obviously in terms of the additional earning power resulting from gaining qualifications) that evidence for debasement is easiest to detect. The point at which the additional earning power provided by qualifications falls below the cost of studying for those qualifications is the point at which the purchaser of education notices the debasement of qualifications. More accurately, this applies when education is purchased individually.

The cost-versus-value of education as a publicly funded good is not noticed anywhere near as easily as it is by a single individual deciding whether or not to pay for tuition, for the simple reason that in the individual case clear comparisons can be made between people who have and have not taken the option to purchase education beyond that which is compulsorily provided. Education as a public good is for the most part compulsory. Everyone partakes of it. When everyone uses a service, it ceases to be easy to say what would have happened to those who did not use it. Obviously, post-compulsory publicly funded education in the form of subsidised higher education has existed in various countries at different times, but it has tended to be less problematic where access to higher education has been restricted not by ability to pay but by the difficulty of obtaining the requisite entry qualification grades, and where adequate provision of alternative forms of learning (such as work based training) exists.

The effects of an accreditation crunch on the value of qualifications are in some ways analogous to the effects of a credit crunch (a la 2008) on the value of financial institutions’ assets. In 2008, when the reality of the extent of bad debts that had held good credit ratings became clear, exposed financial institutions massively curtailed lending to each other and in the worst cases could not meet their financial obligations; eventually the worst affected businesses either went bankrupt or received government bailouts.

An educational meltdown to match the 2008 financial meltdown would start with a widespread acknowledgement that many educational qualifications had become seriously devalued. This acknowledgement would result in many educational institutions ceasing to accept each others’ qualifications as valid entry criteria. Quite soon after that, many educational institutions would be unable to find enough suitably qualified fee-paying students to cover the cost of maintaining their current capacities, so would have to either sell-off or otherwise dispose of many of their assets. Educational institutions might instead start calling for taxpayer bailouts to keep them operational while accreditation mechanisms were reformed.

It can be argued that an accreditation crunch has in fact been happening, and for some time, and much more gradually than the 2008 credit crunch did. Taxpayers have not had to make a vast short-term bailout in response to an imminent crisis of unmistakable proportions but have rather spread such a bailout over many years by continually subsidising a devalued education sector, obscuring acknowledgement that devaluation of qualifications was occurring. The severity of the accreditation crunch is probably much less than that of the credit crunch, partly because the value of educational qualifications is not as amenable to leveraging as is the case with financial products- paper money can have imaginary value much more easily than paper credentials; the credentials are at least attached to a human being whose qualities cannot be as readily ignored as those of a faceless and impersonal corporate entity.

Crisitunity knocks

In a recent EdSurge article, Julia Freeland Fisher, the Director of Education Research at the Clayton Christensen Institute wrote (as ever, my emphases)-

Education innovators love to talk about adoption curves. It’s a fancy way of looking at a pretty basic concept: the rate at which a given tool, model or approach saturates a market.

Lately, I’ve been seeing these curves crop up a lot in the conversation about personalised learning. As more school systems attempt to customise learning environments and more education advocates and funders champion personalised models, people are increasingly anxious to know: At what rate might we expect new ideas and tools to permeate the traditional school system?

But not all adoption curves are created equal. Depending on the features of the tools and their intended users, the arc of adoption might look vastly different. One of those distinctions hinges on the degree to which a new tool or model conforms to the traditional school structure.

What Fisher means by a traditional school structure corresponds to what I have referred to in other posts (mainly this one) as a closed pedagogy. Such traditional structures tend to be self-perpetuating and resistant to change, apparently regardless of their effectiveness. If traditional structures’ effectiveness becomes excessively compromised, though, then these structures must eventually undergo some sort of crisis. From the perspective of alternatives to these traditional structures, such crises represent opportunities.

I think that I have observed the beginnings of at least part of the pattern of the decisive crisis of traditional educational structures. In this post I have assembled some fragments of this pattern that are suggestive of the pattern in general.

In the UK and the USA (not only in these places, but rather prominently in them), public education funding is starting to see real cuts in per-learner spending, even at the level of compulsory education. Weakening traditional educational structures by under-funding them does not necessarily make these structures less dominant, only less effective. However, some media reports suggest the emergence of changes that seem to reduce the extent to which traditional structures automatically crowd out alternatives.   

One story tells of fund-starved schools adopting four day weeks, another of such schools shortening their working day. If these measures (and extensions of them) come to pass, eventually a point will be reached at which the acceptance of public schooling as the default educational route followed by the great mass of learners will no longer be so easily assumed. Parents would have to contend with schools no longer being places that their children could attend in parallel with as many of those parents’ working hours as had previously been the case. A non-trivial part of those children’s weekly routines would start to require some sort of parental planning. It does not seem far-fetched to me that parents would become increasingly minded to contemplate what options existed for enabling their children’s home based learning, and that the options so researched could conceivably have markedly different adoption arcs to those compatible with traditional school structures.

While working parents would of course legally be required to ensure the supervision of their children alongside their education, in practice some combination of mobile communications with children and automated supervision-assisting gadgets of various kinds, supplemented by some sort of formal or informal emergency visit-on-demand cover service, might prompt a popular re-appraisal of how independently safe children left in their homes could be expected to be. If this re-evaluation happens then it might be a step in the direction of increasing children’s early exposure to self-efficacy and resilience promoting circumstances (the decline of which I discussed in another post).

As well as fewer public school hours being available, schools that continued to remain open much of the time might not necessarily be equally available to all students. I noticed this story regarding an academy school that had attracted negative attention for offering/encouraging some of its students to undertake Elective Home Education (EHE) rather than study at the school in person. The school in question seems to have acted opportunistically in allowing the EHE cases that it did, apparently doing so to keep out of the school certain students who were not helping its smooth running. How much this policy should be seen as a dereliction of duty by the school depends considerably on how effective its EHE provision is, and also on whether students are taking up EHE as a matter of preference rather than as a way of avoiding problems in the school that, were they to be solved, would mean that those students would prefer to remain in school. The general principle that students who for one reason or another are not being best served by attending a school could have the option of learning outside of one does not seem to me necessarily flawed. Why should some learners (perhaps even many of them) not prefer to learn outside of a school if such a learning environment was more conducive to their learning?

Where these stories are published they are presented in no uncertain terms as portents of crisis- as negatives with no associated positives. Another similar story is that of the teacher recruitment/retention crisis. I was personally involved in this trend, to the extent that if it did not exist I would still be teaching full time, only very peripherally concerned with EdTech, and certainly not writing this blog. I certainly have an axe to grind about the protracted deskilling and deprofessionalisation of educators that has accompanied the ascent of managerial culture in the state education system, but at the same time I recognise that educators themselves constitute an aspect of the traditional educational structures that determine the adoption arcs of various EdTech developments.

I have posted before that educators have an underutilised potential to redefine their roles in educational structures (becoming more ‘intrapreneurial’), but in general it seems that most teachers have identified with traditional educational structures as part of a stance of defending the continued provision of education as a public good, correctly recognising that it is under attack. The crisis of teacher recruitment/retention has the expected effects of removing experienced teachers from the teaching community and making their replacements people with less commitment to established teaching methodologies. I have termed this new breed of less qualified, less formally educated, less unionised and lower-paid education workers ‘EdTechnicians’ rather than educators. EdTechnicians may have more potential affinity with the educational affordances of various technologies than more traditional educators do if they see a substantial aspect of their work as the implementation of EdTech systems.

As well as changes in compulsory education, higher education is experiencing a crisis that contains opportunities. The crisis in HE has two main aspects: financial and credential.

The financial aspect is highlighted by the recent observation that a higher average return on investment can currently be achieved by investing tuition fees in a stock-market tracking fund than would be gained by the career-enhancing effects of achieving a degree (although presumably this says as much about the overvaluation of shares as it does about graduate employment prospects).

The credential-related crisis in HE is illustrated by the decisions made by Ernst & Young and by PricewaterhouseCoopers to cease relying on degree classifications and pre-university qualification grades when recruiting, in favour of internally defined and assessed standards. The tendency for employers to find that the standards defined by awarding and examining bodies have little vocational applicability is likely to accelerate the more such bodies focus on what can be measured by standardised testing and tied in to the teaching and assessment products supplied by corporations with links to such bodies.

The combined effect of financial and credential concerns is likely to have had something to do with the fact that applications to university from domestic applicants in 2017 fell by 5%. College applications in the USA have been falling since 2010.

Alternatives to traditional higher education structures seem to have developed further than those for compulsory education. This is hardly surprising given that compulsory education is mandatory and higher education is elective. Interesting alternative models have appeared that are based on internship rather than formal study, such as Praxis and Galvanize. The ‘bootcamp’ model has also become increasingly common, and in one rather extraordinary case it is provided without charge (by the university called simply 42). Praxis, Galvanize and 42 are notable in how much they emphasise learner resilience, real-world problem solving, and collaborative practice. The credentials of graduates of these processes are ultimately what they have done, and what they have chosen to do, during their involvement in the process. The learning that occurred in these processes was not so much preparation for employment as practice at effective working. Translating this shift in priorities to lower levels of education clearly involves greater levels of resistance by traditional educational structures, but the crises of those structures may be what leads to the collapse of that resistance.

The what and the why of Ed and Tech

The previous post on this blog characterised the educational theory of connectivism as basically arguing that the most important kind of knowledge is the knowledge of who to ask for help from.

Since then, I happen to have read an intriguing 2011 article from Behavioural and Brain Sciences, written by Hugo Mercier and Dan Sperber. The article makes the persuasive argument that making persuasive arguments (persuasive to other people that is) is the primary function of explicit human thinking. The article argues that explicit deductive reasoning about the behaviour of inanimate objects is secondary to the ability to reason about the motives of humans interacted with, perhaps even a side effect of it.

The justifications for this claim can be summarised by stating that people (if they are neurotypical)-  

  • perform better at reasoning tasks when they are set in argument-based contexts or involve insights into humans’ motivations (such as the Wason selection task).
  • are subject to individual confirmation bias and group think.
  • anticipate potential arguments against positions that they adopt when reasoning in isolation from others.
  • tend to arrive at conclusions based on the ease of reaching such conclusions by argument more than by the practical effectiveness of the conclusions.
  • are capable of reasoning implicitly about inanimate objects with some degree of effectiveness.
  • tend to be more fluent in teleological reasoning than purely analytical reasoning.

What these collectively imply is that it is not only true that knowledge of who to get help from is the most important kind of knowledge, but that (explicit) knowledge in general is primarily knowledge of how to get help from people (by persuading them to give such help, even if the help involved is simply to accept an argument).  

This argument (referred to hereafter as the argumentative theory of reasoning) seems to have major implications for the teaching of scientific and mathematical ideas. If learners’ reasoning is primarily interpersonal and is less effective when applied to inanimate objects then acquisition of explicit understanding relating to inanimate objects is likely to lag and be distorted to reflect interpersonal understanding.

Implicit learning about inanimate objects need not be affected detrimentally by explicit argumentative reasoning, as implicit knowledge need not be explicitly communicated to a learner. Learners can acquire implicit knowledge through a variety of automatic, implicit learning processes.

Different teaching methods span a spectrum of implicit and explicit teaching, each with characteristic weightings of implicit and explicit emphasis. Rote learning of symbol manipulations according to given rules, taught using strict behaviourist methods, involves explicit learning mainly in terms of obtaining learners’ explicit compliance in directing attention to some input and producing some output when requested (implicit learning from phenomena depends on the salience of those phenomena to a learner). The underlying rules of the symbolic manipulations could be learned implicitly if the rules were sufficiently simple, the examples supplied were sufficiently numerous, and learners’ compliance and attention were sufficiently sustained.

The degree of learner compliance required for teaching methods like the one mentioned above is of course not realistically obtainable without learners having previously gone through some process, most likely an explicit process, of recognition of and consent to be instructed by some educational authority figure. In other words, in order to be in a viable position to be able to implicitly learn from some teaching source, a learner must to some extent have explicitly agreed with some argument presented by that authority regarding its status as a valid teaching authority.

In the case of mathematics learning, it can be observed that much mathematics teaching has historically tended to use explicit authority figures who employ fairly implicit methods to inculcate mathematical knowledge (rote learning, algorithmic procedures). When mathematics teaching has attempted to include more explicit learner understanding (such as by discovery/enquiry learning), it is noticeable that the teaching styles involved do not so much assume the existence of an authority figure as require an authority figure to be established by explicit negotiations with learners; such negotiations would be expected to be subject to argumentative reasoning effects.

Perhaps it is the case that knowledge that needs to be explicated in order to be effectively understood needs to be explicated primarily to persuade learners to engage in cognitive processes similar to those that would tend to occur automatically during implicit learning from an accepted teaching authority. This highlights the notion that for learners the ‘how’ of learning cannot be separated from the ‘why’ of it- learning is shaped by learners’ sense of what the learning is for. This kind of goal-orientation distinguishes learning as it is understood here from activity which is more purely process-oriented, which is how play is more often understood (although implicit learning through play is of course possible).

The premise that the activity of learning is bound up with the goals of learning is illustrated by Meyer and Land’s idea of threshold concepts. Threshold concepts are those which are markedly transformative, troublesome, irreversible, integrative, bounded, discursive, reconstitutive and liminal. A very obvious threshold concept is that of phonetic spelling. Once phonetic spelling is learned, from that point on in the learner’s life what were once a collection of arbitrary shapes are converted by an automatic process into inner speech. Once a learner can spell, they have a degree of literacy that permits their learning to develop along drastically different lines to what could have existed without such literacy.

Because a threshold concept is transformative, that concept changes how a learner views their learning. Because a threshold concept is irreversible, that concept brings about a permanent change in a learner. Because a threshold concept is reconstitutive, that concept brings about changes in how a learner sees themselves.

In the life of a mature adult, the temporal density of threshold concept learning is low compared to that of a child; more thresholds are crossed earlier in life than later. Adult consciousness is far removed from childhood’s comparative crush of no-turning-back learning events that do so much to set the shape of the future. From the child’s point of view it is understandable that they feel inclined to stand their ground for the worldview that they currently occupy and only open its development to those who have been able to convince them that they are trustworthy to do so.

If the ‘how’ of learning cannot be separated from the ‘why’ of it, then the processes of education cannot be separated from the goals of education. In the education systems that currently exist- systems that were designed in and developed throughout the twentieth century- the goals in question are twofold: firstly to maintain and consolidate centralised control of the system, and secondly to maximise the efficiency of the system. While increasing efficiency ought generally to be a good thing, it is not necessarily the best thing to prioritise. The Modern Learners movement agenda makes the point that “Doing things right is efficiency. Doing the right thing is effectiveness.” Nevertheless, central control (which takes for granted that it is directing people to do the right thing) and maximum efficiency are the core goals of Taylorist ‘scientific’ management theory applied to education (discussed in an earlier post).

Ever since I first formally trained in teaching, I have recognised that the most important factor affecting educational practice is the role of examining and awarding bodies (although I stress that this was not even once explicitly mentioned in my training). Examining and awarding bodies are responsible for the certification of qualifications, for defining what knowledge is and is not included in such qualifications, and for deciding what are and are not valid forms of assessing such knowledge. These bodies ultimately define the goals of educational practice. Because these bodies define the goals of educational practice, they cannot help but define the methods of educational practice.

EdTech presents learners with two opportunities that are ultimately very different but superficially quite similar, in that both opportunities relate to increased choice for learners regarding their learning.

One opportunity is that given by increasing learners’ freedom regarding the educational paths that they take.


Three different educational paths

Another opportunity is that given by increasing learners’ freedom regarding the educational goals that they set.


Several different educational goals

What should be apparent from the difference between the path-freedom condition and the goal-freedom condition is that if an end goal is fixed, a learner’s motive for taking an indirect path to that goal cannot avoid being interpreted as an inefficiency inherent to the learner. The educational process cannot in turn avoid interpreting its purpose as developing ways to reduce such learner inefficiencies. The process’s attempts at making learners more efficient may well consent to meet learners where they are rather than where the process deems that they ideally should be, but the process will nevertheless act so as to incentivise learners to take direct, efficient paths towards goals defined by the process rather than by the learners. The freedom offered to learners to select learning paths is offered in bad faith, as a kind of learner self-diagnostic of error for the process to calculate how to correct, rather than as an expression of any sort of valid learner preference. Reflecting on this offer made in bad faith, I find myself recalling the assertion made by Noam Chomsky that, “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….” Allowing learners to debate among themselves how to achieve a goal may well be an effective means of deflecting them from asking questions about the desirability of that goal.

The assumption that examining and awarding bodies are the best placed agents for determining the goals of education is questionable. Originally, examining and awarding bodies (EABs) consisted of a mixture of senior university academics and government educational policy planners. Such people were assumed to best understand the requirements of national economies for various skills within the paradigm of economic management by quasi-Keynesian government and central bank planning in conjunction with high status professional organisations. The shift towards a neoliberal preference for free markets working in conjunction with business managerial culture, and the increased automation of professional roles, has seriously undermined this assumption. EABs have not faded into irrelevance, however- they have taken a more active (albeit indirect) role in curriculum planning. While academics and politicians still play roles in EABs, their influence has increasingly tended to be ceded to corporations that produce educational resources and assessments.

Incidentally (co-incidentally?), in the USA, undergraduate textbook prices increased by 1,041% between 1977 and 2015. Also, according to the Lazarin overtesting report from 2014, “Our analysis found that students take as many as 20 standardised assessments per year and an average of 10 tests in grades 3-8.” If educational processes are guided according to the principle of maximising learning activities that can be measured, then it will be measurement that tends to be maximised rather than learning.

It is perhaps surprising that employers’ organisations do not play a larger role in EABs. Most formal education is provided for vocational purposes, and education that is not for some vocational purpose has no obvious need to enforce certification standards.

EdTech’s more optimism-inducing opportunity is to give learners more freedom regarding their learning goals. I will be delivering a presentation on this subject later this month at the TechXLR8 event in London, where I will be stressing the importance of self-organising learning networks’ capacity to create their own learning objectives. If the argumentative theory of reasoning is accepted, then part of how such self-organisation is possible is down to the members of such networks engaging in convincing each other of the usefulness of the networks’ objectives. Groups of learners mutually reinforce each other’s commitment to the pursuit of common learning goals.

An important part of the presentation will relate to how goal generating learning networks can assess the learning of their members without recourse to standardised testing. This indirectly refers to a proposal outlined in the post EdTech’s disputed politicosocioeconomics for the devising of learning tools that are ‘pedagogically open’. The meaning of pedagogical openness was only hinted at in that post, but can now be expanded upon here. 

Open pedagogy is an approach to learning content generation which places no arbitrary limits on the learning content’s end goals (it would be explicitly transdisciplinary). Such content is necessarily independent of the knowledge and assessment structures developed by various EABs. Pedagogically open learning content connects to all other pedagogically open learning content (but not all connections are equally weighted). The underlying principle of such openness is rather well described by this Foundation for Critical Thinking video.

Members of learning networks can collaboratively engage in the production of open learning content webs (OLCWs). An OLCW would be like a cross between a very large, detailed and intricate mind-map and a gamified version of Wikipedia. Map entries are pieces of knowledge content that can be defined and added by any network user.

OLCW users have a score, which can be increased by adding examples of content that are consistent with the definitions of existing entries (consistency is judged by a community vote, with voters’ votes weighted by their scores; diminishing returns are awarded for repeated examples added to any particular entry). Users can increase their scores more substantially by having their suggestions for links between map entries approved by community vote (and by having other users link to entries that they have made).
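A rough sketch of how those two rules might be implemented is given below; the approval threshold, base reward and decay factor are invented parameters, not part of any existing system:

```python
# A sketch of OLCW validation and scoring: community votes weighted by the
# voters' scores, and diminishing returns for repeated examples added to the
# same entry. All parameter values are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    score: float = 1.0

@dataclass
class Entry:
    title: str
    examples_accepted: int = 0

def vote_passes(votes, threshold=0.5):
    """votes: list of (User, bool). Approval is weighted by voter score."""
    total = sum(u.score for u, _ in votes)
    in_favour = sum(u.score for u, approved in votes if approved)
    return total > 0 and in_favour / total >= threshold

def reward_for_example(entry, base=10.0, decay=0.5):
    """Diminishing returns: each further accepted example on the same entry
    is worth half as much as the previous one (assumed rule)."""
    return base * (decay ** entry.examples_accepted)

def submit_example(entry, contributor, votes):
    """Apply the community vote; update the contributor's score if it passes."""
    if vote_passes(votes):
        contributor.score += reward_for_example(entry)
        entry.examples_accepted += 1
        return True
    return False

# Example usage with invented users and an invented entry
alice, bob, carol = User("alice", 5.0), User("bob", 2.0), User("carol", 1.0)
energy = Entry("conservation of energy")
submit_example(energy, carol, [(alice, True), (bob, False)])  # passes, 5 vs 2
print(carol.score, energy.examples_accepted)
```

Link suggestions could be handled by the same weighted-vote mechanism, with a larger reward reflecting the greater understanding that proposing a valid linkage implies.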

Successfully adding linkages implies that the adder has navigated the map well enough to have some understanding of its linkage structure, or possesses some independent equivalent representation that makes it possible for them to deduce that structure. For a learner to be assessed by the community as having knowledge of the map, that learner must validly add something to the map.

Validation of examples and links is ultimately dependent on community votes, but before voting there is an opportunity for users to discuss with each other whether or not to validate. Discussions of this kind would prompt argumentative reasoning that should improve users’ reasoning about the examples and links being checked.

In the post where I floated the idea of open pedagogy I also raised the idea of educator-developers who would generate open pedagogy systems. Since then I have learned the word ‘intrapreneur’. An intrapreneur is an employee of a large organisation who innovates like an entrepreneur within that organisation. Active users (those that contribute as well as view) of OLCWs would be members of a large organisation who were continually innovating to improve that organisation. If an OLCW was monetised so that there were charges for accessing some of its content, then its active users could receive discounts for use by contributing, and make a profit if they contributed sufficiently. Ideally an OLCW would be a platform cooperative belonging to its active users, or perhaps shared between them and various employers of the OLCW’s users.

Help! I need somebody (not just anybody)

The previous post on this blog expressed concerns about the fragility of learner autonomy in a world that increasingly aims to provide maximum pre-packaged convenience and do all the work for you (as long as you pay for this somehow or other). This post will hopefully examine the issue from a fairly optimistic perspective by aiming to show that providing learners with lots of on-demand help in the appropriate way can improve their self-efficacy.   

The method of perceptual learning is an interesting one. Perceptual learning is based on the (experimentally demonstrated) principle that by repeatedly presenting a learner with different examples of the kinds of elements that they would need to manipulate in some task, and for each example providing ‘yes/no’ feedback as to whether they have successfully matched the elements they perceive to a different arrangement or encoding of those elements, learners develop greater fluency in manipulating those elements and reduce the cognitive load associated with performing such manipulations- freeing up cognitive resources for other kinds of thinking.

Below is an example of a perceptual learning exercise question. During perceptual teaching, many such questions would be presented in an unpredictable sequence.  

[Image: a perceptual learning item- match the graph shown to one of several candidate equations]

You may be thinking ‘Isn’t this just an MCQ?’ In a way it is, but note that the learner is not being asked to solve an equation. Finding x or y is not the task- the task is just to match one of the equations to the graph. The other big difference between this and an MCQ is that it is being used for teaching rather than for assessment. Its effectiveness as a technique is a more specialised example of the finding that testing is effective as a form of learning rather than just a form of assessment (testing for learning tends to go by the name of retrieval practice).

Matching equations to rearranged forms of themselves is a similar type of task.

[Image: a perceptual learning item- match an equation to one of its rearranged forms]

When learning perceptually, learners are supposed to respond quickly. Perceptual learning is implicit and may act largely independently of explicit efforts to work out correct answers. Learners are encouraged to ‘see’ correct answers through repeated practice rather than deduce them through conscious deliberation.
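A minimal sketch of a drill in this style (my own construction, with a handful of illustrative rearrangement items) might look like the following: many quick match/no-match prompts, each answered with a single keypress and followed only by yes/no feedback:

```python
# A minimal perceptual-learning style drill for formula rearrangement:
# many quick match/no-match items with immediate binary feedback, rather
# than worked solutions. The item list is illustrative only.
import random

# (original equation, candidate rearrangement, whether they are equivalent)
ITEMS = [
    ("v = u + a*t",  "t = (v - u)/a",  True),
    ("v = u + a*t",  "t = v/a - u",    False),
    ("F = m*a",      "a = F/m",        True),
    ("F = m*a",      "a = m/F",        False),
    ("E = m*c**2",   "m = E/c**2",     True),
    ("E = m*c**2",   "m = c**2/E",     False),
]

def drill(n_trials=20):
    correct = 0
    for _ in range(n_trials):
        original, candidate, is_match = random.choice(ITEMS)
        answer = input(f"Is  {candidate}  a rearrangement of  {original}?  (y/n) ")
        if (answer.strip().lower() == "y") == is_match:
            correct += 1
            print("yes - correct")       # immediate binary feedback only
        else:
            print("no - incorrect")
    print(f"{correct}/{n_trials} correct")

if __name__ == "__main__":
    drill()
```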

A commercial promotional video demonstration is available here 

During my teaching career it was often necessary to help learners shape up their ability to rearrange formulae, convert between different units and also between standard form and ordinary number expressions. Much wailing and gnashing of teeth usually accompanied such practice.

What was very clear was that I and my learners had different ideas about how to improve at these types of tasks. I was well aware that many of my learners did not analytically understand the procedures for performing these kinds of tasks and were reliant on faulty heuristics. One obvious example of this was the use of what I called ‘hanging numbers’.

[Image: a formula rearrangement with a ‘hanging’ 2 that is neither clearly in the numerator nor the denominator]

A 2 has disappeared from the right hand side, consistent with dividing both sides by 2.

A 2 has also appeared at the left hand side, but it is ambiguous whether the 2 is part of the numerator or the denominator. The 2 is left hanging; it might end up in either the numerator or the denominator, usually whichever the learner decided would be more convenient in simplifying the expression (incidentally, this is a neat free pdf book on the unreliability of opaque mathematics heuristics). 

I sympathise with the confusion that learners making those mistakes would have been feeling when they encountered examples of educational content like this.

[Image: Newton’s law of universal gravitation, F = Gm₁m₂/r²]

Is that G supposed to be part of the numerator or the denominator?!?

My learners’ preferred method of improving their ability to rearrange formulae etc… was to witness and imitate large numbers of examples. Because it was apparent that this approach was not addressing the lack of core understanding of the rules of the procedures for rearranging etc… I stressed to learners the usefulness of understanding these rules by careful, conscious and slow examination of a small number of examples (this approach was generally disliked for being long).   

I understand better now that what my learners would have liked, and what would probably have helped them well, would have been something like perceptual teaching. If those learners had seen enough examples of rearrangements with numbers always clearly part of the numerator or the denominator then I strongly suspect that they would have learned to see (without consciously thinking about it) that something was wrong when they wrote a hanging number.

A step up from perceptual learning in terms of learner autonomy is choice-based assessment and preparation for future learning. These approaches are being developed at Stanford’s AAA Lab and I have posted about them before on this blog. The basic idea of these approaches is to give learners some sort of online problem solving task to perform in an application which also contains the information necessary to complete the task correctly.

[Image: screenshot of a choice-based assessment application]

Learners’ interactions with the application are automatically recorded to later determine how effectively they make use of the available information. Here is a video demonstration of such an application. According to AAA Lab studies, learners’ successful use of an application called ‘Oh no! has talent’ predicted 35% of the variance in their mathematics grades. By researching which learners are and are not effective at making use of information available to them, it may be possible to design systems in which learners find it easier to make use of available knowledge.
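For readers unfamiliar with the phrasing, ‘predicting 35% of the variance’ refers to the R² of a model relating app use to grades; the following sketch, with entirely made-up numbers, shows the calculation:

```python
# A small illustration of what 'predicting 35% of the variance' means: the
# R^2 of a regression of grades on a usage measure. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
usage = rng.uniform(0, 1, 200)                      # hypothetical app-usage scores
grades = 50 + 30 * usage + rng.normal(0, 12, 200)   # hypothetical grades

slope, intercept = np.polyfit(usage, grades, 1)     # simple linear fit
predicted = slope * usage + intercept
r_squared = 1 - np.sum((grades - predicted) ** 2) / np.sum((grades - grades.mean()) ** 2)
print(round(r_squared, 2))   # an R^2 of about 0.35 means ~35% of variance explained
```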

Personally, I am a big fan of choice-based assessment and have high hopes for it. This has a lot to do with choice-based assessment being a potential alternative to standardised assessment.

Then there is connectivism.

Siemens’ principles of connectivism, from Connectivism: A Learning Theory for the Digital Age (my emphases added)

  1. Learning and knowledge rests in diversity of opinions.
  2. Learning is a process of connecting specialised nodes or information sources.
  3. Learning may reside in non-human appliances.
  4. Capacity to know more is more critical than what is currently known.
  5. Nurturing and maintaining connections is needed to facilitate continual learning.
  6. Ability to see connections between fields, ideas, and concepts is a core skill.
  7. Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.
  8. Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.

 

A rather disparaging shorthand for connectivism could be phrased as, ‘Knowing who to get help from is the most important kind of knowledge.’ There are parts of connectivism that I hugely agree with, particularly (6) that seeing interdisciplinary connections is a core skill.

There is, though, something about connectivism that doesn’t seem to add up. Kant would have had problems with universalising it: if all learners get their answers by asking other learners for the answer, then surely no learner ever actually learned anything to share with anyone- right? The answer to who actually knows things may come down to (3), learning residing in non-human appliances.