The accreditation crunch

Online credit recovery… it sounds like something to do with mis-sold insurance policies or a procedure for compensating victims of phishing scams. Well, there are certainly some accusations of ‘scamminess’ being directed at OCR (in terms of its pedagogical soundness).

So what is OCR then?

OCR is the process of learners retaking failed courses by doing online tests. In a way, this is hardly different from ordinary online testing; the learners doing OCR necessarily failed their tests on a first attempt- that’s the only formal difference.

In practice, OCR (as described in Slate) is online testing which has some pretty dubious qualifying characteristics- namely, testing software that

  • allows unlimited repeats at test answers.
  • repeats questions fairly predictably.
  • accepts answers pasted from the clipboard, even when other browser windows are open.
  • is used with minimal or zero supervision. 

Testing software used like this might well be assessing learners’ knowledge about as effectively as the hypothetical Chinese Room assesses its occupant’s understanding of Chinese (for those unfamiliar with the Chinese Room argument, see below).


The premise of the Chinese Room argument is that the person in the room follows a combinatoric list of instructions concerning shapes that are ostensibly meaningless to them. Observers outside the room, who see only the inputs to and outputs from the room, can nevertheless apparently legitimately conclude that the room speaks Chinese.

Learners completing courses using OCR differ somewhat from the Chinese Room in that the Chinese Room involves only exact input-output functions (presumably the book in the room has a page that says “If you see any sequence of shapes not listed on any of the other pages then do not produce any shapes”). The OCR learner is less reliable and may give incorrect outputs before finding the correct ones by a process of elimination (they are a probabilistic-iterative Chinese Room).
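The difference can be made concrete with a small sketch (in Python, with an entirely hypothetical rule book and answer set, not any real OCR product): the classic room is a pure lookup table, whereas the OCR learner simply discards rejected guesses until the software accepts one, so with unlimited repeats a ‘pass’ carries no information about understanding.

```python
import random

# The classic Chinese Room: an exact input-output lookup table.
RULE_BOOK = {"ni hao": "ni hao ma?", "xie xie": "bu ke qi"}

def chinese_room(shapes):
    # "If you see any sequence of shapes not listed, produce no shapes."
    return RULE_BOOK.get(shapes, "")

def ocr_learner(options, is_correct):
    """A probabilistic-iterative Chinese Room: guess at random from the
    remaining options, discarding rejected guesses, until one is accepted.
    Returns the number of attempts taken."""
    remaining = list(options)
    attempts = 0
    while remaining:
        guess = remaining.pop(random.randrange(len(remaining)))
        attempts += 1
        if is_correct(guess):
            return attempts
    raise ValueError("no option was accepted")

# With unlimited repeats and predictable questions, the learner always
# 'passes' eventually, knowing nothing beyond the process of elimination.
attempts = ocr_learner(["A", "B", "C", "D"], lambda answer: answer == "B")
assert 1 <= attempts <= 4
```

The point of the sketch is only that both ‘systems’ satisfy an external observer while neither requires any understanding on the inside.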

OCR honestly does not surprise me very much though. While I was teaching for a living, my job was in the further education sector. This sector was post-compulsory in nature, and its culture of learning was something of a grey area between that of school and higher education, where a substantial portion of the learners were re-taking courses. As part of the job I was sometimes involved in school links projects, where I experienced up close the differences between the practices of further and secondary education.

What I witnessed was a preponderance in secondary education of what I would call ‘transcription based learning’ (TBL). TBL primarily involves learners first being given hard copies of some learning content, mainly consisting of a discrete set of items of information, and then rewriting/paraphrasing those items in a different order into a largely blank container document (largely blank- some of the container might be pre-filled as a way to model the filling process). TBL can easily be used as ostensible evidence of learning, as TBL materials provide very convenient ‘before’ and ‘after’ states, demonstrating that learners have translated/rearranged information (the unstated implication being that they did so independently rather than imitatively).

TBL and OCR clearly represent a kind of debasement of the currency of learning (by which I mean accreditation for courses). Such a debasement is reminiscent of concerns over grade inflation (as implied in this data source).


These steadily improving results for 16+ learners in the UK cover much of the same timescale over which, according to PISA scores (taken by 15 year-old learners), the UK has had consistently falling results in mathematics, reading and science.


It seems incongruous to say the least that learners’ performance at age fifteen has been dropping at the same time that it has been climbing for learners aged sixteen and over.

Educational progression is for the most part intranational, hence the debasement of educational currency at one stage of progression within one country can be accommodated at other stages in order to maintain consistency. Such debasement therefore only becomes evident when comparisons are made against standards derived either from other countries’ education systems or from outside education systems altogether (such as from employers).

It is above all where the utility of education is measured against employability (most obviously in terms of the additional earning power resulting from gaining qualifications) that evidence of debasement is easiest to detect. The point at which the additional earning power provided by qualifications falls below the cost of studying for those qualifications is the point at which the purchaser of education notices the debasement of qualifications. More accurately, this applies when education is purchased individually.

The cost-versus-value of education as a publicly funded good is not noticed anywhere near as easily, for the simple reason that in the individual case clear comparisons can be made between people who have and have not taken the option to purchase education beyond that which is compulsorily provided. Education as a public good is for the most part compulsory. Everyone partakes of it. When everyone uses a service, it ceases to be easy to say what would have happened to those that did not use it. Obviously, post-compulsory publicly funded education in the form of subsidised higher education has existed in various countries at different times, but it has tended to be less problematic where access to higher education has been restricted not by ability to pay but by the difficulty of obtaining the requisite entry qualification grades, and where adequate provision of alternative forms of learning (such as work-based training) exists.

The effects of an accreditation crunch on the value of qualifications are in some ways analogous to the effects of a credit crunch (à la 2008) on the values of financial institutions’ assets. In 2008, when the reality of the extent of bad debts that had held good credit ratings became clear, exposed financial institutions massively curtailed lending to each other and in the worst cases could not meet their financial obligations; eventually the worst affected businesses either went bankrupt or received government bailouts.

An educational meltdown to match the 2008 financial meltdown would start with a widespread acknowledgement that many educational qualifications had become seriously devalued. This acknowledgement would result in many educational institutions ceasing to accept each other’s qualifications as valid entry criteria. Quite soon after that, many educational institutions would be unable to find enough suitably qualified fee-paying students to cover the cost of maintaining their current capacities, so would have to sell off or otherwise dispose of many of their assets. Educational institutions might instead start calling for taxpayer bailouts to keep them operational while accreditation mechanisms were reformed.

It can be argued that an accreditation crunch has in fact been happening for some time, albeit much more gradually than the 2008 credit crunch did. Taxpayers have not had to make a vast short-term bailout in response to an imminent crisis of unmistakable proportions, but have rather spread such a bailout over many years by continually subsidising a devalued education sector, obscuring any acknowledgement that devaluation of qualifications was occurring. The severity of the accreditation crunch is probably much less than that of the credit crunch, partly because the value of educational qualifications is not as amenable to leveraging as is the case with financial products- paper money can have imaginary value much more easily than paper credentials; the credentials are at least attached to a human being whose qualities cannot be as readily ignored as those of a faceless and impersonal corporate entity.

Crisitunity knocks

In a recent EdSurge article, Julia Freeland Fisher, the Director of Education Research at the Clayton Christensen Institute, wrote (as ever, my emphases)-

Education innovators love to talk about adoption curves. It’s a fancy way of looking at a pretty basic concept: the rate at which a given tool, model or approach saturates a market.

Lately, I’ve been seeing these curves crop up a lot in the conversation about personalised learning. As more school systems attempt to customise learning environments and more education advocates and funders champion personalised models, people are increasingly anxious to know: At what rate might we expect new ideas and tools to permeate the traditional school system?

But not all adoption curves are created equal. Depending on the features of the tools and their intended users, the arc of adoption might look vastly different. One of those distinctions hinges on the degree to which a new tool or model conforms to the traditional school structure.

What Fisher means by a traditional school structure corresponds to what I have referred to in other posts (mainly this one) as a closed pedagogy. Such traditional structures tend to be self-perpetuating and resistant to change, apparently regardless of their effectiveness. If their effectiveness becomes excessively compromised, though, then these structures must finally undergo some sort of crisis. From the perspective of alternatives to these traditional structures, such crises represent opportunities.

I think that I have observed the beginnings of at least part of the pattern of the decisive crisis of traditional educational structures. In this post I have assembled some fragments suggestive of that pattern in general.

In the UK and the USA (not only in these places, but rather prominently in them), public education funding is starting to see real cuts in per-learner spending, even at the level of compulsory education. Weakening traditional educational structures by under-funding them does not necessarily make these structures less dominant, only less effective. However, some media reports suggest the emergence of changes that seem to reduce the extent to which traditional structures automatically crowd out alternatives.   

One story tells of fund-starved schools adopting four-day weeks, another of such schools shortening their working day. If these measures (and extensions of them) come to pass, a point will eventually be reached at which the acceptance of public schooling as the default educational route followed by the great mass of learners will no longer be so easily assumed. Parents would have to contend with schools no longer covering as many of their working hours as had previously been the case. A non-trivial part of their children’s weekly routines would start to require some sort of parental planning. It does not seem far-fetched to me that parents would become increasingly minded to contemplate what options existed for enabling their children’s home-based learning, and that the options so researched would conceivably have markedly different adoption arcs to those compatible with traditional school structures.

While working parents would of course legally be required to ensure the supervision of their children alongside their education, in practice- given some combination of mobile communications with children, automated supervision-assisting gadgets of various kinds, and some sort of formal or informal emergency visit-on-demand cover service- there could be a popular re-appraisal of how independently safe children left in their homes could be expected to be. If this re-evaluation happens then it might be a step in the direction of increasing children’s early exposure to circumstances that promote self-efficacy and resilience (the decline of which I discussed in another post).

As well as fewer public school hours being available, schools that continued to remain open much of the time might not necessarily be equally available to all students. I noticed this story regarding an academy school that had attracted negative attention for offering/encouraging some of its students to undertake Elective Home Education (EHE) rather than study at the school in person. The school in question seems to have acted opportunistically in allowing the EHE cases that it did, apparently doing so to keep out certain students who were not helping the smooth running of the school. How much this policy should be seen as a dereliction of duty by the school depends considerably on how effective its EHE provision is, and also on whether students are taking up EHE as a matter of preference rather than as a way of avoiding problems in the school that, were they solved, would mean those students would prefer to remain. The general principle that students who for one reason or another are not being best served by attending a school could have the option of learning outside of one does not seem to me necessarily flawed. Why should some learners (perhaps even many of them) not prefer to learn outside of a school if such a learning environment were more conducive to their learning?

Where these stories are published they are presented in no uncertain terms as portents of crisis- as negatives with no associated positives. Another similar story is that of the teacher recruitment/retention crisis. I was personally involved in this trend, to the extent that had it not existed I would still be teaching full time, only very peripherally concerned with EdTech, and certainly not writing this blog. I certainly have an axe to grind about the protracted deskilling and deprofessionalisation of educators that has accompanied the ascent of managerial culture in the state education system, but at the same time I recognise that educators themselves constitute an aspect of the traditional educational structures that determine the adoption arcs of various EdTech developments.

I have posted before that educators have an underutilised potential to redefine their roles in educational structures (becoming more ‘intrapreneurial’), but in general it seems that most teachers have identified with traditional educational structures as part of a stance of defending the continued provision of education as a public good, correctly recognising that it is under attack. The crisis of teacher recruitment/retention has the expected effects of removing experienced teachers from the teaching community and making their replacements less committed to established teaching methodologies. I have termed this new breed of less qualified, less formally educated, less unionised and lower-paid education workers ‘EdTechnicians’ rather than educators. EdTechnicians may have more potential affinity with the educational affordances of various technologies than more traditional educators if they see a substantial aspect of their work as being the implementation of EdTech systems.

As well as changes in compulsory education, higher education is experiencing a crisis that contains opportunities. The crisis in HE has two main aspects: financial and credential.

The financial aspect is highlighted by the recent observation that a higher average return on investment can currently be achieved by investing tuition fees in a stock-market tracking fund than would be gained by the career-enhancing effects of achieving a degree (although presumably this says as much about the overvaluation of shares as it does about graduate employment prospects).

The credential-related crisis in HE is illustrated by the decisions made by the firms Ernst & Young and PricewaterhouseCoopers to cease relying on degree classifications and pre-university qualification grades when recruiting, in favour of internally defined and assessed standards. The tendency for employers to find that the standards defined by awarding and examining bodies are of little vocational applicability is likely to accelerate the more that such bodies focus on what can be measured by standardised testing and tied in to the teaching and assessment products supplied by corporations with links to such bodies.

The combined effect of financial and credential concerns is likely to have had something to do with the fact that applications to university from domestic applicants in 2017 fell by 5%. College applications in the USA have been falling since 2010.

Alternatives to traditional higher education structures seem to have developed further than those for compulsory education. This is hardly surprising given that compulsory education is mandatory and higher education is elective. Interesting alternative models have appeared that are based on internship rather than formal study, such as Praxis and Galvanize. The ‘bootcamp’ model has also become increasingly common, and in one rather extraordinary case is provided without charge (by the university called simply 42). Praxis, Galvanize and 42 are notable in how much they emphasise learner resilience, real-world problem solving, and collaborative practice. The credentials of graduates of these processes are ultimately what they have done, and what they have chosen to do, during their involvement in the process. The learning that occurred was not so much preparation for employment as practice at effective working. Translating this shift in priorities to lower levels of education clearly involves greater resistance from traditional educational structures, but the crises of those structures may be what leads to the collapse of that resistance.

The what and the why of Ed and Tech

The previous post on this blog characterised the educational theory of connectivism as basically arguing that the most important kind of knowledge is the knowledge of who to ask for help from.

Since then, I happen to have read an intriguing 2011 article from Behavioral and Brain Sciences, written by Hugo Mercier and Dan Sperber. The article makes the persuasive argument that making persuasive arguments (persuasive to other people, that is) is the primary function of explicit human thinking. It argues that explicit deductive reasoning about the behaviour of inanimate objects is secondary to the ability to reason about the motives of the humans one interacts with, perhaps even a side effect of it.

The justifications for this claim can be summarised by stating that people (if they are neurotypical)-  

  • perform better at reasoning tasks when these are set in argument-based contexts or involve insights into humans’ motivations (such as the Wason selection task).
  • are subject to individual confirmation bias and group think.
  • anticipate potential arguments against positions that they adopt when reasoning in isolation from others.
  • tend to arrive at conclusions based more on the ease of reaching such conclusions by argument than on the practical effectiveness of the conclusions.
  • are capable of reasoning implicitly about inanimate objects with some degree of effectiveness.
  • tend to be more fluent in teleological reasoning than purely analytical reasoning. 

What these collectively imply is that it is not only true that knowledge of who to get help from is the most important kind of knowledge, but that (explicit) knowledge in general is primarily knowledge of how to get help from people (by persuading them to give such help, even if the help involved is simply to accept an argument).  

This argument (referred to hereafter as the argumentative theory of reasoning) seems to have major implications for the teaching of scientific and mathematical ideas. If learners’ reasoning is primarily interpersonal and is less effective when applied to inanimate objects then acquisition of explicit understanding relating to inanimate objects is likely to lag and be distorted to reflect interpersonal understanding.

Implicit learning about inanimate objects need not be affected detrimentally by explicit argumentative reasoning, as implicit knowledge need not be explicitly communicated to a learner. Learners can acquire implicit knowledge through a variety of automatic, implicit learning processes.

Different teaching methods span a spectrum of implicit and explicit teaching, each with characteristic weightings of implicit and explicit emphasis. Rote learning of symbol manipulations according to given rules, taught using strict behaviourist methods, involves explicit learning mainly in terms of obtaining learners’ explicit compliance in directing attention to some input and producing some output when requested (implicit learning from phenomena is dependent on the salience of those phenomena to a learner). The understanding of the rules of the symbolic manipulations could be learned implicitly if the rules were sufficiently simple, the examples supplied sufficiently numerous, and learners’ compliance and attention sufficiently sustained.

The degree of learner compliance required for teaching methods like the one mentioned above is of course not realistically obtainable without learners having previously gone through some process, most likely an explicit process, of recognition of and consent to be instructed by some educational authority figure. In other words, in order to be in a viable position to be able to implicitly learn from some teaching source, a learner must to some extent have explicitly agreed with some argument presented by that authority regarding its status as a valid teaching authority.

In the case of mathematics learning, it can be observed that much mathematics teaching has historically displayed tendencies to use explicit authority figures that utilise more implicit methods to inculcate mathematical knowledge (rote learning, algorithmic procedures). When mathematics teaching has attempted to include more explicit learner understanding (such as by discovery/enquiry learning), it is noticeable that the teaching styles involved do not so much assume the existence of an authority figure as require an authority figure to be established by explicit negotiations with learners; such negotiations would be expected to be subject to argumentative reasoning effects.

Perhaps it is the case that knowledge that needs to be explicated to be effectively understood needs to be explicated primarily in order to persuade learners to engage in cognitive processes similar to those that would tend to occur automatically during implicit learning from an accepted teaching authority. This highlights the notion that for learners the ‘how’ of learning cannot be separated from the ‘why’ of it- learning is shaped by learners’ sense of what the learning is for. This kind of goal-orientation distinguishes learning as it is understood here from activity which is more purely process-oriented, which is how play is more often understood (although implicit learning through play is of course possible).

The premise that the activity of learning is bound up with the goals of learning is illustrated by Meyer and Land’s idea of threshold concepts. Threshold concepts are those which are markedly transformative, troublesome, irreversible, integrative, bounded, discursive, reconstitutive and liminal. A very obvious threshold concept is that of phonetic spelling. Once phonetic spelling is learned by someone, from that point on in their lives, what were once a collection of arbitrary shapes are converted by an automatic process into inner speech. Once a learner can spell, they have a degree of literacy that permits their learning to develop along drastically different lines to what could have existed without such literacy.

Because a threshold concept is transformative, that concept changes how a learner views their learning. Because a threshold concept is irreversible, that concept brings about a permanent change in a learner. Because a threshold concept is reconstitutive, that concept brings about changes in how a learner sees themselves.

In the life of a mature adult, the temporal density of threshold concept learning is low compared to that of a child; more thresholds are crossed earlier in life than later. Adult consciousness is far removed from childhood’s comparative crush of no-turning-back learning events that do so much to set the shape of the future. From the child’s point of view it is understandable that they feel inclined to stand their ground for the worldview that they currently occupy and only open its development to those who have been able to convince them that they are trustworthy to do so.

If the ‘how’ of learning cannot be separated from the ‘why’ of it, then the processes of education cannot be separated from the goals of education. In the education systems that currently exist- systems that were designed in and developed throughout the twentieth century, the goals in question are twofold: firstly to maintain and consolidate centralised control of the system, and secondly to maximise the efficiency of the system. While increasing efficiency ought to generally be a good thing, it is not necessarily the best thing to prioritise. The Modern Learners movement agenda makes the point that “Doing things right is efficiency. Doing the right thing is effectiveness.” Nevertheless, central control (which takes for granted that it is directing people to do the right thing) and maximum efficiency are the core goals of Taylorist ‘scientific’ management theory applied to education (discussed in an earlier post). 

Ever since I first formally trained in teaching, I have recognised that the most important factor affecting educational practice is the role of examining and awarding bodies (although I stress that this was not even once explicitly mentioned in my training). Examining and awarding bodies are responsible for the certification of qualifications, for defining what knowledge is and is not included in such qualifications, and for defining what are and are not valid forms of assessing such knowledge. These bodies ultimately define the goals of educational practice. Because they define the goals of educational practice, they cannot help but define its methods.

EdTech presents learners with two opportunities that are ultimately very different but superficially quite similar, in that both opportunities relate to increased choice for learners regarding their learning.

One opportunity is that given by increasing learners’ freedom regarding the educational paths that they take.


Three different educational paths

Another opportunity is that given by increasing learners’ freedom regarding the educational goals that they set.


Several different educational goals

What should be apparent from the difference between the path freedom condition and the goal freedom condition is that if an end goal is fixed, a learner’s motive to take indirect paths to that end goal cannot avoid being interpreted as an inefficiency inherent to the learner. The educational process cannot in turn avoid interpreting its purpose as developing ways to reduce such learner inefficiencies. The educational process’s attempts at making learners more efficient may well consent to meet learners where they are rather than where the process deems they ideally should be, but the process will nevertheless act so as to incentivise learners to take direct, efficient paths towards goals defined by the process rather than by the learners. The freedom offered to learners to select learning paths is offered in bad faith- as a kind of learner self-diagnostic of error for the process to calculate how to correct, rather than as an expression of any sort of valid learner preference. Reflecting on this offer made in bad faith, I find myself recalling the assertion made by Noam Chomsky that, “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum….” Allowing learners to debate among themselves how to achieve a goal may well be an effective means of deflecting them from asking questions about the desirability of that goal.

The assumption that examining and awarding bodies are the best placed agents for determining the goals of education is a questionable one. Originally, examining and awarding bodies (EABs) consisted of a mixture of senior university academics and government educational policy planners. Such people were assumed to best understand the requirements of national economies for various skills, within the paradigm of economic management by quasi-Keynesian government and central bank planning in conjunction with high-status professional organisations. The shift towards a neoliberal preference for free markets working in conjunction with business managerial culture and increased automation of professional roles has seriously undermined this assumption. EABs have not faded into irrelevance however- they have taken a more active (albeit indirect) role in curriculum planning. While academics and politicians still play roles in EABs, their influence has increasingly been ceded to corporations that produce educational resources and assessments. Incidentally (co-incidentally?), in the USA, undergraduate textbook prices increased by 1,041% between 1977 and 2015. Also, according to Lazarin’s 2014 overtesting report, “Our analysis found that students take as many as 20 standardised assessments per year and an average of 10 tests in grades 3-8.” If educational processes are guided according to the principle of maximising learning activities that can be measured, then it will be measurement that tends to be maximised rather than learning.

It is perhaps surprising that employers’ organisations do not play a larger role in EABs. Most formal education is provided for vocational purposes, and education that is not for some vocational purpose has no obvious need to enforce certification standards.

EdTech’s more optimism-inducing opportunity is to give learners more freedom regarding their learning goals. I will be delivering a presentation on this subject later this month at the TechXLR8 event in London, where I will be stressing the importance of self-organising learning networks’ capacity to create their own learning objectives. If the argumentative theory of reasoning is accepted, then part of how such self-organisation is possible is due to the members of such networks engaging in convincing each other of the usefulness of the networks’ objectives. Groups of learners mutually reinforce each other’s commitment to the pursuit of common learning goals.

An important part of the presentation will relate to how goal generating learning networks can assess the learning of their members without recourse to standardised testing. This indirectly refers to a proposal outlined in the post EdTech’s disputed politicosocioeconomics for the devising of learning tools that are ‘pedagogically open’. The meaning of pedagogical openness was only hinted at in that post, but can now be expanded upon here. 

Open pedagogy is an approach to learning content generation which places no arbitrary limits upon the learning content’s end goals (it would be explicitly transdisciplinary). Such content is necessarily independent of the knowledge and assessment structures developed by various EABs. Pedagogically open learning content connects to all other pedagogically open learning content (but not all connections are equally weighted). The underlying principle of such openness is rather well described by this Foundation for Critical Thinking video.

Members of learning networks can collaboratively engage in the production of open learning content webs (OLCWs). An OLCW would be like a cross between a very large, detailed and intricate mind-map and a gamified version of Wikipedia. Map entries are pieces of knowledge content that can be defined and added by any network user.

OLCW users have a score, which can be increased by adding examples of content that are consistent with the definitions of existing entries. Consistency is judged by a community vote, in which each vote is weighted by the voter’s score, and diminishing returns are awarded for repeated examples added to any particular entry. Users can increase their scores more substantially by having their suggestions for links between map entries approved by community vote (and by having other users link to entries that they have made).
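As a rough sketch of how these scoring rules might fit together (all class and method names here are hypothetical, and the halving schedule is just one arbitrary choice of diminishing returns):

```python
from collections import defaultdict

class OLCW:
    """Minimal sketch of an open learning content web's scoring rules."""

    def __init__(self):
        self.scores = defaultdict(lambda: 1.0)   # every user starts at a baseline score
        self.example_counts = defaultdict(int)   # accepted examples per map entry

    def vote(self, votes):
        """votes: {user: True/False}. Each vote is weighted by the voter's score."""
        weight_for = sum(self.scores[u] for u, v in votes.items() if v)
        weight_against = sum(self.scores[u] for u, v in votes.items() if not v)
        return weight_for > weight_against

    def add_example(self, user, entry, votes):
        """Add an example to an entry; diminishing returns for repeats."""
        if not self.vote(votes):
            return 0.0
        n = self.example_counts[entry]
        gain = 1.0 / (2 ** n)          # gain halves with each accepted example
        self.example_counts[entry] += 1
        self.scores[user] += gain
        return gain

    def add_link(self, user, entry_a, entry_b, votes, link_gain=2.0):
        """Approved links between entries earn more than examples do."""
        if not self.vote(votes):
            return 0.0
        self.scores[user] += link_gain
        return link_gain
```

Note that because voters’ weights are their (changing) scores, successful contributors gradually gain more say in what counts as consistent, which is the intended feedback loop.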

Successfully adding a linkage implies that the adder has navigated the map well enough to have some understanding of its linkage structure, or possesses some independent, equivalent representation from which that structure can be deduced. For a learner to be assessed by the community as having knowledge of the map, that learner must validly add something to the map.

Validation of examples and links ultimately depends on community votes, but before voting there is an opportunity for users to discuss with each other whether or not to validate. Discussions of this kind would prompt argumentative reasoning that should improve users’ reasoning about the examples and links being checked.

In the post where I floated the idea of open pedagogy I also raised the idea of educator-developers who would generate open pedagogy systems. Since then I have learned the word ‘intrapreneur’. An intrapreneur is an employee of a large organisation who innovates like an entrepreneur within that organisation. Active users (those that contribute as well as view) of OLCWs would be members of a large organisation who were continually innovating to improve that organisation. If an OLCW were monetised so that there were charges for accessing some of its content, then its active users could earn discounts by contributing, and make a profit if they contributed sufficiently. Ideally an OLCW would be a platform cooperative belonging to its active users, or perhaps shared between them and various employers of the OLCW’s users.



Help! I need somebody (not just anybody)

The previous post on this blog expressed concerns about the fragility of learner autonomy in a world that increasingly aims to provide maximum pre-packaged convenience and do all the work for you (as long as you pay for this somehow or other). This post will hopefully examine the issue from a fairly optimistic perspective by aiming to show that providing learners with lots of on-demand help in the appropriate way can improve their self-efficacy.   

The method of perceptual learning is an interesting one. Perceptual learning is based on the (experimentally demonstrated) principle that by repeatedly presenting a learner with different examples of the kinds of elements they will need to manipulate in some task, and for each example providing ‘yes/no’ feedback as to whether they have successfully matched the elements they perceive to a different arrangement or encoding of those elements, learners develop greater fluency in manipulating those elements and reduce the cognitive load associated with performing such manipulations, freeing up cognitive resources for other kinds of thinking.

Below is an example of a perceptual learning exercise question. During perceptual teaching, many such questions would be presented in an unpredictable sequence.  


You may be thinking ‘Isn’t this just an MCQ?’ In a way it is, but note that the learner is not being asked to solve an equation. Finding x or y is not the task; the task is just to match one of the equations to the graph. The other big difference from an MCQ is that this is being used for teaching rather than for assessment. Its effectiveness as a technique is a more specialised example of the finding that testing is effective as a form of learning rather than just a form of assessment (testing for learning tends to go by the name of retrieval practice).

Matching equations to rearranged forms of themselves is a similar type of task.


When learning perceptually, learners are supposed to respond quickly. Perceptual learning is implicit and may act largely independently of explicit efforts to work out correct answers. Learners are encouraged to ‘see’ correct answers through repeated practice rather than deduce them through conscious deliberation.
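A minimal sketch of the drill mechanics just described (unpredictable sequencing, bare yes/no feedback) might look like this; everything here is illustrative, with the ‘graph’ stood in for by sample points:

```python
import random

def make_item(rng):
    """One matching question: a target line y = m*x + c shown as sample
    points (standing in for the plotted graph), plus distractor equations."""
    m, c = rng.randint(1, 5), rng.randint(-5, 5)
    points = [(x, m * x + c) for x in range(-2, 3)]
    options = [(m, c), (m + 1, c), (m, c + 2), (m - 1, c - 1)]
    rng.shuffle(options)                       # unpredictable option order
    return points, options, (m, c)

def run_drill(answer, n_items=20, seed=0):
    """Present many items in an unpredictable sequence; feedback is yes/no only."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_items):
        points, options, target = make_item(rng)
        choice = answer(points, options)       # learner picks an option index
        correct += (options[choice] == target) # 'yes' iff the match is right
    return correct / n_items

# A 'learner' that has become fluent: reads m and c straight off two points.
def fluent_learner(points, options):
    (x0, y0), (x1, y1) = points[0], points[1]
    m = (y1 - y0) // (x1 - x0)
    c = y0 - m * x0
    return options.index((m, c))
```

The point of the drill is not the scoring logic but the volume and variety of quick, low-stakes matches; a real implementation would add a response-time cap to discourage conscious deliberation.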

A commercial promotional video demonstration is available here.

During my teaching career it was often necessary to help learners shape up their ability to rearrange formulae, convert between different units and also between standard form and ordinary number expressions. Much wailing and gnashing of teeth usually accompanied such practice.

What was very clear was that I and my learners had different ideas about how to improve at these types of tasks. I was well aware that many of my learners did not analytically understand the procedures for performing these kinds of tasks and were reliant on faulty heuristics. One obvious example of this was the use of what I called ‘hanging numbers’.


A 2 has disappeared from the right-hand side, consistent with dividing both sides by 2.

A 2 has also appeared at the left-hand side, but it is ambiguous whether the 2 is part of the numerator or the denominator. The 2 is left hanging; it might end up in either the numerator or the denominator, usually whichever the learner decided would be more convenient in simplifying the expression (incidentally, this is a neat free pdf book on the unreliability of opaque mathematics heuristics).
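To reconstruct the kind of working being described (the original screenshots are not reproduced here; the equations below are a hypothetical example of my own):

```latex
% Starting equation, to be solved for y:
x = 2y
% Correct step: divide both sides by 2, with the 2 unambiguously in the denominator:
\frac{x}{2} = y
% 'Hanging number' step: the 2 vanishes from the right-hand side and reappears
% on the left without a clear fraction bar, so the working could mean either:
2x = y \quad \text{or} \quad \frac{x}{2} = y
```

The hanging 2 survives as a habit precisely because one of the two readings usually turns out to be the convenient one, so the ambiguity is rarely forced into the open.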

I sympathise with the confusion that learners making those mistakes would have been feeling when they encountered examples of educational content like this.


Is that G supposed to be part of the numerator or the denominator?!?

My learners’ preferred method of improving their ability to rearrange formulae and so on was to witness and imitate large numbers of examples. Because it was apparent that this approach was not addressing their lack of core understanding of the rules behind the procedures, I stressed to learners the usefulness of understanding these rules through careful, conscious and slow examination of a small number of examples (an approach that was generally disliked for being slow and laborious).

I understand better now that what my learners would have liked, and what would probably have served them well, would have been something like perceptual teaching. If those learners had seen enough examples of rearrangements with numbers always clearly part of the numerator or the denominator, then I strongly suspect that they would have learned to see (without consciously thinking about it) that something was wrong whenever they wrote a hanging number.

A step up from perceptual learning in terms of learner autonomy is choice-based assessment and preparation for future learning. These approaches are being developed at Stanford’s AAA Lab and I have posted about them before on this blog. The basic idea of these approaches is to give learners some sort of online problem-solving task to perform in an application which also contains the information necessary to complete the task correctly.


Learners’ interactions with the application are automatically recorded to later determine how effectively they make use of the available information. Here is a video demonstration of such an application. According to AAA Lab studies, learners’ successful use of an application called ‘Oh no! has talent’ predicted 35% of the variance in their mathematics grades. By researching which learners are and are not effective at making use of information available to them, it may be possible to design systems in which learners find it easier to make use of available knowledge.

Personally, I am a big fan of choice-based assessment and have high hopes for it. This has a lot to do with choice-based assessment being a potential alternative to standardised assessment.

Then there is connectivism.

Siemens’ principles of connectivism, from Connectivism: A Learning Theory for the Digital Age (my emphases added)

  1. Learning and knowledge rests in diversity of opinions.
  2. Learning is a process of connecting specialised nodes or information sources.
  3. Learning may reside in non-human appliances.
  4. Capacity to know more is more critical than what is currently known.
  5. Nurturing and maintaining connections is needed to facilitate continual learning.
  6. Ability to see connections between fields, ideas, and concepts is a core skill.
  7. Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.
  8. Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.


A rather disparaging shorthand for connectivism could be phrased as, ‘Knowing who to get help from is the most important kind of knowledge.’ There are parts of connectivism that I hugely agree with, particularly (6) that seeing interdisciplinary connections is a core skill.

There is, though, something about connectivism that doesn’t seem to add up. Kant would have had problems with universalising it: if all learners get their answers by asking other learners, then surely no learner ever actually learned anything to share with anyone, right? The answer to who actually knows things may come down to (3), learning residing in non-human appliances.


Studies have demonstrated that texts set in difficult-to-read fonts are easier to learn from than texts set in easy-to-read fonts. When more effort is required to distinguish visually ambiguous symbols, apparently more effort gets exerted on comprehension too. This is one of various phenomena showing that challenge (as long as it is not excessive) acts as a spur to learning.

Deliberately increasing the effort involved in interactions contradicts the approach typically applied in user-experience/user-interaction design: making interactions as effortless, automatic and convenient as possible.

The tension between challenge and convenience is central to the effectiveness of educational systems. Levels of challenge appropriate to optimal learning may be too inconvenient to induce engagement, while levels of convenience that ensure engagement may not provide sufficient challenge for effective learning. The resultant trade-off between challenge and convenience is not necessarily easily optimised.

The twentieth century witnessed an explosion of mass participation in education and the extension of that participation into more and more of learners’ lives (a process which has continued into the twenty-first century). Achieving this huge expansion of learning has been a staggering triumph in terms of increased learner engagement, but it is not clear how the balance of challenge and convenience has been affected by it, and hence what the effect on learners’ learning may have been. Are learners learning what they need to learn to act effectively in employment (for example)?

A CIPD report from 2015, discussing the employability skills of new graduates, commented that such graduates had skills deficiencies in four key areas:

‘Working life’ skills: this includes turning up on time and looking presentable, but also takes in what is expected of an employee. As one HR professional puts it: “It’s understanding what being a working professional means.”

Self-awareness and confidence: lack of confidence is often the result of a young person’s life experience so far, and its impact is often underestimated by more experienced staff. One young person told the interviewer: “In my first couple of weeks I’d be scared about picking up the phone. But now it doesn’t faze me; I just pick up the phone and approach people around me.”

Communication: learning how to communicate with colleagues, whether it’s face-to-face or via email, is vitally important, but so many are unaware of the impact of what they say. One young employee said: “The amount of times I would speak to someone and say something completely inappropriate and not have an absolute clue about what I did wrong at the time.”

Commercial skills: this involves not just being more business-focused, but also an ability to see things from the client’s point of view.

Beyond the issue of employability, critiques have been made concerning how successfully autonomous adult individuals are developing, suggesting that traditional adult identity is becoming increasingly deferred.

Being young today is no longer a transitory stage, but rather a choice of life, well established and brutally promoted by the media system. While the classic paradigms of adulthood and maturation could interpret such infantile behaviour as a symptom of deviance, such behaviour has become a model to follow, an ideal of fun and being carefree, present in a wide variety of contexts of society. The contemporary adult follows a sort of thoughtful immaturity, a conscious escape from the responsibilities of an anachronistic model of life. If an ideal of maturity remains, it does not find behavioural compensations in a society where childish attitudes and adolescent life models are constantly promoted by the media and tolerated by institutions.

The kidult does not design his existence along a line that goes from the past to the future; rather, he takes his decisions day-to-day, on the basis of needs and desires related to the situation and the context. He lives an artificial youthfulness as infinite potentiality, he lives in a universe in which any valence to diversity between young and adult has been subtracted and in which, on the contrary, the lack of distinction between the two became a characterising element.

In kidults, in particular, the sense of dependency prevails over the search for independence. It becomes an inescapable condition which jeopardises the natural path toward autonomy and individual and social self-determination.

Parts of this critique of ‘kidulthood’ are reminiscent of the ideas Ian Bogost expressed in ‘Play Anything’ relating to irony as an inauthentic form of play, which was touched on in an earlier post (which discussed how play could be partly understood as a form of respect for the intrinsic properties of objects in and of themselves, demonstrated through an enthusiasm for engaging with those objects as they are rather than as how we might like them to be).

A similar critical view, expressed by Simon Pegg, argued that deferred maturity acts as a kind of escapism from unwelcome aspects of reality.

Recent developments in popular culture were arguably predicted by the French philosopher and cultural theorist, Jean Baudrillard in his book, ‘America’, in which he talks about the infantilization of society. Put simply, this is the idea that as a society, we are kept in a state of arrested development by dominant forces in order to keep us more pliant. We are made passionate about the things that occupied us as children as a means of drawing our attentions away from the things we really should be invested in, inequality, corruption, economic injustice etc. It makes sense that when faced with the awfulness of the world, the harsh realities that surround us, our instinct is to seek comfort, and where else were the majority of us most comfortable than our youth? A time when we were shielded from painful truths by our recreational passions, the toys we played with, the games we played, the comics we read. There was probably more discussion on Twitter about The Force Awakens and the Batman vs Superman trailers than there was about the Nepalese earthquake or the British general election.

Arguments have been made (here by the libertarian FEE) that increasing amounts of compulsory formal education have eroded the challenge faced during childhood and adolescence, by introducing an increasingly extended period of economic dependency and by integrating young people into the top-down system of control inherent in the replacement of employment with education.

An explicitly technology-related dimension has also been incorporated into this discussion, questioning whether the ubiquity and sophistication of technology has produced a situation in which many people feel they cannot hope to understand how much of the world functions, to the extent that they come to believe they can only stand in relation to it much as a young child stands in relation to an adult world stretching inestimably far beyond the limits of their comprehension: a type of learned helplessness.

“Most people think about understanding as a binary condition,” Arbesman told me in an interview. “Either you understand things completely or not at all.” That viewpoint is dangerous when it’s applied to technology today, because there’s simply no way to understand everything. (Or, as Arbesman puts it in his book: “The vast majority of computer programs will never be thoroughly comprehended by any human being.”) Instead, he argues, people should be acting as technological naturalists, approaching complex digital systems the way a biologist would examine living systems. Doing so will require people to rethink what it means to understand technology, and at what scale:

Abstraction in computing—and the elegance of interfaces like the ones that make MacBooks and iPhones so user friendly, for instance—has made machines delightful and easy to use, but has also created a huge gap in comprehension that didn’t exist in the early days of personal computing. (In the beginning, if you wanted to mess around with a computer, you had to learn to speak its language; not the other way around.)


However, various examples (some really quite inspiring) exist that show young people’s willingness and capacity for embracing challenges that bridge educational contexts and the wider world. These may be a minority of learners, but perhaps a minority that is starting to grow.






Plus ça change, plus c’est la même chose

In Education Technology as ‘The New Normal’ (a talk given on 24/5/17 at CENTRO’s symposium “Data, Paper, Scissors: Tech-Based Learning Experiences for Higher Education” in Mexico City by Audrey Watters), the following point is made:

I want to be sure that anytime we talk about “the future of education,” that we always consider “the history of education.” We cannot break from history. We have not severed ourselves from the past through the introduction of computers or computer networks. Our institutions have not been severed from the past because of these. Our cultures have not. (At least not entirely. Not yet.) We have not.

When we talk about “the future of education” as an explicitly technological future, I want us to remember that “the history of education” has long been technological – thousands of years of writing, hundreds of years of print, a century of “teaching machines,” 75 years of computing, almost 60 years of computer-assisted instruction, at least 40 years of the learning management system, more than 25 years of one-to-one laptop programs, a decade (give or take a year) of mobile learning. Education technology is not new; it has not appeared “all of a sudden”; and it is not a rupture. It is inextricably linked to history, to histories of education and to histories of technology.

I very much agree with this statement, but feel that it gives insufficient consideration to an incredibly crucial landmark in the history of educational technology: the ascent of Taylorism (Frederick Winslow Taylor, March 20, 1856 – March 21, 1915). Quoting from the paper Scientific Management Still Endures in Education (various emphases mine):

Frederick W. Taylor’s “scientific” and managerial approach to the workplace maximized efficiency and productivity through the standardization of labor. Through motion and time study, Taylor vigorously studied body movements and assigned exact approximations of the time necessary to complete the labor. A primary principle of his management approach was to eliminate opportunities of chance or accident through the scientific investigation of every detail of labor. Scientific management eliminated the need for skilled labor by delegating each employee one simple task to repeat over and over. Although this method increased the productivity of factories, it stripped employees of their freedom to choose their work, as well as how it should be done.

With the publication of his first article, “The Elimination of Waste in Education,” John Franklin Bobbitt (1912) started his career as a leader in the field of curriculum and became one of the pioneers that set the stage for the adoption and implementation of scientific management in school administration in the US. Bobbitt’s work in curriculum studies in the US is particularly important because of his application of Frederick Taylor’s concepts of scientific management to educational management and planning. While arguing that factory-like efficiency in education should be driven by objectives, Bobbitt (1920) stated: “It is the objectives and the objectives alone … that dictate the pupil-experiences that make up the curriculum. It is then these in their turn that dictate the specific methods to be employed by the teachers and specific material helps and appliances and opportunities to be provided. These in their turn dictate the supervision, the nature of the supervisory organization, the quantity of finance, and the various other functions involved in attaining the desired results. And, finally, it is the specific objectives that provide standards to be employed in the measurement of results.”

Bobbitt argued that schools, like businesses, should be efficient, eliminate waste, and focus on outcomes to the degree that the curriculum must be useful in shaping students into adult workers. Along with Frederick Winslow Taylor, Bobbitt believed that efficient outcomes depended on centralized authority and precise, top-down instruction for all tasks performed. Within Bobbitt’s educational vision—similar to Taylor’s vision of managers—the administrator gathers all possible information about the educational process and develops the best methods for teachers to get students to meet the standards.

According to Bobbitt’s (1913) scientifically managed education, teachers must be required to follow the methods determined by their administrators because they are not capable of determining such methods themselves: The burden of finding the best methods is too large and too complicated to be laid on the shoulders of the teachers … The ultimate worker, the teacher in our case, must be a specialist in the performance of the labor that will produce the product. Bobbitt’s conception embraced one of the core logics of scientific management in education, which asserts that the end-points of predetermined objectives and/or standards alone drive the educational process (the production of students). Within these logics, all aspects of education therefore must serve the ends of the education process, with student learning purely based on pre-determination, and teachers’ content delivery structured by pre-determined scientific methods. Thus, the ends determine the means. This allowed the curriculum to be broken down into content units that could be standardized, determined in advance, taught in a linear manner, and easily assessed.

Scientific management of education is the essence of twentieth century educational technology.  In many ways I think that not very much has really changed so far in the new millennium.

Technology in education does not have to be what learners or educators encounter; it can be the province of education’s administrators. This is information technology understood in terms of social processes rather than by reference to what kinds of artefacts are used in educational activities: information technology in terms of software rather than hardware, where the software is being run on humans, not on computers.

The essence of scientific management in education was that the education system was centrally planned and the centre decided everything for everyone involved in the system (which included everyone at some stage in their lives, education being mandatory).

In striking contrast to that, the controversy-laden buzzword in education right now is ‘choice‘.

It is undeniable that the idea of choice in education goes hand-in-hand with governmental abrogation of universal education and the opening up of education to the business world. I share the concerns of many that the business world may not be entirely trustworthy in assuming the mantle of society’s principal educator. For as long as educational provision continues to be legally mandated, and for as long as most families cannot easily afford to reduce their working hours sufficiently to home-school their children, I strongly expect to see continuing unscrupulous profiteering by educational businesses that recognise that they have a captive market. This is what a monopoly is. The monopoly in education provision is just being sold to the private sector.

Where consumers of education are legally obliged to purchase it, how much of a choice do they really have? The prospect of giving people a choice of education providers, but not the choice to refuse all of the offers provided, has notable parallels with some phenomena that have arisen in the USA: firstly, compulsory medical insurance purchasing (and the dissatisfaction therewith), and secondly, a party political system that somehow resulted in a presidential election contested by two people whom the majority of voters disliked so much that they may have found it hard to decide which they disliked more. But what other choice did they seem to have?


My EdTech career actually began in the year 2000 (although the word ‘EdTech’ was not in use at that time; people talked about ‘educational software’). I was working in a start-up where I wrote ActionScript for Flash games for BBC Education web pages (mainly about Robot Wars and the Tweenies). At the time I thought of what I was doing more as game writing than as instructional design.

I used to genuinely enjoy using Flash. Flash is really, really quick and easy to use compared to DOM-based scripting; so much so that while teaching full time I could still get around to writing a bunch of Flash movies for physics teaching (animations of electric currents, kinetic gases, free-falling objects, and so on).

I was rather sorry to see Flash start to fall out of usage in the big wide world, but was reassured that the fairly old Windows PCs used where I taught still had IE and old versions of Chrome installed, and hence were Flash-player enabled. Students weren’t always able to access the Flash movies on their own up-to-date laptops and tablets, but as I was teaching physical courses rather than distance-learning courses, it seemed reasonable that those students could access the Flash content while in college.

Since studying an e-learning technology MSc I have started to gain an appreciation of content-authoring software tools, and noted that many of them boast of being able to convert Flash content into HTML5 content. Because of the type of content that most authoring tools tend to produce (supercharged PowerPoint, more or less), it did not occur to me that the kind of richly interactive gaming/simulation content that I had liked in Flash could be automatically recreated in HTML5.

This has indeed been happening though, just in a rather stop-start fashion. The latest step in this process (which has been going on since 2012) is Adobe Animate. This is Flash really, but not called Flash anymore.

In practice, Animate works rather differently to Flash; ActionScript has been replaced by the CreateJS libraries. CreateJS is pretty easy to use if you already understand JavaScript (in other words, it is less easy to understand than ActionScript was). Text input can’t be handled entirely within the Flash-style development environment; it relies on adding a DOM input text element.

Most significantly, I feel, the documentation for scripting and interactivity in Animate is frankly a pretty muddled mish-mash of material that is easily understood only by those who already know JavaScript fairly well and/or have been specifically keeping up with the development of Flash-to-HTML5 output. It is not exactly made easy for a novice to know where to start.

Ease of initial adoption was one of the great advantages of Flash. I can believe that Flash helped a generation of internet users to engage with web programming.

Flash allowed scripting to be experimented with in playful ways that provided concrete feedback (graphical objects would move and change shape or colour). Arcane I/O protocols did not need to be learned. Children who had learned coding using Scratch would probably not feel too out of their depth scripting in Flash. Flash acted as an informal bridge between web-based graphic design and web functionality scripting.

Animate has some impressive multimedia capabilities that could make for very impressive and versatile browser games (it’s got a 360 degree rotatable stage). Whether Animate is easy enough to use that its features will be taken and run with by a large, diverse community of imaginative designer-developers is not clear. Conceivably, future versions of Animate will have more of Flash’s ease-of-use. Personally I intend to skill myself up on Animate just in case it does become the rebirth of Flash.