The opposite of not even wrong

It is an unfortunate reality that most people do not like to find out that they are wrong about something. This (and much more) is written about brilliantly in Kathryn Schulz’s Being Wrong.

People’s dislike of being demonstrably wrong is a foundational challenge for formal education. By the time formal education commences, informal learning has already shaped learners’ minds: they arrive with various pre-existing ideas that are likely to be incorrect, not least among them ideas about how they themselves do and do not learn well.

If education were simply a matter of conveying information then teaching would be a lot more like computer programming (or at least like training machine-learning classifiers), which it decidedly is not. Education is more problematic than programming because a key part of it involves attempting to produce ‘un-learning’ of previously learned ideas that tend to limit further learning. The most striking examples of what requires un-learning are the various cognitive biases that seem to be intrinsic to human cognition, which Daniel Kahneman superbly documented in Thinking, Fast and Slow.

Kahneman’s basic discovery is that human cognition appears to operate in two contrasting modes (System One and System Two).

System One cognition is fast and easily becomes automatic. It is inferential and associative, and it operates by assuming that whatever information is most immediately available is sufficient for drawing correct inferences.

System Two cognition proceeds slowly and precludes other substantial cognitive activity from occurring at the same time. It is analytical and rule-based, and it recognises that the immediately available information may be of low relevance while information that is not available may matter far more.

Kahneman’s key insight is that most people, most of the time, use System One cognition while believing, almost all of the time, that they are using System Two.

People’s general inability to recognise their own cognitive shortcuts has a profound implication for formal education: attempts to teach ideas that can only be understood if learners suspend their conviction that their existing models of the world are correct will be subject to garbling translations into forms that allow the new ideas to remain consistent with those existing models. Constructivism has long recognised that learners translate what they are taught into models that they construct, but Kahneman’s findings suggest more specifically how such models will tend to develop: they will tend to develop around fixations.

The phenomenon of fixation can be illustrated with problems such as this one:

Your aim is to make a necklace that costs no more than 15 cents using the four chains below. It costs two cents to open a link and three cents to close it:  

[Image: four separate short chains, each made of three closed links]

HINT: This problem defeats many people partly because the way it is presented creates an unhelpful implicit fixation: that the four short chains must be kept intact and connected to one another.

Perhaps knowing that this fixation is unhelpful will enable a learner to ignore it, perhaps not; you may want to try to solve it yourself. The solution is shown at the end of the post.

My experience of educational practice leads me to believe that formal education as an institution has for the most part quietly admitted defeat at undoing limiting fixations (while unintentionally producing a host of limiting fixations of its own). Instead, it tries to defeat ingrained misconceptions by supplying overwhelming numbers of correct examples of how to solve problems, in the hope that repeating correct examples often enough will eventually inhibit learners’ propensity to recall their misconceptions. This is analogous to a medic treating the symptoms of an illness when they can see no way to treat its cause effectively.

John Holt critiqued the relentlessly unreflective pursuit of ‘rightness’ in How Children Fail:

Sometimes we try to track down a number with Twenty Questions. One day I said I was thinking of a number between 1 and 10,000. Children who use a good narrowing-down strategy to find a number between 1 and 100, or 1 and 500, go all to pieces when the number is between 1 and 10,000. Many start guessing from the very beginning. Even when I say that the number is very large, they will try things like 65, 113, 92. Other kids will narrow down until they find that the number is in the 8,000’s; then they start guessing, as if there were now so few numbers to choose from that guessing became worthwhile. Their confidence in these shots in the dark is astonishing. They say, “We’ve got it this time!” They are always incredulous when they find they have not got it. They still cling stubbornly to the idea that the only good answer is a yes answer. This, of course, is the result of the miseducation in which “right answers” are the only ones that pay off. They have not learned how to learn from a mistake, or even that learning from mistakes is possible. If they say, “Is the number between 5,000 and 10,000?” and I say yes, they cheer; if I say no, they groan, even though they get exactly the same amount of information in either case. The more anxious ones will, over and over again, ask questions that have already been answered, just for the satisfaction of hearing a yes.

What Holt described is a kind of perfect storm: uncorrected System One thinking (unrealistic guessing) made worse by learners having successfully incorporated into their System One schemas the implied lesson that education means giving the right answer.
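Holt’s aside that a yes and a no give “exactly the same amount of information” is worth unpacking: a question that splits the remaining range in half yields one bit of information whichever way it is answered, which is why a systematic halver can always find a number between 1 and 10,000 in at most fourteen questions (2^14 = 16,384). Here is a minimal sketch in Python of that narrowing-down strategy; the names and setup are mine, purely for illustration:

```python
def guess_number(lower, upper, answer_is_at_most):
    """Find a secret number in [lower, upper] by always halving the range.

    answer_is_at_most(m) stands in for asking "is the number <= m?";
    a yes and a no each eliminate half of the remaining candidates.
    """
    questions = 0
    while lower < upper:
        mid = (lower + upper) // 2
        questions += 1
        if answer_is_at_most(mid):  # a "yes" is no more informative than a "no"
            upper = mid
        else:
            lower = mid + 1
    return lower, questions


secret = 8675
found, asked = guess_number(1, 10_000, lambda m: secret <= m)
print(found, asked)  # finds 8675 in no more than 14 questions
```

There is never any reason to cheer a yes or groan at a no: the strategy makes progress at exactly the same rate either way.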

Nassim Taleb observed something not too dissimilar in his analyses of how financial decisions were made by those who were and were not formally educated in mathematics and statistics. Taleb worked as a Wall Street hedge-fund manager and derivatives trader, during which time he both made, and watched others make, many high-risk trading decisions involving very large sums of money.

Taleb came to recognise that traders basically came in two types: traders who were formally educated in financial mathematical theories, and heuristic-using traders who had learned trading as a practical art through a combination of experiment and imitation of techniques that more experienced traders had been observed using successfully.

Taleb’s observations of these two types of traders ultimately led him to conclude that there was one very important difference in their effectiveness. The difference was not that theory-based trading worked better than heuristic-based trading, but that heuristic traders were much better at recognising when their heuristics could not be relied on to produce profitable trades than formally educated traders were at recognising when their theories were not reliable guides to what was happening in the markets. Formally educated traders were significantly worse at recognising when they were wrong, which meant that they sometimes lost LOTS of money: more than the sum of all the money they had ever previously made. Note what this shows: highly analytical, theory-based thinkers could still make the classic System One error of assuming that they had all the information they needed, and that whatever they did not know they did not need to know.

I have to wonder whether this has something to do with formal education in mathematics following the approach of formal education in general: making learners care primarily about being right, and not making them care much about how wrong they were when they were wrong.

The way that assessment schemes are used in formal education typically assumes a baseline of no knowledge, for which no reward is given. Correct knowledge is rewarded in approximate proportion to the correctness of answers: some answers are more correct than others. Very rarely do assessment schemes accommodate the idea that some answers are more incorrect than others, and more rarely still the idea that some incorrect knowledge may be significantly worse to have than no knowledge at all.
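Scoring schemes that do distinguish degrees of wrongness exist outside mainstream education, incidentally. A logarithmic scoring rule, long used for grading probabilistic forecasts, rewards a confident right answer, treats admitted ignorance as a modest cost, and punishes a confident wrong answer most of all, so ‘worse than no knowledge’ becomes a score one can actually receive. A minimal sketch, with a made-up two-option question for illustration:

```python
import math

def log_score(probability_given_to_truth):
    """Logarithmic scoring rule: log2 of the probability that the
    answerer assigned to whichever answer turned out to be true."""
    return math.log2(probability_given_to_truth)

# A two-option question; assigning 0.5 to each option models honest ignorance.
print(log_score(0.5))   # -1.00  admitting ignorance costs one bit
print(log_score(0.95))  # -0.07  confident and right: almost no cost
print(log_score(0.05))  # -4.32  confident and wrong: far worse than ignorance
```

Under such a rule a learner is explicitly better off saying “I don’t know” than confidently asserting a misconception, which is exactly the incentive conventional marking schemes fail to provide.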

The physicist Wolfgang Pauli is said to have dismissed certain ideas suggested to him by other physicists by declaring them ‘not even wrong’, apparently meaning that they were contradictory or unfalsifiable (or both, and perhaps inseparably so). An idea that was right only because it could not be wrong was of no use.

I have a rough idea for an educational mathematics game (which would probably be called ‘Wronguns!’) in which learners try to generate the most incorrect mathematical statements that they can devise. Working out the scoring system might be tricky though.
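For what it’s worth, one conceivable starting point for the scoring (entirely my own speculation, not a worked-out design) would be to score a numeric claim by its relative error, so that a wilder miss earns more points:

```python
def wrongness(claimed, true_value):
    """Score a numeric claim by relative error: 0 is exactly right,
    and bigger scores are awarded for more glorious wrongness."""
    return abs(claimed - true_value) / max(abs(true_value), 1)

# Three wrong answers to 7 x 8 (= 56): the wilder the miss, the higher the score.
print(wrongness(54, 56))    # ~0.04  barely wrong
print(wrongness(10, 56))    # ~0.82  respectably wrong
print(wrongness(-500, 56))  # ~9.93  a champion wrongun
```

That only covers arithmetic claims, of course; deciding how wrong a false statement about, say, triangles is would be exactly where the trickiness begins.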


The solution to the chain problem

The solution is actually to first completely open all three links of one of the chains and then use the three resulting links to join the remaining three chains into a loop: three openings at two cents each plus three closings at three cents each comes to exactly fifteen cents.

[Image: the finished necklace, with the three links of the opened chain joining the other three chains]

