Undervalued & under-taught: concepts missing in teacher education

Freire, Piaget and Vygotsky are the usual suspects in the theory that underpins many initial teacher training courses, at least in my experience; I am happy to be corrected. The theories of these men, while useful and, in parts, necessary, are often presented as the ground truths of teaching and learning, or as outright fact.

Over the past few years I have picked up some knowledge of certain concepts that, if not completely debunking many of these operational fact-theories, certainly pose a challenge to them. I think it would do a lot to develop educators' own critical thinking faculties if these concepts were taught alongside the main teacher training dogma.

Many of these ideas I have been exposed to through my own semi-self-directed reading about education. I write semi-self-directed because, although I am choosing which books to read and when, I rely on the recommendations of colleagues and the educators I follow on Twitter.

While each of these on its own may not be a threshold concept as such (if such a thing exists), learning about them has had a developmental effect on my thinking as an educator.

In my thirties, I can now begin to trace back my own intellectual interests and growth of knowledge. Originally, I was only interested in biology and things directly related to that field. Working as a teacher, my interest in the subject prompted me to develop my knowledge of neuroscience, among other fields. From there I developed an interest in the teenage brain, and then in neuromyths.

During my PGCE top-up, it was clear that subjective research processes were held to be just as valid as objective research methods. I challenged some of the ideas fed to us about subjective and experience-based research, arguing that evidence needs to be as objective as possible. My ideas were met with some scepticism, but I went ahead and tried to summarise some of the work on educational neuroscience and to carry out some sort of quantifiable research on teachers' understanding of neuromyths.

Despite the lack of rigour and balanced curriculum, topping up my GTP to a PGCE was worth it. I wanted to do the PGCE because I felt my GTP had not had any academic focus, and I didn't like the fact that I knew little about the theory behind what I was being told to do in the classroom. My PGCE served to get me academically engaged with educational theory, and it is only since completing it that I have maintained that engagement.

My interest in this area hasn't abated, but as I learn more it has become more nuanced. I agree that we need to be careful interpreting the results of much cognitive research, but I do think it offers the power to help guide us towards what may be better, rather than worse, pedagogical techniques. Its findings may well help us hone our pedagogical content knowledge.

Currently, these are the ideas that I believe all teachers should have some training on (in no particular order):

Where can you go to get more valuable knowledge on these concepts? Here are some of the resources that I would recommend:

The Education Endowment Foundation

Daniel Willingham’s blog

The Education Development Trust

Core Knowledge Curriculum

The Learning Scientists

The Learning Spy

PGCE Research: Teacher Understandings of Educational Neuroscience

Below is the pdf of my research project that I completed as part of my PGCE top-up course. It followed from a fuller literature review that can be read here.

I completed it in 2015 and only just realised that I didn't have a link to it here.

Download (PDF, 387KB)

Notes on making good progress?: Chapter 9

In this series of posts I record my notes from Daisy Christodoulou’s book “Making Good Progress? The Future of Assessment for Learning”. It is quite excellent. You can buy a copy here.

An integrated assessment system

An accurate and useful progression model is the foundation of any assessment system because it explains how pupils make progress. Schemes of work (SoWs), lesson plans and the curriculum can all be based on this progression model. Textbooks would be the most efficient way to bring all of these items together.

The first item in such a system is a collection of formative questions which match up to the curriculum. The bank should be online so that pupils can access it at home or at school. Pupils should sit formative quizzes at the end of lessons or chapters, containing questions on new material and also on material previously taught. The bank could be linked up to question-level analysis and, if automated, could point pupils straight back to a relevant video or worksheet on what they had just got wrong.
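As a minimal sketch of that routing idea, the snippet below maps each question to a curriculum topic and each topic to a remedial resource. Every name in it (questions, topics, files) is a made-up example, not a real system.

```python
# Hypothetical sketch of automated question-level analysis: wrong answers
# are routed to a remedial resource. All IDs and resources are invented.
RESOURCES = {
    "photosynthesis-inputs": "video: photosynthesis-inputs.mp4",
    "respiration-equation": "worksheet: respiration-equation.pdf",
}

QUESTION_TOPICS = {"Q1": "photosynthesis-inputs", "Q2": "respiration-equation"}

def follow_up(responses):
    """Return the resources a pupil should revisit, given per-question results."""
    return [RESOURCES[QUESTION_TOPICS[q]]
            for q, correct in responses.items() if not correct]

print(follow_up({"Q1": True, "Q2": False}))
# -> ['worksheet: respiration-equation.pdf']
```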

Next, we should have summative item banks. For assessments under the difficulty model these could be whole past papers or, if online, a computer-adaptive system that makes testing shorter and more accurate. For the quality model we could use a comparative judgement system.
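To illustrate the adaptive principle only (real adaptive tests use item response theory), here is a toy sketch in which the next question is always the one whose difficulty sits closest to the current ability estimate. The items and the update rule are assumptions for illustration.

```python
# Toy sketch of adaptive testing: pick the item nearest the ability
# estimate, nudge the estimate after each answer. Items are invented.
items = {"q1": -1.0, "q2": -0.5, "q3": 0.0, "q4": 0.5, "q5": 1.0}  # difficulty

def run_adaptive(answers_correct):
    ability, step, remaining = 0.0, 0.5, dict(items)
    for correct in answers_correct:
        qid = min(remaining, key=lambda q: abs(remaining[q] - ability))
        del remaining[qid]
        ability += step if correct else -step
        step *= 0.8  # shrink the adjustment as evidence accumulates
    return ability

print(run_adaptive([True, True, False]))  # rough ability estimate
```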

Finally, a standardised test bank would contain non-curriculum-linked questions, like the CEM tests. These are useful for setting targets.

Such a system would be beneficial because it would be coherent: a clear structure of progression from novice to expert. It would have the ability to give pupils ownership of the curriculum. It could also be self-improving.

Notes on making good progress?: Chapter 8

In this series of posts I record my notes from Daisy Christodoulou’s book “Making Good Progress? The Future of Assessment for Learning”. It is quite excellent. You can buy a copy here.

Improving summative assessments

The aim of summative assessments is for them to provide an accurate and shared meaning without becoming the model for every classroom activity. Rubrics of prose assessment statements are not particularly good at delivering reliability, and they can end up compromising the creative and original aspects of the task. Prose descriptors can be interpreted in many different ways. Judging in absolute terms is extremely difficult. Markers will overgrade and undergrade depending on the sample. We are much better at making comparative judgements than absolute ones.

Very prescriptive rubrics end up stereotyping pupils’ responses to the task, removing the justification for having them (grading creative and original work). Responses that are coached to meet the rubric pass, and truly original work that doesn’t fails. Rubrics encourage coaching.

Comparative judgement offers the possibility of dropping rubrics by defining quality through exemplars rather than prose, and by not relying on absolute judgement. It simply asks markers to make a series of paired judgements about responses. It relies on the tacit knowledge of the subject expert – knowledge that is not easy to express in words.
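A minimal sketch of how a series of paired judgements can be turned into a rank order, using a simple Bradley-Terry style fit. The scripts and judgements here are invented, and real comparative judgement platforms also handle judge allocation and reliability checks.

```python
# Toy comparative judgement: fit Bradley-Terry strengths from paired
# "which script is better?" decisions. Data is invented for illustration.
from collections import defaultdict

judgements = [  # (winner, loser) pairs
    ("A", "B"), ("A", "C"), ("B", "C"), ("A", "C"), ("C", "B"), ("A", "B"),
]

scripts = {s for pair in judgements for s in pair}
wins, pairs = defaultdict(int), defaultdict(int)
for winner, loser in judgements:
    wins[winner] += 1
    pairs[frozenset((winner, loser))] += 1

strength = {s: 1.0 for s in scripts}
for _ in range(100):  # iterative Bradley-Terry (minorise-maximise) updates
    new = {}
    for s in scripts:
        denom = sum(pairs[frozenset((s, t))] / (strength[s] + strength[t])
                    for t in scripts if t != s)
        new[s] = wins[s] / denom
    total = sum(new.values())
    strength = {s: v / total for s, v in new.items()}

for s in sorted(scripts, key=strength.get, reverse=True):
    print(s, round(strength[s], 3))  # rank order of script quality
```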

Comparative judgement is criticised for offering little in the way of formative feedback. This is precisely the point. It decouples the grading process from the formative process. It allows classroom practice to be refocussed away from the rubric and towards helpful analyses of quality. One extremely useful resource that could be produced would be a set of annotated exemplar scripts.

Decisions about the difficulty and content of national summative exams are made by national exam boards. What if a school wants to summatively assess more frequently? To what extent can those assessments be linked to the curriculum that the pupils are following? One solution is to outsource summative assessments, but there is still a gap between remote standardised assessments like CEM and the formative assessments of classroom practice.

It is not easy to create, or interpret the results of, school-made curriculum-linked assessments. It can be difficult to tell if the test is difficult enough, or if it has the right spread of difficulty. Tests taken by small numbers of pupils don’t produce reliable grades. We can compare the results of teacher-made tests to national assessments. The content studied over one term is simply not a broad enough domain to sample from. Assessments have to sample from what pupils have learnt in that subject, not just in previous terms but in previous years.

A summative assessment can be linked to the curriculum and the most recent unit of study. However, if a grade is awarded, it will not be based solely on that unit and cannot be seen as reflecting performance on that unit alone. A student can make great strides within a unit without this being reflected in the summative assessment, because the assessment is not sensitive enough to a single unit.

Summative assessments need to be far enough apart that pupils have the chance to improve meaningfully between them. However, pupils make relatively slow progress on the large domains that summative assessments sample from. There are risks in using summative assessments too frequently.

Using scaled scores can overcome this to some extent. A scaled score converts raw marks, which are not comparable across different assessments, into scores that are. Scaled scores show the continuum of achievement. Grades suggest that pupil performance falls into discrete categories when in fact it is continuous.
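By way of a toy example, assuming a simple linear (z-score) scaling onto a common scale with mean 100 and standard deviation 15: real test providers use more sophisticated equating, but the principle of mapping incomparable raw marks onto one scale is the same.

```python
# Toy scaled scores: raw marks from two papers of different difficulty
# are mapped onto one scale. Marks and method are simplifying assumptions.
from statistics import mean, stdev

def scaled_score(cohort_raw, raw, target_mean=100.0, target_sd=15.0):
    """Convert one raw mark to the common scale, given the cohort's raw marks."""
    z = (raw - mean(cohort_raw)) / stdev(cohort_raw)
    return round(target_mean + target_sd * z, 1)

paper1 = [32, 40, 45, 51, 58]   # harder paper: lower raw marks overall
paper2 = [55, 61, 66, 72, 80]   # easier paper: higher raw marks overall

# Similar standing within each cohort yields similar scaled scores, so
# the two results become comparable despite very different raw marks.
print(scaled_score(paper1, 45), scaled_score(paper2, 66))  # ~99.7 and ~98.8
```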

Notes on making good progress?: Chapter 7

In this series of posts I record my notes from Daisy Christodoulou’s book “Making Good Progress? The Future of Assessment for Learning”. It is quite excellent. You can buy a copy here.

Improving formative assessments

Formative assessments should be:

  • Specific
  • Frequent
  • Repetitive
  • Recorded as raw marks

Specific questions allow teachers to diagnose exactly what a pupil’s strengths and weaknesses are, and they make it easy to work out what to do next, whereas open and complex questions like essays or real-world problems are not particularly well suited to this. Short-answer questions and MCQs can be very precise. MCQs, despite their reputation, are excellent for diagnosis: they give a specific picture of conceptual understanding, indicate what pupils might need to work on next, and are labour-saving.

Criticisms of MCQs include that they are easy for pupils to guess, but this risk can be mitigated in several ways: you can increase the number of distractors, increase the number of questions, or include more than one right answer. Answers can be analysed at the level of the class. MCQs can target misconceptions very effectively. Misconceptions are an important part of a progression model, often because they involve particularly tricky and fundamental concepts without which pupils cannot progress.

They are also very easy to analyse. You can record not just whether the pupil got the question right or wrong, but which distractors they chose. When the analysis is done on a topic that has been recently taught, it becomes much more helpful. We don’t necessarily need to re-teach topics, but we can make sure to highlight those misconceptions again if the curriculum is structured in a way that allows this. Explanatory paragraphs in the question bank for each MCQ make it very easy to give feedback. Once the feedback has been delivered, the teacher can follow up with another set of similar questions to see if the pupil has understood this time around. MCQs, together with this kind of in-depth, specific and precise feedback, can form a vital part of a progression model in any subject.
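As a small, made-up example of class-level distractor analysis: tallying which option each pupil chose for a single MCQ shows at a glance which misconception dominates. The question, options and misconception labels are invented for illustration.

```python
# Hypothetical class-level distractor analysis for one MCQ.
from collections import Counter

# Each pupil's chosen option for "What do plants get from the soil?"
answers = ["A", "C", "C", "B", "C", "A", "C", "D", "C"]
options = {
    "A": "correct (water and minerals)",
    "B": "misconception: plants eat soil",
    "C": "misconception: plants get their food from soil",
    "D": "misconception: roots absorb sunlight",
}

for option, count in Counter(answers).most_common():
    print(f"{option} - {options[option]}: {count} pupils")
# A dominant wrong option (here C) tells the teacher which misconception
# to re-address with the whole class.
```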

Research shows that the act of recalling information from memory actually helps to strengthen the memory itself. That is, testing doesn’t just help measure understanding; it helps develop it. This is called the testing effect. It can certainly apply to summative tests too, so long as they don’t force students away from retrieval and into problem-solving search. The power of the testing effect is that it introduces desirable difficulties. Self-testing is much more effective revision than re-reading. Re-reading makes pupils feel familiar with the content but doesn’t guarantee thought. Testing makes it clear whether students have understood something.

Generally, assessment should not take place too close to the period of study, as we can’t then make a valid inference about whether pupils have learned the material. If a student gets a question right very soon after study, we are not provided with a valid inference. Some of the questions set for recap at the start of the lesson, for homework or at the end of the lesson should cover previously learned material.
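A minimal sketch of how such recap questions might be drawn from the bank, assuming questions are tagged by when their topic was taught; the topics and the even split across periods are illustrative assumptions, not a prescription.

```python
# Sketch of a recap quiz that mixes recent and older material so that
# retrieval is spaced. Bank contents and the split are assumptions.
import random

bank = {
    "this week":  ["cells Q1", "cells Q2", "cells Q3"],
    "last month": ["digestion Q1", "digestion Q2"],
    "last term":  ["particles Q1", "particles Q2"],
}

def recap_quiz(n_per_period=1):
    """Draw questions from each period so older material keeps resurfacing."""
    return [q for questions in bank.values()
            for q in random.sample(questions, n_per_period)]

print(recap_quiz())  # e.g. ['cells Q2', 'digestion Q1', 'particles Q2']
```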

Recording grades frequently forces formative assessment into a summative model. We could simply stop recording formative assessment, as this assessment aims to be responsive, not to report. If we do record marks, these should not be converted to grades. Converting to grades asserts that the difficulty of the two assessments is the same and that you are trying to derive a shared meaning. Also, the aim of formative assessments is to set questions that are closely tied to what is being studied.