The Extended Essay: The central support for teaching ATL skills?

I have reservations about the IB ATLs. I have written about this previously, focussing mainly on the approaches to teaching, and I don’t really want to go over those issues again; suffice it to say that it still concerns me that the IB, as the only truly global non-national/international curriculum, has such a strong ideology underpinning what it requires teachers to do. In fact, the more I think about it, the more concerned I am that, on reflection, most teacher-training curriculums I am aware of are not balanced and do not give teachers a good education in evidence, history and philosophy. Instead they simply present one ideology, uncritically, as fact.

My previous post focussed on the approaches to teaching. In this post I want to focus on the IB’s approaches to learning, which I will refer to simply as ATLs. Hopefully this post will be a bit more positive!

There are certainly areas of the ATLs that I have come to appreciate. Before I get there, I just want to state that, from what I have read, the evidence from cognitive science is pretty clear cut: there are no such things as general learning or thinking skills. Moreover, I don’t think that the often-quoted 21st-century learning skills, or 4Cs (communication, collaboration, critical thinking and creativity), are any more important in the 21st century than they were in the 19th and 20th centuries (they were referred to then; they are nothing new now), and I think the whole enterprise of trying to teach them outside of domains will only make our education system weaker, not stronger.

To make the ATLs work within the school context they need to be linked to and embedded in domain-specific content. Some of them may be more generalisable than others, and in that sense more amenable to being taught independently, but most will need to be embedded within the teaching of the specific content of a domain.

For example, elements of the self-management tranche of the identified ATLs may well be more stand-alone, or at least can be taught independently of subject matter. However, teaching students about time management still needs material to work on; in this case, the students’ own general workload at school.

Mindfulness is another self-management skill that can be taught independently and, in my opinion, to great value for the learner. However, for this to be effective it needs staff buy-in and training. While mindfulness is the trendy new idea, there is a lot of misunderstanding about what it actually is.

Thinking skills, communication skills and research skills, as identified by the IB’s ATL guide, all require teaching and embedding within content. For communication and research skills, one of the central pillars is the Extended Essay.

In most schools the Extended Essay process is placed towards the middle or end of the DP, with students perhaps beginning the process in term two of the first year and finishing sometime around Christmas of the second year. This year we have gone to the extreme of bringing it to the front of the course, as we feel it underpins, and provides so many opportunities for, explicitly teaching the ATLs while still linking them to specific subject knowledge.

We have introduced our students to the process this September and have planned in specific interventions that look at research skills and communication skills, while we also begin to map out how these skills are taught vertically from year 7.

Our current year 12 students are supported through the process with clear scaffolding. First they are asked to think about general topics and are clearly led through ways to identify and think about ideas. Subsequently, we introduce them to the library in a series of sessions that first look at its resources in general, before turning to the databases we have access to and how researchers use these resources appropriately with Boolean operators.

Students are then asked to draft a proposal for their Extended Essay, which includes the research question, an outline of the sub-questions and a list of potential sources. This proposal needs to be agreed and signed off by their supervisor before Christmas of the first year, and it becomes the basis for the first formal reflection.

In the second term, we show students how to critically appraise sources and continue to support them in writing the outline for their essay. This takes place up until April, when they submit their outline to their supervisors and follow up with a second meeting.

Following on from this meeting, students receive feedback. After their exams, during their core week, they are given time in the mornings to work on writing their Extended Essay, with the aim of having a first draft completed by the end of the third term and submitted to their supervisor. This draft forms the basis of their interim reflection and their third meeting.

Students can then finalise their work over the summer, submitting it and completing their viva voce at the start of the second year. In this way, this major piece of work is completed before the bulk of internal assessments and university applications begin.

By front-loading the Extended Essay process in this way, I believe that the team has a greater chance of explicitly teaching the research and communication skills needed to succeed in the Extended Essay. This reduces the chances of these skills being left to chance and also allows students to apply them in the internal assessments for their other subjects.

Finally, by also bringing some of the other internal assessments into the latter half of the first year, we can begin to help students develop strategies for their own time management and organisational skills, by explicitly showing them how to balance the commitments of the Extended Essay, internal assessments and other work. This can be done early in the course, allowing them to apply these skills later on.

Notes on making good progress?: Chapter 3

In this series of posts I record my notes from Daisy Christodoulou’s book “Making Good Progress? The Future of Assessment for Learning”. It is quite excellent. You can buy a copy here.

Making valid inferences

If the best way to develop a skill is to practice the components that make it up, then it is hard to use the same type of task to assess both formatively and summatively.

Summative assessment tasks aim to generalise and create shared meaning beyond the context in which they are taken. A pupil given a particular summative judgement in one school should receive a similar judgement in another school.

Formative assessment aims to give teachers and students information to form the basis for successful action in improving performance.

Although a summative task may at times be repurposed as a formative one, different purposes generally pull assessments in different directions. The purpose of an assessment affects its design, which makes it hard to simplistically repurpose.

Assessments need to be thought of in terms of their reliability and their validity. The validity of an assessment refers to the inferences that we can draw from its results. The reliability is a measure of how often the assessment would produce the same results with all other factors controlled.

The example of timing of mocks comes to mind. Whether you want these to be a summative or a formative assessment will affect when you favour setting them.

Sampling (the amount of knowledge from a particular domain assessed by an assessment) affects the validity of an assessment. Normally in summative assessments questions sample the domain, they do not cover it in its entirety.

Some assessments do not have to sample. If the domain they are measuring (e.g. the letters of the alphabet) is small, this isn’t a problem. Further along the educational pathway this becomes harder.

Assessments also need to be reliable. Unreliability is introduced into assessments through sampling, the marker (different markers may disagree) and the student (student performance can vary from day to day).
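As an aside of my own (not from the book), a small simulation sketches why sampling alone makes scores unreliable: a pupil whose knowledge never changes can still receive noticeably different marks depending on which questions happen to be drawn. The domain size, proportion of facts known and test length below are all made-up numbers, purely for illustration.

```python
import random
import statistics

random.seed(1)

# Illustrative set-up: a domain of 100 facts, and a pupil who
# securely knows 70 of them. Each test samples 10 questions.
DOMAIN = list(range(100))
KNOWN = set(range(70))

def test_score(num_questions=10):
    """Score on one test that randomly samples questions from the domain."""
    questions = random.sample(DOMAIN, num_questions)
    return sum(q in KNOWN for q in questions) / num_questions

# Re-test the same pupil many times with different samples:
# their 'true' level never changes, but the observed scores do.
scores = [test_score() for _ in range(1000)]
print(min(scores), max(scores))            # scores spread widely around 0.7
print(round(statistics.mean(scores), 2))   # the long-run average sits near 0.7
```

The spread in single-test scores is the sampling unreliability; it shrinks as the test samples more of the domain, which is exactly why tiny domains (like the alphabet) can be assessed without sampling error at all.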

Models of assessment include the quality model and the difficulty model, and sources of unreliability affect each in different ways. The quality model requires markers to judge how well a student has performed (think figure skating); the difficulty model requires pupils to answer questions of increasing difficulty (think pole vault).

There is a trade-off between reliability and validity. A highly reliable MCQ assessment (which reduces sampling and marker error) may limit the inferences you can make from it; you may be unable to use it as a summative assessment because it doesn’t properly match up with the final assessment.

However, reliability is a prerequisite for validity. If an assessment is not reliable, then the inferences drawn from it cannot be trusted; it cannot support valid inferences.

You may well be able to create an exciting and open assessment task which corresponds to real-world tasks; however, if a pupil can end up with a wildly different mark depending on what task they are allocated, and who marks it, the mark cannot be used to support any valid inferences.

Summative assessments are required to support large and broad inferences about how pupils will perform beyond the school and in comparison to their peers. In order for such inferences to be valid they must be consistent.

Shared meanings impose restrictions on the design and administration of assessments. There are specific criteria needed for this. To distinguish between test takers, assessments need items of moderate difficulty and assessments must sample. Samples need to be carefully considered and representative.

The main inference needed from formative assessment is how to proceed next. The assessment still needs to be reliable, but the inferences do not need to be shared, even with kids in the same room; I can therefore help some kids more than others. It is about methods. It needs to be flexible and responsive.

The nature of inference poses a restriction on assessment. Trying to make summative inferences from tasks that have been designed for formative use is hard to do reliably without sacrificing flexibility and responsiveness.

Assessment theory triangulates with cognitive psychology. The process of acquiring a skill is different from the product, and the methods of assessing the process and the product are different too.

Formative assessments need to be developed by breaking down the skills and tasks that feature in summative assessments into tasks that will give valid feedback about how pupils are progressing towards that goal.

They can be integrated into one system, to be discussed in a later chapter.

Most schools make the mistake of assessing summatively far too frequently.


Notes on making good progress?: Chapter 2

In this series of posts I record my notes from Daisy Christodoulou’s book “Making Good Progress? The Future of Assessment for Learning”. It is quite excellent. You can buy a copy here.

Aims and methods

The generic skills approach implies that skills are transferable and that the best way to develop a skill is to practice that skill. It is based on the analogy of the mind acting as a muscle.

There are examples of curriculums that follow this model, such as the RSA Opening Minds curriculum; it is interesting to contrast this with the Core Knowledge curriculum described by Hirsch and the DI models.

Instruction based on this model is organised around developing transferable skills through projects where students practice authentic performances.

However, research from 50 years of cognitive science shows us that skill is domain-specific and dependent on knowledge. The example of studies of chess grandmasters is given; these were the earliest experiments, but they have been reliably replicated in other knowledge domains. There are multiple lines of evidence that suggest the same thing (a bit like evolutionary theory).

Complex skills depend on mental models which are specific to a particular domain. These models are built in long term memory. They can be drawn on to solve problems and prevent working memory from being overloaded.

Working memory is severely limited, and relying on it to solve problems is highly ineffective. Learning is the process of acquiring these mental models; performance is the process of using them.

Formative assessment should aim to assess how these mental models are being developed. Summative assessment measures the performance or act of using those mental models.

Even scientific thinking is domain specific. One cannot evaluate anomalies or the plausibility of a hypothesis without domain specific knowledge.

The adoption of generic-skills theory leads to a practical flaw: lessons are too careless about the exact content knowledge included in them (this also ties in with Hirsch’s idea that individualisation leads to a reduction in knowledge). If skills are not transferable, we need to be very careful about what the content of a lesson is. Specifics matter.

The educational aim of developing generic skills is sound but we need to think about the method. We can develop critical thinking only by equipping learners with knowledge. Good generic readers or problem solvers have a wide background knowledge.

Acquiring mental models is an active process. Project-based lessons can work, as they help students make the knowledge their own, provided sufficient care and attention is paid to the content that is to be learned.

Specific and focussed practice is what is needed to develop skill. As shown by the work of K. Anders Ericsson, there is a difference between deliberate practice and performance: deliberate practice builds mental models, while performance uses them.

What is learning and how is it different from performance? Learning is the creation of mental models of reality in long-term memory. Copying what experts do (performing) is not a useful way of developing the expert’s skill, because it does not build the mental models. Instead, such lessons will overwhelm working memory and be counter-productive.

Generic-skill models only allow feedback that is generic; they do not allow our feedback to tell students exactly how to improve. “Think from multiple perspectives more” is not useful advice if kids don’t know how to do it.

Models of progression are needed to show the progress that students are making.

Peer and self-assessment can be useful so long as they are used appropriately. The Dunning-Kruger effect shows us that novices cannot judge their performance accurately (does this contradict Hattie’s claim about self-reported grades, that kids know where they are at?). Developing pupils’ ability involves developing their ability to perceive quality. We cannot expect them to self-assess complex tasks, and showing them excellent work is not enough to develop excellence. Particular aspects of the work need to be highlighted by the teacher.

To develop skill we need lots of specific knowledge and practice at using that knowledge. This helps to close the knowing-doing gap.

Notes on making good progress?: Chapter 1

In this series of posts I record my notes from Daisy Christodoulou’s book “Making Good Progress? The Future of Assessment for Learning”. It is quite excellent. You can buy a copy here.

Why didn’t Assessment for Learning transform our schools?

Formative assessment is when teachers use evidence of student learning to adapt instruction to meet student need. Its focus is on what students need to do to improve; on their weaknesses. Feedback, then, needs to be tailored thoughtfully to direct the student in how to improve and to allow students to act on it.

Formative assessment should be used to diagnose weaknesses, and feedback should tell students explicitly how to improve. AfL is not just about teachers diagnosing weakness and being responsive; it is about students responding to information about their progress. It could be a good model for appraisal too.

There is a tension, then, between summative and formative assessment. Summative assessment is about measuring student progress against the aims of education, while formative assessment is about students finding out what they need to do to improve.

If we can agree on the aims of assessment, there is still a discussion to be had about methods. We either favour the generic-skill method, which states that to get better at a particular performance you just need to practice that performance: so to get better at critical thinking you just practice critical thinking, and to get better at writing essays you just write lots of them. Or we favour the deliberate-practice method, which breaks the final skill down into its constituent parts and practices those. So footballers practice dribbling, passing, defending and shooting, not just playing whole games all the time.

Summative assessment is about assessing progress against the aims of education. Formative assessment is about the methods you choose to meet those aims: generic or deliberate. Which of these you subscribe to will affect your formative assessment, and thus whether assessment tasks can be used both formatively and summatively.

If you believe that skill acquisition is generic then formative assessment tasks will match the final summative task. You will write lots of essays, feedback can be given and a grade awarded. If you believe that the method of deliberate practice is better then you may need to design formative tasks that don’t look like the final task. These tasks cannot be used summatively because they don’t match the final task.

Interestingly, belief in generic skills leads down the road of test prep and a narrow focus on exam tasks, because this model suggests that to get better at exams you need to practice taking them.

In my mind, the key questions for a school, curriculum team or department that wants to adopt the deliberate-practice model are:

  • What are the key skills being assessed in the final summative tasks (don’t forget that language or maths skills might be a large component of these)?
  • What sub-components make up these skills?
  • What tasks can be designed to appropriately formatively assess the development of these sub-skills?
  • What does deliberate practice look like in my subject?
  • How often should progress to the final summative task be measured i.e. how often should we set summative assessments in an academic year that track progress?

Whole school support for EAL learners II

Imagine a normal primary school in an anglophone country like the UK or US. Now imagine taking a year 4/grade 3 or year 5/grade 4 child from that school and giving them an academic program aimed at year 12/grade 11 or year 13/grade 12 students. It could be AP, A Levels or the IB DP; the course doesn’t matter here. Let’s just assume that these children would be taking academic, pre-university courses in the humanities/social sciences and the natural sciences. For the sake of argument, let’s assume that these fictional children have the social and emotional skills of 17-18 year olds. Clearly I am not describing a real situation here.

From a purely academic point of view, what would happen? Would those children succeed? Would they have the background knowledge, understanding and vocabulary to access class discussions? Or textbooks, for that matter? Or even to understand what the teacher was talking about?

Now, I wonder, how would the teachers tasked with teaching these children respond? What strategies could classroom practitioners employ to help their students achieve? How could the curriculum coordinators and Heads of Year respond to implement strategies to allow the children to access the curriculum? What would you do?

What makes an EAL student like a primary schooler?

Of course, this never happens in practice. Or does it? Is there any cohort of students in international schools that would somewhat match this description? I would contend that there is: to varying degrees, and in varying numbers, EAL students fit this description.

Now clearly, an average 17-year-old student has cognitive abilities beyond those of an average 10-year-old and, we would certainly hope, more advanced social and emotional skills. And indeed they probably do know more.

But how do we ensure that, when a high school accepts an older student who has never had any prior formal instruction in academic disciplines in the language of the school, and will ultimately sit exams in that new language, this child will be able to succeed?

Some might answer that schools shouldn’t admit students whose needs they cannot meet. I would agree. But I have seen schools that do admit such students; usually, when a child’s needs meet the economic needs of a school, the latter tend to win.

My concern here really revolves around one question: if most major testing systems in the English language (AP, IBDP etc.) are norm-referenced, then aren’t we simply propping up the performance of our native speakers with the ultimately poorer performance of non-native speakers? Are our anglophone students succeeding on the back of the poorer performance of our EAL students (at an international level)?

Of course, in international schools there is a lot of variance, and there is certainly flexibility in the system. Most students who can’t access the full curriculum will be able to graduate from the school with some form of modified curriculum. But we need to ensure that students have as many options available to them when they leave us as possible. Going to an international school is a privilege and affords kids so many benefits that they may not have had in their home country, but we need to ensure that students are able to succeed after they leave us.

How do we solve these problems?

In practical terms: when, as a coordinator, I have a cohort of students, the majority of whom speak English as a second language and many of whom have only been learning their academic subjects in English for a few short years, how do I put strategies in place to support them as best I can?

I have written here, here and here in the past about classroom strategies for teaching upper secondary curriculums to EAL students. I am an interested novice. But now as a coordinator I am concerned about curriculum level interventions.

The context will matter, both in terms of the cohort’s profile and the curriculums that can be offered, as well as their flexibility. I coordinate the IB, which is a flexible system in the sense that, when combined with an American-style High School Diploma, students have the option of taking IB certificates in as many or as few courses as they would like.

But I am blue-skying today and want to think about how to offer the full Diploma to as many of my students as possible in this imaginary cohort.

Making the Diploma accessible

There are ways to do this, but it may require restrictions in certain areas: for example, limiting Extended Essay subject selection to the student’s mother tongue or English B if the student’s level of English is so low that the team feels it would preclude them from taking the Extended Essay in another academic subject, such as business studies or economics.

And what level of English is too low? What’s the cut-off? Recently I have discussed with colleagues using Lexile analysis to determine the English reading grade level of my EAL students, as well as their Lexile scores. A Lexile score is a measure of how demanding a text is, and it is useful for a number of reasons: it can be used to work out the equivalent English reading age of an EAL student, and it can be compared to the Lexile level of the textbooks used on the course, allowing teachers to see the gap between where their kids are and the material they need to present.

The lexile analysis of a biology textbook. The level ranges from Y13/G12 to post secondary!

Lexile analysis can be performed here. Teachers can set up their own accounts, but I think this should be done centrally on a term-by-term or semester-by-semester basis, with the information shared with students and their families, as well as teachers, as part of an ongoing programme of sharing strategies and training on supporting EAL students in the academic classroom.

Hirsch (2016) claims that “Vocabulary size is the outward and visible sign of an inward acquisition of knowledge.” Lexile analysis therefore shows us not only what these students can read but also what they know in English. Hirsch makes the case that the more domain-specific knowledge students acquire, the more their vocabulary naturally increases. This is why, for Hirsch, knowledge-rich elementary curriculums are so important: they ensure that students acquire vocabulary, and this vocabulary acquisition is the magic formula for reducing inequality. Children from affluent families start school with a larger vocabulary than their disadvantaged peers (their oral life at home is richer), and knowledge curriculums help the latter to catch up.

In a sense, our EAL students are like disadvantaged native-language children: they don’t benefit from homes where English is spoken, and so they don’t get to expand their knowledge and vocabulary in English when they leave school.

The Matthew effect shows how learners who have knowledge will tend to acquire more at a faster rate, while those with less will acquire knowledge more slowly. This is one of the important psychological principles often overlooked by commentators who claim that if we teach knowledge then our kids will be competing with computers. Teaching knowledge is the only way to ensure that they can be lifelong learners; the more knowledge we have in our brains, the quicker we gain new knowledge. This is also known as the knowledge-capital principle: it takes knowledge to make knowledge.

Hirsch also claims that “High school is too late to be taking coherent content seriously” as part of his argument for knowledge rich elementary curriculums. Where does this leave our EAL students?

Evidence from cognitive science also shows us that knowledge is domain-specific and that it doesn’t transfer readily. Thus students may know about the detailed components that make up the process of photosynthesis in Korean, but they are unlikely to be able to transfer this knowledge from Korean into English. This creates real problems when it comes to supporting EAL students in mainstream academic classrooms.

Taking all of the above into account, it seems that we need to begin by getting students exposed to speaking and thinking in English as much as possible.

Let me be clear here, as I have run into hot water on this one in schools: if the aim of a school is to have students graduate by passing English-language academic exams, for whatever greater purpose, then I think that in school, whenever possible, students need to be encouraged to speak English. I don’t say this because I am a cultural imperialist, but because it is demonstrably the best way of getting students to learn their academic subjects, most of the time.

As an IBDP Coordinator this means, among other things, ensuring that students get as much time in the English-acquisition classroom as possible. I would consider placing all the students into the English B HL class at the start of their course. This would give them more hours in the acquisition classroom initially. As they progressed through the course, we could review their progress to see whether they could afford to drop down to SL.

Clearly there is a balance to be struck here. Forcing kids to take an HL subject they might not be into could seriously backfire in terms of motivation, so continual communication with teachers, students and parents is essential.

To ensure that students feel they are making progress (and therefore maintain their motivation), I would consider having dedicated EAL support after school. This time would be given over to grade-levelled reading in English.

I also apply the IB research discussed in this post to ensure that there is ongoing monitoring of learners’ progress; too often students are assessed at the beginning of the year and never again. Ongoing, regular assessment of learners’ progress is necessary here.

Since beginning to write this, I have been introduced to a piece of software that appears to be an answer to some of these questions.

I hope that ongoing posts on this topic will help me explore the strategies that can be put in place to ensure all learners succeed.


E.D. Hirsch (2016). Why Knowledge Matters: Rescuing Our Children from Failed Educational Theories. Harvard Education Press.