On Sunday and Monday this week I took part in Philosophy 4 Children training at our campus. One of our curriculum objectives in Secondary is to embed the concepts of Theory of Knowledge (a core component of the IB Diploma Programme) horizontally and vertically through the Secondary curriculum. The TOK course is concerned with developing students’ conceptual understanding of how knowledge is produced and utilized across the subject areas. It challenges kids to think about how knowledge claims can be justified and supported.
At the same time, our primary colleagues have been exploring how Philosophy for Children (P4C) can be used to improve children’s abilities to reason, justify and explain their ideas about broad topics.
One of the benefits of working in a K-12 school is that we can combine PD between Primary and Secondary, which allows for some eye-opening sharing of teacher classroom practice. This training provided a good opportunity for me as a curriculum leader not only to learn about P4C as a concept and teaching tool, but also to see how it might enable Secondary teachers to get a better grip on managing dialogue and developing understanding of abstract concepts in the TOK course.
During the training we encountered a variety of warm-up activities that can be used to get thinking and discussion going, as well as a full P4C inquiry, which is a structured 11-step process for generating a conversation about an abstract question. I am not going to write up all the activities that we did in this post, as I tweeted an ongoing thread throughout the training detailing all of the tasks we used.
The first observation I had was that the P4C model of inquiry is highly structured, providing a scaffold for all learners (teachers included) to work through their thinking about a topic. Following the 11 steps from a real stimulus to a discussion about an abstract concept allows even someone who is relatively unconfident in this area to succeed in generating thinking and discussion.
Commentators who were following my thread were quick to point out that, in their experience, P4C training was some of the best training for TOK teaching that was available.
Indeed, it became immediately apparent to me that the 11-step full inquiry is a perfect model for generating knowledge questions, one of the key, and most difficult, steps for TOK learners to grasp. Here is a method that can be directly applied in TOK classrooms to help students unpack knowledge questions from a stimulus or real-life situation. With practice, I am confident that many teachers would be able to use this model to help them develop TOK thinking.
In other secondary subjects, this model can also help teachers and students to unpack TOK concepts related to their subject area. For example, in the natural sciences some of the key TOK concepts relate to models, uncertainty, inductive and deductive reasoning and falsifiability, among others. Using the NoS statements from the subject guides with specific real-life examples, like the models used to predict climate change, as a stimulus, this model could be directly applied in the IB Biology classroom to help teachers and their students generate knowledge questions from examples in their syllabus.
Recently, I have been thinking about how I can get my IB biology students more engaged with real world issues or deeper conceptual questions like “what is life?”. I have lots of ideas for stimuli but beyond creating a DART or questionnaire linked to the podcast, video or reading I was at a loss as to how to generate deep thinking and discussion.
This tool, I believe, has given me the key to help my students think about and generate questions in response to stimuli, and provides a basis for fruitful discussion about the topic of interest.
For example, I am thinking about how I can really engage my students with the issue of climate change, so that as well as learning about it from the biology syllabus, the learning develops real meaning and significance for them and they are inspired to run a CAS project around the issue. I had an idea of using some of the recent Planet Earth documentary as a stimulus but was unsure how to use it. Now the Lang B teachers, the geography teacher and I are collaboratively planning to address this topic in sequence, and we will think about how we can bring the 11-step inquiry into our planning.
I am convinced that P4C is an excellent foundation for TOK; both are programs that can help students think and question more deeply as well as become more engaged with big ideas and questions.
P4C is broad: it is concerned with thinking about any of a range of concepts that could be thought of as philosophical. TOK is narrower in focus and, in a Venn diagram, would sit inside the concepts of P4C. P4C can be focussed on knowledge; TOK is concerned only with inquiry about the nature of knowledge. Both programs are concerned with linking a real-world stimulus to an abstract theoretical concept, and the P4C 11-step scaffold provides an excellent ladder to allow learners to move between the real and the abstract.
After reading Mary Myatt’s “The Curriculum”, I’ve begun to spend some time thinking about how the IBDP can provide opportunities to make students’ work more purposeful via opportunities for authentic performance. In her chapter on Beautiful Work she writes:
“children’s work should be honored. It should be of the highest quality and it should also have an audience.”
She goes on to quote Ron Berger “Once a pupil creates work of value for an authentic audience beyond the classroom – work that is sophisticated, accurate, important and beautiful – that student is never the same”.
So far I’m thinking about elements common to all Diploma students:
The Group 4 project: this is a collaborative 10-hour project that teams composed of students from different subjects work on together. The project is not assessed but is mandatory. The theme is set by the school, and in four schools over 10 years this has usually involved the Head of Science choosing a word like colour or survival. However, it could have a real-world stimulus, like the UN Sustainable Development Goals, to focus the project. The students would design experiments along this theme and then present their project to the wider school community and guests.
CAS: I’m not an expert here by any stretch, and you could argue that CAS is already the most authentic part of the DP. What could be more authentic than working on projects that have direct application in the real world? But how many projects in schools around the world actually do? Is there scope here to raise the bar? The students’ CAS project could also center around a real-world stimulus, with the activity stage focussed on taking action in some way; again, an exhibition to the community could be used to sum up students’ work in some authentic way.
TOK: TOK has a heavy summative assessment component, with a 1600-word academic essay and a ten-minute presentation, and I would be loath to add to this. But the presentations could definitely be delivered to a wider audience: school assemblies, other exhibitions, or the community could be invited to the assessment itself.
Extended Essay: with over 40 hours of work and 4000 words in the making, the extended essay is a beast for most students. There are issues with it, and you could already argue that, as a piece of original work, it has real-world application. This year we are taking the small step of publishing our students’ TOK essays and Extended Essays together in a volume, a bit like a journal, with work from some of our Visual Arts students used as the cover pieces. But I also like the idea of having students undertake a more public viva, like a PhD defense. Clearly, an EE is not a PhD, but can we make the process less tick-boxy and more formal? I am keen to hear what other schools do.
With all these things I think about scalability. What works in a small school doesn’t necessarily work in a very large one. It is fine to sit through two Group 4 presentations, but thirty? Instead, schools could ensure that some students present at one event and others at another, so long as each student gets some opportunity to deliver their work meaningfully in the real world.
I realise that my ideas are a little unoriginal and perhaps I am a little behind the times (some schools are already doing great work), focussing mostly on presentations and exhibitions. What do you think? How else could we make our students’ DP work have more real-world meaning?
I recently completed Daisy Christodoulou’s “Making Good Progress?”. You can see my notes here. In the final chapters, after presenting an argument building up to this, she outlines the key aspects of what she terms a “progression model”. In this post I want to line up some ideas about what this may look like in delivering the IB DP Biology course.
In her book Christodoulou suggests, and I agree, that to effectively help students make progress we have to break down the skills required to be successful in the final assessments into sub-skills and practice these. This is a bit analogous to a football team practicing dribbling, striking or defending in order to make progress in the main game.
In the book she also stresses the difference between formative and summative assessments, what they can and can’t be used for respectively and why one assessment can’t necessarily be used for both.
A progression model for biology
A progression model would clearly map out how to get from the start to the finish of any given course, and make progress in mastering the skills and concepts associated with that domain. In order to do this we need to think carefully about:
What are the key skills being assessed in the final summative tasks (don’t forget that language or maths skills might be a large component of this)?
What sub-components make up these skills?
What tasks can be designed to appropriately formatively assess the development of these sub-skills or, in other words, what does deliberate practice look like in biology?
What would be our formative item bank?
What could be our standardised assessment bank?
What are appropriate summative assessment tasks throughout that would allow us to measure progress throughout the course?
What could be our summative item bank?
How often should progress to the final summative task be measured i.e. how often should we set summative assessments in an academic year that track progress?
This is quite a tricky concept to pin down in biology specifically and in the sciences in general. What skills exactly are kids being assessed on in those final summative IGCSE or IBDP/A Level exams? I haven’t done a thorough literature review here, so currently I am not sure what previous work has been done in this area.
However, I would contend that most final written summative exams are assessing students’ conceptual understanding of the domain. If this is the case then the skill is really thinking and understanding about, and with, the material of the domain. Students who have a deeper understanding of the links between concepts are likely to do better.
In addition, those courses with a practical component, like the IBDP Group 4 internal assessment, are assessing a student’s understanding of the scientific process. While it may seem like these components are assessing practical skills per se, they only do this indirectly, as it is the actual written report that is assessed and moderated. To do well, the student is actually demonstrating an understanding of the process, regardless of where their practical skills are in terms of development.
Indeed, if we look at the assessment objectives of IBDP Biology we see that this is very much the case. Students are assessed on their ability to demonstrate knowledge and understanding, and to apply that understanding, of facts, concepts and terminology; of methodologies; and of communication in science.
How can we move students to a place where they can competently demonstrate knowledge and understanding, apply that understanding, and formulate, analyse and evaluate aspects of the scientific method and communication?
The literature on the psychology of learning would suggest breaking down these skills into their subcomponents. This means we need to look at methods that build knowledge, and that develop understanding from that knowledge. Organising our units in ways that help students see the bigger concepts, and the connections between concepts within the domain, will also help. For more on this see my previous post here. I think that understanding develops from knowledge.
I recently read that Thomas Kuhn claimed that expertise in science was achieved by the studying of exemplars. Scientific experts are experts because they have learned to draw the general concepts from specific examples.
Useful sub-skills would be:
Fluency with the terminology of the domain
Ability to read graphs and data
Explicit knowledge of very specific examples
Explicit knowledge of abstract concepts illustrated by the specific examples
Ability to generate hypotheses and construct controlled experiments
Deliberate practice in biology
Thinking about these sub-skills, then, we can see what may constitute deliberate practice in biology and thus what would make useful formative assessments within the subject.
Fluency with the terminology can be gained through the studying of terminology decks like those available on Quizlet. In addition, the work of Isabel Beck suggests that learning words in isolation from text is not that helpful for gaining an understanding of those terms. To gain this, students need to be exposed to these words in context. Therefore there is a lot to be said for tasks and formative assessments that get students reading. Formative assessments could then consist of vocab tests and reading comprehension exercises on selected texts.
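To make the idea of regular, low-stakes terminology practice concrete, here is a minimal sketch of a Leitner-style spaced-practice schedule for a terminology deck. The terms, the number of boxes and the review intervals are illustrative assumptions of mine, not a description of Quizlet or of any particular platform.

```python
# Leitner-box sketch: correctly recalled terms move to boxes reviewed
# less often; forgotten terms drop back to box 1 for frequent review.
# Box count and intervals below are illustrative assumptions.

REVIEW_INTERVALS = {1: 1, 2: 3, 3: 7}  # box number -> days between reviews

class Deck:
    def __init__(self, terms):
        # every term starts in box 1 (reviewed most frequently)
        self.boxes = {term: 1 for term in terms}

    def record_answer(self, term, correct):
        """Promote a correctly recalled term, demote a forgotten one."""
        if correct:
            self.boxes[term] = min(self.boxes[term] + 1, 3)
        else:
            self.boxes[term] = 1

    def due_every(self, term):
        """Days between reviews for this term at its current box."""
        return REVIEW_INTERVALS[self.boxes[term]]

deck = Deck(["osmosis", "diffusion", "active transport"])
deck.record_answer("osmosis", correct=True)    # promoted to box 2
deck.record_answer("diffusion", correct=False) # stays in box 1
print(deck.due_every("osmosis"))    # 3
print(deck.due_every("diffusion"))  # 1
```

The design choice here is the one that matters for deliberate practice: effort is automatically concentrated on the terms a student keeps getting wrong.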
Reading and interpreting data can be improved through practice of these skills. This is an area where inquiry alone won’t help students make progress. Students need to be shown how to interpret data and read tables and graphs before making judgements. Ideally, in my opinion they should do this once they have learned the relevant factual knowledge of a related topic. Formative assessments focussing on data interpretation should therefore come a little later once students have covered a bulk of the content.
To build up conceptual understanding, students need to be exposed to specific examples related to those topics as I outlined in this post. Tests (MCQs) that assess how well students know the specific details of an example could be useful here to guide learners to which parts they know and those they don’t.
Following this we can begin to link examples together to build knowledge of a more abstract concept. Concepts can then be knitted together to develop the domain specific thinking skills: thinking like a biologist.
Formative assessments could take the form of MCQs but as outlined above, vocab tests, reading comprehension activities, and other tasks may well have their place here.
Summative assessments for measuring progress
I am now thinking that to truly assess student progress against the domain, individual unit tests just won’t cut it. As Christodoulou argues, summative tests exist to create shared meaning, and to do that they need to be valid and reliable. Does scoring a 7 in a unit test on one topic of an 11-topic syllabus mean that the student is on track to score a 7? Not necessarily. Not only is the unit test not comparable to the IB 7 because it is only sampling a tiny portion of the full domain, but the construction and administration of the test may not be as rigorous as that of the actual IB papers.
Clearly it isn’t ideal to use the formative assessments described above as these are nothing like the final summative assessment of the course, plus their purpose is to guide teaching and learning, not to measure progress.
I would argue that summative assessments over the two-year course should use entire past papers. These past papers sample the entire domain of the course, and performance against them is the best measure of progress in the domain. A past paper could be administered right at the start of the course to establish a baseline. Subsequent, infrequent summative tests, also composed of past papers, could then measure progress against this baseline.
Why should summative assessments use past papers? Why not use unit tests? Unit tests, aggregated, are not the same thing as performance on a single assessment sampling the whole domain, and they cannot produce the same shared meaning as an assessment that samples the entire domain. In addition, the use of many single-unit, high-stakes tests will cause teaching to the test as well as much more student anxiety. Instead, lots of formative testing and practice of recall should help to build students’ confidence.
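A baseline-plus-past-papers model is simple enough to sketch in a few lines. The grade boundaries below are invented for illustration only; real IB boundaries vary by subject and session, and a school would substitute published boundaries for the relevant paper.

```python
# Sketch of tracking progress against whole-domain past-paper results.
# Grade boundaries are illustrative assumptions, not real IB boundaries.

BOUNDARIES = [(80, 7), (68, 6), (56, 5), (45, 4), (34, 3), (22, 2), (0, 1)]

def grade(percent):
    """Convert a raw past-paper percentage to an illustrative grade."""
    for cutoff, g in BOUNDARIES:
        if percent >= cutoff:
            return g
    return 1

def progress_report(scores):
    """scores: list of (label, percent) past-paper results in date order.
    The first entry is treated as the course baseline."""
    baseline = scores[0][1]
    for label, percent in scores:
        print(f"{label}: {percent}% -> grade {grade(percent)} "
              f"({percent - baseline:+d} vs baseline)")

progress_report([("Sep baseline", 30), ("Feb mock", 48), ("Jun mock", 61)])
```

Because every data point samples the whole domain with the same instrument, the differences against the baseline carry the shared meaning that aggregated unit-test scores cannot.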
Imagine a normal primary school in an anglophone country like the UK or US. Now imagine taking a year 4/grade 3 or year 5/grade 4 child from that school and giving them an academic program aimed at year 12/grade 11 or year 13/grade 12 students. It could be AP, A Levels or the IB DP; the course doesn’t matter here. Let’s just assume that these children would be taking academic, pre-university courses in the humanities/social sciences and the natural sciences. For the sake of argument, let’s assume that these fictional children have the social and emotional skills of 17-18 year olds. Clearly I am not describing a real situation here.
From a purely academic point of view, what would happen? Would those children succeed? Would they have the background knowledge, understanding and vocabulary skills to access in-class discussions? Or textbooks, for that matter? Or even to understand what the teacher was talking about?
Now, I wonder, how would the teachers tasked with teaching these children respond? What strategies could classroom practitioners employ to help their students achieve? How could the curriculum coordinators and Heads of Year respond to implement strategies to allow the children to access the curriculum? What would you do?
What makes an EAL student like a primary schooler?
Of course, this never happens in practice. Or does it? Is there any cohort of students in international schools that would somewhat match this description? I would contend that there are, to varying degrees and in varying numbers, students who fit this description: EAL students.
Now clearly, an average 17-year-old student has cognitive abilities beyond those of an average 10-year-old and certainly, we would hope, more advanced social and emotional skills. And indeed they probably do know more.
But how do we ensure that, when a high school accepts an older student who has never had any prior formal instruction in academic disciplines in the language of the school, and who will ultimately sit exams in that new language, this child will be able to succeed?
Some might answer that schools shouldn’t admit students whose needs they cannot meet. I would agree. But I have seen schools that do admit students when they can’t meet their needs; usually, when a child’s needs meet the economic needs of a school, the latter concerns tend to win.
My concern here really revolves around the question: if most major testing systems in the English language (AP, IBDP etc.) are norm referenced, then aren’t we simply propping up the performance of our native language speakers with the ultimately poorer performance of non-native speakers? Are our anglophone speakers succeeding on the back of the poorer performance of our EAL students (on an international level)?
Of course, in international schools there is a lot of variance, and there is certainly flexibility in the system. Most students who can’t access the full curriculum will be able to graduate from the school with some form of modified curriculum. But we need to ensure that students have as many options available to them when they leave us as possible. Going to an international school is a privilege and affords kids so many additional benefits that they may not have had in their home country, but we need to ensure that students are able to succeed after they leave us.
How do we solve these problems?
In practical terms: when, as a coordinator, I have a cohort of students for the majority of whom English is a second language, and many of whom have only been learning their academic subjects in English for a few short years, how do I put strategies in place to support them as best I can?
I have written here, here and here in the past about classroom strategies for teaching upper secondary curriculums to EAL students. I am an interested novice. But now as a coordinator I am concerned about curriculum level interventions.
The context will matter, both in terms of the cohort’s profile and the curriculums that can be offered, as well as their flexibility. I coordinate the IB, which is a flexible system in the sense that, when combined with an American-style High School Diploma, students have the option of taking IB certificates in as many or as few courses as they would like.
But I am blue-skying today and want to think about how to offer the full Diploma to as many of my students as possible in this imaginary cohort.
Making the Diploma accessible
There are ways to do this, but it may require restrictions in certain areas: for example, limiting extended essay subject selection to the student’s mother tongue or English B if the student’s level of English is so low that the team feels it would preclude them from taking the extended essay in another academic subject, like business studies or economics.
And what level of English is too low? What’s the cut-off? Recently I have discussed with colleagues using lexile analysis to determine the English reading level of my EAL students, as well as the lexile score of the texts we use, which is a measure of how dense a text is. The lexile score is useful for a number of reasons. It can be used to work out the equivalent English reading age of the EAL students, and it can be compared to the lexile level of the textbooks used on the course, allowing teachers to see the difference between where their kids are at and the material they need to present.
Lexile analysis can be performed here. Teachers can set up their own accounts, but I think this should be done centrally on a term-by-term or semester-by-semester basis and the information shared with students and their families, as well as teachers, as part of ongoing sharing of strategies and training on supporting EAL students in the academic classroom.
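For teachers who want a quick in-house ballpark before running a central lexile analysis, a readability formula can be computed directly. Lexile measures themselves are proprietary, so this sketch uses the Flesch-Kincaid grade-level formula as a rough, freely computable proxy; the syllable counter is a deliberately naive heuristic, good enough only for comparing a student passage against a textbook passage.

```python
# Rough readability sketch. Lexile scores are proprietary, so this uses
# the Flesch-Kincaid grade-level formula as a crude proxy. The syllable
# counter is a naive heuristic (vowel groups), fine for ballpark use.

import re

def count_syllables(word):
    # crude: count groups of consecutive vowels (incl. y)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

passage = ("Photosynthesis converts light energy into chemical energy. "
           "Chlorophyll absorbs light in the thylakoid membranes.")
print(round(fk_grade(passage), 1))
```

Running this over a course textbook and over texts a student can comfortably read gives the teacher the same kind of gap measure described above, just on a grade-level scale rather than a lexile scale.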
Hirsch (2016) claims that “Vocabulary size is the outward and visible sign of an inward acquisition of knowledge.” Lexile analysis therefore shows us not only what these students can read but what they know in English as well. Hirsch makes the case that the more domain-specific knowledge students acquire, the more their vocabulary naturally increases. This is why, for Hirsch, knowledge-rich elementary curriculums are so important. They ensure that students acquire vocabulary, and this vocabulary acquisition is the magic formula for reducing inequality. Children from affluent families have more vocabulary when they start school (their oral life at home is richer) compared to their disadvantaged peers, and knowledge curriculums help the disadvantaged to catch up.
In a sense, our EAL students are like disadvantaged native-language children: they certainly don’t benefit from homes where English is spoken, and so they don’t continue expanding their knowledge and vocabulary in English when they leave school.
The Matthew effect shows how learners who have knowledge will tend to acquire more at a faster rate, while those with less will acquire knowledge more slowly. This is one of the important psychological principles often overlooked by commentators who claim that if we teach knowledge then our kids will just be competing with computers. Teaching knowledge is the only way to ensure that they can be lifelong learners; the more knowledge we have in our brains, the quicker we gain new knowledge. This is also known as the knowledge capital principle: it takes knowledge to make knowledge.
Hirsch also claims that “High school is too late to be taking coherent content seriously” as part of his argument for knowledge-rich elementary curriculums. Where does this leave our EAL students?
Evidence from cognitive science also shows us that knowledge is domain specific and that it doesn’t transfer readily. Thus students may know about the detailed components that make up the processes of photosynthesis in Korean, but they are unlikely to be able to transfer this knowledge from Korean into English. This creates real problems when it comes to supporting EAL students in mainstream academic classrooms.
Taking all of the above into account, it seems that we need to begin by getting students exposed to speaking and thinking in English as much as possible.
Let me be clear here, as I have run into hot water on this one in schools. If the aim of a school is to have students graduate by passing English language academic exams for whatever greater purpose, then I think that in school, whenever possible, students need to be encouraged to speak English. I don’t say this because I am a cultural imperialist but because it is demonstrably the best way of getting students to learn the academic subjects, most of the time.
As an IBDP Coordinator this means, among other things, ensuring that students get as much time in the English acquisition classroom as possible. I would consider placing all the students into the English B HL class at the start of their course. This would give them more hours in the acquisition classroom initially. As they progressed through the course we could look at their progress to see if they could afford to drop down to SL.
Clearly there is a balance to be struck here. Forcing kids to be taking an HL subject they might not be into could seriously backfire in terms of motivation and so continual communication with teachers, students and parents is essential.
To ensure that students felt like they were making progress (and therefore maintained their motivation), I would consider having dedicated EAL support after school. This time would be given over to allowing the students to do grade-levelled reading in English.
I would also apply the IB research discussed in this post to ensure that there is ongoing monitoring of learners’ progress. Too often, students are assessed at the beginning of the year and never again. Ongoing, regular assessment of learners’ progress is necessary here.
Since beginning to write this, I have been introduced to a piece of software that appears to be an answer to some of these questions.
I hope that ongoing posts on this topic will help me explore the strategies that can be put in place to ensure all learners succeed.
E.D. Hirsch (2016) Why knowledge matters: Rescuing our children from failed educational theories. Harvard Education press
This season I marked 140 IB DP Biology HL Paper 2 Timezone 1 papers. It was unusual for a couple of reasons: 1) I managed to pass the qualification marking on the first attempt for the first time in six years! 2) I managed to complete my marking target within seven working days and nine days before the deadline – the first time I have managed to complete the work so quickly.
I felt that this year’s timezone 1 exam was very straightforward to mark. This was particularly evident in the data analysis responses, where the mark scheme was much easier to interpret than I recall it being in previous years.
To qualify for marking, there are normally practice scripts and qualifying scripts to mark. The practice scripts are a chance for you to view comments from the senior examining team, so when undertaking these it pays to go very slowly, really thinking about how the mark scheme applies to each question and then, once you have marked each question, checking your own marking against the comments by toggling on the annotations. Using this method you quickly become aware of any small details in the comments that you have missed.
In the past when I have undertaken the qualifying scripts I have opted to mark them in bulk and then submit them in bulk, so I would only submit the scripts once I had marked all of the papers. This year, instead, I submitted each script after I had marked it. This gave me the advantage of being able to read the annotations on each of the qualifying scripts, check my tolerance and adjust my marking of each of the subsequent qualifying scripts. I think this may have been a primary reason why I qualified first time.
Student misconceptions on the paper
I marked 140 scripts, and when you mark that many, certain themes begin to emerge. This year, worryingly, a large proportion of candidates were conflating the mechanisms of global warming with holes in the ozone layer. This is not a new thing, and it is a problem that I have noticed in previous years, but this year the sheer number of candidates writing a confused response to the question on the mechanisms of global warming was staggering.
In 2018, 18-year-old students are still writing that carbon dioxide creates holes in the ozone layer and that this is what heats up the planet, or something similar. This needs to be addressed. A teacher or teachers somewhere must be teaching kids this confusion about the ozone layer.
Now, I struggle to believe that this is the result of their biology teachers (who most likely will have studied this subject to some depth and understand the science), and I am wondering if it is the result of colleagues in other subjects unrelated to science. We know that there is a lot of confusion about climate change in the media and that the scientific debate is often misconstrued in the popular press. We also know that this is an issue of global importance, and for that reason other subject teachers may well address it. IB students could meet it in TOK, studies in language, geography and other subjects. I am wondering if there are some miseducated teachers out there who are confused about the issues of climate science and are confusing their kids. This would be a great area for practitioner research, and it opens up a question about the professional responsibilities of teachers with a particular subject specialism: should teachers who are well educated on a particular topic be responsible for sharing that knowledge with colleagues who may also approach this topic in their own teaching?
(On a side note, a colleague previously told me that XX and XY chromosomes were “a lie” in a discussion on LGBTQ+ issues in school.)
Other misconceptions that became apparent were:
Candidates thought that water was an organic molecule
Candidates didn’t understand that DNA transcription/translation = protein synthesis = gene expression = expression in the phenotype.
Not understanding that linked loci are genes on the same chromosome, not genes in the same place.
Common factual errors were:
Few candidates knew that glutamic acid is replaced by valine.