Feedback Reminders

Here are some useful reminders about feedback:

[Image: feedback wisdom]

This great “feedback flowchart” was created by David Didau (@LearningSpy):

[Image: feedback flowchart]

More thoughts on feedback and DIRT can be found in this previous post.

A useful resource for helping pupils to reflect on their performance in a test can be found here.

On The Level # 5 – Focus on Feedback

“Level 5. You’ve made some appropriate suggestions here. In future you need to explain how the evidence supports your opinions.”

Is the above comment an example of good feedback or bad feedback? This article in the Guardian sheds some interesting light on this issue. It summarises ideas presented in “Thanks for the Feedback”, a book written by Douglas Stone and Sheila Heen. These are not teachers, but Harvard Law professors, and their aim is not to help people give feedback, but rather to help them receive it. How often, I wonder, do we teachers stop to really consider our feedback from the recipient’s point of view?

Stone and Heen suggest that people usually respond badly to feedback because they receive it in a state of tension: they want praise for the improvements they have made, but they don’t welcome being reminded that they still need to improve and are therefore, by implication, not yet good enough. To make matters worse, many managers muddle three types of feedback:

• appreciation (praise for accomplishments)
• coaching (tips for improvement)
• evaluation (rating someone’s performance, often in comparison to others)

By muddling these different types of feedback, the manager apparently renders each aspect far less meaningful and effective than it should be. The comment with which I opened this section is pretty much a perfect example of this type of mixed feedback. So, even though that is just the sort of comment we teachers would usually expect to make, it would seem that, according to Stone and Heen at least, it is actually bad feedback: we should try to keep these different types of feedback more distinct from one another.

Recently I met with teachers from four local schools to pool our thinking on life after levels. Amongst other things, we took a look at some ideas put forward by Michael Fordham in this blog. Michael, a Senior Teaching Associate at the Cambridge University Faculty of Education, is coming from a very similar direction to our two Harvard professors. He points out that we provide mixed feedback because we are juggling too many different requirements:

We want, as teachers, to give helpful feedback to pupils that allows them to get better at the thing we are teaching them. We want, as parents, to know how well our children are getting on (particularly in comparison to other children!). We want, as schools, to identify pupils who are falling behind so that some kind of intervention can be made. We want, as senior managers, to use data to make judgements about teacher competence. We want, as inspectors or the government, to hold schools to account. We want, as a society, to be able to make decisions (Should I employ this person? Should I let them in to university?) based on prior assessments. I simplify on all these fronts, but it is well recognised that assessment gets dragged in multiple directions and this demands modes of assessment that are not always compatible with one another.

In light of all this, Michael suggests that schools simply adopt a layered assessment regime, in which different types of assessment are used to generate different types of feedback.  All the teachers I discussed this with certainly seemed to like his thinking.

At the end of our session together, each of the schools involved made a commitment to come up with a suggested approach to future assessment, quite possibly based on the thinking outlined above. In May we will meet again to compare notes, and hopefully establish a workable solution to life after levels. What will Loreto’s contribution to this process be? Another post will follow shortly.  In the meantime, if you have any bright ideas, do let me know!

On The Level # 2 – A Few Questions to Consider

In this post you will find an update on developments since post # 1, followed by a few thoughts and a few questions to consider.

So, here’s the update:

1. Several of our Loreto colleagues have commented on the last post, emailed me, or chatted with me.  You can read the comments for yourself at the bottom of that post.  There is a strong desire to keep assessment simple, and many seem keen to report attainment in the form of percentage marks.

2. On Monday an officer from OFQUAL gave a presentation to our Heads of Department, and left us under no illusion that there are challenging times ahead.  Amongst other things, he reminded us that for a while some GCSEs will be graded 1-9 whilst others are still graded A*-G, and that the rationale behind the change is to get away from the “bulging” of grades in the C-A* region and instead spread attainment more evenly across the available grades/numbers.  He also confirmed that we will not find out the criteria for awarding the new grades until after we have introduced a new system for reporting attainment in KS3, and he agreed that, until those criteria are known, it will be very difficult for anyone to “predict” GCSE attainment.  The upshot of all this is that it will be very hard to construct an approach to KS3 assessment which dovetails with KS4 assessment.

3. Meanwhile, it was announced yesterday that plans to award a “decile” ranking to all Y6 students have now been dropped; instead, they will be given a scaled score between 80 and 130.  Look familiar?  I suspect we will find that a score of 100 will represent the average level of (expected?) attainment.

And next, some thoughts and questions:

1. We need a system which incorporates two different types of assessment.  We must produce “quantitative” data which gives everyone a basic snapshot of progress and ensures that any underachievement is spotted and addressed.  At the same time we also need to provide “qualitative” feedback which informs everyone about how further progress can be achieved.  Percentage marks could well serve the first of these two aims, but how would we ensure that they were standardised across the school, so that a score of 90% in one subject represented the same level of learning as a score of 90% in another subject?  (One rough way of approaching this is sketched in the code after this list.)  And how would we relate this to expected progress?  By setting a target percentage?  How would that be arrived at?  How often would we award these percentage marks?  Would we share them with the pupils, or only ever give pupils qualitative feedback?  It might help to consider how our own performance as teachers is rated: would feedback on lesson observations or OFSTED inspections be more welcome if we were told what was good and what needed to improve, but the crude verdict of “good” or “outstanding” was never shared with us, or never even formed in the first place?  Or would we feel disappointed that we could not easily keep track of whether we were improving, and could not look at the achievements of others to identify potential sources of guidance and support?

2. If we can’t introduce a system that anticipates an end point (in terms of GCSE attainment) and measures progress in relation to this, should we consider one which instead monitors progress in relation to the starting point (the scaled score between 80 and 130 now due to be awarded at the end of Year 6)?  Or do we simply measure attainment in terms of what is expected at that particular time in the pupil’s education?
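
Purely for the sake of illustration, here is a very rough sketch (in Python, using invented marks) of one way the standardisation question in point 1 might be approached: convert each pupil’s percentage into a position relative to the rest of the cohort in that subject, then map it onto a common scale, for instance the 80-130 scale mentioned above, with 100 as the average and a spread of 15.  The subject data, the choice of scale and the size of the spread are all assumptions made up for this sketch, not a proposal.

```python
# A rough, purely illustrative sketch: one way to put percentage marks from
# different subjects onto a common scale, so that attainment in one subject
# can be compared with attainment in another.  The marks below are invented,
# and the mean of 100, spread of 15 and 80-130 range are assumptions made
# for the sake of illustration only.

from statistics import mean, stdev

def standardise(marks, target_mean=100, target_sd=15, floor=80, ceiling=130):
    """Convert a subject's raw percentage marks into scaled scores on a common scale."""
    m, s = mean(marks), stdev(marks)
    scaled = []
    for mark in marks:
        z = (mark - m) / s                       # position relative to the cohort
        score = target_mean + target_sd * z      # map onto the common scale
        scaled.append(round(min(max(score, floor), ceiling)))  # keep within 80-130
    return scaled

# Invented test results: the Maths marks run higher than the History marks,
# so the same raw percentage does not represent the same relative attainment.
maths_marks   = [45, 60, 72, 81, 90]
history_marks = [30, 42, 55, 63, 75]

print(standardise(maths_marks))    # [80, 92, 102, 110, 117]
print(standardise(history_marks))  # [80, 91, 102, 109, 119]
```

Of course, whether we would want a pupil’s scaled score to depend on how the rest of the cohort performed is itself one of the questions we would need to answer.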

That’s more than enough to be going on with.  Once we clarify our thinking on some of these issues, we will start to make some progress.

Any comments?