The Observer recently had a supplement on walking. Included in this was a feature on “empty grid squares”. One of these is SR9996, on the Pembrokeshire coast. One of the few things it contains is a disused lime quarry and kiln. Under the heading “How SR9996 helped shape our plucky island nation” we read,
For decades the quarry and kiln supplied lime to fields for miles around, boosting their pH value and helping to produce crops.
Well, maybe. Increasing the pH means reducing the acidity (if the soil is acidic) or increasing the alkalinity (if it is already neutral or alkaline). The former, but not the latter, will help produce crops. I would naively have thought that the soil for quite a few miles around a lime quarry was more likely to be alkaline than acidic.
So it may be that this jokey comment was based on partial understanding. But, given what is happening in universities, maybe it is part of a wider perception that more = better.
The innumerate administrators who now run universities love it when a complicated situation can be summarised in a single number, be it the impact factor of a journal, the number of stars given to a researcher by an external assessor, or student questionnaire results. I have grumbled before about the nonsensical processing through which the questionnaire data is put. But the “more = better” assumption involves a more insidious problem: teaching staff are judged on questionnaire results, whereas there are some issues over which they do not have control, and others where more is not necessarily better.
Our newly-centralised student questionnaires have seven statements, which the students are invited to rate on a scale from 1 (strongly disagree) to 5 (strongly agree). Here are the questions, which I am sure are no better and no worse than those used in many other universities. Without wishing to sound immodest, I should start by making clear that my critique below is not fuelled by resentment at poor scores: I got an “overall quality index” of 99.8% for the module I taught last term (though it is quite unclear to me how this index was computed).
- The module is well taught.
- The criteria used in marking on the module have been made clear in advance.
- I have been given adequate feedback during the module.
- I have received sufficient advice and support with my studies on the module.
- The module is well organised and runs smoothly.
- I had access to good learning resources for the module.
- Overall I am satisfied with the quality of the module.
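As an aside on the mysterious “overall quality index”: the post says it is quite unclear how the index is computed, so the following is a purely hypothetical guess, not the actual formula. One simple possibility is to rescale the mean of the 1-to-5 responses linearly onto 0–100%; under that guess, a score of 99.8% would correspond to, say, 124 students answering 5 and one answering 4.

```python
def quality_index(scores):
    """Rescale the mean of 1-5 Likert responses linearly onto 0-100%.

    Hypothetical formula: 1 maps to 0%, 5 maps to 100%. The actual
    computation used by the university is unknown.
    """
    mean = sum(scores) / len(scores)
    return (mean - 1) / 4 * 100

# Hypothetical class: 124 responses of 5 and a single 4.
scores = [5] * 124 + [4]
print(round(quality_index(scores), 1))  # 99.8
```

The point of the sketch is only that a single headline percentage discards almost everything: the same 99.8% could arise from many different response patterns, which is precisely the “complicated situation summarised in a single number” problem.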
Let’s look closely at these questions. Keep in mind, as we do, an inspirational teacher from your own past, or perhaps a historic figure like Jesus (whose disciples sometimes called him “Teacher”) or the Buddha. (I can’t resist pointing out that neither of these two teachers provided lecture notes; what we know of their teaching was written down decades later in one case, centuries in the other.)
Question 1 is very subjective, but there is nothing actually wrong with it.
Question 2 is factual. I proposed to the authorities that it would be a good idea to have a purely factual question on the questionnaire, so that if someone gets the answer wrong, their other answers could be ignored. This question, I understand, is not used for the purpose I suggested. Back in the days when we designed our own questionnaires, one of the statements was “I have found the exercise classes helpful.” When I was teaching an advanced module which had no exercise classes, some of the students gave a negative answer to this question.
Question 3 is, on any but the smallest courses, outside the lecturer’s control, and it is quite improper to use the answer to it for judging the lecturer’s teaching ability. The amount of feedback we can give is dependent on policy decisions (such as whether the feedback is formative or summative, and how many questions are marked), and is also partly dependent on the availability of graduate students to do the marking.
I don’t understand question 4. What does “advice” mean? If it is generalities about how to study, it is not an individual lecturer’s responsibility to give this (though I always try to make my advice available to my students on the course web page, and sometimes harangue them during lectures). If it is module-specific, then I have no idea what is intended.
Question 5 is also outside the lecturer’s control. Last semester, because of the incompetence of our administrators, my lecture class of 65 was put in a room with a capacity of 45, and with one tiny whiteboard which could only be reached by standing on a chair. (Where are Health and Safety when you really need them?) I had no alternative but to cancel the lecture. A better room was found only just in time for the following week’s lecture. Then, to my horror, the revision lecture for the course was scheduled in the same inadequate room! The students really should have given me low marks for this debacle, though it was certainly not my fault.
Question 6 is a bit of a puzzler. Every student has access to the lectures, which to my mind are the most important learning resource. Not all students avail themselves of this. The practical effect is that a lecturer who slavishly follows the textbook can expect good marks here, whereas one who challenges the students will be marked down.
Finally, question 7. Oh dear, what a stupid question: whoever thought of putting that one on? If I am satisfied with the module I should strongly agree with the statement, even if I am only slightly more satisfied than not; if I am dissatisfied, however marginally, I should strongly disagree. As Dickens said,
Annual income twenty pounds, annual expenditure nineteen nineteen six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds nought and six, result misery.
I think they expect students’ answers to reflect their degree of satisfaction. But many maths students have a pedantic streak and may well answer the question asked rather than the one intended.
And, as a footnote, where are the questions probing whether students have been helped to understand the subject, or have been inspired to learn more?