Student questionnaires, 2

The Observer recently had a supplement on walking. Included in this was a feature on “empty grid squares”. One of these is SR9996, on the Pembrokeshire coast. One of the few things it contains is a disused lime quarry and kiln. Under the heading “How SR9996 helped shape our plucky island nation” we read,

For decades the quarry and kiln supplied lime to fields for miles around, boosting their pH value and helping to produce crops.

Well, maybe. Increasing the pH means reducing the acidity (if the soil is already acidic) or increasing the alkalinity (if it is alkaline). The former, but not the latter, will help produce crops. I would naively have thought that the soil for quite a few miles around a lime quarry was more likely to be alkaline than acidic.
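
For the record, the chemistry is just the definition of pH as a negative logarithm, with 7 as the neutral point; and lime, being calcium oxide or carbonate, can only push the value upwards:

    \[ \mathrm{pH} = -\log_{10}[\mathrm{H^{+}}], \qquad \mathrm{pH} < 7 \text{ acidic}, \quad \mathrm{pH} = 7 \text{ neutral}, \quad \mathrm{pH} > 7 \text{ alkaline}. \]

So “boosting the pH value” is a benefit only where the starting point is on the acid side of 7.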

So it may be that this jokey comment was based on partial understanding. But, given what is happening in universities, maybe it is part of a wider perception that more = better.

The innumerate administrators who now run universities love it when a complicated situation can be summarised in a single number, be it the impact factor of a journal, the number of stars given to a researcher by an external assessor, or student questionnaire results. I have grumbled before about the nonsensical processing to which the questionnaire data are subjected. But the “more = better” assumption involves a more insidious problem: teaching staff are judged on questionnaire results, whereas there are some issues over which they have no control, and others where more is not necessarily better.

Our newly-centralised student questionnaires have seven statements, which the students are invited to rate on a scale from 1 (strongly disagree) to 5 (strongly agree). Here are the questions, which I am sure are no better and no worse than those used in many other universities. Without wishing to sound immodest, I should start by making clear that my critique below is not fuelled by resentment at poor scores: I got an “overall quality index” of 99.8% for the module I taught last term (though it is quite unclear to me how this index was computed; I hazard a guess after the list below).

  1. The module is well taught.
  2. The criteria used in marking on the module have been made clear in advance.
  3. I have been given adequate feedback during the module.
  4. I have received sufficient advice and support with my studies on the module.
  5. The module is well organised and runs smoothly.
  6. I had access to good learning resources for the module.
  7. Overall I am satisfied with the quality of the module.
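
As for that “overall quality index”: I can only guess at the arithmetic behind it. One plausible scheme (pure speculation on my part; the function name and data layout are invented for illustration) pools all the ratings and rescales the mean from the 1–5 scale to a percentage:

    # Hypothetical reconstruction of an "overall quality index" -- pure guesswork.
    # ratings_per_question: one list of 1-5 responses for each of the seven questions.
    def quality_index(ratings_per_question):
        # Pool every response across all questions...
        pooled = [r for question in ratings_per_question for r in question]
        mean = sum(pooled) / len(pooled)
        # ...and rescale the 1-5 mean onto a 0-100% scale.
        return 100 * (mean - 1) / 4

    # Four respondents, mostly "strongly agree", on all seven questions:
    print(quality_index([[5, 5, 5, 4]] * 7))  # 93.75

Whatever the actual formula, very different distributions of opinion can collapse to the same number, which is rather the point about single-number summaries.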

Let’s look closely at these questions. Keep in mind, as we do, an inspirational teacher from your own past, or perhaps a historical figure like Jesus (whose disciples sometimes called him “Teacher”) or the Buddha. (I can’t resist pointing out that neither of these two teachers provided lecture notes; what we know of their teaching was written down decades later in one case, centuries in the other.)

Question 1 is very subjective, but there is nothing actually wrong with it.

Question 2 is factual. I proposed to the authorities that it would be a good idea to have a purely factual question on the questionnaire, so that if someone gets the answer wrong, their other answers could be ignored. This question, I understand, is not used for the purpose I suggested. Back in the days when we designed our own questionnaires, one of the statements was “I have found the exercise classes helpful.” When I was teaching an advanced module which had no exercise classes, some of the students gave a negative answer to this question.
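
The validity check I proposed is easily implemented, incidentally: discard any respondent who fails the factual question before aggregating the rest. (A minimal sketch; the data layout below is invented.)

    # Hypothetical validity filter: ignore respondents who get the factual check wrong.
    # Each response pairs the seven ratings with the result of the factual question.
    responses = [([4, 5, 3, 4, 5, 4, 5], True),
                 ([1, 1, 1, 2, 1, 1, 1], False)]  # failed the check: discarded
    valid = [ratings for ratings, passed_check in responses if passed_check]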

Question 3 is, on any but the smallest courses, outside the lecturer’s control, and it is quite improper to use the answer to it in judging the lecturer’s teaching ability. The amount of feedback we can give depends on policy decisions (such as whether the feedback is formative or summative, and how many questions are marked), and partly on the availability of graduate students to do the marking.

I don’t understand question 4. What does “advice” mean? If it is generalities about how to study, it is not an individual lecturer’s responsibility to give this (though I always try to make my advice available to my students on the course web page, and sometimes harangue them during lectures). If it is module-specific, then I have no idea what is intended.

Question 5 is also outside the lecturer’s control. Last semester, because of the incompetence of our administrators, my lecture class of 65 was put in a room with a capacity of 45, and with one tiny whiteboard which could only be reached by standing on a chair. (Where are Health and Safety when you really need them?) I had no alternative but to cancel the lecture. A better room was found only just in time for the following week’s lecture. Then, to my horror, the revision lecture for the course was scheduled in the same inadequate room! The students really should have given me low marks for this debacle, though it was certainly not my fault.

Question 6 is a bit of a puzzler. Every student has access to the lectures, which to my mind are the most important learning resource. Not all students avail themselves of this; those who skip the lectures fall back on the textbook. The practical effect is that a lecturer who slavishly follows the textbook can expect good marks here, whereas one who challenges the students will be marked down.

Finally, question 7. Oh dear, what a stupid question: whoever thought of putting that one on? If I am satisfied with the module I should strongly agree with the statement, even if I am only just slightly more satisfied than not; and if I am even slightly dissatisfied, I should strongly disagree. As Mr Micawber puts it in Dickens’s David Copperfield,

Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.

I think they expect students’ answers to reflect their degree of satisfaction. But many maths students have a pedantic streak and may well answer the question asked rather than the one intended.
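
Micawber’s margin, for anyone unused to pre-decimal currency, is a single sixpence either way; his verdict is a step function of the balance, not proportional to it, which is exactly the reading question 7 invites from a pedant. A trivial check, in old pence (240 to the pound, 12 to the shilling):

    # Micawber's budgets in pre-decimal pence: 240d to the pound, 12d to the shilling.
    POUND, SHILLING = 240, 12
    income = 20 * POUND                          # twenty pounds = 4800d
    happiness = 19 * POUND + 19 * SHILLING + 6   # nineteen nineteen and six = 4794d
    misery = 20 * POUND + 0 * SHILLING + 6       # twenty pounds ought and six = 4806d

    for expenditure in (happiness, misery):
        balance = income - expenditure
        # The verdict flips with the sign of the balance, not its size.
        print(balance, "happiness" if balance >= 0 else "misery")  # +6d, then -6d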

And, as a footnote, where are the questions probing whether students have been helped to understand the subject, or have been inspired to learn more?


8 Responses to Student questionnaires, 2

  1. Jon Awbrey says:

    The difference between the devil and the divinity may lie in the details, but it’s not unusual for the devil to decoy us with detail after detail, when the unexamined premiss is the screen behind which the real deil lies.

  2. Ross Templeman says:

    In this post Professor Cameron makes the excellent point of asking:

    “And, as a footnote, where are the questions probing whether students have been helped to understand the subject, or have been inspired to learn more?”

    I think there are various good reasons why such questions are not asked. One reason is probably along the lines of:

    ‘The above questions are not asked because too many students only learn formulas in cramming sessions shortly before the examination period, and display a general lack of curiosity and/or self-discipline/initiative throughout their studies (particularly in the first year). As a result, if we ask students the above questions, an honest answer from the lazy ones will probably be along the lines of:

    “nah, I didn’t understand nothink bruv, but I guess thats what appin’s when yur don’t go to lectures init? (snigger)”. [*]

    but in practice (being teenagers) such students will just blame the lecturer on the surveys.’

    Another reason applies to Calculus/analysis modules in particular: namely, that in order for lecturers to teach students to properly understand Calculus as a subject in its own right, the Calculus syllabus would need to be redone from scratch, which is an unrealistic project for individual university lecturers to undertake. Instead it is more practical to continue to treat Calculus as a necessary evil that has to be endured as a prerequisite for studying higher analysis (pure mathematicians), or as a set of ‘memorize or die’ procedures and formulas (applied mathematicians).

    When I was an undergraduate I was uncomfortably conscious of the fact that I was sometimes obtaining very high marks (ninety percent and upwards) on modules that I didn’t really understand, but desperately did want to understand properly [my interest was, and still very much is, Calculus].

    I tried to deal with this problem by spending as much time as possible teaching myself the subject in greater depth using textbooks.

    Alas, this approach is doomed to fail, for the simple reason that modern calculus and analysis textbooks are all the same except for cosmetic differences, and suffer from the same problems as the courses that one is trying to supplement. This fact is, however, well obscured by the sheer size of the Calculus textbooks in question (typically exceeding 1000 pages in length). I have worked through more than one of these monsters from cover to cover and cannot honestly say that the intellectual payoff was worth the effort.

    Thus students like me (hardworking but not naturally talented) have little chance of properly understanding calculus by the time they graduate. Since Calculus is the core subject of applied mathematics, and many students are inclined towards applied mathematics, it is probably for the best that the questions Professor Cameron raises are not asked on student surveys.

    ————————————————
    [*] As it happens, when I was in my third year I took a course in Topology. I didn’t care for the subject (being an applied mathematician through and through) but I did my best and went to every one of Professor Chiswell’s lectures. On one occasion it was my birthday, the weather was terrible, the public transportation a nightmare and I had a serious headache, yet I still went to the topology lecture… I was the only student who bothered to turn up. Professor Chiswell (bless him) still gave the lecture as normal.

  3. Pingback: Details, Details, Details | Inquiry Into Inquiry

  4. Gordon Royle says:

    Our previous questionnaires had the gem of a question: “Material was delivered at the right pace”.

    What can one deduce from a less-than-perfect score on that question?

  5. John Bamberg says:

    At our uni, every unit has the same questionnaire, and the questions are not too different from those you have, Peter:

    1. The teacher explains important concepts/ideas in ways that I can understand.
    (Depends if I understand it!)
    2. The teacher stimulates my interest in the subject.
    (Pity I don’t go to lectures)
    3. I am encouraged to participate in classroom and/or online activities.
    (‘Encouraged’ or ‘bribed’ by an attendance grade?)
    4. The teacher demonstrates enthusiasm in teaching the unit.
    (He often runs over time)
    5. Appropriate teaching techniques are used by the teacher to enhance my learning.
    (We have video recordings, lecture notes, past exam papers and everything I need!)
    6. The teacher is well prepared.
    (I’ve never seen what is written on those scrunched up pieces of paper he brings with him)
    7. The teacher is helpful if I encounter difficulties with the lecture/unit.
    (I’ve never had difficulties with the unit)
    8. The teacher treats me with respect.
    (Hmmm)
    9. The teacher is available for consultation (eg email, online, face-to-face or telephone).
    (They are available but they never reply)
    10. I receive constructive feedback that assists my learning.
    (Already said)
    11. I receive feedback in time to help me improve.
    (Que?)
    12. Overall, the teacher effectively supports my learning.
    (?)
    13. Open Ended:
    What are the best aspects of teaching that you experienced in this class?
    Please list any suggestions that will help improve teaching and your learning in this unit.

  6. Jon Awbrey says:

    The tests themselves — good, bad, but mostly ugly — are a diversionary maneuver. The end-run we should be watching is the sneaking shift in the locus of evaluation and therefore control.

    A couple of articles pertaining to the Great Education Deformation on the U.S. scene. Naturally, we blame Pearson.

    John Ewing • Mathematical Intimidation: Driven by the Data

    Valerie Strauss • Leading Mathematician Debunks ‘Value-Added’

  7. Pingback: Knowledge Workers of the World, Unite❢ | Inquiry Into Inquiry
