
Marks

June 23rd, 2008

Exams are just finishing up at the University of Queensland and the grim business of marking is well under way. I’m an observer of the process these days, since my research fellowship doesn’t involve running any courses (though I give a fair number of guest lectures in economics, political science and other subjects). Back in the 60s and 70s, when I was a student, the whole system of examinations and marks was one of the big targets of radical critique; even if relatively minor in the great scheme of things, exams loomed large in our lives, and seemed like a symbol of much that was wrong with society.

That kind of debate seems to have disappeared entirely. While a variety of alternatives to exams have been tried, the pressure to cut costs has driven most Australian universities back to near-total reliance on exams, and, within that, to heavy use of multiple choice and short-answer tests.

But I’m more interested in looking again at the fundamental question of why universities and schools spend so much time and effort on assessment. One possible explanation is that assessments provide useful feedback to students on how they are doing, and to the university itself to guide things like admission to later courses. I don’t buy this at all. Feedback provided after you’ve finished a course isn’t much use, after all.

Is it to provide a service to employers? If so, couldn’t they run their own tests? Or is it to give students a spur to effort? I guess the last of these is closest to the mark.

  1. conrad
    June 23rd, 2008 at 08:07 | #1

    I think marks hugely differentiate students (although I doubt exam marks do). Just look at the difference between honours students (who are selected on the basis of high marks) and the general undergraduates if you don’t believe that: at least where I work, the honours students are light years ahead of the undergraduates. That isn’t bad given that they come from numerous universities. Because of this, and because most courses are targeted at the lowest common denominator, marks are hugely useful in allowing courses of at least moderate difficulty to be run without generating constant complaints (cf. almost all undergraduate courses these days).

    Also — feedback after courses is useful if those courses assume cumulative knowledge from other courses (although this is becoming much less common these days). It allows students to choose what they are good at, especially at places without too much grade inflation.

  2. John Mashey
    June 23rd, 2008 at 08:14 | #2

    When I was teaching, years ago, exams certainly served all three purposes:

    - feedback to students (& teachers, since widespread wrong answers might indicate poor teaching or at least a poor question).

    - spur to study – certainly end-of-class exams don’t provide much of the previous one.

    - employers: hiring good people is *expensive*. Grades are at least (and probably at best) a coarse screen, but they’re part of a university’s brand with employers. Decades ago, Bell Labs used to hire from universities as follows:
    + each university from which we hired would have a persistent, well-organized team of BTL managers attached, 1-2 people per relevant university department, with the whole team usually headed by a BTL Director (who managed 150 people) or maybe a Department Head (30 people). Recruiting wasn’t something palmed off on a recruiter; it was something key managers sought to do.

    + BTL people were on campus a few times per year, got to know the faculty, the programs, students. Strong students would likely get contacted by such folks well in advance of when they’d normally start looking for jobs.

    + Due to the persistent nature of this process, BTL folks got pretty good at calibrating what they were told by faculty, and the extent to which grades meant much.

    + Wonderful … but very expensive; monopoly money helps, and very few companies have ever been able to afford this, hence the use of grades as a coarse screen.

    Here’s an example of a school where feedback is really built in:

    I had the fun of serving last Fall on an external review board for a new polytechnic in Singapore, Republic Polytechnic.

    RP runs 3-year programs for (typically) 17-19-year-olds, with the expectation that students can be immediately useful in jobs or will go on to university. They are hyper-computerized and use an intense form of “problem-based learning”.

    year 1: common to all students
    year 2: basics in a specific discipline
    year 3: electives in that discipline

    Every student has a laptop, there’s WiFi everywhere, and every classroom has an appropriate projector, used by the five 5-student teams that make up a class.

    During a 16-week semester, a student will take 5 courses, and at any point in time, be a member of 5 teams, although team membership may get shifted around as often as every week, or may stick for months. A student will study course A on days 1, 6, 11, etc, spending ~8 hours that day on *that* course.

    Here’s a typical day:
    1 hour: teacher presents a problem or related set of problems, with perhaps some ideas about techniques and tools for solving them. A lot of material comes online from the university database.

    1 hour: students start digging in, rummaging on Web, etc.

    1/2 hour: back with teacher for clarifications

    2 1/2 hours, including a working lunch: teams solve problems and put together PowerPoint, Excel, etc. presentations to show their solutions.

    2 hours: teams present solutions to class & teacher

    1 hour: wrapup, teacher highlights areas where students didn’t seem to get it, points to further study areas if needed. Sometimes, an (individual) quiz.

    During presentations, the teacher asks questions, probes answers, and enters evaluations online.

    By midnight, each student writes a paragraph on “What I learned today and how did I do”. The teacher gives *every* student a grade *every* day, and the final course grade is computed from those daily grades.
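
    To make that concrete, here is a minimal sketch (Python) of how a final grade could be computed from the daily grades; the simple unweighted average is my assumption, since I don’t know exactly how RP weights the days.

        # Minimal sketch: a final course grade computed from daily grades.
        # Assumes a simple unweighted average; RP's actual weighting isn't specified above.
        def final_grade(daily_grades):
            """Average the grades a student received across all class days in a course."""
            if not daily_grades:
                raise ValueError("no daily grades recorded")
            return sum(daily_grades) / len(daily_grades)

        # Hypothetical example: one student's daily grades on an assumed 0-10 scale.
        print(final_grade([7, 8, 6, 9, 8]))  # 7.6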

    ====
    How well this works outside Singapore, I don’t know, but I was certainly impressed. Their educational system ensures that RP gets not the top students but the 2nd & 3rd quartiles, and their goal was to help them be as good as possible.

    Teachers are sometimes domain experts, but quite often not. The online coursework is constructed and continually refined by domain experts plus more experienced teachers. Senior administrators keep their hands in by teaching courses.

    The classroom teachers mainly need to be able to guide students, rather than create coursework. Many teachers are part-timers, who go through their own intense training course before they start. Some stay part-time, some end up coming on as full-time.

    Everyone on the review team (except me) was a university professor. We had a lot of pedagogy arguments, spoke with a lot of students, and sat in on several classes. I watched a class of 17-year-olds wrestle with probability problems, Poisson arrival processes, and generally do OK. I saw a class of 18-year-olds be given a Web systems design problem, produce dataflow diagrams, sketch GUIs, etc … with a part-time teacher whose main job was being a realtor.

    Whether or not this applies to all disciplines, it certainly felt akin to effects I’d seen when I was teaching (computer science) – we often said “If we get the problems right, and they can do them, we know they understand, and our lectures really won’t matter much.”

    Anyway, I still believe that the best thing about grades is if students get them often enough that they really work as feedback.

  3. pat
    June 23rd, 2008 at 08:26 | #3

    It might be closest to the mark, but exams never spurred me into anything. It must be one of those one-size-fits-all approaches: it fits percentage X, and the devil take the minorities, whether dim or quite bright.

    You can focus the exam question here on universities, but it applies across the board. If you want to test and give feedback, there should be two exams, one at the start of a course and one at the end. Feedback after the first. You would also get a measure of what was actually learned in that period. This would provide feedback to the teaching institution as well.

    Funny hats and gowns are not feedback but feudal wank.

    Our teaching institutions do not do ‘feedback’ because they do not care about learning, whatever the individuals in them care about and actually do. They do not care because business does not care about learning; it just wants dull yes-people who will knuckle down in crap systems run by managers who are only there because there is work in managing people who have come out of our learning institutions. A vicious generational cycle of mental abuse. But that’s evolution for you.

    No, Virginia, god didn’t make the world and all its categories. Sorry love.

    Considering business is currently all about sucking up to the one-size-fits-all, best-fit crowd, they should definitely pay for it, as they cause it, profit from it, and are a product of such social dullness, all while avoiding responsibility for it.

    They’re not looking for effort or ability, or really even skills, but sheeple who do what sheeple do when sheeple tell them what to do. All in the hope/fear they can avoid crazies, never realising how much they create an environment which only encourages them.

    Disclosure: I have a HECS debt (or whatever they call it now) and have to compete in a housing market against baby boomers buying their investment properties, unburdened by HECS.

  4. June 23rd, 2008 at 08:47 | #4

    The main assessment items in my management units require students to apply theory to a real situation, for example by analysing an actual organisational problem or gathering and discussing data from published sources. The main purpose of the end-of-semester exam is to enhance authenticity; it’s the only form of assessment where we can be pretty sure it’s the student who is actually doing the work and there is no scope for plagiarism or cheating.

    There are many better methods, but they are all labour-intensive and, as you say, the tendency for years has been to have ever fewer hours available in academics’ workloads for assessment tasks. This is known as ‘improving productivity’.

  5. Spiros
    June 23rd, 2008 at 10:01 | #5

    “the fundamental question of why universities and schools spend so much time and effort on assessment”

    In the case of let’s say medical students, it’s quite useful to know whether they understand illness, diagnosis and cure, before letting them lose on actual patients.

    With engineers, you want to be confident that they understand what makes a bridge stay up before they get to build one, and so on.

  6. Spiros
    June 23rd, 2008 at 10:02 | #6

    That of course should be “loose” not “lose”, though “lose” might be appropriate in some contexts.

  7. Highlander
    June 23rd, 2008 at 11:22 | #7

    From what I’ve gathered, the idea of setting only multiple choice and short answer questions is primarily for large courses with several hundred students enrolled. The reason is, naturally, that nobody wants to sift through several thousand essays (as in many compulsory economics courses), particularly in courses where the lecturer agrees to mark every piece of final assessment.

    As a student of economics, having just finished all bar two of my major compulsory subjects, I’m inclined to agree with Prof. Q’s arguments. In my entire (three semester) time at my university, I have not had a single essay assessment. While three semesters isn’t much, I would’ve expected my skills as an economist to be applied beyond simply picking out the correct answer from a selection of (often very similar) multiple choice questions or writing out answers to a simple, quite often unrealistic situation.

    Given that (as far as I’ve seen) most academics deal in essays rather than in answering multiple choice or short answer questions, it seems silly to deal only in the latter until Honours.

    As for what John Mashey said earlier about employers, there are numerous clubs and societies (at least for economics and business/commerce/finance students) which provide students with opportunities to talk to prospective employers. This means that an employer can at least get a preliminary gauge of whether a student is a good potential worker – the grades really just back that up.

  8. jack strocchi
    June 23rd, 2008 at 11:43 | #8

    Pr Q says:

    Back in the 60s and 70s, when I was a student, the whole system of examinations and marks was one of the big targets of radical critique; even if relatively minor in the great scheme of things, exams loomed large in our lives, and seemed like a symbol of much that was wrong with society. That kind of debate seems to have disappeared entirely.

    Mercifully, along with much else that passed for Unconventional Wisdom during that period of ideological progress through intellectual retrogress.

    Educational theory seems to be particularly subject to fads and fashions. No doubt due to its being twice removed from real world results. A victim of the edu-propaganda mill bears witness:

    Those who can, do.
    Those who can’t, teach.
    Those who can’t teach get an Ed.D. in Education.

  9. jack strocchi
    June 23rd, 2008 at 11:54 | #9

    Pr Q says:

    Feedback provided after you’ve finished a course isn’t much use, after all.

    This is a valid complaint from the very first cohort of examined students. But exams are useful to subsequent cohorts of students as real world feedback is picked up by the examining authorities. The exams can be fine-tuned to improve their utility as performance predictors.

    I understand that this is how IQ tests evolved, although these are tests of the student’s “natural” ability, rather than of the teaching system’s cultural facility.

    Pr Q says:

    Is it to provide a service to employers? If so, couldn’t they run their own tests? Or is to give students a spur to effort? I guess the last of these is closest to the mark.

    The higher authorities like jurisdiction-wide exams because they provide an “objective” universal standard of accountability, which is reasonably cost-effective to administer. Much coursework, especially in the softer “disciplines”, is too particularistic and subjective.

    It’s good to stress-test a product before it goes off the line and onto the shelf. Dumber students and the lazier sort of teachers don’t like exams, which is an indication of their value.

  10. O6
    June 23rd, 2008 at 11:58 | #10

    Back when I was an undergraduate in the 60s, assessment was solely at the end of the year, which suited the lazy but clever. When I was an academic from the 60s to the 80s, we assessed students (in maths-type subjects) as continuously as we could, to make sure they kept up with the work. The correlation between the continuous assessment marks and final exam marks was positive and high (~0.9). Over the past six years that I’ve been an undergraduate doing a BA part-time in modern European languages, only 25-40% of the assessment has been by exam, and the continuous assessment has the same effect: one keeps up with the work if only to keep up with the assessment. This strikes me as pedagogically sound. It’s also a huge load on the dwindling band of dedicated teachers, in this crassly monoglot country.
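
    For anyone who wants to check such a figure against their own mark sheets, a few lines of code will do it; the marks below are made-up placeholders, purely to show the calculation.

        # Illustrative only: computing the correlation between continuous-assessment
        # marks and final exam marks. The marks below are made-up placeholders.
        from math import sqrt

        def pearson(xs, ys):
            """Pearson correlation coefficient between two equal-length lists of marks."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sqrt(sum((x - mx) ** 2 for x in xs))
            sy = sqrt(sum((y - my) ** 2 for y in ys))
            return cov / (sx * sy)

        cont_assessment = [62, 71, 55, 80, 68, 90, 47, 75]
        final_exam = [60, 74, 50, 83, 65, 88, 52, 70]
        print(round(pearson(cont_assessment, final_exam), 2))  # 0.96 for these made-up marks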

  11. Ernestine Gross
    June 23rd, 2008 at 12:31 | #11

    As a student, I found examinations a very good mechanism to focus my mind on the subject matter. I liked examinations.

    In my experience, there is subject matter which lends itself to multiple choice questions (definitions, concepts), there is subject matter where essays are appropriate, and there is subject matter where numerical work is suitable. For areas strictly outside my experience, I imagine other forms of assessment are essential – eg laboratory work.

    In moderation, I found cramming for some subjects useful. One can learn something about oneself – risk taking and how much material one can ‘digest’ in one way or another in a short time.

    Setting aside the very first semester at university, continuous assessment (‘feedback’), as distinct from unmarked tutorial exercises, used to annoy me a lot. One is kept busy doing well on little exercises to accumulate marks while running the risk of missing the forest for the trees. It is as if a young adult is not allowed to ever grow up to become a mature adult who is self-motivated and self-confident.

    I am puzzled when reading about a demand for ‘education in entrepreneurship’. It seems to me this demand is at least partially induced by an education method that encourages young people to become dependent on ‘continuous feedback’ and spoon-feeding methods to achieve high marks, which they require to get past the HR departments, often staffed by people who haven’t got a clue as to the meaning of the job description and the requirements. In such circumstances, the HR staff rank people by criteria they know (grades – the higher the better irrespective of the demands of the job – and ‘personality tests’).

  12. Steve Hamilton
    June 23rd, 2008 at 12:33 | #12

    I’m waiting for marks from exams I did at UQ over the past two weeks. I’d make a couple of comments.

    1. I’m generally dissatisfied with the whole assessment system at University. It certainly doesn’t allow employers to perfectly (or even close to it) discriminate between students on the grounds of ability. Everybody knows that rote learning can easily lead to 7s, but not necessarily to a more capable individual.
    2. Economists assume too often that systems simply and easily equilibrate. I think Universities assess students for two reasons. a) because they believe it provides information to employers regarding the quality of the student and the institution (ie. a Credit student at one University might be superior to a Distinction student at another University), and b) because they believe it “encourages” students to study the material. Notice that for both cases I said “they believe” – I don’t see either of these excuses as legitimate, but that doesn’t mean they don’t motivate University administrators.
    3. The issue is that although the system is far from perfect, I don’t think it would be any more effective, absent such assessments. Proponents of progressive assessment say that exams encourage too much rote learning, while proponents of examinations suggest that progressive assessment encourages mere regurgitation of material. And I’m not convinced that removing assessment altogether would improve student outcomes.
    4. Every experience I’ve ever had with “innovative” assessment has been disastrous, which further cements the maintenance of the current system.

    Cheers
    Steve

  13. Donald Oats
    June 23rd, 2008 at 13:36 | #13

    Exams are like democracy – the least-worst system. As with democracy, though, there are many flavours.

    The problem now is that the goal of academic achievement, for which written exams were but an instrument of execution, has been seriously eroded by the goals of the modern university.

    Some examples of mission change:
    * was student, now EFTSU (equivalent full-time student unit) or some other unit of input;
    * was scholar, now education facilitator;
    * was university, now education provider;
    * was (student’s) academic advancement, now EFTSU throughput;
    * was undergraduate, now customer;
    * was multiple choice (and frowned upon in maths, physics, etc), now productivity gain;
    * was grade inflation, now tacit point of market differentiation;
    * was alumni, now private donor;
    * was professor, now high-value grant attainer;
    * was mathematics/physics/statistics, now low-margin product.

    In the days of old, the written exam without multiple choice provided both the student and examiner with a clear objective. For a student to pass (with distinction, dare I say), they knew that they would be expected to demonstrate their understanding of the subject matter. The student’s reasoning skills are there for all to see in the exam book. The examiner could set an exam which reflected the important elements of the subject matter. In particular, a good exam setter could balance the questions so that basic competence could be tested, and then a deeper more reflective understanding of the subject could be tested as well. In other words, the long form exam could discriminate reliably.

    For the exam format to work as intended though, the students needed regular (non-multiple choice) assignments throughout the course and timely feedback on any mistakes they’ve made. This system is labour-intensive. And there’s the rub. In today’s marketplace the never-ending drive for profit is a great burden on scholarship and comprehensive assessment.

  14. hc
    June 23rd, 2008 at 15:53 | #14

    I think the reason for emphasising exams lies in one of the things you have complained about in the past – the signalling theory of education.

    Educators must provide ranking signals about students and play down the human capital created. It’s easier (pedagogically and in terms of meeting clearly defined objectives) to provide such signals than to promote the real objective of education, which is (or should be) to promote learning.

    You are right about multi-choice exams etc. There are limited resources and huge student intakes.

    Likewise for setting essays – how can you set essays for hundreds of students?

  15. Martin
    June 23rd, 2008 at 16:38 | #15

    First, society needs some way of restricting access to the better paid and more powerful positions, and to the more expensive forms of education (because the only effective forms of education are very labour intensive). In previous centuries, various methods were used, such as ancestry (only members of the aristocracy could get the senior positions) or chance (the Tibetan tulku system). The western world uses the education system to provide a series of ranking levels from ‘left at 15’ to PhD, which can then be used to justify any allocation of individuals to positions.

    Second, there are some arguments for competitive examination as a ranking method. It at least has some relation to desirable characteristics such as ability to memorise and a capacity for effort. It also is relatively unassociated with inherited position. It means that potential troublemakers can be identified and coopted early before they can do any major damage, thereby promoting social stability.

  16. June 23rd, 2008 at 17:11 | #16

    I remember, back at University, dreading exams because it was the only time in my life I would actually have to use a pen and paper.

    99% of what I do at work, and at Uni, is computer-based, and then I am examined entirely on something I rarely ever do.

    bizarre

  17. TerjeP (say tay-a)
    June 23rd, 2008 at 17:19 | #17

    I always thought it odd that universities seemed to be trying to measure brilliance rather than knowledge attained. I repeated a few subjects at University (due mostly to being slack on the first occasion). Despite ending the term with grades of around 90% in some of these repeated topic areas, the university would only grant a pass because it took me two attempts. I thought this odd, given that the same point on the knowledge ladder was attained as by somebody who got a high distinction, albeit via a slightly longer path.

    In private sector technical certifications (eg Microsoft Certified Systems Engineer) the above approach is not seen. If you pass you pass irrespective of how many attempts it took. Also they typically have a pass mark around the 85-95% mark rather than around the 50% mark and they don’t seem to bother with grading those that pass using anything comparable to the distinction system. They care about whether you know stuff or whether you don’t.

    A degree is useful for showing that you are capable of operating within a dysfunctional institution for several years in a row.

  18. June 23rd, 2008 at 18:06 | #18

    Why not use the Cambridge marking method? The story goes that, in the days not so very long before our days, they used to get the oldest fellow in each college to take the college’s papers up to the head of the stairs leading to his set of rooms and throw them down as hard as he could. Those papers that fell on the same landing failed, those that went one floor down got seconds, and so on. Say what you will, this method was not biassed, was convenient, and led to ultimate effects in terms of careers and such that were in no way distinguishable from those produced by any other method.

  19. Lord Sir Alexander “Dolly” Downer
    June 23rd, 2008 at 18:22 | #19

    They are surely a way for the institutions to cover their arse. If citizen X turns out to be a dud who hasn’t learnt anything, the university can say: she passed this exam, approved by such and such a body. There, in black and white.

  20. Steve Hamilton
    June 23rd, 2008 at 18:23 | #20

    This is turning into a human-capital vs. screening debate. I still say there’s some accuracy in the screening hypothesis, but I think Prof. Q. begs to differ.

    Cheers
    Steve

  21. charles
    June 23rd, 2008 at 20:06 | #21

    I’m what they call a lifetime learner and have continued taking subjects. The course I am now doing is in a new area and only gives “competent” and “try again”.

    It’s a pain: when pressed for time, you don’t know if you can slacken off.

  22. Pepper
    June 23rd, 2008 at 22:17 | #22

    I read somewhere that education is highly correlated with the Big Five personality trait “conscientiousness” and that conscientiousness is a predictor of performance in all occupations.

    That must be just as valuable to an employer as the technical knowledge or brilliance.

  23. christine
    June 24th, 2008 at 06:19 | #23

    I just finished teaching a semester (of econ) at the MA level in a mostly political science program. The pol-sci courses set the assessment as one ‘response’ essay on the readings every one or two weeks, and one long paper. I had 3 stats assignments, one fairly short paper, and one final exam. I used the final exam to make sure there was an incentive to cover the range of material in the course, which was the same goal as the response essays. The response essays are great at making sure the students keep up with the material along the way. But I chose the final exam format because it lets the students show what they know at the end, giving students a bit of time to come to grips with the parts they struggle with, and integrate the stuff they’ve learned in different areas. And it wasn’t a take-home because I wanted to see if they could do some economics without copying it from someone else. From the answers the students gave on the exams, I think it worked. (This being an MA course, I didn’t have many students, so had the luxury of thinking about pedagogical goals rather than realistic workload.)

    timboy: if you dread having to write with pen and paper, just think about how the poor marker feels about having to mark several hundred papers, all written by people who don’t practise their handwriting regularly! Urgh.

    Steve Hamilton/Highlander: hang in there – the first year can be quite boring. It gets better. And you should be able to find elective subjects to take that have decent essay components? (My experience at UQ, though many years ago now, was that there were way more essays even in compulsory second and third year subjects than in any other university I’ve heard of. And one very memorable essay with John Foster in Honours macroeconomics on the economic meaning of cointegration. Fun!)

  24. Peter Wood
    June 24th, 2008 at 11:51 | #24

    I’ve always been in favour of open book exams and take home exams, so students can focus on understanding the subject rather than rote-learning. They worked quite well with mathematics teaching at ANU.

  25. FDB
    June 24th, 2008 at 12:25 | #25

    Strocchi has hit on something I’ll agree with for a change – putting aside individual students, exams are probably the most useful means of assessing the course itself and its delivery (in combination with formal and informal feedback processes).

  26. Andrew
    June 25th, 2008 at 13:51 | #26

    I guess it depends on the course of study. I’d be very worried getting an engineer to build me a bridge if I had no way of assessing whether he’d passed his training or not. Likewise a visit to a GP. Do we need to grade an Eng. Lit. degree? Maybe, but no damage done if we don’t.

    I also think competition is a very strong motivational tool – it helps focus the students’ minds. Grades in exams create substantial competition amongst students.

  27. Peter Evans
    June 25th, 2008 at 22:56 | #27

    Andrew,

    You’re slightly missing the point. No engineer or GP would get anywhere near a bridge design or a strep throat examination without years of peer supervision and on-the-job training in a junior role. The 2 or 3 hour exam-based part of their training occupies only the first few years of a long and laborious winnowing process.

    The central problem with exams, being only a few hours long, is that they can never probe for deep and useful knowledge and how to apply it. They can only cover a fraction of a course in any depth, so to some extent they can be a lottery on what it was you got around to learning well (or were motivated by). Now maybe that’s okay, because there’s a good correlation between doing well in continuous, deeper assessment tasks and exam ability, though I have known a few folk terrible at exams (nerves) but quite brilliant in their fields. It just took them a few extra years of slog to get where they wanted.

    The job of an educator is to motivate a student to _want_ to learn the subject matter. Exams are a big stick and only partially successful at that, especially for boring but compulsory courses the student may see little value in. It takes many years to ever get really good at something anyway, so exams are at worst a moderate impediment to a student finally getting good at something (assuming they know what they really want to do in the first place).

  28. Andrew
    June 28th, 2008 at 02:53 | #28

    As a recent (just pre-Quiggin era, I think) UQ economics graduate who is now doing a masters at an overseas university (and just received his exam marks today), this is an interesting post.

    Reflecting on this very question in the run-up to my recent exams, I was surprised how much of my economics degree was assessed by multiple choice. I was always quite good at guessing the answers to poorly thought-out multiple-choice exams, so they never really inspired fear or motivation (unlike my recent masters written exams).

    I think multiple choice *can* be quite an effective method of assessment, but in the exams I did at UQ, it wasn’t at all. I think it tended to test general intelligence rather than mastery of the course material.

  29. melanie
    July 1st, 2008 at 20:17 | #29

    My department decided that the focus on assessment by exam was justified by the quantity of plagiarism taking place.

    I don’t think that exams motivate many students to work. If they work more than minimally it’s because they’re interested in the subject. So many of them don’t even seem to see a connection between their job prospects and a better than average mark.

  30. Ernestine Gross
    July 3rd, 2008 at 14:38 | #30

    In support of melanie’s point on quality control:
    http://www.smh.com.au/news/web/sydney-uni-cheats-outsource-to-india/2008/07/03/1214950908513.html.

    Further, I believe there is a distinction to be drawn between students using examinations to focus their mind on the subject matter (ie having a deadline for thinking through the material to gain an understanding) and students who see an examination as a ‘big stick’.
