Teaching from the test - ChatGPT edition

One thing I did in my final exam for Rice Paddies this spring was to ask the students to compare how Wikipedia and ChatGPT did at explaining terms that in the past would have been ID questions. (Exam posted below.)

I think this worked OK. I am increasingly using exams to try to teach students things, rather than to test whether they have already learned something. I don’t really care for in-class exams, since the basic premise, “How well can you answer this question without looking at any sources?”, is sort of like “How well can you fix this engine if your only tool is a Phillips screwdriver?” Of course the internet, and above all modern AI,¹ makes a lot of the things you could assign as a take-home in the past more problematic. ChatGPT can generate a D+ answer on literally anything. Plus it is harder to prove that students used it, assuming you are willing to be anti-student-centered enough to accuse them of that.

ID questions (“write a paragraph explaining why this matters”) used to be a great way to get a lot of material into your exam that was important but that you had not done enough with to make part of an essay. They are also the easiest thing to answer with a lazy ChatGPT query.

This seems to have worked pretty well, in that I got some good answers showing that the students were assessing both sources as people who had taken a class on the subject (which they had), and some of them seem to have learned something about analyzing sources. Some answers were less good, but those are the breaks. I might fiddle with the prompt next time to force them to pull a quote out of Wikipedia.

FinalExam.s24.206
  1. I hate AI

4 Comments

  1. I understand the impulse, and it’s not a bad assignment at all.
    I just can’t bring myself to engage, even critically, in a way that is almost certainly going to end up reinforcing the idea that it’s sometimes valid, instead of a horrible waste of energy for something that is not even attempting serious machine learning.
    Maybe I should toss my final exam questions in once, just to see what the results look like, because my world survey finals involved a couple of groups who had uncannily similar approaches to questions that are very wide open…

  2. Yeah, part of the reason I did this is that I was getting a lot of pretty obvious AI answers to the type of broad questions I like to use on exams. It does make Wikipedia look good 🙂 All the ones I picked have either good wiki entries or ones that are problematic in some obvious way. At the very least, wiki entries sound like they were written by a person, or at least a committee. ChatGPT puts out the same bland, mostly accurate but maddeningly vague mush for everything. At least with this assignment I thought I was reading the results of students thinking about stuff. Well, some of the time anyway.

  3. A great idea, using the exam to actually teach, but it also raises the need for the profession to take more responsibility for Wikipedia. I have made substantial edits to several of the articles you listed.

    1. I suppose I should mention that the students are really impressed when I tell them how much of the China content on Wikipedia is your work, and that I have met you.

      The only time they have ever been that impressed was when we had Michael Meyer in to talk about his time in the Peace Corps in China (and his book). He mentioned that he had been published in the New York Times, Financial Times, etc. I chimed in that he had also written for The Onion (as a UW-Madison undergrad). They all went Ohhhhhh.
