I rarely comment on the generally misnamed phenomenon known as artificial
intelligence. Mainly, my silence on the matter stems from a lack of expertise, combined
with a plethora of scholars better placed than myself to opine on the issue. I
will confess, however, that I am largely sceptical – not so much of the
phenomenon itself as of many of its uses, and of an all-too-pervasive notion that
artificial intelligence is the answer to a vast range of problems. Currently,
however, I am forced by circumstance to keep artificial intelligence in mind,
and the circumstance is the chatbot ChatGPT, which is being used by students to
generate papers by putting together bits and pieces of whatever is available
online, and whatever is judged by the algorithm to be relevant to the topic or to
the question at hand. As the chatbot is used, its range of available
material and its sophistication increase, and it is increasingly hard to
distinguish a paper written by a student from one generated by this artificial
intelligence. And given a combination of high pressure on the students and poor
understanding of what purpose such papers actually serve – namely to train the
student to become a better writer and thinker – the temptation to use the
chatbot is very high.
To the best of my knowledge, I have been spared the problem of ChatGPT, in that
none of the exams and dissertations I have graded this year have been generated
by artificial intelligence – at least not that I have been able to detect,
although there is that ever-looming risk of being fooled. There was, however,
one instance this spring which both gave me severe pause, and highlighted to me
why using ChatGPT to generate student papers is, in my opinion, a deeply
immoral thing to do.
The case in question was a BA dissertation, one among several which I was
tasked with reading and grading. Normally, such a task is relatively swiftly
done. We have a set time allotted to read and grade the dissertation, and with
some experience it is usually very easy to swiftly determine what grade the
text deserves. By looking at issues such as formal requirements, thesis
question, structure, and the frames of the discussion, it is possible to arrive
at a just and fair grade without too much dithering. One dissertation deviated
from this norm, however, and it forced me to spend a lot more time than I was
supposed to.
The story unfolded in May of this year, a time when reports told of ChatGPT
improving, but still being at a stage where its prose was far from as
undetectable as it was feared it would become. I had read samples of
exam papers generated by this chatbot, and the quality of the prose was indeed so
laughable as to ensure the hypothetical student an easy fail, which I believe
to be a suitable punishment for this kind of cheating. It was exactly this
unpolished aspect of ChatGPT’s prose and grasp of formal essay requirements
that made me hesitate when reading this particular dissertation. Not only was
the writing quite rough at times, but there were several footnotes that were
notably incomplete – some were simply lacking in formal details, while others
were so general as to be impossible to check, at least within the time I had available.
Rough writing and bad footnotes are both hallmarks of inexperienced students
operating under a lot of stress, and before the golden age of artificial
intelligence I would not have paid them much attention, but simply adjusted the
student's grade as I deemed necessary.
The spectre of chatbots made me nonetheless check some footnotes, and I found
the first to be quite imprecise, although not completely wrong. Another
footnote was completely wrong at first sight, but it turned out that the
student had used an older edition of the textbook, and a laborious check allowed
me to ascertain that the reference did indeed make sense. A third footnote was
even more cumbersome to check, because the work was not digitally available, at
least not to me, and so I was preparing to head to the library on the other
side of campus to take a third sample and see whether I could determine whether
the text had been produced by a human or by a generator. Luckily, I decided to
stop by the office of the colleague who had supervised the dissertation, and thanks
to a chat about the student, their writing process, and the finished product, I
could in the end rest assured that the roughness of the dissertation was the
result of inexperience and stress, not cheating. This chat, in other words, saved
the grade of the student and avoided an unjust failing on my part.
The main problem in this story was that the chatbot had attained a level at which
its writing was indistinguishable from poor student writing – a quality quite
prevalent among student papers. I do not say this to be either
cruel or condescending, because I am very aware that this low level of quality very
often comes down to factors that are not to be blamed on the students, namely
high pressure, lack of training, and various diagnoses that make the first two
factors all the more harmful. The uncertainty about the root of rough writing is
caused by the rise of chatbot programmes, and this uncertainty puts extra
pressure on those of us who are grading essays, exams or dissertations, because
we sometimes need to spend much more time than we should in order to arrive at
a more solid basis for grading the paper. Moreover, due to the roughness of the
prose, chatbots like ChatGPT cast an added shadow of doubt over those students
whose prose is similar to that of the text-generator. If the grader does not
have enough time to dig into the details, the result might easily be that the
inexperienced student is judged unfairly and failed, not because of the
roughness of the prose but because that roughness has now become suspicious. Chatbots,
in other words, make life harder for those students who are already vulnerable,
and this reason is enough for me to utterly despise and reject such technology.
And was the holy Lamb of God,
On Englands pleasant pastures seen!
- And did those feet, William Blake
Monday, 31 July 2023
A brief note on artificial intelligence