AI and Traceability: Show Me What You Know, Rufus

I recently completed an MS in IT, and I have Thoughts™ regarding AI in the classroom.

In several of my courses, we had to post online responses to various queries and prompts, supporting them with citations: scholarly, popular, and a mix. We then had to respond to peers’ posts using both our own thoughts and analyses and further citations.

It frequently became extremely obvious when fellow students were using ChatGPT to write said posts and beef them up with impressive-looking references. Now, while I used to teach college writing in another lifetime, I was in the student role in this course and therefore understood it was NOT my place to reply to another student’s post with “hey, looks mighty likely that you used an AI friend to write that post there.” I’m not a narc. Not the point, even, really.

But what I WAS tasked with doing was responding to those posts in the context of the references they were using, or claiming to use. So if they posted:

As experts have shown, dogs are much better than cats (Rufus, 2025).

And then the entry for (Rufus, 2025) in their References list went to http://arfarfarf.com/dogsdoresearch.html, I would click on the link to see just who this Rufus expert was and what type of research or pontificating they had done. Except 95% of the time, the link was a dud and there was no such source.

So I would politely post and note that I could not get anywhere with http://arfarfarf.com/dogsdoresearch.html and perhaps they could post an updated link so that I could address their claims about dogs being better than cats. Like… maybe I AGREE that dogs are better than cats and want to back them up. But I need to know who Rufus is and what they were doing, goshdarnit!

(Sometimes I would also try to google to find the academic paper they claimed to be citing, if for instance it looked like Rufus had gotten published in the Journal of Things Dogs Definitely Got Peer Reviewed. But those articles also seldom existed. In one case, I went to the Journal in question and checked out the volume and issue being cited, and there was simply no such article in it. There WERE, however, articles in other journals that had titles that could together compose a veritable mash-up of the title being claimed.)

Anyway, maybe this still counts as being a narc, idk. Or just autistic rigidity. But in the context of the online discussion forum, it was just… like, I don’t know how to do the whole discuss-with-sources thing without knowing what the sources, in fact, were.

For what it’s worth, I’m not “for” or “against” ChatGPT in the research classroom, virtual or otherwise¹. I get that many of us are using it in some form, and I think it’s all in the how we use it, AND how we stand behind that use when questioned.

Because my peers, when I posted those queries? Inevitably, they either ignored them or posted to apologize for the “mistake” and promise an updated link… which was never actually forthcoming.

Sorry, Rufus. Arf arf arf. You might have had something great to say. But we will never know.


  1. I do think it’s a lot more dicey when we get to using AI for artistic or creative purposes. And as someone who has created artistic and creative work, I am definitely not cool with my work being used to “train” AI in any way, shape, or form.
