The Dead Classroom Is Not Something We Can Allow
I am calling for the administration of this university, including the president’s office, ASUW, and the staff and faculty senates, to hold discussions with the student body and faculty to outline how faculty are permitted to use AI in the classroom.
I have recently come to suspect that a teacher of mine (who shall remain unnamed) used ChatGPT to generate feedback on one of my assignments. I was immediately offended, disgusted and curious; is there even anything human in a classroom anymore?

The dead internet theory posits that a large portion of internet activity is primarily bots interacting with bots. This theory grew after the launches of large language models like ChatGPT and other generative AI. Looking at Facebook, it’s easy to give this theory credit, with things like AI-generated photos of a drowning granny holding up a sign that says, “1 like = 1 prayer,” and comments from AI profiles underneath.
It’s no secret that more and more students are using ChatGPT or Claude to do more and more of their work, sometimes turning in entirely AI-generated products.
But what happens when the other side of the grade book relies more and more on AI?
As discussed last year in Wyoming Public Radio with our former Editor-in-Chief, tools like Perusall can be easily integrated with Canvas and provide automatic grades to students. ChatGPT and Claude are just as accessible to instructors as they are to students.
Teachers are required to go over academic dishonesty guidelines in their syllabi, covering conduct both in their class and at the university. It is nice that teachers are allowed to set their own expectations, and the guidelines for students are still clear.
Where are the guidelines for the teachers?
Turns out there aren’t any.
That’s right. There have been committees formed and administrative discussions had for years about student use of AI, but there are no guidelines for faculty. No official guidelines turn up in syllabi or online searches.
It doesn’t even seem to be under discussion. I went on a wild goose chase calling anyone who might have the information, and each person who answered my phone call was as confused as the last.
This is just plain hypocrisy. In fact, it’s worse than hypocrisy. Academic integrity is a two-way street.
Professors and instructors are supposed to be leaders of academia and offer guidance on how to succeed in the world and contribute to the collective knowledge of humanity.
This isn’t to reject AI entirely (although I wouldn’t be opposed to that as a solution, the UW administration’s pursuit of “innovation” and being on the “cutting edge” of things makes it unlikely they would enact something like this). I only call for transparency and discussion with students.
We pay to be here, pay to receive instruction from the best of the best. We do not pay to have our work fed into a large language model without our consent. If we wanted to be graded by AI, we could submit our essays to ChatGPT ourselves. We go here for expertise and connection with teachers. The teacher-to-student ratio is bragged about, but what is the point of this ratio if the students are just interacting with AI?
Again, this is not to say a professor could never use AI, but they should be required to disclose when they are doing so and give students explicit notice of the use of AI during syllabus week, so students can switch out of the class if they are not comfortable with the prospect. Teachers should say exactly how they are using AI and mark which assignments they are using it on, each and every time.
Students should be able to partake in this discussion, which must be had with President Reeves as soon as he takes office. ASUW must discuss this, as should the faculty senate and staff senate.
Consent to LLM use is not a given. Our academic work is our intellectual property, which is ours to control. It should not be a professor’s decision to feed this work to an AI model that will train on the backs of our labor without compensation.
“Professors are just doing what students have been doing for years,” some may say.
“They’re just using it to make their grading more efficient or to communicate with students more succinctly.”
This in no way makes it right. At all. Students are liable to punishment if they use AI, and when they are permitted to use it, they typically have to cite it. Instructors must be held to the exact same, or higher, standard.
What is the point of teaching if not to engage with students and help them grow into well-rounded individuals? This should be the priority of our instructors above all else, not research, not study abroad trips. Instructing and engaging with students.
The best professors I have had recognized that quizzes and essays can easily be cheated on, and engaged with students in the most effective way: in-person discussion. This is how learning and developing are done. This is the ingenuity and progress UW should be incentivizing, not using technology to throw mindless slop back and forth between instructor and student.
If a professor doesn’t care to engage with their students directly, why should we engage with them at all? What incentive is there to preserve our own academic integrity if it will just be met with AI? If professors aren’t setting the example, a classroom will no longer be a place of learning.
AI use on assignments is apparently inevitable, from both students and instructors, but this doesn’t mean we just sit back and let it happen. Instructors should still engage with students, and both parties should be expected to disclose when they are using AI.
AI consent and standards must be a two-way street and we must have clear guidelines for both parties.
