Are Professors Even as Smart as ChatGPT? Students Are Wondering…
Office hours vs. instant answers: guess who wins.
My post last week, “Want to Stop Students from Cheating with AI? Stop Grading Them,” quickly sparked a lively debate about assessment, rigor, and the role of grading in higher education. Commenters raised thoughtful concerns: some argued that what we call “student work” often reflects how well students can reproduce a professor’s standards; others asked what happens to credentials and merit when grades are de-emphasized. Many, though, shared stories of experimenting with ungrading or contract grading, highlighting increased student engagement, creativity, and ownership of learning, even while acknowledging the challenges of implementation. The tension between accountability, societal expectations, and student growth was clear, but so was a shared desire to rethink how we define meaningful learning.
Last week’s discussion of grades and AI sparked big questions about the purpose of higher education. This week, I want to push that conversation further by asking: what if AI could be even smarter than a professor?
This is by no means an idle question. In the past two years, AI has moved from the margins of higher education into the center of debates about teaching, assessment, and even the purpose of the university itself. Professors are no longer the unquestioned gatekeepers of knowledge. Students can consult ChatGPT in the middle of the night and get feedback faster than we could ever hope to return an email. They can ask an AI tutor to explain Aristotle or derivatives or APA formatting and receive an instant response. The uncomfortable truth is that in many narrow ways, AI already is “smarter” than us: quicker, more responsive, and (at least on the surface) endlessly patient.
When Professors Feel Replaceable
N. Weimann-Sandig (2024) describes how faculty in Germany are grappling with precisely this disruption. She notes that many professors feel their authority is, in her words, “sensitively disturbed” when AI can handle technical questions or give feedback on scientific writing. If knowledge transmission is no longer our monopoly, what is left of our role?
Her answer is to reframe teaching as co-constructive: not professors depositing knowledge into students (a very old model) but teachers and learners constructing meaning together. In this model, AI is not a competitor but a tool that can support exploration. Professors remain essential not because we know more facts, but because we help students interpret, contextualize, and use those facts meaningfully.
That’s a beautiful vision. And yet, beneath it lies a fear many of us carry quietly: if AI can do my tasks, how do I prove my worth in this job?
The Five Levels of an “AI Professor”
Cheolkyu Shin and colleagues (2024) take this fear head-on by mapping out a framework for the “AI professor.” Inspired by the automation levels of self-driving cars, they outline five stages—beginning with AI as a simple assistant (today’s reality) and ending with a fully automated professor at Level 5. At that final stage, human faculty wouldn’t even need to supervise; AI would design, teach, assess, and advise entirely on its own.
This is both fascinating and terrifying. The authors admit that technical barriers keep Level 5 hypothetical for now. But they also stress that the real danger isn’t whether AI can do all the tasks of a professor; it’s whether universities, under pressure to cut costs and boost efficiency, will be tempted to try.
Here’s the catch: Shin et al. emphasize that professors do far more than deliver content. They list five competencies: expertise, class structuring, teaching, assessment, and communication. AI may be able to automate aspects of each, but the relational, ethical, and human elements remain stubbornly beyond its reach. What’s at stake isn’t whether AI can be programmed to grade essays or answer calculus questions. It’s whether we, as institutions and as a society, believe that education is only about efficient delivery or instead about mentorship, formation, and the messy process of becoming.
“Better Than My Professor?”
Stefano Triberti (2024) frames the tension with a blunt question: what if AI is better than my professor? His analysis of conversational pedagogical agents—chatbots that can answer questions, give feedback, and even draw on motivational theories to encourage students—is sobering. Students often prefer the speed and responsiveness of AI. In distance education especially, a chatbot that answers at midnight can feel more supportive than a professor juggling 200 emails.
But Triberti argues that the real promise of AI is not in replacing professors but in relieving them. If chatbots can handle administrative questions (“When is the deadline?” “Where do I upload my file?”) and basic clarifications, professors can spend more time on higher-level engagement: deep discussions, mentoring, and guiding students through ethical and interdisciplinary questions. AI, in other words, might be better at certain things, but those aren’t the things that define the heart of teaching.
The Worth Question
Still, the emotional undercurrent remains: how do we prove our worth in this new environment? For professors, worth has always been precarious. We measure ourselves in student evaluations, publication counts, grant dollars, and the constant hustle to justify our positions. The intrusion of AI raises the stakes: now we are not only competing with each other, but also with tools that never sleep, never complain, and can scale infinitely.
I’ve felt this tension in my own teaching. When a student asks, “Why can’t I just ask ChatGPT?” the subtext is, “Why do I need you?” My instinct is to respond with all the ways I’m different: I can understand nuance, challenge assumptions, provide mentorship. But the truth is, proving my worth isn’t about drawing a defensive line between “what AI does” and “what I do.” It’s about leaning into what only a human relationship can provide.
And yet, that’s hard. It’s one thing to know intellectually that mentorship matters; it’s another to believe it counts as much as a published paper or a filled classroom when tenure committees or administrators are making decisions. The worth question is not just about identity—it’s about economics.
Shifting the Metrics
What if we changed the way we measure professors’ value? Instead of evaluating us solely on output—syllabi produced, classes taught, papers published—what if we rewarded relationships built? What if the most important key performance indicator wasn’t student pass rates, but whether students felt known, challenged, and supported in ways AI never could?
The irony is that much of the research I’ve cited already points us there. Weimann-Sandig’s co-constructive model depends on authentic human relationships. Shin et al. admit that even Level 5 AI professors will still require human oversight for ethical and relational reasons. Triberti concludes that professors are indispensable for mentoring and guiding students beyond technical knowledge.
So maybe the challenge isn’t that AI will be “smarter” than us. It’s that universities may not yet have the courage to value the parts of our job that machines can’t do.
A Future of Collaboration, Not Competition
If AI continues advancing (and it will), the question isn’t whether it will outsmart us in certain tasks. It already has. The deeper question is whether we as educators will cling to proving our worth by being faster, more knowledgeable, more efficient, or whether we’ll reimagine our worth around what only humans can offer: empathy, ethical discernment, contextual wisdom, and the courage to accompany students through uncertainty.
In that sense, the arrival of “AI professors” might be less of a threat than an invitation. An invitation to let go of the illusion that our worth lies in grading stacks of essays at 2 a.m., or answering every factual question with perfect recall. And an invitation to double down on what drew many of us to this vocation in the first place: being part of the human project of learning together.
So yes, AI might someday be “smarter” than a professor. But teaching has never just been about information; it’s about transformation.
TTFN,
Dr. Sydney


Reader Comments

AI is not “smarter”; it can retrieve information faster. If students use ChatGPT (or any AI tool; Google is now essentially an AI research assistant) simply to surface information and “answer” questions, they had better know how to pose good questions, ask important follow-ups, and question the results, all things they cannot do well without professors. If you’ve structured your class solely to place yourself as the authority on a subject, then students will bypass you, especially if you’ve written anything. They can just ask your book. But if your class is dynamic and skill-based, scaffolded around the content, you won’t be replaceable.
A wonderfully thoughtful piece. My father used to say, “Whatever it is, it’s 90% people.” He underestimated. It’s ALL about relationships, and that’s one of the most significant contributions of the post. In 2009, Jeff Jarvis wrote a book entitled “What Would Google Do?” My main takeaway was “Do what you do best, and outsource the rest.” So what is it that teachers do best? They form relationships; they inspire, not just motivate. I had a chat yesterday with one of my former teachers, from 1965! He’s 90 now, still sharp as a tack, still inspiring me. In turn, many of my own students from 20, 30, 40+ years ago still keep in touch. Why? Not because I was an authority on 18th-Century French Lit (I wasn’t), but because they knew I cared about them and their learning then, and I still care about them and their learning now. Yes, AI may be able to “motivate,” but let’s face it: 60 years from now I’m not going to be trying to get hold of v 4o of ChatGPT. I would encourage us to consider an entirely new set of metrics for assessing professors and teachers along the lines mentioned above. With a nod to Arthur C. Clarke, any teacher or professor who can be replaced by AI should be.