The trigger for this article was a question someone asked me: as a college professor, could I tell whether my students were using ChatGPT to cheat on their term papers, and if so, how could I prevent it? The question brought up thoughts about artificial intelligence (AI) in general, technology adoption, cheating, trust, and a more fundamental question: what is the purpose of education?
For starters, I asked ChatGPT itself: should GPT be allowed in the classroom? Here is the answer it gave me four seconds later:
“As an AI language model, I do not have personal opinions or beliefs. The use of GPT or any other technology in the classroom is a decision that is best made by educators, administrators, and educational experts based on their own understanding of the educational needs and goals of their students. Some may find GPT useful for providing information, generating writing prompts, or assisting with research projects, while others may have concerns about the accuracy and impartiality of AI-generated information. Ultimately, the decision should be guided by what is in the best interest of the students and the educational institution” [Generated by ChatGPT].
I must admit, it appears to be a thoughtful answer, and I was glad that it started with the AI application “admitting” that it does not have personal opinions or beliefs….
Our fear of technology
In a LinkedIn poll I launched a week before writing this article, I asked whether ChatGPT should be encouraged, allowed, discouraged, or banned in the college classroom. Judging from the responses, it became clear that we fear new technologies. You don’t have to go far back to remember how most people felt when the COVID-19 pandemic started and they suddenly had to use tools, such as Zoom, that they didn’t feel comfortable with.
That fear of technology stems from a lack of trust, which in turn results from not having spent enough time with the technology to learn whether it can be trusted. We felt the same about Zoom, cruise control, digital cameras, and the Internet. The initial reaction was “we don’t need it,” or even “we should ban it.” But as time went by and we started using the technology, learned its quirks, and discovered how to use it effectively, we adopted it. Think about how comfortable we are with Zoom today. The same will likely happen with ChatGPT: we will come to trust it over time.
Will it make our students dumber?
Would our students lose their critical thinking ability if we allowed them to use ChatGPT?
This is not the first time this question has come up. It came up when we asked whether we should continue teaching students to write in cursive, to do arithmetic without a calculator, to work without Excel, to read an analog clock, and more. First, I must say that using a calculator never caused students to think less critically, simply because calculators don’t “think” critically. Neither does ChatGPT.
My rule of thumb to answer a question like that is simple: how available is the tool? What would happen if the tool was not available? Would it create a life-and-death situation? If you can reasonably rely on the tool's availability, and its unavailability would not cause a life-and-death situation, then, by all means, use it! In my opinion, one of the purposes of education is to prepare students for life after school. Will they be allowed to use a calculator at work? Excel? ChatGPT? Electricity?
Proponents of banning ChatGPT in education claim that it will reduce students’ ability to think critically. But let me ask you this: if ChatGPT can answer the questions we ask our students, even though it has no higher-layer cognitive abilities, aren’t we the ones reducing our students’ critical thinking by asking questions that require nothing more than knowledge and comprehension?
Cheating and trust
And here comes the elephant in the room: will students be using ChatGPT to cheat on their term papers? A few universities and some of the nation’s leading public school districts are already banning the use of ChatGPT on their campuses.
But to answer this question, we must first recognize that cheating already happens. “Term-paper mills” already exist, and students pay to have their term papers written by others. That ship has sailed.
What is the definition of cheating in the context of a school assignment? It comes down to submitting somebody else’s work as your own. Would we have a problem if, at the end of the assignment, the student indicated that it was “created by ChatGPT”? Well, that kind of defeats the purpose of the assignment, doesn’t it? But then again, is the purpose of education to have students regurgitate what we told them? If so, ChatGPT could indeed be used for cheating. But is that all we should expect from students? Aren’t we setting the bar too low?
And if the students did use ChatGPT to cheat on their assignments, how can we tell, or even prevent it?
Someone suggested that we go back to pen and paper. As soon as they did, I was forwarded a video of a 3D printer writing out, on paper, what ChatGPT had created… A spokesperson for OpenAI (the company that created ChatGPT) said "the company is working on a system for teachers to check whether students have used ChatGPT to cheat or plagiarize.” How long do you think it will be before someone develops an add-on to ChatGPT that defeats such a detection mechanism? (I’m told that such an add-on has already been developed…) Another question arises: if the student uses ChatGPT to answer a question, and the teacher uses another AI tool to check it, shouldn’t we eliminate the middlemen (the teacher and the student) and let the robots talk among themselves?... And now, to the real elephant in the room:
What is the purpose of education?
I’ll make it simple and tell you what I believe the purpose of education is. To me, it must prepare students to be contributing members of society. Whether they are garbage collectors, teachers, rocket scientists, artists, musicians, psychologists, authors, consultants, or any other role that society needs, the education system must prepare them for that.
We can’t prepare them for that by simply giving them information and expecting them only to read it back to us. No role in society requires nothing more than that.
One of the most foundational concepts in education is Bloom’s Taxonomy, a six-level model of cognitive abilities created in 1956 by Benjamin Bloom and colleagues and revised in 2001.
Image source: https://www.valamis.com/hub/blooms-taxonomy
In both versions, the lowest level is knowledge: not much more than the student’s ability to remember and recall information. ChatGPT can do that, and if all we ask of students is to recall what they memorized, they will resort to ChatGPT whether we allow it or not. Even then, care must be taken, as OpenAI states in the FAQ section of ChatGPT: “ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.”
The second layer is comprehension. We need students to understand the information they recalled, and this is where ChatGPT’s abilities end. It’s an excellent tool for gathering information and putting it in a readable, grammatically correct (some grammar-correction tools may disagree) manner.
The third layer is application. Now we ask the students to apply what they recalled and understood to a new set of circumstances. That’s a powerful skill, one I learned in my first week of law school through the concept of precedent. I’m not worried that my entrepreneurship students might use ChatGPT, because building a concept for a new business is highly contextual and creative. ChatGPT, being generic, cannot generate such content.
The fourth layer is analysis. Here, we ask the students to break the information into pieces and understand how they operate, are connected, and affect one another. It’s like tearing apart a watch and then putting it back together again.
The most significant 2001 change to Bloom’s Taxonomy was in the top two layers: synthesis (the fifth layer in the 1956 model) and evaluation (the sixth) became evaluation (moved down to fifth) and creativity (the new sixth layer). Evaluation involves several skills, one of which is the ability to assess the credibility, validity, and other aspects of the information sources we use. ChatGPT will not do that for you. Schools used to ban the use of (or at least quoting from) Wikipedia because its content is created by a community and not necessarily by researchers conducting academic research. But Wikipedia, at least to me, remains a great starting point, establishing a basic language and terminology for a topic that I continue to pursue afterward.
The ability to synthesize independent pieces of knowledge and create something new, sometimes adding new thoughts and extending knowledge, is at the top of Bloom’s Taxonomy and should be what we demand of our students. And this sets the stage for my answer to the main question:
Should you encourage, allow, discourage, or ban ChatGPT in your classroom?
Incentives drive behaviors. If the only motivation the education system offers is grades, and the grades are based wholly on the students’ ability to memorize information, then they will use ChatGPT or similar tools because it saves time and, well, does a pretty darn good job at that level.
In my class, I tell my students in the first 15 minutes of the first meeting that nobody will fail my class. In fact, nobody will get less than a B, and even B’s are rare in my class. With that, I removed the grade incentive. Now, I tell them, it’s up to you what you take from this class. If you want to take nothing, that’s up to you. If you want to be able to evaluate and present a new business idea, that’s your choice (and benefit).
We should not be afraid of technology. We should adopt it. I encourage my students to use ChatGPT (or any other tool at their disposal) and to share the fact that they used it. At the same time, I will make sure that the assignments I give them require a higher level of critical and creative thinking.
Think about it: we used to ask students to do math on paper. Then we realized they could use a calculator. What did we do? We banned calculators in school. Students found ways around the ban, until we realized that calculators were here to stay, began allowing their use, and started pushing students toward higher levels of cognitive thought.
The opportunity that ChatGPT creates for the education system
And herein lies the opportunity for the education system. A K-12 student spends roughly 16,380 hours in school. It’s a finite number. What percentage of those hours should be spent on tasks that technology can easily do? If technology can take care of the lower levels of cognitive skills, the opportunity is to push students to demonstrate the higher levels while using technology for the lower ones. ChatGPT gives us the opportunity (actually, forces us) to turn them into critical thinkers!
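As a sanity check on that figure, here is a back-of-the-envelope sketch. The assumptions are mine, not stated in the article: 13 grade years (kindergarten through 12th grade), roughly 180 school days per year, and about 7 hours per school day.

```python
# Back-of-the-envelope estimate of total K-12 classroom hours.
# Assumptions (illustrative, not from the article):
#   13 grade years (K through 12), ~180 school days/year, ~7 hours/day.
YEARS = 13
DAYS_PER_YEAR = 180
HOURS_PER_DAY = 7

total_hours = YEARS * DAYS_PER_YEAR * HOURS_PER_DAY
print(total_hours)  # 16380
```

Under those assumptions, the arithmetic lands exactly on the 16,380-hour figure cited above; different calendars or school days would shift the number, but not the point that the total is finite.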
No, we shouldn’t worry about ChatGPT in the classroom. We should embrace it. That’s my answer, and I’m sticking to it. But if you need something to worry about, worry about robots having more brainpower than humans and deciding they have no use for us anymore. But we have at least four more years until that happens….
Want to hear more? Listen to this week's podcast episode:
Part 1: https://podcasts.apple.com/us/podcast/s8e6-part-1-should-chatgpt-be-banned-from-the-classroom/id1569249060?i=1000598462477
Part 2: https://podcasts.apple.com/us/podcast/s8e6-part-2-should-chatgpt-be-banned-from-the-classroom/id1569249060?i=1000598468817
Dr. Yoram Solomon is a trust expert, author of The Book of Trust, host of The Trust Show podcast, a two-time TEDx speaker, and facilitator of the Trust Habits workshop and masterclass that help build trust in organizations. He is a frequent speaker at SHRM events and a contributor to HR.com magazine.