

Shashidhar Nanjundaiah | Uncertainty in classrooms amid ChatGPT disruptions

The writer is a media educator and observer, who has edited magazines and newspapers in both India and the United States. He is currently the chief storytelling officer at a Bengaluru-based multinational company.
Published : Mar 24, 2023, 12:00 am IST
Updated : Mar 24, 2023, 12:00 am IST

ChatGPT is now a headache for educators because it can easily act as a substitute for student effort

ChatGPT can formulate legal notes, spot humanly undetectable diseases, and write computer code. (Photo: Wikipedia)

ChatGPT (Generative Pre-trained Transformer) is an exciting new arrival, launched on November 30, 2022, and a product of the technological geniuses who have been working on artificial intelligence. ChatGPT can formulate legal notes, spot humanly undetectable diseases, and write computer code. In an interview with the US television network ABC on March 16, Sam Altman, the CEO of OpenAI, the company that created ChatGPT, declared that this could be the “greatest technology that humanity has yet developed”. Yet it is now a headache for educators because it can easily act as a substitute for student effort. As Mr Altman said in that interview, education may need to change.

However, I am not writing as a technological determinist. Instead of treating ChatGPT as a technological disruption, should we consider how the disruption can become a deconstructive moment across our entire education system?

How should we respond when technology substitutes for established human processes? Higher education administrators have acted with stunning alacrity to control this new onslaught. Committees are being formed and workshops held to understand the scope of the change and to brainstorm ways forward; teachers are designing ways to tackle it in the classroom. Still, educators are unsure whether this new technology should be allowed into classrooms -- we simply cannot predict the outcomes of doing so.

On one extreme, there is downright scepticism. The excessive perception of threat, as though we are encountering an insurmountable burden, comes with a sense of vindication of a long-standing stand that mobile technology should not be allowed in classrooms per se. This is not an unreasonable solution, except that it is blinkered by what it refuses to recognise. Invisibilising something that has become a constant companion is at best a technical resolution. ChatGPT has caused further discomfort among these sceptics because it poses a deeper threat to the long form of learning that pedagogy has always offered. By enabling students to take shortcuts, ChatGPT destabilises our models of authentic learning.

A second kind of reception to this new technology, the uncritical response, stems from the assumption of inevitability. Among these educators there is a somewhat incomprehensible jubilance. To them, early adoption of technology should be considered an edge in the competitive marketplace. Administrators and owners may hold this stand; however, that pressure is passed on to teacher training and into classroom adoption. In this uncritical adoption there is a sense that education must adapt to technology.

Either way, educators are bending over backwards to accommodate ChatGPT as a supporter of classroom learning, and this is a pragmatic approach. Some professors in the United States claim to dodge the problem simply by setting assignments that ChatGPT cannot answer. In this method, the educator’s challenge is that they must first master ChatGPT in order to beat it. Even within this immediate approach, a three-step method helps. The first step is to train students to use ChatGPT responsibly: tell them it is not fool-proof, that an algorithm cannot substitute for the human brain’s ability to go deep into subjects, and so forth. The second is to require students to declare what questions they asked ChatGPT, and to have a backup for each take-home assignment: for example, the student comes to class, makes a presentation about the assignment, and answers the teacher’s questions. The third is to revisit our assessment system. Having offered the student the option to use ChatGPT, the teacher then randomly cross-checks by visiting the platform and asking it the same questions. They can then create a rubric in which a ChatGPT-assisted assignment is evaluated differently from a manually done one.

Even so, these are ad hoc and hurried responses. This is understandable: as educators, our first commitment is to the classroom, so the design of our delivery should have primacy over larger questions. The other way to defend our immediate responses is that they add up to larger solutions, organically and gradually.

While attempting to re-stabilise our learning systems, the steps in learning should be subservient to the actual outcomes. ChatGPT enables easy access to available information; it disables established processes of actual learning. If we look underneath technological solutions, the concern should be how to keep learning afloat. Hence the question before us should be: How can we create a fluid form of knowledge that must necessarily lead to independent knowledge-seeking?

The problem of pedagogy is that it relies too much on historical knowledge and not enough on the intellectual progress of societies. Our modern education systems evolved not despite but from the enablement of science, which, in turn, emerged from those same education systems. From the printing press to search engines, technology has both determined and been determined by our societies. It is therefore unfathomable that we should suddenly feel disempowered at the hands of technology.

Artificial intelligence runs the risk of being hijacked. Our truths are destabilised by the surfacing of the much-maligned “alternative truths”, and AI-enabled products can nonchalantly amplify fake news and conspiracy theories. But, more important, this “new truths environment” has given rise to uncertainty -- what should we believe anymore? Mr Altman rightly points out that his fear is about which humans would be in control. We have been trying to reinstate modern institutions in their central position of authority, but social trust in these institutions is eroding.

In a world where certainty reassures us, we are repeatedly struck by uncertainty. The need for certainty, for control, is intrinsic to us. A sense of helplessness and uncertainty pervades our current environment, and the angst is palpable. German systems theorist Niklas Luhmann, writing in the 1980s, argued that uncertainty generates angst among societies. Uncertainty can be troubling, but it can also cultivate independent thought. It can make the uncertainty of enquiry the very methodology of constant knowledge-seeking. For example, can education be the platform to inform our students that the world we now occupy is divided, so that each piece of information becomes a tool for building new knowledge? Can our assignments demand of students what we never demanded before -- that each response be a new piece in that progression?

Hence, the question that we should deconstruct is: Can pedagogic knowledge, framed as it is, make our societies more dialectical, critical and progressive? This larger question is located at the centre of our very paradigm. The answers may not be immediately clear, but our educators are smart enough to work out solutions.

Tags: chatgpt, open ai, niklas luhmann