Resisting “technopanic”: There Are Better Ways For Universities To Respond To ChatGPT


Since its release by OpenAI in November 2022, ChatGPT has generated considerable concern and controversy, much of it focused on the technology's implications for learning environments. In particular, there are concerns that ChatGPT enables - perhaps even encourages - plagiarism, that this plagiarism will go undetected, and that it will lead to a (further) erosion of educational standards.

But what lies behind these fears? In what follows, I want to caution against slipping into (yet another) “technopanic”. Instead, I argue that universities should respond to this newcomer on the AI scene constructively, rather than fearfully and defensively.

ChatGPT and its discontents

Chat Generative Pre-Trained Transformer (ChatGPT) is a chatbot that “generates comprehensive and thoughtful responses to questions and queries”. The technology can produce a remarkable variety of outputs. It can answer questions about quantum physics. It can write poetry. Indeed, it can not only “come up with meaningful articles on any topic”, but write them faster than a person can.

Like any technological innovation, ChatGPT is not perfect. Notably, its knowledge of the world after 2021 is limited, so don't expect any insight into the flights of fancy of Republican congressman George Santos, for example. Nevertheless, its versatility and sophistication - especially when compared with other AI tools - have made it an object of public attention and even outrage.

For example, The Guardian last month quoted a vice-chancellor warning of “the emergence of increasingly sophisticated text generators, most recently ChatGPT, which produce highly believable content that is difficult to detect”. The same story reports that some Australian universities are returning to traditional pen-and-paper exams in response to ChatGPT and similar advances in AI.

The West, for its part, describes ChatGPT as “the latest form of academic subversion” and a “new plagiarism threat” that “university leaders” are diligently “fighting”.

Such fears are not unfounded. Plagiarism, collusion, and contract cheating are serious problems in universities. As one 2022 study explains:

Assessment integrity is critical to academic integrity, and academic integrity is critical to all higher education. When students gain grades on the basis of someone else's assessment work, it affects both the qualification and the value of that qualification.

Such threats to academic integrity are by no means uncommon. The study just cited, which surveyed 4,098 students from six Australian universities and six other higher education providers, reported high rates of breaches across the sector.

These threats to the integrity of higher education have not gone unnoticed by university staff or the public. Contract cheating and the use of AI tools that predate ChatGPT have both been reported in the media. Most educators know of cases where students have copied parts of a published essay or a Wikipedia entry and passed them off as their own work. This can happen even after repeated reminders in class about the importance of citing and acknowledging the work of others.

University educators also work in a challenging and often precarious environment. A 2021 article in The Conversation notes that 80 per cent of teaching staff at Australian universities are employed insecurely - for example, on “casual” and short-term contracts that offer little guarantee of stable or ongoing employment. And all academics (whatever their employment status) work in an environment marked by redundancies, in which teaching is only one of many demands on their time.

AI did not create the sector's problems, but nor has it made life easier for many university educators. Responding to breaches of academic integrity can be time-consuming and emotionally draining for staff and students alike. Some breaches - such as those involving ChatGPT - can slip past the software designed to detect them, and can be very difficult for an educator to prove.


Beyond technopanic

I am concerned that an exclusive or even primary focus on ChatGPT's implications for academic integrity could tip over into technopanic. A technopanic is a moral panic in which a technological development - whether smartphones, social media, or AI - is framed as a threat to the social order and to public safety.

Technopanics serve several purposes. They provide convenient scapegoats for real and perceived social problems. These scapegoats are easy to identify; they are non-human and therefore cannot answer back (though ChatGPT might be an exception here). Sensationalist technopanics are also perfectly suited to the clickbait era, even if such panics predate Web 2.0 - the “video nasty” panic of the 1980s is one example.

Ultimately, though, technopanics are unproductive. By their very nature, they have no interest in modelling constructive uses of technology, and they invite punitive and often unrealistic measures (such as deleting one's social media accounts, or banning AI from the classroom). Technological innovation is cast solely as a threat to human endeavour.

AI is, after all, a human creation. Its uses and misuses reflect and perpetuate human concerns, values, belief systems, and prejudices. As one recent study argues, addressing the ethical issues surrounding AI “requires educating ourselves at the first stage of our interaction with AI, whether we are developers or users learning about AI for the first time”.

A constructive way forward

With that in mind, let me outline some of the ways universities can respond constructively to the arrival of ChatGPT. Some of these are already being implemented. And all of them could readily be adapted to settings beyond the ivory tower - for example, primary and secondary schools.

  • Hold short information sessions about ChatGPT and similar AI tools, led by AI experts (academic researchers, industry professionals). These sessions could be offered separately to students and staff. They should provide an overview of how the technology works, along with its potential drawbacks and benefits. Addressing those benefits is important, because AI is not entirely without merit, and to suggest otherwise is - if not panicked - then naive. Ideally, these sessions would give students and staff a chance to voice their concerns and to learn something new. Members of both groups may already be using ChatGPT, a point that has yet to make headlines.
  • Develop clear and transparent institutional guidelines around whether and how students may use AI in producing work submitted for assessment.
  • Bring AI into the classroom as a means of enhancing learning, preparing students for the workforce, and imparting insights into the ethical use of AI. Tama Leaver makes this point in a blog post about the Western Australian Department of Education's decision to ban ChatGPT in public schools. Leaver is writing primarily about young people, though his point applies to students of all ages:

Education must equip our children with the vital skills to use, evaluate, and deploy generative AI and its outputs ethically. Our education system should not be so paranoid as to presume that every student wants to use these tools to cheat, leaving them to experiment at home behind closed doors.

  • Include mandatory ethics training in all degree programs, especially in first year. This training could take the form of a semester- or term-length course, or be integrated as a module into existing courses (for example, Introduction to Computer Science, or Introduction to Media and Communication). The decision to breach academic integrity - by purchasing an essay, say, or using a chatbot to write one - is at heart an ethical decision. It is a decision about what is right and what is wrong. The same is true of decisions to use technology for good or otherwise.

Each of these proposals has implications for universities' already stretched budgets, and for the limited time of students and scholars. Even the most charitably inclined AI researcher will not want to be endlessly invited to stand before their colleagues and hold forth about a chatbot, given the other pressing demands on their time and attention.

Nevertheless, these ideas are better than giving up and admitting defeat to our technological overlords.

Jay Daniel Thompson is Associate Professor of Professional Communication in the School of Media and Communication at RMIT University. His research focuses on promoting ethical online communication in an age of digital disinformation and online hostility. He is co-author of Content Production for Digital Media: An Introduction and Fake News in Digital Cultures.
