With the release of artificial intelligence chatbot ChatGPT last November, the public got an opportunity to tap the power of a sophisticated large language model trained to generate human-like responses to requests and questions.
ChatGPT is now the fastest-growing consumer application in history, and Florida State University Information Science Professor Paul Marty is following its adoption closely. Marty says the technology abounds with potential — and a host of moral and ethical implications.
“It’s a tool that, if used well, could help us all become better teachers, writers and learners,” he said. “Like all technologies, it needs to be adapted to us and not the other way around. If the computer writes your presentation or your proposal for you, then that’s a worry. We should beware of tools so devoid of human interaction that the human is reduced to babysitting the machine as it works for us.”
Marty teaches an undergraduate honors course on the unintended consequences of information technology, such as artificial intelligence, and answered questions about ChatGPT.
WHAT DO YOU MEAN WHEN YOU SAY NEW TECHNOLOGIES LIKE CHATGPT PRESENT TRADEOFFS?
Think of the skills acquired throughout the history of humanity. We used to make fire from scratch, but how many of us need that skill in modern life? Few people now need to know how to preserve food to last through the winter, and with smartphones we don’t need to remember phone numbers anymore.
When used appropriately, these tools free us up to become much better at what we do, just as the adoption of calculators freed us to pursue more advanced math. But when something goes wrong with these tools, people may not know what to do without them. We need to understand the tradeoffs we are making — what we’ve gained and what we’ve given up.
IF CHATGPT ISN’T REALLY INTELLIGENT, WHAT IS IT?
The technology is kind of faking being intelligent. It analyzes vast amounts of data and puts strings of text together in a way that simulates intelligence. What is so impressive — and potentially frightening — about ChatGPT is that it quickly generates large quantities of legible, well-written and unique text.
Basically, this tool chooses the right word to put next in each sentence, sort of like autocomplete gone wild, but it doesn’t understand what it’s doing. It’s not actually intelligent, so it often produces text that is nonsense. In that sense, it reminds me of early computer translation tools where if you asked for the French translation of the word “president” it would give you the name of the actual president of France instead.
WHAT ABOUT USER CONCERNS REGARDING EMOTIONAL RESPONSES FROM CHATGPT?
ChatGPT is very sophisticated in the way it puts text together, so it can really play on our emotions. It’s important to remember it’s programmed to do that as a simulation of intelligence. I would tell anyone worried about emotional responses from ChatGPT that this technology is not sentient. It’s not alive. We haven’t got to that point yet. I don’t know how close we are to that, but I am confident it’s a long way off.
DO YOU WORRY ABOUT PLAGIARISM?
Florida State’s academic honesty policies cover this; you aren’t allowed to have someone else write your papers for you. Full stop. And that includes ChatGPT.
But there are gray areas. We allow students to use tools like spellcheckers and grammar checkers. So why not allow students to use AI tools like ChatGPT? Where do we draw the line between programs like Grammarly and ChatGPT? Is there anything wrong with students using an AI editor or proofreader, for example? But at what point does your paper stop being your paper? We need to work with our students to help them understand the appropriate use of these tools.
SO HOW DO COLLEGES AND UNIVERSITIES ADAPT?
It’s going to take time to adapt, but we shouldn’t hide from these technologies. We need to embrace them and figure out the right way to use them in the classroom. This will mean changing the way we think about education at the university level.
The bottom line for me is this: if our test questions or essay prompts can be answered satisfactorily by an AI, then we need to ask better questions. Think about assignments like asking our students to read an article and then write a short summary of it. Well, ChatGPT is really good at writing 200-word summaries of anything you want. If a computer can do this as well as a human, why are we asking our students to do it? Again, it’s about tradeoffs — what have we lost, and what have we gained?
We need to look at what ChatGPT can and cannot do and adapt our assignments to reflect that understanding and help our students get to the next level. ChatGPT is really good at writing at a cursory level, but it can’t produce the in-depth writing a human can, at least not in a way that makes any sense. So let’s use AI as a starting point and then work with our students to dissect the AI-written text, discuss what it got right and what it got wrong, and really dig into the topic.
After all, that’s what we want our students to be able to do when they graduate: to use the tools and technologies available to them to achieve a higher level of thinking and a higher level of writing. That should be the goal of a university education.
For more information, visit FSU’s School of Information.