AI Use is Growing

By Paul A. Djupe, Director of Data for Political Research

We’ve had access to large language model (LLM) AI chatbots for just over a year now, and we’re in our second semester of asking students about them. Last fall, when we knew nothing, everything was enlightening. At this stage, it’s important to assess how quickly attitudes and adoption are moving.

The 127 surveys asked the same set of questions in October 2023 (about 500 respondents) and late February 2024 (just over 400 respondents). A first set of questions asked whether students had used AI chatbots for any class-related activity. We were worried that respondents wouldn’t want to admit it, so the question included the caveat, “Note, this is not admitting evidence of academic misconduct as we do not know the kind of assignment or the class for which AI may have been used.”

As the following figure shows, 43 percent of October 2023 respondents said they didn’t use AI for any class-related task, a figure that shrank to 29 percent by late February/early March 2024. What went up? Basically everything. But the big winners were summarizing reading (29 to 39 percent), editing papers (16 to 22), and writing code (11 to 18). There were only small increases in students admitting that AI wrote a paper for them (7 to 11) or that they used it for background research (29 to 32 percent). No activity saw a majority of students engaging in it. Still, substantial portions of campus are using AI in consequential academic activities. Again, there isn’t necessarily anything wrong with these responses since we don’t know what class policies were governing AI use. I’ll bet there aren’t many that authorize writing a paper, but I also don’t know for sure.

One eye-popping finding from the October 2023 results (seen here) was that AI use is largely an international student phenomenon. The gaps were huge: only 16 percent of international students had not used AI, compared to 47 percent of domestic students. By the recent survey, AI use among international students was almost unanimous (just 4 percent said no), while 32 percent of domestics still hadn’t used it. The gap is closing. Otherwise, big gaps remain across the particular activities. For instance, just 9 percent of domestics used AI to write a paper compared to 30 percent of internationals. Majorities of internationals reported some activities (research and summarizing), but otherwise the particular uses are pretty diffuse.

I wonder if student beliefs and values about AI are changing along with its growing use. So, here’s a surprise. There has been modest growth in the sentiment that using AI undercuts the value of a college education. About 40 percent agreed or strongly agreed in October, but that bumped up to 45 percent by last week. It’s notable that a few more students strongly disagreed and fewer were on the fence (saying “neither”). We’ll have to watch this one carefully going forward for signs of polarization. This isn’t quite a governing value, but there is some constraint involved: those who agree use AI less than those who disagree (an average of 1 activity for those who agree vs. 2 for those who disagree).

One more for now. We’re still in the wild west stage of AI justice, and course policies are surely all over the place. We asked whether students agreed or disagreed that, “Those who use AI for a paper (against the wishes of their professors) are likely to get caught.” It’s surprising to find that 43 percent agreed in October and…wait for it…43 percent agreed in March. Again, somewhat fewer were on the fence and more disagreed. These beliefs are linked to results we’ve seen previously. International students are more likely to disagree that they’ll get caught (46 percent, about double the rate among domestics). And these views are linked with AI usage as well: those who agree that they’ll get caught used it for fewer activities (~1) than those who disagreed that they’ll get caught (~2.2). Another one to keep our eye on.

We’re at such an early stage of AI in higher ed that all of this is in a great deal of flux. Faculty are probably not going to standardize their policies, and students are likely to keep talking and sharing their experiences. So, I presume we’re moving toward greater AI integration into academic activities. What buoys my spirits is how few students are using it to write papers. Writing is so deeply connected to thinking that I stand firmly on the side of those who say that AI use demeans a college education. However, some activities probably can be outsourced, and even I use AI a little while I’m writing (Grammarly, for instance). The best we can do for now is keep talking about it and work toward clarifying our values, and perhaps our policies, governing its use.

Paul A. Djupe is a currently hobbled local cyclist who runs the Data for Political Research minor. He started onetwentyseven.blog a few years ago in a bid to subsidize collective action and spread accurate knowledge about campus and what goes on there. He also writes about religion and politics in the US.
