1 Why You Should Bring AI Into the Classroom
Student Voices at the Frontier: Navigating AI in Higher Education
“We are living through an incredible transformation… No one understands the journey of navigating this profound world of AI as much as students who are going through the same thing.”
Introduction: The Missing Perspective
In Fall 2024, sixteen teams of student researchers at John Jay College of Criminal Justice embarked on a unique project: documenting their lived experiences as they navigated the integration of artificial intelligence into higher education. Their insights, captured through podcasts and collaborative analysis, reveal a perspective largely missing from academic discourse: the voice of students themselves.

As one student research team put it, “As students, we are the right voice for this conversation and with how progressive technologies have become for our generation. Now is the perfect time for us to explore this idea.” This chapter brings their collective wisdom to the forefront, highlighting key findings that can help learning communities better understand and address the complex realities of AI in education. To see the full project and hear all of the voices, visit the project website, where you’ll also find all of the podcasts the teams created.
Living Between Worlds: The Current Reality
The student researchers consistently described feeling caught between worlds—between traditional educational expectations and the new realities of AI-enhanced learning. Their research revealed that most students use AI tools daily, primarily for brainstorming, structuring assignments, and understanding complex concepts. As one podcast team observed:
“AI is not a replacement for teachers or education but a modern tool to help and assist students and education overall. Research has proven that a student’s perspective on AI is as productive as having an assistant.”
However, this adoption isn’t without conflict. Students expressed deeply mixed feelings about AI, recognizing both benefits and potential harms:
“It’s a 50/50 chance when it comes to the usage of AI, students really do use AI (ChatGPT) as a way to have outlined/brainstorm ideas but on the other hand, it can negatively impact skills that can potentially be lost.”
Importantly, the research identified the COVID-19 pandemic as a critical turning point that accelerated AI adoption in education:
“The turning point you may ask? We all agreed that it was COVID-19 and there is an immediate dependency on switching online. From there, we never switched back.”
This sudden shift to online learning, combined with reduced access to traditional support systems, created conditions where AI tools became essential rather than optional for many students.
The Trust Crisis: An Unexpected Finding
Perhaps the most surprising discovery from the student research was the extent to which AI has eroded trust between students and faculty. The podcasts repeatedly highlighted a growing cycle of suspicion that damages the learning environment:
“Every professor would always emphasize ‘well I know when you’re using AI’ or ‘I could tell when you’re using AI’, but in reality they actually don’t.”
The student researchers reported numerous instances of false accusations:
“Even some students that don’t use them are accused of being used, of using things like ChatGPT…and it makes you feel bad because you’re actually trying to learn.”
This breakdown in trust has severe consequences for learning and well-being:
“And I have like seen people and talked to people who have been accused of using AI and it really upset them. Imagine putting your heart and soul into an assignment that you did on your own that you thought was like the best thing and then your professor’s like, ‘Yeah, you’re getting the F because this is AI.’ Yep. That’s crazy… if they see like a something that’s like really well done and people put a lot of effort in, they might just assume automatically it’s AI.”
The students also identified a troubling double standard where faculty prohibit AI use while using similar tools themselves:
“I think that professors are now very big on scanning papers… which is crazy because we have this tool now that can do the work for us and sound better than we can… And then on the other hand, we have professors who are or teachers who are actually using it to be productive.”
Finding Balance: Student Wisdom on Responsible AI Use
Despite the challenges, the student researchers offer thoughtful guidance on finding balance in AI use. Their collective wisdom emphasizes moderation, learning-focused applications, and maintaining core skills:
“Everything in moderation… it should be mainly used for assistance and I think that you really encapsulated the idea of free thinking and everything that requires a human to do.”
They recognize the potential value of AI as a learning aid when used appropriately:
“I’ve used AI in general to kind of like format outlines for myself…what I should expand on, which paragraph should talk about what topic. I think ChatGPT in that sense and maybe other AIs… is definitely a helpful thing.”
But they also warn about the hidden costs of overreliance:
“I think the time that it takes to craft something also… develop certain creative writing skills, you develop you know sort of like a stamina for getting work done, for being able to write something, being able to brainstorm so that time that you are missing out on when you just leave AI to completely do your work for you kind of harms you too in a way.”
Recommendations for Faculty and Institutions
The student researchers don’t simply identify problems—they offer concrete recommendations for faculty and institutions navigating this new terrain. These include:
1. Open communication rather than prohibition
“I feel like it should stop being like the elephant in the room and be like put down by professors like, oh, don’t use AI to complete your work, blah blah blah, because in reality it’s actually here and they know.”
2. Teaching responsible AI use
“Professors should give their students smarter ways to use AI like other alternatives so they’re not just plagiarizing. For example, one of my professors gave us a list of ways that we can use AI where we’re not making AI do the work but like we are using AI to reword our work to make it sound more informational or just more professional.”
3. Adapting curriculum and assessment
“I think that it would be better if professors implemented tasks that allow us students to think deeply. So, I feel like assignments that would allow students to use creativity would be more beneficial because if assignments are solely based on just regurgitating information, then AI will just do it faster.”
4. Building faculty AI literacy
“As for educators they should look into the currently available AI and try to create a basic understanding they have to learn how to start being open-minded to it and actually learn how to use AI.”
5. Acknowledging student realities
“Professors should know that students will use AI, especially since technology is growing so fast that it’s going to be inevitable. Like people are going to use it.”
Messages for AI Developers
The student researchers also directly address AI developers, highlighting the responsibility they bear in shaping educational experiences:
“I feel like they also need to be held accountable for the ethics that they are lacking when they’re creating these tools because as we said earlier before, there’s students out here that are not using AI but are getting in trouble for doing so.”
They suggest specific feature improvements:
“When someone is looking for a solution or answer to any topic, chemistry, biology, math, whatever it is, in order for you to see the answer, you have to actually go through a step by step and understand it before you’re just given the answer.”
And emphasize the broader societal implications:
“There is a factor of social responsibility. Basically there is an impact of automation of jobs that are being addressed with AI right – there is this fear that AI is going to replace a lot of jobs. So whoever is creating this software to automate part of these manual processes, they need to come up also with ways to train individuals with upskilling initiatives.”
Conclusion: Building Better Educational Futures Together
The insights from these student researchers offer a unique window into this pivotal moment in educational history. They demonstrate that students are not passive recipients of educational technology but thoughtful participants actively navigating complex changes.
As we move forward, their voices must be central to conversations about AI in education. Their perspectives reveal that prohibiting AI is neither realistic nor beneficial. Instead, we need collaborative approaches that recognize the reality of AI’s presence while preserving the core values of education.
The students’ research makes clear that we need to rebuild trust between faculty and students, develop clear guidelines for ethical AI use, and create learning experiences that leverage AI’s strengths while continuing to develop essential human capabilities.
Most importantly, this work reminds us that the students most affected by these technological changes have valuable wisdom to contribute. By listening to their experiences and insights, we can navigate this transformation more thoughtfully and effectively.
As one research team concluded:
“While we all see AI in education in different ways, we all agree that if students and professors can come together to find a balance that works for both, it would make the topic much less intimidating. This collaboration could bridge the gap between two generations’ approaches to learning.”
It is in this spirit of collaboration and mutual respect that we can build educational futures that serve all learners well.