Weekly Reflection #3: Reflecting on AI, Learning, and Academic Integrity

This week, I dove into some heavy but super relevant topics: artificial intelligence, academic integrity, and their overlap in education. I watched both guest talks, explored the environmental costs of AI through the UNEP article, and looked at real-world student concerns about using generative AI tools at university. As a computer science student and an executive at UVicAI, this hit close to home: while we love building cool things with AI, we also have to stay conscious of how and why we're using these tools. There's a lot of buzz around AI being a "learning partner," but it comes with trade-offs.

Photo by Emiliano Vittoriosi on Unsplash

Generative AI tools like ChatGPT are incredible for productivity and personalized support, but they aren't magic. As someone who has coded with and learned from AI, I've realized it's great at summarizing dense concepts or offering starter code, but I always double-check its output. The ethical use of AI in coding and writing is a conversation that can't be avoided. We've all seen GitHub projects released under the MIT License, which makes it easy for developers to build openly on each other's work; AI feels like an extension of that ethos, but it has to be used responsibly, especially when academic work is involved.

This TEDx talk by Olivia Gambelin explores how the real threat of AI isn’t its intelligence, but the lack of ethical frameworks guiding its development. She emphasizes the importance of embedding human values into AI design to ensure it serves society responsibly and inclusively.

Where things get tricky is academic integrity. Just because AI can help doesn’t mean it should always be the go-to. In one of the talks, the concept of “cognitive offloading” was brought up, and that really made me pause. If we always rely on AI to do the thinking, are we really learning? At UVicAI, we emphasize AI safety and responsible use, so we encourage students to use tools like AI tutors after they’ve tried the problem themselves. It’s about supporting learning—not skipping it. Plus, not every instructor allows AI use, so respecting course guidelines is key.

Another concern that stuck with me was the environmental impact of AI. Data centers and the power they consume are no joke, especially when we're using AI for trivial things like generating memes or workout plans. The climate cost isn't usually front of mind when writing prompts, but it probably should be. I think this adds another layer to being a digitally responsible citizen: knowing when AI use is genuinely helpful versus just wasteful. And honestly, for those of us pushing AI development forward, we need to lead by example here.

Some other videos I found to be great resources (for one reason or another!):

I found this video by University of Alberta Student Life to be a fun watch!
This presentation given by Tristan Harris and Aza Raskin touches on really important topics. Check out the pinned comment for timestamps.
