This week, I dove into some heavy but super relevant topics: artificial intelligence, academic integrity, and their overlap in education. I watched both guest talks, explored the environmental costs of AI through the UNEP article, and looked at real-world student concerns around using generative AI tools at university. As a computer science student and an executive at UVicAI, I found this hit close to home: while we love building cool things with AI, we also have to stay conscious of how and why we're using it. There's a lot of buzz around AI being a “learning partner,” but it comes with trade-offs.

Generative AI tools like ChatGPT are incredible for productivity and personalized support, but they aren't magic. As someone who's coded with and learned from AI, I've realized it's great at summarizing dense concepts or offering starter code, but I always double-check its output. The ethical use of AI in coding and writing is a conversation that can't be avoided, especially since we've all seen GitHub projects plastered with MIT Licenses, making it easier for developers to build openly on others' work. AI feels like an extension of that ethos, but it has to be used responsibly, especially when academic work is involved.
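To make that “double-check” habit concrete, here's a minimal sketch of what I mean (the helper below is a made-up example, not from any real project): if an AI tool hands me a small utility function, I run it against a few cases I can verify by hand before trusting it.

```python
# Suppose an AI assistant suggests this helper for removing duplicates
# from a list while keeping the original order (hypothetical example):

def dedupe_keep_order(items):
    """Return items with duplicates removed, preserving first-seen order."""
    seen = set()
    # set.add() returns None (falsy), so each new item is kept exactly once
    return [x for x in items if not (x in seen or seen.add(x))]

# Rather than pasting it in blindly, I check it against answers I already know:
assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_keep_order([]) == []
assert dedupe_keep_order(["a", "a", "b"]) == ["a", "b"]
print("all checks passed")
```

It takes a minute, and it keeps me in the loop as the one actually doing the thinking.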
Where things get tricky is academic integrity. Just because AI can help doesn't mean it should always be the go-to. In one of the talks, the concept of “cognitive offloading” came up, and it really made me pause: if we always rely on AI to do the thinking, are we really learning? At UVicAI, we emphasize AI safety and responsible use, so we encourage students to reach for tools like AI tutors only after they've tried the problem themselves. It's about supporting learning, not skipping it. Plus, not every instructor allows AI use, so respecting course guidelines is key.
Another concern that stuck with me was the environmental impact of AI. Data centers and the power they consume are no joke, especially when we're using AI for trivial things like generating memes or workout plans. The climate cost isn't usually front of mind when writing prompts, but it probably should be. I think this adds another layer to being a digitally responsible citizen: knowing when AI use is genuinely helpful versus just wasteful. And honestly, those of us pushing AI development forward need to lead by example here.
Some other videos I found to be great resources (for one reason or another!):