Britt O’Meara, Adjunct Faculty, English Department
The influence of artificial intelligence (AI) extends beyond academics, deeply affecting the emotional well-being of teachers and students. Educators may feel unsupported and drained when dealing with AI misuse, while students face stress from inconsistent policies across courses, all while being bombarded by social media messages that promote tools to evade detection and highlight AI's career relevance. But where are students supposed to pick up these skills if they are not learning them in college? And how are educators supposed to prepare students for an AI-literate future while balancing rigor and emotional well-being?
The solution lies in policies and practices that promote clarity, empathy, and collaboration. By centering the needs of both students and faculty, we can foster a learning environment where AI empowers rather than overwhelms.
What to do now: Review your course policies and consider how they address AI use. Are they clear, empathetic, and aligned with your learning outcomes?
Clear guidelines reduce confusion and build trust. Policies should define how and when AI can be used, and they should be linked to learning outcomes. Connecting policies to course outcomes keeps the focus on learning rather than policing.
For example, AI might be permitted for brainstorming or editing but not for generating complete assignments. Banning AI outright is impractical, because monitoring its use is impossible: AI detectors are unreliable and have been shown to unfairly flag English language learners, and alternatives such as requiring process documentation can be defeated by emerging tools that mimic human typing when text is pasted into a document. The Boise State AI in Teaching and Learning Committee, a group of faculty and staff from across the university, has developed sample statements about generative AI use that faculty may adopt in their syllabi.
Fostering open dialogue is essential. Students often see AI primarily as a tool for cheating, but conversations about its ethical and practical uses can shift this perception. Banning the technology shuts down any opportunity for open conversation; why would a student admit to cheating when the stakes are high? Safe spaces for discussion encourage responsible AI use and help students navigate challenges.
Rather than creating new rules, existing academic integrity policies can address AI misuse. These policies often prohibit submitting work that is not original, which covers outsourcing to AI. Boise State’s Academic Integrity policy explicitly addresses this.
We can foster well-being by creating policies that balance transparency, fairness, and support. By centering well-being in our policies and practices, we can turn these challenges into opportunities for growth and learning.
What to do next: Advocate for institutional resources that support faculty development and student guidance on AI. Faculty need spaces to share experiences and learn from one another. Professional development opportunities and peer support networks can help educators feel more confident and less isolated as they navigate these tools.
For more resources and support, contact the Center for Teaching and Learning (CTL).
References
Boise State University. (n.d.). Academic integrity. Boise State University Registrar. Retrieved January 2025, from https://www.boisestate.edu/registrar/general-information-and-policies/academic-integrity/

Coffey, L. (2024, February 9). Professors cautious of tools to detect AI-generated writing. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/02/09/professors-proceed-caution-using-ai

Incogniton Knowledge Center. (n.d.). https://incogniton.com/knowledge%20center/paste-as-human-typing-fingerprint/

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). Ethical concerns of AI-generated text classifiers for educators and researchers. arXiv. https://arxiv.org/abs/2304.02819