Boise State University maintains a number of policies that safeguard institutional data, and university faculty, staff, students, and affiliates must follow them. Anyone using generative AI in their work should consider what data they are using and whether university policy prohibits, or otherwise cautions against, that use.
Boise State University supports the responsible use of AI tools and has approved the education editions of the following: Zoom AI Companion, Google Gemini, Gemini for Google Workspace, and OpenAI ChatGPT. These tools have been vetted to meet the University's standards for security, privacy, compliance, and legal requirements. Other AI tools must first be submitted for review through the appropriate University processes, including Procurement, SARB, and legal review, and must receive approval in accordance with those procedures. To protect the safety and integrity of University data and systems, please do not use unapproved AI tools on the University's network or devices, or with your Boise State credentials (your Boise State username and password).
Even if your use is authorized, you should not enter personally identifiable information or confidential, sensitive, private, or restricted data into any generative AI tool or service.
As with everything you do at the university, you must follow Idaho State Board of Education and University policies when using generative AI tools and services.
Faculty and staff are encouraged to complete the self-paced Boise State University AI Training before using any university-endorsed AI tool.
General Policies Relevant to AI Use
Prohibited Use and Relevant Policies
In addition to violating university policies, many of the above uses also violate generative AI providers’ policies and terms.
Incident Reporting Policies
Any member of the university community who learns of a potential breach of data protection or confidentiality—including through the use of generative AI—must report the incident.
- University Policy 2020 (Student Code of Conduct) – Suspected plagiarism, such as failing to properly acknowledge or cite works or ideas produced with generative AI, may be reported through the Student Conduct Report form.
- University Policy 2250 (Student Privacy and Release of Information) – Suspected FERPA violations should be reported to the Office of Institutional Compliance and Ethics at complianceandethics@boisestate.edu or (208) 426-1258. Reports may also be made through the university’s Compliance Reporting Hotline.
- University Policy 7030 (Reporting Waste and Violations of Law, Regulation, or University Policy) – Employees may report, in good faith, any violation of law, regulation, or university policy without fear of adverse action or reprisal using the reporting guidelines outlined in University Policy 7030.
- University Policy 8060 (Information Privacy and Data Security Policy) – Suspected security breaches as defined in University Policy 8060 should be reported to the Help Desk at (208) 426-4357. The Information Technology Incident Response Procedure must also be followed.
Trustworthy AI
For uses of generative AI that are not prohibited, university faculty, staff, students, and affiliates can help protect themselves and others by choosing tools and services that exhibit the National Institute of Standards and Technology’s (NIST’s) characteristics of trustworthy AI.
Additional AI Resources
- Artificial Intelligence in Education
- ChatGPT Terms of Use
- Google Gemini Terms of Service (references Google’s Terms of Service)
- Generative Artificial Intelligence and Copyright Law (Congressional Research Service)