0. Quick Start: What This Is, What It Isn’t, and Ready-to-Copy Templates
Overview. This page is a practical guidance companion to broad AI principles. It is designed to help faculty, staff, students, and researchers make consistent decisions about when GenAI is appropriate, what must not be shared, and how to document use—without relying on unreliable “AI detectors” or trying to keep up with every new tool.
This document provides: a set of stabilizing guardrails and ready-to-copy templates for common university work, including teaching, research, administration, and public-facing communications.
This document does not provide: (1) a tool-by-tool approval list, (2) an endorsement that AI improves learning or productivity in all contexts, or (3) a detector-based enforcement framework.
Governance note — Legal review recommended. Several sections of this guidance touch on areas with significant legal implications, including student data privacy and FERPA compliance (Section 3), human subjects protections (Section 4), copyright and intellectual property (Section 5), accessibility obligations under the ADA and Section 504 of the Rehabilitation Act (Section 6), and employment and procurement considerations (Section 11). The committee recommends that the Faculty Senate request University Counsel review these sections before formal adoption to ensure alignment with federal and state law, institutional policies, and emerging regulatory frameworks. This is a governance best practice for any substantive policy guidance — not a caveat about the document's content.
Suggested consideration sequence. The committee recommends that the Senate consider this guidance in a sequence that establishes foundations first: begin with disclosure norms (§§ 1–2), data governance (§ 3), and oversight structures (§ 7) as an institutional baseline, then consider domain-specific sections (§§ 4–6, 8–8.5) incorporating reviewer feedback and legal review findings, followed by specialized topics (§§ 9–11). A detailed adoption sequence with risk implications is included in the Senate Brief (generated via the toolbar above). Full adoption within approximately two academic years is a reasonable target.
How to use this page
- Select your role using the Audience filter in the toolbar. Each section will display guidance tailored to your perspective.
- Start with Sections 0.5 and 1–4 for foundational context and core guardrails.
- Then navigate to the sections most relevant to your work using the links below.
- Use the ready-to-copy templates at the bottom of this section as starter language for syllabi, grants, and disclosures.
Navigate by section
Key definitions (to reduce ambiguity)
- AI (Artificial Intelligence): computational systems designed to perform tasks that typically require human cognition—including pattern recognition, prediction, language processing, and content generation. AI is a broad family of technologies, not a single tool. See § 0.5 for a detailed taxonomy.
- GenAI: a subset of AI that generates text, code, images, audio, video, or summaries in response to prompts.
- Substantive AI use: when AI meaningfully shapes content, analysis, interpretation, or final deliverables (not just spellcheck/autocomplete).
- Protected data: information classified as Internal, Sensitive, or High-Risk under the University's Data Classification Policy, including but not limited to data governed by FERPA, HIPAA, or IRB protocols, confidential or unpublished work, and proprietary vendor data. This guidance defers to the University's official classification framework rather than establishing independent data categories.
- Approved/enterprise tools: campus-vetted pathways (when available) that reduce risk around retention, training-on-your-data, and data exposure. This includes locally hosted models running on University infrastructure, which may reduce or eliminate data-sharing risks compared to cloud-based services.
Campus resources (what each is for)
- Illinois GenAI / campus on-ramp: tool landscape, training, and “safe default” guidance for getting started. genai.illinois.edu
- Digital Risk Management (System-level framing): risk categories, governance, and procurement-facing considerations. VPAA Digital Risk Management
- Library GenAI guide (teaching & learning): literacy, classroom approaches, citation/documentation norms, and critical evaluation of outputs. Library: Generative AI
- Library research-with-AI guide: research workflows, documentation practices, and responsible use when searching, synthesizing, or coding. Library: Research with AI
- GenAI research best practices: individual and organizational best practices for responsible AI use in research, informed by UIUC Library guidance and U of I System integrity policies. GenAI: Research Best Practices
- Illinois Extension AI guidelines: practical AI guidance for communications and outreach staff, including hallucination awareness, image verification, and data protection. Extension: AI Guidelines
Guiding principle. This guidance focuses on what can remain stable regardless of how quickly AI tools evolve: accountability, data governance, transparency norms, equity and workload protections, and safe procurement pathways.
Risk-Matched Verification Principle. The degree of AI involvement in any task should be matched by proportional verification, documentation, and human oversight. Routine AI-assisted editing requires minimal documentation; substantive AI shaping of research design, analysis, deliverables, or human-subject protections requires heightened transparency and validation. This principle applies across teaching, research, and administrative contexts and anchors the tiered disclosure framework throughout this guidance. This approach aligns with stage-based AI disclosure frameworks that map AI use across the research lifecycle and match verification rigor to AI involvement (AIR framework, Electv Training, 2026).
These principles are elaborated in the sections that follow, beginning with disclosure norms (§ 1) and proceeding through data governance (§ 3), oversight structures (§ 7), and workload protections (§ 8).
FAIR (Faculty Academic Integrity Reporting) data request mini-module
Use this text to request institutional data that would help the Senate community understand trends and avoid policy based on anecdotes. (Note: "FAIR" here refers to the campus Faculty Academic Integrity Reporting system, not the FAIR Data Principles—Findable, Accessible, Interoperable, Reusable—used in open-science contexts.)
Ready-to-copy policy templates (edit as needed)
Note: These templates are designed to be enforceable without relying on AI detectors, and to scale disclosure expectations based on risk and impact.