Version: v2.1 | Last updated: February 2026

Introduction & Context

Purpose. This guidance supports shared governance decisions about artificial intelligence (AI) and data-intensive educational technology (ed-tech) at the University of Illinois Urbana-Champaign. It offers verbatim draft policy language, rationale, examples, and implications for alternative approaches so the Senate can adopt, adapt, or reference text consistently across units.

Personalize your view. Use the Audience filter in the toolbar above to see guidance tailored to your role. Each section includes role-specific callouts explaining what the guidance means for you—whether you’re faculty designing a course, a student navigating expectations, a researcher designing a study or managing data, or staff evaluating a tool. Print and Word export always include the full document for all audiences.

A note on roles: These audience categories describe roles, not identities. Many community members hold multiple roles—faculty are typically both instructors and researchers; graduate students are both students and researchers. Consult the sections relevant to your current activity rather than a single category.

Context. AI and ed-tech now permeate teaching, research, advising, and administration. There is excitement and institutional pressure around the prospect of these systems enhancing productivity and learning, alongside substantial disagreement and uneven evidence across contexts. At the same time, they pose risks for academic integrity, authorship, privacy, equity, accessibility, workload, and intellectual property. Consistent with UIUC's commitment to shared governance, transparency, and student success, these recommendations prioritize informed choice, accountability, and proportional safeguards.

Disclaimer. This page provides guidance and exemplar language for deliberation. It is not binding University policy unless and until formally adopted by the Senate.

Overview. This page is a practical guidance companion to broad AI principles. It is designed to help faculty, staff, students, and researchers make consistent decisions about when GenAI is appropriate, what must not be shared, and how to document use—without relying on unreliable “AI detectors” or trying to keep up with every new tool.

This document provides: a set of stabilizing guardrails and ready-to-copy templates for common university work, including teaching, research, administration, and public-facing communications.

This document does not provide: (1) a tool-by-tool approval list, (2) an endorsement that AI improves learning or productivity in all contexts, or (3) a detector-based enforcement framework.

Governance note — Legal review recommended. Several sections of this guidance touch on areas with significant legal implications, including student data privacy and FERPA compliance (Section 3), human subjects protections (Section 4), copyright and intellectual property (Section 5), accessibility and nondiscrimination obligations under the ADA and Title VI (Section 6), and employment and procurement considerations (Section 11). The committee recommends that the Faculty Senate request University Counsel review these sections before formal adoption to ensure alignment with federal and state law, institutional policies, and emerging regulatory frameworks. This is a governance best practice for any substantive policy guidance, not a caveat about the document's content.

Suggested consideration sequence. The committee recommends that the Senate consider this guidance in a sequence that establishes foundations first: begin with disclosure norms (§§ 1–2), data governance (§ 3), and oversight structures (§ 7) as an institutional baseline, then consider domain-specific sections (§§ 4–6, 8–8.5) incorporating reviewer feedback and legal review findings, followed by specialized topics (§§ 9–11). A detailed adoption sequence with risk implications is included in the Senate Brief (generated via the toolbar above). Full adoption within approximately two academic years is a reasonable target.

How to use this page

  • Select your role using the Audience filter in the toolbar. Each section will display guidance tailored to your perspective.
  • Start with Sections 0.5 and 1–4 for foundational context and core guardrails.
  • Then navigate to the sections most relevant to your work using the links below.
  • Use the ready-to-copy templates at the bottom of this section as starter language for syllabi, grants, and disclosures.


Key definitions (to reduce ambiguity)

  • AI (Artificial Intelligence): computational systems designed to perform tasks that typically require human cognition—including pattern recognition, prediction, language processing, and content generation. AI is a broad family of technologies, not a single tool. See § 0.5 for a detailed taxonomy.
  • GenAI: a subset of AI that generates text, code, images, audio, video, or summaries in response to prompts.
  • Substantive AI use: when AI meaningfully shapes content, analysis, interpretation, or final deliverables (not just spellcheck/autocomplete).
  • Protected data: information classified as Internal, Sensitive, or High-Risk under the University's Data Classification Policy, including but not limited to data governed by FERPA, HIPAA, or IRB protocols, confidential or unpublished work, and proprietary vendor data. This guidance defers to the University's official classification framework rather than establishing independent data categories.
  • Approved/enterprise tools: campus-vetted pathways (when available) that reduce risk around retention, training-on-your-data, and data exposure. This includes locally hosted models running on University infrastructure, which may reduce or eliminate data-sharing risks compared to cloud-based services.


Guiding principle: This guidance focuses on what can remain stable regardless of how quickly AI tools evolve: accountability, data governance, transparency norms, equity and workload protections, and safe procurement pathways.

Risk-Matched Verification Principle. The degree of AI involvement in any task should be matched by proportional verification, documentation, and human oversight. Routine AI-assisted editing requires minimal documentation; substantive AI shaping of research design, analysis, deliverables, or human-subject protections requires heightened transparency and validation. This principle applies across teaching, research, and administrative contexts and anchors the tiered disclosure framework throughout this guidance. This approach aligns with stage-based AI disclosure frameworks that map AI use across the research lifecycle and match verification rigor to AI involvement (AIR framework, Electv Training, 2026).

These principles are elaborated in the sections that follow, beginning with disclosure norms (§ 1) and proceeding through data governance (§ 3), oversight structures (§ 7), and workload protections (§ 8).

FAIR (Faculty Academic Integrity Reporting) data request mini-module

Use this text to request institutional data that would help the Senate community understand trends and avoid policy based on anecdotes. (Note: "FAIR" here refers to the campus Faculty Academic Integrity Reporting system, not the FAIR Data Principles—Findable, Accessible, Interoperable, Reusable—used in open-science contexts.)

Draft request to FAIR (Faculty Academic Integrity Reporting)
Requested FAIR (Faculty Academic Integrity Reporting) summary metrics (aggregated; no identifying data):
  1. Is the overall number of reported academic integrity cases increasing in a way that correlates with the proliferation of generative AI (e.g., by term since 2019/2020)?
  2. To what extent do GenAI-related cases appear to replace prior forms of misconduct versus adding to them?
  3. For cases flagged as GenAI-related, what categories are most common (e.g., writing, coding, translation, problem sets, exams)?
  4. What evidence types are currently used in GenAI-related cases (e.g., admission, process artifacts, version history), and what role (if any) do “AI detectors” play?
  5. What proportion of GenAI-related cases result in findings of responsibility, and are outcomes changing over time?

Ready-to-copy policy templates (edit as needed)

Syllabus Template A: “No AI” (assessment-focused)
In this course, generative AI tools (e.g., ChatGPT, Claude, Copilot, image/video generators) are not permitted for graded work unless explicitly authorized by the instructor for a specific assignment. If you are unsure whether a tool or use-case is allowed, ask before submitting. Unauthorized use may be treated as an academic integrity violation.
Syllabus Template B: “Some AI allowed with disclosure”
Generative AI may be used for limited support (e.g., brainstorming, outlining, grammar edits) only when you disclose your use. Your disclosure must include: (1) the tool name and version (if known), (2) what you used it for, and (3) either the prompts used or a statement that prompts can be provided upon request. You remain responsible for accuracy, citations, and compliance with course rules.
Syllabus Template C: “AI encouraged with citation-style attribution”
Generative AI is permitted and may be encouraged in this course when used transparently. If you use AI, you must include an AI citation/attribution: tool + purpose + prompts (or prompts available upon request). If you include verbatim AI output, place it in quotation marks or a block quote. You remain responsible for the work’s accuracy and for verifying sources and claims.
Research / Grant Disclosure (faculty & staff)
AI Use Statement (template — adapt as needed): We used [TOOL] for [PURPOSE] in support of [TASK]. No Internal, Sensitive, or High-Risk data (as defined by the University's Data Classification Policy) were entered into non-approved systems. Prompts/inputs and major outputs can be provided upon request to support transparency and auditability (noting that GenAI outputs are non-deterministic and may vary across sessions and tool versions). Note: This template is not mandatory for all grants or publications. It is provided as starter language where AI disclosure is required or prudent. Disclosure requirements vary by sponsor, journal, and professional society; researchers should defer to applicable funder, publisher, or institutional policies and consult Sponsored Programs Administration (SPA) when relevant.

Note: These templates are designed to be enforceable without relying on AI detectors, and to scale disclosure expectations based on risk and impact.

Much of the current conversation about AI focuses on generative chatbots like ChatGPT, Claude, and Gemini. However, AI is not one thing—it is a broad and evolving family of technologies, many of which have been embedded in university operations for years. Understanding what AI is and how it has developed helps the campus community make more informed decisions about when, whether, and how to engage with these tools.

What do we mean by "artificial intelligence"?

At its broadest, artificial intelligence refers to computational systems designed to perform tasks that typically require human cognition—such as pattern recognition, prediction, language processing, decision-making, and content generation. AI is not a single technology but a spectrum of approaches, and the distinctions among them matter for governance.

A working taxonomy of AI types

Working Taxonomy of AI Types, Campus Examples, and Governance Considerations

Rule-Based / Expert Systems, 1950s–present
  • What it does: Follows hand-coded rules and decision trees to solve structured problems.
  • Campus examples: Degree audit systems, library catalog search logic, financial compliance checks, spam filters, spelling and grammar checkers.
  • Key governance considerations: Transparent and auditable, but rigid. Limited ability to handle ambiguity or novel situations.

Machine Learning (ML), 1980s–present
  • What it does: Learns patterns from data to make predictions or classifications without being explicitly programmed for each case.
  • Campus examples: Admissions scoring models, enrollment prediction, plagiarism detection (pre-GenAI), research data analysis, recommendation engines.
  • Key governance considerations: Can encode historical biases present in training data. Requires ongoing evaluation for fairness and accuracy across demographic groups.

Deep Learning / Neural Networks, 2010s–present
  • What it does: Multi-layered neural networks that process complex data (images, audio, video, text) and identify intricate patterns.
  • Campus examples: Computer vision in research labs, speech-to-text transcription, medical imaging analysis, accessibility tools (e.g., automatic captioning).
  • Key governance considerations: Often operates as a "black box": high accuracy but limited explainability. Training requires large datasets and significant computational resources.

Generative AI (GenAI), 2022–present at scale
  • What it does: Produces new text, code, images, audio, or video in response to user prompts using large language or diffusion models.
  • Campus examples: ChatGPT, Claude, Copilot, Gemini, DALL-E, Midjourney, Suno. Used for drafting, brainstorming, coding, summarizing, and content creation.
  • Key governance considerations: Output can be fluent but inaccurate ("hallucination"). Raises questions about authorship, intellectual property, data privacy, and dependency. This is the category that prompted most of this guidance document.

Agentic AI, emerging 2024–present
  • What it does: AI systems that autonomously plan, execute multi-step tasks, browse the web, manage files, interact with other software, and make decisions with minimal human oversight.
  • Campus examples: AI research assistants that autonomously search and synthesize literature, automated code review pipelines, scheduling and workflow agents, AI-driven procurement tools.
  • Key governance considerations: Creates new accountability questions: who is responsible when an autonomous agent takes an action? Governance frameworks designed for prompt-and-response GenAI may not cover autonomous decision-making. This is an area the committee is actively monitoring.

Note: Bias and fairness considerations apply across all AI categories listed above, not only machine learning or generative models. Even rule-based systems can embed biased assumptions, and bias extends beyond demographic contexts to include any systematic distortion relevant to the domain of use.
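To make this note concrete, the sketch below shows one minimal way to check a predictive model's accuracy across subgroups, in the spirit of the "ongoing evaluation for fairness and accuracy" flagged for machine learning systems above. It is an illustrative Python sketch only: the column names, toy data, and the accuracy-gap threshold are assumptions, not campus standards or an endorsed fairness methodology.

    # Minimal sketch: per-group accuracy for a predictive model's evaluation data.
    # Column names ("group", "label", "prediction") and the gap threshold are
    # illustrative assumptions, not University standards.
    import pandas as pd

    def accuracy_by_group(df: pd.DataFrame) -> pd.Series:
        """Return the share of correct predictions within each subgroup."""
        correct = df["label"] == df["prediction"]
        return correct.groupby(df["group"]).mean()

    def flag_disparities(acc: pd.Series, max_gap: float = 0.05) -> list[str]:
        """List groups whose accuracy trails the best-performing group by more than max_gap."""
        best = acc.max()
        return [group for group, value in acc.items() if best - value > max_gap]

    # Toy evaluation data; in practice this would be a held-out evaluation set.
    eval_df = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B", "C", "C"],
        "label": [1, 0, 1, 1, 0, 1, 0],
        "prediction": [1, 0, 0, 1, 1, 1, 0],
    })
    per_group = accuracy_by_group(eval_df)
    print(per_group)                     # accuracy by subgroup
    print(flag_disparities(per_group))   # subgroups that warrant closer review

A single overall accuracy number can mask exactly these differences, which is why the governance column above calls for ongoing, group-aware evaluation rather than one-time validation.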

Why this matters for governance

The university has used AI-powered systems for decades in admissions, advising, library services, IT security, research computing, and more. What changed with generative AI is not the presence of AI on campus but the interface: for the first time, any person with a web browser can interact directly with a powerful AI system, using natural language, with no technical training required. This accessibility is simultaneously GenAI's greatest promise and its greatest risk.

Effective governance requires distinguishing among AI types because the risks and benefits differ. A rule-based degree audit raises different concerns than a generative chatbot drafting a student's thesis. A machine learning model predicting enrollment raises different equity questions than an agentic system autonomously executing administrative tasks. This guidance focuses primarily on generative AI, but the principles of transparency, accountability, and human oversight apply across the spectrum—and the university should expect the governance challenges to expand as agentic AI matures.

What This Means for Faculty & Instructors

Understanding the AI taxonomy helps you make informed decisions about which tools are appropriate for your courses and whether AI-assisted approaches align with your learning objectives. When a student uses "AI," they might mean anything from grammar check to full text generation—this framework gives you shared vocabulary to set precise expectations in your syllabi and assignments.

What This Means for Students

AI isn't just chatbots. Your university has been using AI for years—in advising tools, library search, spam filters, and more. Understanding the different types helps you recognize when you're interacting with AI (sometimes without realizing it) and when disclosure is expected. The rules around a grammar checker are different from the rules around having a chatbot write your essay.

What This Means for Researchers

Your research may already use AI—machine learning for data analysis, NLP for text processing, computer vision for imaging, or code generation and testing. This taxonomy helps you articulate which AI category your tools fall into when writing IRB protocols, methods sections, and grant proposals. It also helps you anticipate governance requirements as agentic AI enters research workflows.

What This Means for Staff & Administration

Many administrative systems already incorporate AI (enrollment prediction, financial compliance, HR screening). This taxonomy helps you identify where AI is already operating in your workflows, understand emerging tools being proposed for your unit, and ask informed questions during procurement and implementation discussions.

A note on terminology

Throughout this document, "AI" refers to the full spectrum unless otherwise specified. "GenAI" refers specifically to generative AI tools. When governance implications differ by AI type, we note this explicitly.

General University Policy

Norm to adopt: Encourage transparency about substantive GenAI use, using a tiered model that scales expectations without relying on AI detectors. The goal is to reduce confusion about authorship and accountability while keeping disclosure realistic and enforceable.

Tiered Disclosure / Citation Model (operational default)

Tiered Disclosure and Citation Model for AI Use

Tier 0: Ambient/embedded AI
  • Typical examples: Autocomplete, spell/grammar check, search-engine summaries, basic accessibility features.
  • Suggested transparency expectation: No disclosure expected. Treat as “background infrastructure,” but remain cautious about accuracy and bias.

Tier 1: Non-substantive assistance
  • Typical examples: Brainstorming, outlining, formatting, translation drafts, grammar edits.
  • Suggested transparency expectation: Disclose tool + purpose (e.g., “Used [tool] for grammar/outline assistance”). Prompts not usually required.

Tier 2: Substantive contribution / high stakes
  • Typical examples: Generating or rewriting substantial text/code, summarizing sources, analyses, grading/feedback drafts, policy language, grant/manuscript drafting.
  • Suggested transparency expectation: Disclose tool + purpose + inputs/prompt(s) or “prompts available upon request.” If verbatim output is used, quote or clearly mark it. Humans remain accountable.

A note on enforcement: Detection of AI use is unreliable; this tiered framework is designed to function through disclosure norms, assignment design, and process evidence—not through “AI detectors” as a primary enforcement mechanism.

What counts as “substantive use” (quick rubric)

  • More likely non-substantive: grammar fixes, reformatting, short brainstorming prompts, generating a checklist.
  • Often substantive: drafting paragraphs, producing code used in production, summarizing readings for an assignment, creating study guides, generating peer-review language.
  • High-stakes substantive: grant proposals, official communications, hiring/performance evaluations, student grading decisions, peer review or evaluation of others' work (see § 4 on federal restrictions), analysis involving Internal, Sensitive, or High-Risk data, or any use in which AI substantively shapes human-subject protections, informed consent, methodology, or risk disclosures. (Note: Routine editing or readability adjustments to IRB materials are not categorically high-stakes; risk increases when AI shapes the substance of protections, consent language, or study design.)
What This Means for Faculty & Instructors

You set the disclosure expectations for your courses. Use the tiered model above to calibrate: decide which tier of AI use is acceptable for each assignment, state it in your syllabus, and tell students what disclosure looks like. You are also expected to disclose your own substantive AI use in materials you produce (e.g., AI-drafted exam questions, AI-assisted feedback).

  • Use the ready-to-copy syllabus templates in Section 0 as a starting point.
  • Be specific: "AI is not allowed" is less clear than "Generative AI may not be used for the final essay but may be used for brainstorming in the pre-writing assignment."
  • Consider discussing disclosure norms on the first day of class—this reduces confusion and integrity incidents.
What This Means for Students

When in doubt, disclose. If your instructor hasn't specified an AI policy, ask before submitting work that involved AI assistance. Disclosure protects you—undisclosed AI use can be treated as an academic integrity violation even if you didn't intend to cheat.

  • Tier 0 (spell check, autocomplete): No disclosure needed.
  • Tier 1 (brainstorming, outlining, grammar edits): Note the tool and what you used it for.
  • Tier 2 (drafting text, generating code, summarizing sources): Full disclosure—tool, purpose, and prompts (or "prompts available upon request").
  • If you include AI-generated text verbatim, put it in quotation marks.
What This Means for Researchers

Disclosure in research serves transparency and reproducibility. If AI meaningfully shaped your analysis, writing, or methodology, document it in your methods section, grant narrative, or supplementary materials. Many journals now require AI transparency statements—proactive disclosure during the research process saves time at submission.

  • Use the Research/Grant Disclosure template in Section 0.
  • For AI-assisted literature reviews or coding, maintain a log of prompts, outputs, and your editorial decisions.
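For the log suggested above, one lightweight option is an append-only JSON Lines file kept alongside the project. The sketch below is a minimal illustration; the file name, field names, and example entry are assumptions, not a required campus format.

    # Minimal sketch of an append-only prompt/output log for AI-assisted research work.
    # The file name, fields, and example values are illustrative assumptions.
    import json
    from datetime import datetime, timezone

    LOG_PATH = "ai_use_log.jsonl"  # hypothetical file name kept with the project

    def log_ai_use(tool: str, purpose: str, prompt: str,
                   output_summary: str, decision: str) -> None:
        """Append one record: what was asked, what came back, and what you did with it."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,                      # tool name and version, if known
            "purpose": purpose,                # why AI was used at this step
            "prompt": prompt,                  # the input provided
            "output_summary": output_summary,  # brief summary or pointer to the saved output
            "decision": decision,              # adopted / revised / rejected, and why
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example entry (values are illustrative):
    log_ai_use(
        tool="[TOOL], version if known",
        purpose="Screen abstracts for a literature review",
        prompt="List the main methods used in the following abstracts ...",
        output_summary="Candidate method categories; full output saved to outputs/screening.txt",
        decision="Adopted three categories after checking against the original papers; rejected two",
    )
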
What This Means for Staff & Administration

Administrative AI use also warrants transparency, particularly for communications, reports, policy drafts, or decisions that affect others. If you use AI to draft official communications, generate reports, or support decision-making, note it in your records—especially for documents subject to FOIA or audit.

  • Internal memos or talking points drafted with AI: note in file or footer.
  • Student-facing communications: review carefully before sending—AI can produce confidently wrong information.

College / Department Implementation

No-AI: "Use of generative AI tools in coursework, grading, or scholarship is prohibited unless specifically authorized in writing by the instructor or supervisor."
Balanced-AI: "AI tools may be used for brainstorming, editing, or coding assistance with transparent disclosure. Core intellectual or analytic work must be performed by the author(s)."
AI-Forward: "Thoughtful integration of AI is encouraged for creativity and efficiency. Users must disclose methods and evaluate accuracy and bias before dissemination."

Suggested Implementation (syllabus/handbook)

"Students and faculty are expected to disclose any use of AI tools (e.g., ChatGPT, Claude, Copilot, Gemini) by noting the tool, date, and role in the work. Failure to disclose constitutes a breach of academic integrity."

Rationale

Transparent disclosure preserves authorship clarity, supports evaluation of learning and scholarship, and reduces confusion when automated systems influence wording or structure. Different disciplines have varying tolerance for AI assistance, and clear disclosure allows instructors and supervisors to calibrate expectations appropriately.

Example Scenarios

LAS (Linguistics): A student uses an AI assistant to generate initial phonetic transcription suggestions, then manually verifies each symbol against audio recordings and discloses "AI-assisted transcription generation; all symbols human-verified."

Grainger (Computer Science): A graduate student employs GitHub Copilot for boilerplate code in a machine learning pipeline, noting in the project README: "Copilot v1.156, October 2025, used for data preprocessing scaffolding; all algorithms and validation logic authored independently."

ACES (Agricultural Sciences): A faculty member uses AI to improve grammar in a grant proposal but discloses this in the submission notes to ensure compliance with funder transparency requirements.

Implications

  • Without disclosure: Ambiguity about authorship, uneven expectations across courses/units, and potential academic integrity violations.
  • With disclosure: Enables fair evaluation, consistent norms, and informed assessment without prohibiting beneficial assistance.
  • Cross-disciplinary considerations: Engineering and computer science may normalize AI coding assistants, while humanities emphasize distinct human voice; disclosure allows appropriate contextualization.

General University Policy

AI systems shall not fabricate or alter data, citations, or peer-review responses. Authorship requires meaningful intellectual contribution that cannot be delegated to software. All work—student, staff, or faculty—must clearly distinguish between human and AI-generated content.

What This Means for Faculty & Instructors

You define what constitutes acceptable AI use in your courses—and you're the front line for setting clear expectations. The strongest protection against AI-related integrity incidents is assignment design that makes AI misuse impractical or obvious, not detection software.

  • Design assignments that require personal reflection, in-class components, iterative drafts, or oral defense.
  • AI detectors produce false positives (disproportionately flagging non-native English speakers) and false negatives. Do not rely on them as evidence.
  • If you suspect undisclosed AI use, focus on process evidence (drafts, version history, discussion of reasoning) rather than detector output.
What This Means for Students

Your degree certifies your knowledge and abilities. Using AI to bypass the learning process undermines the value of your credential to future employers, graduate programs, and yourself, even if it earns the grade.

  • Always check the syllabus for each course's AI policy. They vary.
  • If an assignment is meant to develop a skill (analytical writing, problem-solving, coding), doing it yourself is the point—even if it's harder.
  • Verify every claim and citation. AI fabricates references that look real. Submitting fabricated citations is a serious integrity violation.
  • When AI use is permitted, your job is to direct, evaluate, and take responsibility for the output—not to submit it uncritically.
What This Means for Researchers

Authorship requires meaningful intellectual contribution. AI cannot be listed as an author, and using AI to generate text you present as your own scholarship raises the same concerns as ghostwriting. Publication venues increasingly require AI use statements, and failure to disclose can result in retraction.

  • Verify all AI-suggested references against primary sources—hallucinated citations in published work damage your reputation and your co-authors'.
  • For graduate students: your thesis or dissertation must represent your original scholarship. Limited AI use for grammar and formatting is permitted with disclosure, but the intellectual core must be demonstrably yours.
What This Means for Staff & Administration

When you draft reports, proposals, or communications using AI, you remain accountable for accuracy and tone. AI-generated administrative documents can contain confidently stated errors that, if published or sent, reflect on the university.

  • Verify facts, figures, and policy references in any AI-assisted draft before distribution.
  • For documents involving personnel decisions, student records, or legal implications, human authorship and review are especially critical.

College / Department Implementation

No-AI: "All graded work and publications must be composed without the aid of generative AI."
Balanced-AI: "Limited AI assistance (grammar, syntax, formatting) is permitted with disclosure; conceptual and analytic work must be the author's own."
AI-Forward: "AI-assisted drafting may be permitted if disclosures and validation steps are documented. Authorship credit remains with the human(s) directing and verifying the output."

Suggested Implementation

"All written or coded work must accurately cite sources. Students and faculty must verify all AI-suggested references and accept full responsibility for the validity of citations and data."

Application to Different Work Types

Regular Coursework: Instructors define acceptable AI use for assignments. Core learning objectives (e.g., analytical writing, problem-solving) must be demonstrated by the student. AI-generated drafts that bypass learning goals constitute academic dishonesty.

Faculty Research and Publication: Co-authors must verify all claims, data, and citations. AI tools may assist with literature searches or formatting, but fabricated references or unverified data violate the University's Research Integrity Policy and can result in retraction. Researchers are also responsible for verifying AI-generated analysis code and computational outputs; use of AI does not shift authorship responsibility or accountability for data falsification, fabrication, or misrepresentation. Proportional verification is increasingly recognized as essential in AI-assisted research workflows (Electv Training, 2026).

Graduate Thesis/Dissertation: The Graduate College requires that theses represent original scholarship. AI tools shall not draft narrative, literature synthesis, or responses to peer review. Limited AI use for grammar and formatting is permitted with disclosure, but the intellectual core must be demonstrably the student's own work.

Rationale

Degree-granting assessments require demonstrated human scholarship. Prohibiting AI ghostwriting and citation fabrication protects the integrity of credentials, reduces advisor workload spent on remediation, and maintains trust in University degrees. Distinguishing between AI-assisted editing and AI-generated content preserves academic standards while allowing beneficial technology use.

Example Scenarios

iSchool (Information Sciences): A master's thesis on user experience includes 200 references. The student uses AI to format citations but manually verifies each DOI and cross-checks against the original papers. The thesis acknowledgments note: "Citation formatting assisted by AI; all references verified against primary sources."

GIES (Business): A faculty member submits a manuscript to a top-tier journal. After receiving reviewer comments, the faculty member drafts responses independently, then uses AI to improve clarity. The response letter includes: "Language edited with AI assistance; all substantive arguments and analyses are the authors' original work."

FAA (Music Composition): A doctoral student composes an original work and writes program notes. AI suggests historical context phrasing, which the student rewrites in their own voice. The dissertation committee requires an oral defense demonstrating deep knowledge of the compositional choices, confirming authentic authorship.

Implications

  • Permissive drafting: Faster text creation but risks hollow scholarship, fabricated citations, heavy advisor remediation, and potential retractions.
  • Human-authored core: Preserves rigor, reduces downstream rework, protects credential value, and allows limited AI copy-editing with transparency.
  • Field-specific norms: Creative and humanities fields emphasize original voice; STEM fields may accept more AI assistance in documentation while requiring rigorous data validation.

General University Policy

University-licensed data, library materials, or restricted datasets may not be uploaded to external AI tools or used for model training. All AI use must comply with data-handling agreements and applicable law (FERPA, HIPAA, GDPR).

What This Means for Faculty & Instructors

Do not upload student work, grades, roster information, or unpublished research to consumer AI tools (ChatGPT, Claude, Gemini, etc.) unless using a University-approved enterprise pathway with appropriate data protections. This applies even for seemingly innocuous uses like "summarize these student responses."

  • Student names + grades + any identifiers = FERPA-protected. Period.
  • If you want to use AI with student data, consult your IT unit about approved platforms.
  • Library-licensed PDFs and datasets have terms of service that typically prohibit upload to AI systems for model training.
What This Means for Students

Be cautious about what you share with AI tools. Anything you type into a consumer chatbot may be used to train future models—which means it's no longer private.

  • Never upload other students' work, group project contributions you didn't author, or materials shared under confidentiality (e.g., peer review).
  • If you're working with research data (as an RA or in a capstone), follow your lab's or advisor's data handling protocols—not your personal habits.
  • Be aware that pasting large sections of copyrighted textbooks or articles into AI tools may violate licensing terms.
What This Means for Researchers

Research data governance applies to prompts and inputs, not just stored datasets. If you paste interview transcripts, survey responses, medical records, or any identifiable data into a consumer AI tool, you may be violating your IRB protocol, your data use agreement, or both.

  • Use University-approved or locally hosted AI tools for any work involving human subjects data.
  • RAG systems and embedding pipelines should store data on University infrastructure with vendor-training disabled (see the sketch after this list).
  • Check your data use agreements—many prohibit third-party access, which includes cloud AI APIs.
  • Peer review materials (manuscripts, proposals, applications) are typically shared under confidentiality. Uploading them to consumer AI tools may violate reviewer obligations and, for federal funders, explicit policy (see § 4 on NIH restrictions).
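The sketch below illustrates the kind of locally hosted embedding step referenced above: documents are embedded on University-managed hardware and the vectors stay on local or campus storage, so no text is sent to a vendor API. It is a minimal, assumption-laden illustration (the sentence-transformers library, the model name, and the file paths are examples, not approved tools); actual deployments should still go through the usual IT and data-governance review.

    # Minimal sketch of a local embedding step for a RAG pipeline. The model runs on
    # University-managed hardware after a one-time download; no document text is sent
    # to a vendor API. Model name and paths are illustrative assumptions.
    import numpy as np
    from sentence_transformers import SentenceTransformer  # executes locally

    def embed_corpus(texts: list[str], model_name: str = "all-MiniLM-L6-v2") -> np.ndarray:
        """Embed documents with a locally executed model."""
        model = SentenceTransformer(model_name)
        return model.encode(texts, convert_to_numpy=True, show_progress_bar=False)

    def save_embeddings(embeddings: np.ndarray, path: str = "embeddings.npy") -> None:
        """Persist vectors on local or campus-managed storage for later retrieval."""
        np.save(path, embeddings)

    # Example with de-identified text only:
    docs = ["De-identified abstract 1 ...", "De-identified abstract 2 ..."]
    save_embeddings(embed_corpus(docs))
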

International collaborations require additional attention. De-identified data (where direct identifiers are removed but re-identification remains theoretically possible) and anonymized data (where re-identification is no longer reasonably achievable) are treated differently under international law. Under China's Personal Information Protection Law (PIPL), for example, data exported outside China must meet the higher standard of anonymization, not merely de-identification. Other jurisdictions (EU/GDPR, Brazil/LGPD) impose their own cross-border transfer requirements. Researchers conducting international studies involving AI-processed personal data should consult the University's export control and legal offices before data transfer, particularly where data originates in or flows through jurisdictions with heightened data sovereignty requirements.

What This Means for Staff & Administration

Administrative data is often more sensitive than it appears. Employee records, financial data, student information, internal deliberations, and draft policies should never be uploaded to consumer AI tools.

  • Even "anonymized" summaries can be re-identified when combined with other publicly available information.
  • If your unit is considering an AI tool for workflow automation, ensure IT and procurement have reviewed the vendor's data handling, retention, and training-on-your-data policies.
  • When in doubt, use the "newspaper test": would you be comfortable if this data appeared in a news story about a university data breach?

College / Department Implementation

No-AI: "AI services are disabled in course or lab environments where confidential or proprietary data are processed."
Balanced-AI: "Use sandboxed or University-approved AI platforms that prevent vendor training and store data locally."
AI-Forward: "Authorized research computing resources may support fine-tuned models under approved data-use and IRB protocols."

Suggested Implementation

"Never upload student records, unpublished data, or library-licensed PDFs to public AI tools. Use University-approved environments only."

Rationale

Reading or quoting licensed content is distinct from model training, which may constitute copyright infringement and license violations. Protecting licenses, privacy, and compliance reduces legal and ethical exposure. Many library licenses explicitly prohibit text and data mining for commercial AI development. FERPA, HIPAA, and IRB protocols apply to data in prompts, not just stored datasets.

Example Scenarios

University Library: A graduate student wants to summarize 100 journal articles using AI. The student exports metadata and abstracts (which are publicly available) but does not upload full-text PDFs from licensed subscriptions, respecting publisher terms of service.

AHS (Kinesiology): A research team builds a retrieval-augmented generation (RAG) system for exercise science literature. They store article embeddings on a University server with vendor-training disabled, ensuring compliance with database licensing agreements.

Engineering (Biomedical): A lab uses medical imaging data under a data-use agreement that prohibits third-party access. The lab deploys a local AI model on campus infrastructure rather than uploading images to a cloud-based service.

Implications

  • No-training guardrail: Minimizes contract breach risk, protects patron privacy, maintains publisher relationships; requires slightly more technical effort.
  • Unrestricted training: Faster prototyping but creates licensing violations, data leakage risk, potential lawsuits, and erosion of library access agreements.
  • Compliance burden: Requires researcher education on FERPA/HIPAA applicability to AI workflows and proactive vendor contract negotiation by IT procurement.

General University Policy

Any AI use involving human data—including prompts, transcripts, video, or biometric inputs—must be reviewed by the Institutional Review Board (IRB). Uploading identifiable data to consumer AI platforms is prohibited.

What This Means for Faculty & Instructors

If you supervise research involving human participants and AI is part of the methodology—for data collection, analysis, or intervention delivery—ensure your IRB protocol specifically addresses the AI component. This includes specifying the tool, its hosting environment, bias evaluation, and data retention. Consent forms must disclose AI use in terms participants can understand.

What This Means for Students

If you're involved in research as a participant, you have the right to know if AI will be used to process your data. If you're conducting research (thesis, capstone, RA work), talk to your advisor about whether your use of AI tools requires IRB documentation—even tools that seem routine (like AI transcription) may need to be disclosed in your protocol.

What This Means for Researchers

This is your core compliance section. If AI meaningfully shapes data collection, analysis, or participant interaction—including prompts containing participant information, AI-assisted transcription, AI coding of qualitative data, or AI-driven interventions—protocols should describe:

  • Tool identity: the AI tool, its version, and its hosting environment (cloud vs. local).
  • Data handling: whether the vendor's terms allow your data use and whether training-on-your-data is disabled.
  • Human oversight: the role of human review in the analytic pipeline.
  • Bias considerations where applicable: how the tool was tested across demographic groups relevant to your study population.
  • Consent language: describe AI use in terms participants can understand.
  • For qualitative research: see the expanded methodology guidance below on epistemological considerations.

Routine or incidental AI use (e.g., grammar checking a protocol draft) does not require IRB documentation. The threshold is whether AI meaningfully shapes the research process or participant experience.

What This Means for Staff & Administration

If your administrative unit collects data from people—surveys, feedback forms, usage analytics—and you plan to use AI to analyze it, check whether IRB review is required. Under federal regulations (45 CFR 46), the threshold for human-subjects research hinges on whether the activity is designed to develop or contribute to generalizable knowledge—not solely on whether results are published. Internal program evaluation may still require review depending on design and intent. Consult the Office for the Protection of Research Subjects (OPRS) if uncertain.

College / Department Implementation

No-AI: "AI systems are not used for analyzing identifiable research data."
Balanced-AI: "AI analytics may be used on de-identified data under IRB-approved protocols."
AI-Forward: "Researchers may integrate AI into study design and intervention delivery after demonstrating safeguards against re-identification, bias, and data leakage."

Suggested Implementation

"When AI meaningfully shapes data collection, analysis, or participant interaction in human-subjects research, IRB protocols should include documentation describing the tool, its hosting, data handling practices, human oversight, bias considerations, and data-retention plan."

Rationale

Human-subjects protection applies to prompt contents, not just final datasets. Even de-identified data can be re-identified through AI-assisted inference attacks. Contracts and IRB review mitigate privacy, bias, and re-identification risk. Consent forms must disclose AI use so participants can make informed decisions. Publication venues increasingly require AI transparency statements.

Example Scenarios

AHS (Health Sciences): Researchers analyze gait videos to detect fall risk. They use a locally-hosted computer vision model, export only de-identified kinematic features, and document the pipeline in their IRB protocol. The consent form states: "Your movement patterns will be analyzed using AI software; identifiable video will not leave University servers."

Education (Learning Sciences): A team studies classroom discourse using speech-to-text transcription. They obtain IRB approval specifying the AI vendor, confirm the vendor contract prohibits training on transcripts, and include audio analysis in the informed consent process.

LAS (Psychology): A study uses AI to code interview transcripts for emotional tone. The IRB reviews the AI system's bias evaluation (tested on diverse demographic groups) and approves a data-retention plan requiring transcript deletion 3 years post-publication.

AI in Research Methodology: Beyond Compliance

IRB compliance is necessary but not sufficient. AI is changing the nature of research analysis—not just the speed—and the university should encourage researchers to engage with these epistemological implications, particularly in qualitative and interpretive traditions.

Qualitative research. A growing body of scholarship (in journals including Qualitative Psychology, Qualitative Research, and Academic Medicine) documents that AI tools can identify descriptive and concrete themes with reasonable accuracy but struggle with interpretive depth, reflexivity, and contextual meaning-making. There is a real risk that uncritical AI-assisted analysis represents a "return to positivism" in qualitative inquiry—privileging breadth and pattern-matching over the deep immersion and researcher reflexivity that define interpretive traditions. AI-generated codes and themes can feel authoritative precisely because they are systematic and comprehensive, which may undermine the researcher's own engagement with data.

Documentation and reproducibility. Norms for documenting AI-assisted qualitative analysis are still emerging. Researchers should maintain reflexive journals that log which AI-generated themes were adopted or rejected, document the rationale for those decisions, and preserve transparency about the researcher's own assumptions. This is distinct from quantitative reproducibility—it is about making the analytic process legible to reviewers and future researchers.

Privacy considerations specific to qualitative data. Qualitative transcripts, field notes, and narrative data are often rich, identifiable, and emotionally sensitive. Uploading such data to cloud-based AI tools creates exposure risks fundamentally different from using locally installed software like NVivo or ATLAS.ti. De-identification protocols (replacing names, removing locational markers) are essential before submitting any qualitative data to cloud-based GenAI. Local models offer a privacy-preserving alternative where computationally feasible.
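As a deliberately minimal illustration of the de-identification point above, the sketch below applies a first-pass redaction to a transcript before any cloud processing. The name list and patterns are assumptions; pattern matching alone does not constitute de-identification, and contextual identifiers (places, roles, unusual events) still require human review.

    # Minimal sketch of a first-pass redaction step for qualitative transcripts.
    # Pattern-based replacement is a starting point, not a complete de-identification
    # protocol; the name list and regexes are illustrative assumptions.
    import re

    def redact_transcript(text: str, names: list[str]) -> str:
        """Replace known participant names and obvious contact details with placeholders."""
        for name in names:
            text = re.sub(rf"\b{re.escape(name)}\b", "[NAME]", text, flags=re.IGNORECASE)
        text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)              # email addresses
        text = re.sub(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # US-style phone numbers
        return text

    # Example (fictional participant):
    raw = "Jordan Smith (jsmith@example.edu, 217-555-0100) said the clinic on Green Street felt rushed."
    print(redact_transcript(raw, names=["Jordan Smith"]))
    # Note: the locational marker "Green Street" survives and still needs manual review.
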

AI as analytic partner, not replacement. The emerging consensus in the literature supports treating AI as a complement to—not a substitute for—human-centered analysis. AI may be useful for initial code generation, identifying overlooked patterns, or managing large datasets, but the interpretive work of qualitative analysis (contextualizing, questioning, theorizing) remains fundamentally human. Researchers should approach AI-assisted analysis with the same critical scrutiny they would apply to any methodological tool.

Federal funding agency restrictions. Researchers should be aware that federal agencies have imposed concrete restrictions on AI use. The NIH prohibits the use of generative AI in peer review (NOT-OD-23-149, June 2023), on the grounds that uploading application content to AI tools violates confidentiality requirements. More recently, the NIH issued guidance (NOT-OD-25-132, July 2025) stating that applications "substantially developed by AI" will not be considered original work and may be referred to the Office of Research Integrity for misconduct investigation. The NIH also noted that AI-generated applications risk plagiarism and fabricated citations. Researchers at UIUC should consult funder-specific AI policies before using generative AI in any grant-related or review-related activity.

"Researchers using AI in qualitative or interpretive research should document the AI tool's role in the analytic process, maintain reflexive records of which AI-generated outputs were adopted or rejected and why, and ensure that the interpretive core of the analysis reflects the researcher's own engagement with the data."

Implications

  • IRB-first approach: Clearer consent, fewer surprises during publication and audits, reduced risk of consent violations and reputational harm.
  • Upload-first approach: Speed advantage but compliance risk, potential for IRB violations, difficulty publishing in top journals, and vulnerability to data breaches.
  • Emerging best practice: Many journals now require AI transparency statements; proactive IRB documentation streamlines manuscript preparation and demonstrates research integrity.

General University Policy

Under University policy, faculty and students generally retain ownership of traditional academic works—including original instructional materials, scholarly publications, and creative works produced in the normal course of teaching and research—unless the work is patentable or defined otherwise by applicable agreements. Where University system resources "over and above those usually provided" are used in creating a work, the creator retains ownership, and the University System retains a perpetual, royalty-free, non-exclusive license for internal use. Vendors and platforms may not ingest or train on University content without explicit written permission. Research data, courses, and other institutional content fall under University IP policy and federal law; specific ownership questions—particularly involving patents, sponsored research, or substantial institutional resources—should be directed to the Office of Technology Management or University Counsel.

What This Means for Faculty & Instructors

Your course materials—lectures, slides, problem sets, videos, assessment items—are your intellectual property (and/or the University's, depending on your appointment terms). Protect them:

  • Review the Terms of Service (ToS) for any AI or ed-tech platform before uploading your materials. Many platforms claim training rights in their ToS.
  • Use LMS access controls (authentication, streaming-only) rather than public posting when possible.
  • If you create materials you want to share openly, consider a Creative Commons license that explicitly addresses AI training (e.g., BY-NC with a no-training clause).
  • If you use AI to help create course materials, understand that AI-generated content may have limited or unclear copyright protection—you may not own what the AI produced.
What This Means for Students

Your original work belongs to you, but be aware of a few important considerations:

  • If you paste your original writing, code, or creative work into a consumer AI tool, it may be used to train future models. You've effectively given the company a copy of your work with broad usage rights.
  • Content generated by AI may not be copyrightable. If your work heavily incorporates AI-generated material, the copyright status of the resulting work may be unclear.
  • Don't upload your instructors' materials (slides, exams, problem sets) to AI tools—this may violate both their IP rights and your academic integrity obligations.
What This Means for Researchers

Research IP has additional layers: funder requirements, data use agreements, collaborative agreements, and publication licenses. Before using AI tools in your research pipeline, verify that doing so doesn't violate any of these agreements—particularly for industry-sponsored research or classified work. AI-generated content in publications raises unresolved questions about patentability and copyright that may affect technology transfer.

What This Means for Staff & Administration

When procuring AI-powered platforms for your unit, pay close attention to vendor contracts regarding data reuse and model training. Negotiate clauses that prohibit the vendor from training on University content. If existing contracts don't address AI, flag them for renegotiation—many vendors have updated their terms to claim broader data rights.

College / Department Implementation

No-AI: "All instructional content must remain on University servers; external AI systems may not reuse it."
Balanced-AI: "Limited API-based interactions allowed where vendor contracts prohibit data reuse."
AI-Forward: "Units may license content to train local or consortium AI models with faculty consent and credit attribution."

Suggested Implementation

"Lecture slides, videos, and assessment materials are property of the instructor and the University; they may not be uploaded to or reproduced by external AI services."

Rationale

Preserving academic IP encourages innovation, prevents uncompensated reuse, and protects instructors from having their materials used to train competing products. Many learning management systems and video platforms have updated terms of service to claim training rights; faculty should review and negotiate contracts to preserve ownership. Publicly funded research may have additional disclosure requirements.

Example Scenarios

FAA (Theatre): A director creates annotated scripts and staging videos for a course on experimental performance. The materials are hosted in the University LMS with streaming-only access; vendor training on the content is contractually disabled to protect the creative work.

Grainger (Mechanical Engineering): A professor's problem sets and video lectures are highly regarded. The professor publishes them under a Creative Commons BY-NC license, allowing educational reuse but prohibiting commercial training without permission.

ACES (Food Science): Lab protocols and safety videos are institutional resources. The college negotiates LMS contracts that explicitly prohibit AI training on uploaded content, preserving the University's ability to license materials for revenue or consortium use.

Implications

  • IP protection: Supports course quality, fair sharing, faculty autonomy, and potential revenue from licensing high-quality materials.
  • Open ingestion: Risks loss of control, third-party repackaging, faculty resentment, erosion of instructional quality, and reduced incentive for educational innovation.
  • Negotiation leverage: Universities with strong IP clauses in vendor contracts can protect faculty work while still enabling beneficial technology adoption.

General University Policy

All AI tools used in instruction, assessment, or employment must meet accessibility standards (WCAG 2.1 AA) and undergo bias evaluation. Equitable alternatives must be available for those unable or unwilling to use a given tool.

Opt-Out Rights and Principled Refusal

Equitable participation includes respecting principled decisions not to use AI tools. The "unwilling" in the policy above carries equal weight to the "unable." Specifically:

  • Faculty should not be evaluated negatively for choosing not to integrate AI into their teaching, research, or administrative work. Pedagogical autonomy includes the right to determine that AI tools are inappropriate for a given learning context.
  • Students should have genuine opt-out paths that do not result in academic penalty, reduced access to instruction, or additional burden (e.g., being required to complete a significantly more onerous alternative assignment as a consequence of opting out).
  • Staff should not be pressured to use AI tools that conflict with their professional judgment, values, or assessment of risk. If AI tools are mandated for specific workflows, the mandate should be accompanied by documented justification, training, and a clear process for raising concerns.
What This Means for Faculty & Instructors

When you adopt an AI tool for your course, you take on responsibility for ensuring all students can participate equitably. This means:

  • Verify the tool meets WCAG 2.1 AA accessibility standards before requiring it.
  • Provide a genuine alternative for students who cannot or choose not to use the tool—"genuine" means not significantly more burdensome.
  • Be aware that AI tools can perform differently across demographic groups—speech recognition accuracy varies by accent, computer vision by skin tone, and text generation by cultural context.
  • Your right to choose not to use AI in your teaching is protected. You should not face negative evaluation for preferring non-AI pedagogical approaches.
What This Means for Students

You have the right to equitable participation regardless of your ability or willingness to use AI tools:

  • If an AI tool required for a course is inaccessible to you (due to disability, technology limitations, geographic restrictions, or other barriers), you are entitled to an equivalent alternative. Talk to your instructor or contact DRES.
  • If you believe an AI tool is producing biased results that affect your academic evaluation (e.g., AI grading, AI proctoring), you have the right to raise concerns with your instructor and, if needed, with the department or college oversight committee.
  • Choosing not to use AI for principled reasons should not result in academic penalty—if it does, escalate through appropriate channels.
What This Means for Researchers

If your research uses AI tools that interact with human participants or analyze human data, bias evaluation is a research integrity issue, not just an ethical nicety. Where feasible, use University-pre-approved or institutionally vetted tools, which reduce the individual burden of bias evaluation. Where novel or unvetted tools are used, researchers should document how the tool was tested for differential accuracy or systematic error relevant to the study context (including but not limited to demographic groups). IRB reviewers and journal editors increasingly expect this. Institutional review and centralized tool vetting can complement—but do not replace—the researcher's responsibility for appropriate use in context.

What This Means for Staff & Administration

If your unit is deploying AI tools that affect students, employees, or the public (AI chatbots for advising, automated screening, workflow automation), ensure accessibility compliance and bias testing are completed before deployment, not after complaints. Work with Disability Resources and the relevant oversight committee to establish evaluation protocols.

Choosing not to use AI is a legitimate, evidence-informed position—not technophobia. Some contexts genuinely call for unmediated human engagement, and some individuals may have well-founded concerns about particular tools or vendors. A culture that treats non-adoption as deficiency undermines the informed choice this guidance seeks to promote. The AAUP's 2025 report Artificial Intelligence and Academic Professions—based on a survey of 500 faculty across nearly 200 campuses—reinforces this principle, finding that 76% of respondents reported declining job enthusiasm due to AI-related pressures and recommending that faculty be able to opt out of AI tools without penalty, that shared governance guide AI implementation decisions, and that impact assessments precede technology deployment.

College / Department Implementation

No-AI: "AI-driven grading and proctoring tools are not permitted."
Balanced-AI: "AI tools may supplement instruction only if accessibility and bias audits are documented annually."
AI-Forward: "Units may co-develop AI applications that demonstrably reduce barriers for disabled, multilingual, or under-resourced learners."

Suggested Implementation

"Instructors selecting new digital tools should verify accessibility compliance and provide equivalent options for all learners."

Rationale

Equitable participation requires accessible design and monitoring for biased failure modes. AI systems can exhibit disparate accuracy across demographic groups, body types, accents, and language backgrounds. In health, sport, and learning contexts, undetected bias can harm student success and violate ADA and Title VI obligations. Accessible alternatives ensure all learners can participate fully.

Example Scenarios

AHS (Athletic Training): A coaching app provides AI-generated exercise form feedback. The development team tests accuracy across diverse body types, mobility profiles, and common adaptive equipment. The app includes keyboard navigation, screen reader support, and alternative text-based coaching for users who prefer not to use video analysis.

Education (TESOL): An AI tutoring system for English language learners is evaluated for bias across native language backgrounds. The team discovers lower accuracy for speakers of tonal languages and implements corrections before deployment.

iSchool (Information Accessibility): A course uses an AI discussion analysis tool to identify engagement patterns. The instructor confirms the tool meets WCAG 2.1 AA standards and offers manual discussion tracking for students who opt out of AI analysis.

Implications

  • Audit & accommodate: Broader participation, fewer downstream grievances, compliance with ADA/Title VI, improved learning outcomes for all students.
  • No audit: Hidden harms, inequitable outcomes, legal vulnerability, reduced trust in University technology, potential OCR complaints and lawsuits.
  • Proactive design: Involving disabled students and multilingual communities in tool evaluation improves quality and demonstrates institutional commitment to inclusion.

General University Policy

The University shall establish a standing university-level Ed-Tech & AI Oversight Committee with elected faculty, staff, and student members. This committee serves as the primary body for AI governance, reviewing procurement, evaluating bias and workload effects, and issuing annual public reports. Colleges may establish or designate additional oversight committees if desired, and existing college committees (e.g., Website & Communications committees) may incorporate this function.

What This Means for Faculty & Instructors

You have a direct stake in AI governance. Oversight committees should include elected faculty members, and your voice should be heard in procurement decisions that affect your teaching, research, and workload. If your college doesn't have an Ed-Tech & AI Oversight Committee, advocate for one. If it does, participate—or at minimum, know who to contact with concerns.

What This Means for Students

You are affected by AI decisions made at every level of the university, from course-level AI policies to institution-wide tool adoption. Student representation on oversight committees ensures your perspective is included. If you experience problems with an AI tool used in a course or campus service, you can direct concerns to your college's oversight committee where one exists, or to the university-level Ed-Tech & AI Oversight Committee.

What This Means for Researchers

Governance processes affect which tools are available for your research, what procurement pathways exist, and how quickly new AI capabilities can be adopted. Engage with oversight committees to ensure research needs are represented alongside teaching and administrative concerns—and that governance doesn't inadvertently create barriers to legitimate research use.

What This Means for Staff & Administration

You are often the first to encounter AI tools in administrative workflows and the first to notice when they don't work as promised. Oversight committees need your practical perspective on tool effectiveness, workflow disruption, and workload impacts. Participate where possible, and use formal feedback channels to report issues—your observations help prevent university-wide problems.

College / Department Implementation

No-AI: "All AI-related tools must receive prior approval from the college oversight committee."
Balanced-AI: "Oversight committee conducts annual review and collects feedback from faculty and students."
AI-Forward: "Committee collaborates with innovation units to pilot new AI tools under continuous evaluation."

Suggested Implementation

"Questions or concerns about AI or ed-tech adoption should be directed to the university-level Ed-Tech & AI Oversight Committee, or to a college-specific committee where one exists."

Rationale

Shared governance aligns technology adoption with the educational mission, provides a venue for course correction, and builds trust through transparency. Faculty, staff, and students have unique perspectives on tool effectiveness and harms. Oversight committees can identify problems early, negotiate better vendor contracts, and ensure workload impacts are considered in procurement decisions. Annual reporting creates accountability and institutional memory. Emerging scholarship on AI in workplace contexts emphasizes sociotechnical risk management and labor impacts, reinforcing the need for governance structures and workload safeguards (Howard & Schulte, 2024).

Example Scenarios

GIES (Business Analytics): Before adopting an AI proctoring service, the college committee reviews research on false-positive rates for students with movement disorders and anxiety. The committee recommends alternative assessment methods that maintain integrity without surveillance harms.

LAS (Languages & Cultures): The oversight committee receives complaints that a mandated language-learning app has poor speech recognition for non-native English speakers using it to learn additional languages. The committee conducts a review and successfully negotiates with the vendor for accent-training improvements and alternative assessment options.

Grainger (Engineering): The committee pilots an AI teaching assistant for introductory programming courses. After a semester trial with continuous feedback collection, the committee publishes a report on student learning outcomes, instructor workload changes, and accessibility considerations, informing university-wide policy.

Forward Look: Governance for Agentic AI

Most current AI governance assumes a prompt-and-response model: a human asks, an AI generates, and the human decides what to do with the output. Agentic AI systems—which autonomously plan and execute multi-step tasks, interact with other software, browse the web, send communications, and make decisions with minimal human oversight—are already entering commercial products and research workflows. These systems create governance challenges that current frameworks may not adequately address:

  • Accountability gaps: When an AI agent autonomously sends an email, modifies a database, or makes a purchase on behalf of a university employee, who is responsible for errors, misrepresentations, or policy violations?
  • Data governance at scale: Agentic systems may access and process information across multiple university systems autonomously. Existing rules about what data can be shared with AI tools assume a human is making each sharing decision.
  • Audit and transparency: Agentic workflows may involve dozens of intermediate steps that are difficult to reconstruct after the fact, complicating both oversight and incident investigation.

The committee does not propose specific agentic AI governance rules at this time, as the technology and its campus applications are still emerging. However, the committee recommends that oversight bodies begin tracking agentic AI deployments and developing frameworks for accountability, logging, and human-in-the-loop requirements before widespread adoption creates entrenched practices.

Implications

  • Committee with authority: Transparency, better vendor contracts, fewer deployment surprises, improved tool quality, reduced faculty and student frustration.
  • Admin-only decisions: Speed advantage but recurring trust gaps, missed opportunities to avoid ineffective tools, compliance blind spots, and erosion of shared governance.
  • Best practices: Effective committees include diverse membership, clear decision-making authority, access to procurement discussions, and administrative responsiveness to recommendations.

Context

Several UIUC colleges and departments have already developed their own AI guidance or policy documents—including the College of Liberal Arts and Sciences, Gies College of Business, the Grainger College of Engineering, the School of Information Sciences, and likely others. This is appropriate and expected: units closest to specific disciplines and student populations are often first to respond to emerging challenges. However, it creates a practical question: when the Senate adopts university-level AI guidance, how does it relate to what units have already done?

This section provides a framework for reconciliation. The goal is not uniformity—disciplinary contexts differ legitimately—but coherence: everyone at the university should be able to understand the minimum expectations and know where their unit's policies fit within the broader institutional framework.

The Floor Principle: Senate Guidance as Baseline

Senate-level guidance is recommended to function as a floor—a set of minimum standards and shared norms that apply university-wide. Units retain autonomy to build on this floor in ways appropriate to their disciplinary context, but should not adopt policies that weaken or contradict the baseline protections established at the Senate level.

In practice, this means:

  • Units may exceed the baseline. A college that requires more detailed AI disclosure than the tiered model in Section 1, or that restricts AI use in specific assessment contexts beyond what the Senate recommends, is operating within the framework. More protective is always permissible.
  • Units should not fall below the baseline. A department that permits the use of non-enterprise AI tools for FERPA-protected data, or that omits disclosure expectations entirely, would be in tension with the Senate guidance. Such conflicts and omissions need to be identified and addressed.
  • Units may adapt, not just adopt. The Senate guidance provides verbatim language that units can use directly, but adaptation to local context is expected and encouraged. A fine arts program and a clinical health program will appropriately interpret "AI-assisted work" differently—the reconciliation framework accommodates this.

Reconciliation Audit: A Step-by-Step Process

The following process is designed for a department chair, associate dean, or college-level committee tasked with aligning existing unit AI policies with Senate guidance. It can be completed in a single meeting or distributed across several weeks depending on unit size and policy complexity.

Step 1: Inventory existing policies. Gather all documents that address AI or educational technology in your unit: syllabi templates, college-level guidance, faculty handbook supplements, research protocols, procurement standards, and any informal norms communicated to faculty or students. Include policies that predate "AI" framing but may apply (e.g., plagiarism definitions, data handling procedures, software procurement).

Step 2: Map to Senate sections. For each Senate guidance section (1–11), identify whether your unit has a corresponding policy. Use three categories: (a) covered and consistent—your unit addresses this area and the substance aligns with or exceeds the Senate baseline; (b) covered but potentially conflicting—your unit addresses this area but the substance may differ from or fall below the Senate baseline; (c) not addressed—your unit has no policy in this area.

Step 3: Assess conflicts. For any areas marked "potentially conflicting," articulate the specific tension. Common conflict patterns include: unit policies that rely on AI detection tools as enforcement mechanisms (conflicting with Section 2's recommendation against this); unit policies that permit uploading student data to consumer-grade AI tools (conflicting with Section 3's data governance framework); and unit policies that lack disclosure expectations for faculty or staff AI use (falling below Section 1's baseline).

Step 4: Identify gaps. For areas marked "not addressed," determine whether the gap creates risk. Priorities include: data privacy and security (Section 3), disclosure norms (Section 1), and opt-out protections for faculty and students (Section 6). These three areas, if unaddressed, create the most immediate institutional exposure.

Step 5: Draft a reconciliation plan. For each conflict or gap, propose a resolution: adopt the Senate language directly, adapt it to local context, or (in rare cases) propose a principled exception with documented rationale. The plan should include a timeline and identify who is responsible for implementation.

Step 6: Report and iterate. Share the reconciliation plan with the university-level Ed-Tech & AI Oversight Committee (Section 7) and any college-level oversight body, as well as the Senate IT Committee. This creates a feedback loop that strengthens both unit and Senate-level guidance over time.
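
For units that want to track the Step 2 mapping in a lightweight, machine-readable form, the following Python sketch illustrates one possible structure for recording section-by-section status and notes. It is a minimal illustration only; the section labels, statuses, and example notes are placeholders, not a prescribed tool or format.

from enum import Enum

class Status(Enum):
    COVERED_CONSISTENT = "covered and consistent"
    POTENTIAL_CONFLICT = "covered but potentially conflicting"
    NOT_ADDRESSED = "not addressed"

# One entry per Senate guidance section (1-11); notes capture the specific tension or gap.
# The entries below are illustrative placeholders, not findings from any actual unit.
unit_policy_map = {
    "Section 1 (disclosure norms)": (Status.COVERED_CONSISTENT, "Syllabus template already requires tiered disclosure."),
    "Section 2 (integrity & detection)": (Status.POTENTIAL_CONFLICT, "Unit handbook cites AI-detector scores as evidence."),
    "Section 3 (data governance)": (Status.NOT_ADDRESSED, "No guidance on FERPA-protected data and consumer AI tools."),
}

def summarize(policy_map):
    """Print a short reconciliation summary, grouping sections by status."""
    for status in Status:
        sections = [name for name, (s, _) in policy_map.items() if s is status]
        print(f"{status.value}: {', '.join(sections) or 'none'}")

summarize(unit_policy_map)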

When Policies Conflict: Escalation

Most reconciliation can be handled at the unit level through the audit process above. For cases where genuine conflict exists and cannot be resolved locally—for example, where a unit believes the Senate baseline is inappropriate for its disciplinary context—the recommended escalation path is:

  1. Unit documents the conflict with a written rationale explaining why the Senate guidance does not fit and what alternative the unit proposes.
  2. College-level oversight committee reviews and either endorses the unit's position or recommends alignment with the Senate baseline.
  3. Senate IT Committee receives unresolved conflicts and may revise the Senate guidance, grant a documented exception, or affirm that the baseline applies.
  4. Faculty Senate is the final authority for disputes that cannot be resolved at the committee level. This should be rare—the framework is designed to prevent conflicts from reaching this stage by providing clear expectations and adaptation flexibility.

The intent is not to create a bureaucratic process but to ensure that when disagreements arise, there is a transparent path for resolution that respects both institutional coherence and unit autonomy.

What This Means for Faculty & Instructors

If your department or college already has AI policies, those don't disappear when the Senate acts. The reconciliation framework means your local policies will be reviewed for alignment with the Senate baseline. In most cases, existing unit policies will meet or exceed the baseline and require minimal changes. If your unit's policies are more restrictive than the Senate guidance (e.g., stricter disclosure requirements, more limited AI use in assessments), those remain valid. If you serve on a department curriculum committee or college policy body, you may be asked to participate in the reconciliation audit.

What This Means for Students

You may encounter different AI expectations across courses and departments. The reconciliation framework is designed to create a consistent floor—minimum protections and expectations that apply everywhere—while allowing discipline-specific variation above that floor. If you're uncertain about which rules apply in a specific course, the instructor's syllabus is your primary reference. If a syllabus conflicts with what you understand to be university-level expectations, you can raise the question with the instructor or your college's student governance representative.

What This Means for Researchers

Research-intensive units may have AI policies shaped by disciplinary norms, journal requirements, and funder expectations that go beyond what the Senate addresses. This is expected and appropriate. The reconciliation framework primarily affects you where Senate guidance intersects with research operations: data privacy (Section 3), IRB protocols (Section 4), and IP/copyright (Section 5). If your unit has established practices in these areas that align with or exceed the Senate baseline, no changes may be needed. If gaps exist—particularly around data classification or disclosure—the audit process will surface them.

What This Means for Staff & Administration

Staff involved in policy administration, compliance, or procurement should be aware of the reconciliation process, particularly if you support multiple units with differing AI policies. The floor principle simplifies your work: once the Senate establishes baseline expectations, you can apply those as defaults and accommodate unit-specific additions where documented. If you're involved in drafting or reviewing unit-level AI policies, the six-step audit process provides a structured approach.

General University Policy

The University will provide ongoing professional development on responsible AI and ed-tech use, including ethics, data privacy, and instructional design. Campus resources for AI-related training and research computing include the National Center for Supercomputing Applications (NCSA), Illinois Computes, Research IT, the Center for Innovation in Teaching & Learning (CITL), and the University Library's GenAI guides. Units may rely on these centralized resources rather than duplicating professional development efforts locally. Time devoted to required compliance training is considered part of normal workload.

What This Means for Faculty & Instructors

Professional development on AI should be available, supported, and recognized as part of your workload—not treated as an unfunded mandate. You should have access to:

  • Workshops on AI-aware assignment design, disclosure norms, and strategies for maintaining learning objectives.
  • Templates and model syllabi language (see Section 0) that you can adapt rather than build from scratch.
  • Consultation services for evaluating whether AI tools are appropriate for your specific disciplinary context.
  • Time: learning and implementing AI policies is real work. Advocate for this to be recognized in workload discussions.
What This Means for Students

You should have access to training that goes beyond "how to use ChatGPT." Effective AI literacy for students includes:

  • Understanding disclosure expectations across your courses.
  • Recognizing when AI output is unreliable, biased, or fabricated.
  • Developing critical perspectives on AI's role in your field and in society (see Section 8.5).
  • Knowing your rights regarding opt-out, accessibility, and data privacy.
What This Means for Researchers

Research-specific AI training should cover IRB documentation for AI use, responsible AI-assisted analysis (especially in qualitative traditions), vendor contract evaluation, and deployment of local models on secure infrastructure. If your lab or center is developing AI tools, Model Card documentation training helps standardize reporting.

What This Means for Staff & Administration

Training for staff should be practical and role-relevant—not generic chatbot demos. Focus areas include: data classification (what can and cannot be shared with AI tools), vendor evaluation basics, documentation practices for AI-assisted work, and escalation pathways for concerns. Required compliance training should be counted as part of normal workload.

College / Department Implementation

No-AI: "Basic orientation on AI risks and opt-out procedures required annually."
Balanced-AI: "Offer workshops on responsible classroom integration and data-privacy safeguards."
AI-Forward: "Support faculty and student innovation with seed funding, model cards, and peer consultation."

Suggested Implementation

"Units should maintain a resource library and consultation program for AI literacy and best practices."

Rationale

Consistent professional development reduces misuse, improves instructional quality, and prevents uncompensated "work creep" where faculty spend excessive time learning tools mandated by administration. Templates and model policies reduce reinvention across departments. Recognizing compliance activities as workload acknowledges that responsible AI use requires time and expertise, preventing burnout and improving adoption quality.

Example Scenarios

Center for Innovation in Teaching & Learning (CITL): CITL offers a workshop series on AI in education, covering disclosure norms, assignment design that preserves learning objectives, and strategies for responding to suspected AI misuse without relying on automated detectors. Faculty receive a one-page "AI Disclosure Box" template to attach to syllabi.

Graduate College: PhD students developing AI research tools receive training on Model Card documentation, which standardizes reporting of training data, intended use, bias evaluation, and limitations. The Graduate College provides a template that students can adapt for their specific tools.
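
As an illustration of what such a template might capture, the sketch below lists the kinds of fields commonly found in model card documentation. Both the field names and the filled-in example are hypothetical; the Graduate College's actual template takes precedence.

from dataclasses import dataclass

@dataclass
class ModelCardSketch:
    """Hypothetical model card fields; adapt to the unit's actual template."""
    model_name: str
    intended_use: str
    out_of_scope_uses: str
    training_data: str      # sources, collection dates, known gaps
    evaluation_data: str    # benchmarks or held-out sets used
    bias_evaluation: str    # groups tested, metrics, observed disparities
    limitations: str
    version: str = "0.1"

example_card = ModelCardSketch(
    model_name="Interview-summary assistant (hypothetical)",
    intended_use="Summarizing de-identified, consented interview transcripts for a pilot study.",
    out_of_scope_uses="Any use with identifiable or FERPA-protected data.",
    training_data="Fine-tuned on public-domain policy documents (placeholder description).",
    evaluation_data="Held-out set of 50 transcripts reviewed manually (placeholder).",
    bias_evaluation="Summary accuracy compared across participant language background (placeholder).",
    limitations="Tends to over-generalize; all quotations must be verified against source transcripts.",
)
print(example_card.model_name, example_card.version)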

Research IT: A consultation service helps faculty evaluate whether their research workflows require IRB review for AI use, reviews vendor contracts for data-privacy clauses, and assists with deploying local AI models on secure University infrastructure.

Implications

  • Professional development & templates: Fewer errors, clearer expectations, reduced faculty burden, better student learning outcomes, improved research compliance.
  • No support: Uneven practices across units, preventable violations, faculty frustration and burnout, student confusion, and increased risk of integrity incidents.
  • Workload recognition: Acknowledging that learning and implementing AI policies takes time ensures faculty are compensated for the real work of responsible technology adoption, rather than treating it as an unfunded mandate.

AI literacy is frequently reduced to knowing how to use a chatbot—writing effective prompts, evaluating outputs, and understanding basic capabilities. While functional proficiency matters, it represents only a small part of what it means to be an informed user of AI systems. True AI literacy requires critical engagement with the broader systems in which these technologies are embedded. We do not use AI—or refuse to use AI—in a vacuum.

General University Guidance

The university community should cultivate a multidimensional understanding of AI that extends well beyond tool proficiency. Critical AI literacy means asking not only "How do I use this?" but also "Should I use this? For whom does this work well? Who benefits and who is harmed? What am I sustaining or enabling through my use?"

What This Means for Faculty & Instructors

You shape the next generation's relationship with technology. Integrating critical AI literacy into your courses—even briefly—helps students develop habits of mind that extend far beyond any single tool. This could be as simple as asking students to research who owns the AI tool they used for an assignment, or as involved as a structured unit on AI's societal implications. Your disciplinary expertise uniquely positions you to surface field-specific concerns about AI adoption.

What This Means for Students

Knowing how to write a good prompt is the beginning, not the end, of AI literacy. As a university-educated person, you should be able to answer questions like: Who made this tool? What business model sustains it? What happens to the data I share? Who benefits from my use—and who might be harmed? These questions aren't anti-technology; they're the same critical thinking skills your education is designed to develop, applied to the technologies you use daily.

What This Means for Researchers

Critical engagement with AI's political economy isn't separate from rigorous research—it's part of it. Understanding who funds AI development, what values are embedded in training data curation, and how AI tools may shape your analytical framework helps you maintain the independence and reflexivity your scholarship requires. Consider how your choice of AI tools may influence your findings and how to document those choices transparently.

What This Means for Staff & Administration

When your unit evaluates AI tools for procurement, critical AI literacy means asking questions beyond "Does it work?" and "What does it cost?" Consider: What are this vendor's data practices? What are their labor and environmental commitments? What happens to our data and workflows if this vendor changes terms, raises prices, or goes out of business? These questions protect the university's interests and align procurement with institutional values.

Dimensions of Critical AI Literacy

The following questions should inform professional development programming, course design, procurement decisions, and individual practice across the university. They are not exhaustive but represent essential lines of inquiry that are frequently absent from AI literacy discussions.

  • Ownership & Corporate Governance. Core questions: Who owns the AI tool I am using? What are these owners' political stances, investments, and institutional relationships? Who sits on their board? What are their stated values, and do their business practices align? Why it matters for a university: University purchasing decisions and individual tool choices constitute financial and political relationships. Institutional adoption at scale sends market signals and shapes the AI industry's trajectory. Faculty, staff, and students deserve to make informed choices about which entities they support through their usage.
  • Benefit & Harm Distribution. Core questions: Who profits from my use of this AI tool? Who is harmed, directly or indirectly? Are the people being asked to adopt these tools the same people who benefit from them? Why it matters for a university: Productivity gains may accrue to institutions while labor displacement, surveillance, and data extraction disproportionately affect workers, students, and marginalized communities. Adoption mandates should consider who bears the costs of compliance and transition.
  • Labor & Supply Chain. Core questions: Who performed the data labeling, content moderation, and human feedback training? Under what conditions and for what compensation? What mineral extraction and manufacturing supports the hardware these systems run on? Why it matters for a university: AI systems depend on human labor that is often invisible, poorly compensated, and psychologically harmful (especially content moderation work). A university committed to ethical sourcing in other procurement domains should extend that commitment to AI tools.
  • Environmental & Climate Impact. Core questions: What are the energy, water, and carbon costs of using this AI tool? How do collective usage patterns at institutional scale compound those impacts? How do I weigh convenience against environmental responsibility? Why it matters for a university: Training large AI models requires enormous energy and water resources. Inference (each individual query) also has measurable environmental cost, and institutional adoption multiplies that cost across thousands of users. Universities with sustainability commitments should account for AI's environmental footprint in procurement and usage guidance.
  • Epistemic Effects. Core questions: How does routine AI use change what counts as knowledge, what counts as expertise, and how intellectual authority is distributed? What skills, habits of mind, or forms of human connection might atrophy with heavy AI dependence? Why it matters for a university: Universities exist to cultivate independent thinking, deep expertise, and original scholarship. AI tools that provide fluent, confident answers can erode the productive struggle through which learning occurs. Over-reliance may flatten disciplinary diversity by channeling outputs toward the patterns most represented in training data.
  • Community & Societal Impact. Core questions: What are the potential positive impacts of my use of this AI tool on myself, my community, and society? What are the potential negative impacts? How do I weigh individual convenience against collective consequences? Why it matters for a university: Decisions about AI adoption are not purely individual; they shape norms, expectations, and power structures across the institution and beyond. A culture of critical reflection helps the university community navigate these collective effects rather than treating AI use as a private consumer choice.

Suggested Implementation

"AI literacy programming should extend beyond functional tool proficiency to include critical engagement with the ownership, labor practices, environmental impacts, and societal implications of AI systems. Units are encouraged to integrate these dimensions into existing courses, orientations, and professional development activities."

College / Department Implementation

No-AI: "Critical AI literacy is taught as a standalone module covering AI's societal, environmental, and political dimensions, independent of tool training."
Balanced-AI: "Functional AI training is paired with structured critical reflection on ownership, labor, environment, and epistemic effects. Assignments and workshops include guided analysis of AI vendors' practices."
AI-Forward: "Critical AI literacy is embedded throughout the curriculum and professional development, including vendor evaluation criteria that incorporate labor, environmental, and governance standards."

Rationale

A university that promotes AI adoption without fostering critical understanding of AI's broader context risks producing graduates who are technically proficient but civically uninformed—capable of using tools without understanding the systems those tools sustain. This is analogous to teaching media production without media criticism, or teaching chemistry without lab safety ethics. Critical AI literacy is not anti-technology; it is the foundation of informed technology use, which is what a research university should model.

Example Scenarios

LAS (Sociology): An instructor assigns students to research the labor practices behind two competing AI chatbots—examining reports on content moderation working conditions, data labeling compensation, and corporate transparency reports. Students then write position papers on whether their findings should influence institutional procurement.

iSchool (Information Sciences): A graduate seminar on data ethics includes a module where students calculate the estimated carbon footprint of their AI usage over a semester using publicly available model efficiency data, then compare it to other energy expenditures and discuss institutional responsibility.
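
A back-of-the-envelope version of the calculation in the iSchool scenario might look like the Python sketch below. Every number here is a placeholder that students would replace with published per-query energy estimates (for example, from Luccioni et al., 2024, listed in the references) and the carbon intensity of their local grid.

# All constants are illustrative placeholders, not measured values.
WH_PER_TEXT_QUERY = 0.3      # assumed watt-hours per chatbot query
WH_PER_IMAGE_GEN = 3.0       # assumed watt-hours per generated image
GRID_KG_CO2_PER_KWH = 0.4    # assumed grid carbon intensity (kg CO2e per kWh)

def semester_footprint(text_queries: int, images_generated: int) -> dict:
    """Estimate one student's semester energy use (kWh) and emissions (kg CO2e)."""
    kwh = (text_queries * WH_PER_TEXT_QUERY + images_generated * WH_PER_IMAGE_GEN) / 1000
    return {"kwh": round(kwh, 2), "kg_co2e": round(kwh * GRID_KG_CO2_PER_KWH, 2)}

# Example: 20 queries a day over a 15-week semester, plus 50 generated images.
print(semester_footprint(text_queries=20 * 7 * 15, images_generated=50))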

GIES (Business): An MBA course on technology strategy requires students to evaluate AI vendors not only on functionality and cost but on a scorecard that includes labor practices, environmental commitments, data governance, political donations, and supply chain transparency.

Education (Curriculum & Instruction): Pre-service teachers are asked to design a lesson plan that teaches middle school students to ask: "Who made this AI? Who does it work well for? Who does it not work well for? What happens to the data I give it?" before using any AI tool in the classroom.

Implications

  • With critical AI literacy: Graduates and employees make informed choices, institutional procurement reflects university values, the campus contributes to a more thoughtful public discourse about AI, and principled non-adoption is recognized as a legitimate stance.
  • Without critical AI literacy: Tool proficiency without context, uncritical adoption that may conflict with institutional values, missed opportunities to model responsible technology engagement, and a workforce that knows how to use AI but not how to evaluate whether it should.
  • Integration opportunity: Critical AI literacy aligns with existing general education goals around critical thinking, ethical reasoning, and civic engagement—it does not require new infrastructure so much as intentional integration into existing frameworks.

This section addresses a frequently overlooked risk: public posting of materials that can be crawled, indexed, or ingested by third-party systems. The steps below reduce risk but do not guarantee exclusion from all scraping.

Operational defaults

  • Prefer access controls: if content is not intended for the public, use authentication (SSO/password) instead of relying on “polite” crawler rules.
  • Minimize sensitive data in public pages: remove rosters, identifiable participant details, internal deliberations, unpublished drafts, and proprietary content.
  • Assume screenshots and copying: even with crawler blocks, humans can download and repost.

Anti-crawler / anti-indexing checklist (web admins)

1) Add a robots meta tag in the HTML <head>:

<meta name="robots" content="noindex,nofollow,noarchive,nosnippet">

2) Add/adjust robots.txt (site root):

User-agent: *
Disallow: /private/
Disallow: /drafts/
Disallow: /internal/

3) Consider the stronger header-level option (server config):

X-Robots-Tag: noindex, nofollow, noarchive, nosnippet

Note: robots.txt is advisory. Access controls and contractual/vendor controls are stronger than crawler directives.
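
Web administrators who want to spot-check whether these directives are actually being served can use a small script along the lines of the Python sketch below. It relies only on the standard library; the URLs are placeholders, and a passing check still does not guarantee exclusion from all scraping.

from html.parser import HTMLParser
from urllib.request import Request, urlopen

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "meta" and attr_map.get("name", "").lower() == "robots":
            self.directives.append(attr_map.get("content", "").lower())

def check_noindex(url: str) -> dict:
    """Report whether a page declares noindex via the X-Robots-Tag header and/or a robots meta tag."""
    request = Request(url, headers={"User-Agent": "noindex-spot-check"})
    with urlopen(request, timeout=10) as response:
        header = (response.headers.get("X-Robots-Tag") or "").lower()
        body = response.read().decode("utf-8", errors="replace")
    parser = RobotsMetaParser()
    parser.feed(body)
    return {
        "url": url,
        "header_noindex": "noindex" in header,
        "meta_noindex": any("noindex" in d for d in parser.directives),
    }

if __name__ == "__main__":
    # Placeholder URLs; substitute the pages you actually need to verify.
    for page in ("https://example.illinois.edu/drafts/", "https://example.illinois.edu/internal/"):
        print(check_noindex(page))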

Generative AI is increasingly used for images, video, music, design, and voice. Guidance should focus on intellectual property, consent/likeness, disclosure, and reputational risk.

AI-generated text and code are addressed in Sections 1 and 2. This section covers additional creative modalities with distinct IP, consent, and disclosure considerations.

Operational defaults

  • Disclosure: when AI materially contributes to a creative work distributed as part of coursework, research dissemination, or official comms, disclose tool + role in workflow (Tier 1/2 framing).
  • Consent & likeness: avoid using identifiable people’s faces/voices without permission (especially students/minors). Be cautious with voice cloning and deepfakes.
  • Copyright/licensing: understand platform licenses and terms; avoid implying exclusive rights if the tool’s terms conflict. For high-stakes use, consult campus legal/Library guidance.
  • Brand/official channels: apply stricter review for content that appears “official” or uses university marks.

Examples of “substantive” creative use (Tier 2)

  • AI-generated voiceover used to represent a real person.
  • AI-generated images/video used in official recruiting, fundraising, or research claims.
  • AI-generated music used commercially or as part of a funded deliverable.

This guidance intersects with existing policy documents. Rather than rewriting them here, the goal is to identify owners and recommend targeted updates.

Related Policies and Documents Requiring AI-Related Updates
  • Academic Integrity / FAIR (Faculty Academic Integrity Reporting) workflows. Likely owner: Academic Affairs / Student Conduct. Why GenAI implicates it: definitions of misconduct; reporting pathways; evidence standards. Typical update needed: clarify acceptable uses, disclosure norms, and limits of detector evidence.
  • Graduate College Handbook (thesis/dissertation). Likely owner: Graduate College. Why GenAI implicates it: originality, authorship, disclosure expectations, committee review. Typical update needed: add transparent AI-use language and reproducibility norms.
  • Faculty bylaws / unit evaluation guidance. Likely owner: Units / Senate / Provost. Why GenAI implicates it: autonomy, expectations, and evaluation of AI-mediated work. Typical update needed: clarify that unit stances do not automatically require AI use in classes.
  • Acceptable Use / IT security & data classification. Likely owner: IT / Security / Compliance. Why GenAI implicates it: uploading sensitive/confidential data to third parties; retention. Typical update needed: concrete "do not upload" rules plus approved-tool pathways.
  • Procurement / vendor risk management. Likely owner: Procurement / Legal / IT. Why GenAI implicates it: training-on-your-data, retention, and model improvement clauses. Typical update needed: standard contract language and review triggers for GenAI tools.
  • Records retention / FOIA / discovery. Likely owner: Records office / Legal. Why GenAI implicates it: prompts/outputs may be institutional records in some contexts. Typical update needed: retention guidance and "what to keep" norms for staff.
  • Library licensing / research use of texts. Likely owner: Library. Why GenAI implicates it: uploading full-text PDFs may violate license terms. Typical update needed: clear do/don't examples and sanctioned workflows.
  • Procurement ethics & vendor evaluation. Likely owner: Procurement / Legal / Senate IT. Why GenAI implicates it: AI purchases create financial and political relationships; vendor labor practices, environmental commitments, and political activities may conflict with university values. Typical update needed: expand vendor evaluation criteria beyond functionality and cost to include labor, environmental, governance, and supply chain standards.
  • Vendor dependency & exit planning. Likely owner: IT / Procurement / Academic units. Why GenAI implicates it: workflows built around specific AI vendors create lock-in risk; pricing, terms, or service availability can change abruptly. Typical update needed: require exit strategies, data portability provisions, and contingency plans for critical AI-dependent workflows before procurement approval.
  • Student mental health & AI relationships. Likely owner: Student Affairs / Counseling / Academic Affairs. Why GenAI implicates it: students may form emotional dependencies on AI chatbots or use AI as a substitute for human connection and professional support. Typical update needed: guidance for counseling services on AI-related dependency; awareness programming for residence life, advising, and student-facing staff.
  • Synthetic media & institutional trust. Likely owner: Public Affairs / Legal / IT Security. Why GenAI implicates it: deepfakes of faculty, administrators, or official communications; AI-generated content that appears institutional without authorization. Typical update needed: authentication standards for official communications; rapid response protocols for synthetic media impersonation; guidance on use of university marks in AI-generated content.
  • International student & researcher access. Likely owner: International Student Services / Graduate College / Research. Why GenAI implicates it: some AI tools are geographically restricted, function differently by region, or create compliance conflicts with students' home-country AI regulations. Typical update needed: equity review for AI-dependent assignments and research workflows; alternative pathways for students with restricted access; guidance on cross-border data considerations.

Senate IT “next actions”

  • Submit the FAIR (Faculty Academic Integrity Reporting) data questions as a formal request (trend, substitution vs addition, GenAI-specific signals).
  • Identify policy owners above and agree on a lightweight update plan (what needs revision before fall term).
  • Adopt tiered disclosure as the recommended default (avoid detector-based enforcement framing).
  • Initiate a review of AI procurement criteria to incorporate vendor ethics, environmental impact, labor practices, and exit planning standards.
  • Coordinate with Student Affairs and Counseling to develop awareness resources on AI-related emotional dependency and mental health considerations.
  • Begin tracking agentic AI deployments on campus and establish an initial framework for accountability and human-in-the-loop requirements.
  • Develop critical AI literacy programming in partnership with CITL and the Library that extends beyond functional tool proficiency to address the dimensions outlined in Section 8.5.

The following references support specific claims, frameworks, and recommendations throughout this guidance document. They are organized by topic area and keyed to the sections in which the relevant claims appear. This is not a comprehensive literature review — it is a targeted evidence base for assertions that reviewers or adopters may wish to verify.

AI Detection Accuracy & Bias (Sections 1–2)

Claims supported: AI detection tools produce unacceptable false positive rates; these tools disproportionately flag non-native English speakers; detection-based enforcement is unreliable for academic integrity proceedings.

  1. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://arxiv.org/abs/2304.02819
    Finding: GPT detectors consistently misclassify non-native English writing as AI-generated due to reliance on perplexity metrics that correlate with linguistic sophistication. Simple prompting strategies bypass the detectors entirely.
  2. Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), Article 26. https://doi.org/10.1007/s40979-023-00146-z
    Finding: Most detection tools scored below 80% accuracy on diverse text samples. Tools had a main bias towards classifying output as human-written. Content obfuscation techniques significantly worsened performance.
  3. Pratama, A. R. (2025). The accuracy-bias trade-offs in AI text detection tools and their impact on fairness in scholarly publication. PeerJ Computer Science, 11, Article e2953. https://doi.org/10.7717/peerj-cs.2953
  4. Giray, L. (2024). The problem with false positives: AI detection unfairly accuses scholars of AI plagiarism. The Serials Librarian, 85(5–6). https://doi.org/10.1080/0361526X.2024.2433256
    Finding: False positives disproportionately affect non-native English speakers and scholars with distinctive writing styles, causing significant harm to academic careers.
  5. Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2024). Can AI-generated text be reliably detected? https://arxiv.org/abs/2303.11156

Accessibility Disparities — Speech Recognition & Computer Vision (Section 6)

Claims supported: AI tools perform differently across demographic groups; speech recognition accuracy varies by accent and race; computer vision accuracy varies by skin tone; these disparities have practical consequences in educational and health contexts.

  1. Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C., Rickford, J. R., Jurafsky, D., & Goel, S. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689. https://doi.org/10.1073/pnas.1915768117
    Finding: ASR systems from Amazon, Apple, Google, IBM, and Microsoft showed an average word error rate of 0.35 for Black speakers compared with 0.19 for white speakers. Disparities were driven primarily by acoustic model limitations, not language differences.
  2. Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT* 2018), Proceedings of Machine Learning Research, 81, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
    Finding: Darker-skinned females experienced misclassification rates up to 34.7%, while lighter-skinned males had error rates as low as 0.8%. Benchmarking datasets were overwhelmingly composed of lighter-skinned subjects.
  3. Kulkarni, A., Kulkarni, A., Trancoso, I., & Couceiro, M. (2024). Unveiling biases while embracing sustainability: Assessing the dual challenges of automatic speech recognition systems. In Interspeech 2024.
  4. Graham, C. & Roll, N. (2024). Evaluating OpenAI's Whisper ASR: Performance analysis across diverse accents and speaker traits. JASA Express Letters, 4. https://doi.org/10.1121/10.0024876

Environmental & Climate Impact of AI (Section 8.5)

Claims supported: AI training and inference consume significant energy and water; institutional-scale adoption multiplies these costs; universities with sustainability commitments should account for AI's environmental footprint.

  1. de Vries-Gao, A. (2025). The carbon and water footprints of data centers and what this could mean for artificial intelligence. Patterns (Cell Press), 6(12). https://www.cell.com/patterns/fulltext/S2666-3899(25)00278-8
    Finding: AI systems alone may use between 312.5 and 764.6 billion liters of water annually — comparable to all bottled water consumed worldwide. AI-related carbon emissions estimated at 32.6–79.7 million metric tons of CO₂ equivalents.
  2. International Energy Agency. (2025). Energy and AI. IEA, Paris. https://www.iea.org/reports/energy-and-ai
    Finding: Data centers consumed an estimated 415 TWh of electricity in 2024 (~1.5% of global demand). AI-specific servers used an estimated 53–76 TWh in 2024, projected to reach 165–326 TWh by 2028. IEA projects data center energy use could more than double to 945 TWh by 2030.
  3. Luccioni, A. S., Jernite, Y., & Strubell, E. (2024). Power hungry processing: Watts driving the cost of AI deployment? Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). https://doi.org/10.1145/3630106.3658542
    Finding: Image generation is the most energy-intensive common AI task. Generating 1,000 images produces CO₂ emissions comparable to driving approximately 4.1 miles. Most of an AI model's lifetime carbon footprint comes from inference, not training, because popular models are deployed billions of times.
  4. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜 Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 610–623. https://doi.org/10.1145/3442188.3445922

AI Labor Conditions & Supply Chain Ethics (Section 8.5)

Claims supported: AI systems depend on human labor that is often invisible, poorly compensated, and psychologically harmful; content moderators and data labelers experience occupational trauma; universities committed to ethical sourcing should extend that commitment to AI procurement.

  1. Perrigo, B. (2023, January 18). OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/
    Finding: Kenyan data labelers working for OpenAI through Sama were paid $1.32–$2 per hour to label tens of thousands of text passages depicting violence, hate speech, and sexual abuse. Workers reported developing PTSD, anxiety, and depression.
  2. Equidem. (2025). Scroll. Click. Suffer: The Hidden Human Cost of Content Moderation and Data Labelling. Equidem/Institute for Human Rights and Business. https://www.ihrb.org
    Finding: Based on interviews with 113 content moderators and data labelers across Colombia, Ghana, Kenya, and the Philippines. Documented over 60 cases of serious mental health harm including PTSD, depression, insomnia, anxiety, and suicidal ideation. Workers face unstable employment, lack fixed salaries, and are routinely forced into unpaid overtime.
  3. Gray, M. L. & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan/Houghton Mifflin Harcourt.
  4. Posada, J. (2022). The coloniality of data work: Power and inequality in outsourced data production for machine learning. Doctoral dissertation, University of Toronto. https://utoronto.scholaris.ca
  5. Brookings Institution. (2024, 2025). Moving toward truly responsible AI development in the global AI market; Reimagining the future of data and AI labor in the Global South. https://www.brookings.edu

Qualitative Research Epistemology & AI-Assisted Analysis (Section 4)

Claims supported: AI-assisted qualitative coding risks flattening interpretive depth; AI may reintroduce positivist assumptions into constructivist inquiry; reflexive journaling and human-centered analysis remain essential; emerging consensus supports AI as complement, not substitute.

  1. Messner, R., Smith, S., & Richards, C. (2025). Artificial intelligence and qualitative data analysis: Epistemological incongruences and the future of the human experience. International Journal of Qualitative Methods, 24. https://doi.org/10.1177/16094069251371481
    Finding: AI-assisted coding produced superficial theme lists resembling summaries of key points rather than the synthesized, theoretically grounded conceptual analysis characteristic of human qualitative work. Claims of enhanced robustness through AI layer positivist assumptions of "verifiable, objective truths" onto constructivist-interpretive inquiry, creating fundamental epistemological incongruences.
  2. Braun, V. & Clarke, V. (2022). Thematic analysis: A practical guide. SAGE.
  3. Prescott, M. R., et al. (2024). Comparing the efficacy and efficiency of human and generative AI: Qualitative thematic analyses. JMIR AI, 3, e54482. https://doi.org/10.2196/54482
    Finding: Human coders were better than GenAI at identifying nuanced and interpretative themes. Relatively lower reliability between human and AI coding suggests hybrid approaches are necessary.
  4. Jowsey, T., Braun, V., Clarke, V., Lupton, D., & Fine, M. (2025). We reject the use of generative artificial intelligence for reflexive qualitative research. Qualitative Inquiry. https://doi.org/10.1177/10778004251401851
    Finding: Position statement endorsed by 419 experienced qualitative researchers from 32 countries. Argues that reflexive qualitative analysis is an inherently human meaning-making practice, that GenAI's algorithmic pattern-matching cannot substitute for interpretive reflexivity, and that uncritical adoption risks reinforcing dominant paradigms while silencing marginalized voices. Also raises social and environmental justice objections to GenAI use.
  5. Levitt, H. M. (2026). A consideration of the ethics and methodological integrity of generative artificial intelligence in qualitative research: Guidelines for Qualitative Psychology [Editorial]. Qualitative Psychology, 13(1), 1–5. https://doi.org/10.1037/qup0000353
    Note: Editorial guidelines from the editor-in-chief of APA's Qualitative Psychology, addressing ethical and methodological integrity standards for GenAI use in qualitative research.

AI Hallucination & Citation Fabrication in Research (Section 4)

These studies document the prevalence and consequences of AI-generated fabricated citations and factual errors in scholarly outputs — a risk that spans all disciplines and methodologies.

  1. Walters, W. H. & Wilder, E. I. (2023). Fabrication and errors in the bibliographic citations generated by ChatGPT. Scientific Reports, 13, 14045. https://doi.org/10.1038/s41598-023-41032-5
    Finding: 55% of GPT-3.5 citations and 18% of GPT-4 citations were entirely fabricated across 42 multidisciplinary topics. Among real (non-fabricated) citations, 43% (GPT-3.5) and 24% (GPT-4) contained substantive errors such as wrong authors, titles, or publication years.
  2. Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., Raynier, J.-L., Clowez, G., Boileau, P., & Ruetsch-Chelli, C. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: Comparative analysis. Journal of Medical Internet Research, 26, e53164. https://doi.org/10.2196/53164
    Finding: When tasked with replicating human-conducted systematic reviews, ChatGPT and Bard exhibited high hallucination rates — generating references with plausible bibliographic details that were entirely fictitious. Hallucination rates varied by model and prompt specificity, underscoring the need for manual verification of all AI-generated citations.

Higher Education AI Policy Frameworks (General)

These institutional and international frameworks informed the structure, scope, and recommendations throughout this guidance document.

  1. Robert, J. (2024). 2024 EDUCAUSE AI Landscape Study. EDUCAUSE. https://library.educause.edu
    Finding: Only 23% of institutions had AI-related acceptable use policies in place. Nearly half of respondents disagreed that their institution had appropriate policies for ethical AI decision-making.
  2. Robert, J. & McCormack, M. (2024). 2024 EDUCAUSE Action Plan: AI Policies and Guidelines. EDUCAUSE. https://www.educause.edu
  3. Robert, J. & McCormack, M. (2025). 2025 EDUCAUSE AI Landscape Study: Into the Digital AI Divide. EDUCAUSE. https://library.educause.edu
  4. UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. Adopted by the General Conference at its 41st session, November 2021. https://www.unesco.org
  5. UNESCO/Miao, F. & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO Publishing. https://unesdoc.unesco.org
  6. American Association of University Professors. (2025). Artificial Intelligence and Academic Professions. Report of the ad hoc Committee on Artificial Intelligence and Academic Professions. https://www.aaup.org
    Finding: Survey of 500 faculty across ~200 campuses found that 71% reported AI decisions are led overwhelmingly by administration without faculty input; 76% reported declining job enthusiasm; 62% said AI worsened teaching environments. Recommends shared governance, opt-out protections, impact assessments before deployment, and prohibition of AI in high-stakes employment decisions without independent verification.

Federal Funding Agency AI Guidance (Sections 4, 9)

These federal notices establish binding restrictions on AI use in grant-funded research processes. Faculty and sponsored programs offices should incorporate these requirements into institutional compliance workflows.

  1. National Institutes of Health. (2023). The use of generative artificial intelligence technologies is prohibited for the NIH peer review process (NOT-OD-23-149). https://grants.nih.gov
    Policy: Prohibits NIH peer reviewers from using LLMs or other generative AI technologies to analyze or formulate review critiques. Uploading application content to AI tools violates confidentiality requirements. Accessibility exceptions may be granted with prior DFO approval.
  2. National Institutes of Health. (2025). Supporting fairness and originality in NIH research applications (NOT-OD-25-132). https://grants.nih.gov
    Policy: Applications substantially developed by AI will not be considered original work. If AI-generation is detected post-award, NIH may refer the matter to the Office of Research Integrity for misconduct investigation and take enforcement actions including cost disallowance, suspension, or termination. Also limits PIs to six new/renewal/resubmission applications per calendar year. Effective September 25, 2025.

Environmental Justice & Sustainability (Section 8.5)

  1. Federation of American Scientists. (2025). Measuring AI's energy/environmental footprint to assess impacts. https://fas.org
    Note: Proposes standardized metrics frameworks including energy per AI task, lifecycle carbon accounting, and water usage effectiveness measures.
  2. Bowdoin College Hastings AI Initiative. (2025). The hidden footprint of AI: Climate, water, and justice costs. https://www.bowdoin.edu
    Finding: GenAI models, while accounting for less than a third of corporate AI use cases, drive 99.9% of total AI energy consumption. Data centers in drought-prone areas shift environmental burdens onto communities already facing water insecurity.

Note: All URLs were verified as of February 2026. Peer-reviewed publications are preferred throughout; institutional reports and primary journalism are included where peer-reviewed sources are not yet available on emerging topics (e.g., AI labor conditions, agentic AI governance). The committee encourages reviewers to suggest additional or alternative sources. Where possible, archived versions (e.g., via archive.org) may be used alongside live URLs to support long-term reference permanence.