Policies
AI and Emerging Technology Literacy
1. Purpose
Prepare graduates, faculty, and staff to excel in an AI-driven future by integrating responsible, discipline-specific applications of artificial intelligence and emerging technologies into every program and operational area. The policy fosters a culture of professional curiosity, ethical experimentation, and continuous learning, encouraging all members of the university community to adopt, test, and refine emerging tools in ways that advance teaching, learning, research, and service.
2. Scope
Applies to all degree programs at the undergraduate and graduate levels. Requirements are fulfilled at the program level. Programs in any college may meet requirements through generative AI, machine learning environments, statistical software, code assistants, computational design tools, domain-specific automation, or comparable technology-of-field instruments, selected for disciplinary relevance and ethical fitness. The policy also applies to student-facing university staff, faculty, and students.
3. Definitions
- AI and emerging technologies: Generative systems, predictive models, statistical and computational tools, automation platforms, and code assistants used to augment inquiry, design, analysis, or production.
- AI-enabled activity: Any learning task that permits or encourages tool use for ideation, analysis, drafting, coding, visualization, simulation, critique, or revision.
- Unaided mastery check: Any assessment that verifies individual capability without AI or other prohibited tools, such as proctored exams, oral defenses, in-class performances, live labs, or supervised critiques.
- AI Awareness Note: Concise disclosure by a student that lists tool name, purpose of use, prompt or configuration summary, and verification steps.
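Example (illustrative): an AI Awareness Note for a written assignment might read: "Tool: a general-purpose generative chatbot. Purpose: brainstorming counterarguments to my thesis. Prompt summary: asked for three objections and supporting sources. Verification: confirmed each source in the library database and discarded one fabricated citation." Instructors and programs may prescribe their own format.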
4. Policy Statement
a) Student learning: Every graduate will encounter structured opportunities to learn responsible, discipline-relevant use of AI or technology-of-field and to practice verification, ethics, and workflow integration.
b) Faculty learning: Every faculty member will pursue baseline familiarity with AI impacts and technology-of-field developments relevant to one's discipline in order to advise students on current practice and constraints (see Section 9).
c) Assessment authority: Faculty retain full authority to determine where AI is permitted, limited, or excluded within their courses and to design verification processes for authorship and mastery. AI literacy is strongly encouraged across all courses but required only at the program level, ensuring exposure without mandating uniform integration.
d) Data protection: Use of tools must comply with institutional privacy, security, and intellectual property policies; non-public data requires approved environments.
e) Staff learning: All university staff are encouraged to explore and adopt AI and emerging technologies responsibly in ways that improve operational efficiency, communication, and service to students, consistent with data security and ethical guidelines.
5. Program-Level Requirements
Each degree program will file and maintain an AI and Technology Map, updated annually during the standard assessment cycle alongside the program's annual assessment reports. Updates are guided by program leadership in consultation with the AI Strategy and Governance Committee. The Map should demonstrate the following distribution:
- Foundational touchpoint: At least one general education, gateway, or equivalent orientation course, uniform across and required of all students, introduces prompting or configuration, verification strategies, ethical use, and limitations.
- Two discipline-specific touchpoints: The first must be a core course common to all majors, taken in the first or sophomore year, ensuring shared development of essential AI competence beyond the foundational touchpoint; the second must be a capstone or equivalent junior- or senior-year experience that enables students to apply AI tools and concepts authentically within their field. AI literacy remains strongly encouraged across the rest of the curriculum.
- Industry alignment statement: Within the assessment narrative, programs describe the tools, workflows, or standards common in relevant sectors and explain curricular linkage.
- Assessment plan: Integrated into regular program assessment reporting; no separate documentation required. Programs document at least one AI-enabled activity in their assessment reports, evaluating its impact on learning and workforce readiness; results are reviewed at the program level with oversight and synthesis by each college's assessment unit in collaboration with the AI Strategy and Governance Committee.
- Graduate Attribute alignment: Programs show in their learning outcomes assessment how program outcomes map to the institution-wide AI and emerging technology literacy attribute.
6. Classroom Practice and Assessment Guidance
a) Instructor rule-setting: Aside from the required discipline-specific AI touchpoints, courses may adopt No Use, Limited Use, or Freely Permitted models. Instructors must state explicitly in the syllabus when and how AI tools are permitted, including assignment-by-assignment expectations, so that students understand what is expected in their work. Clear examples of permitted uses (e.g., idea generation, coding assistance) and prohibited uses (e.g., replacing unaided mastery tasks) should be included to prevent ambiguity. Faculty are encouraged to experiment with new AI and emerging technologies as they become available; responsible exploration, even when imperfect, is supported as part of the university's commitment to learning through innovation.
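Example (illustrative): a Limited Use syllabus statement might read: "AI tools may be used for idea generation and outlining on all papers, but final prose must be your own; disclose any use in an AI Awareness Note appended to your submission. AI use is prohibited on the midterm and final exam, which serve as unaided mastery checks."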
b) Two assessment modes:
- AI-enabled workflows: Tool use for brainstorming, outlining, drafting, debugging, analysis, simulation, critique, or revision, accompanied by verification and reflection.
- Unaided mastery checks: Proctored or supervised demonstrations that verify individual capability without tools prohibited for that assignment.
c) Verification options: Oral defenses, version histories, computational notebooks or logs, keystroke or commit traces, build artifacts, lab notebooks, and process portfolios may be used to verify authorship and learning outcomes.
d) Documentation requirement: When AI is allowed, instructors may require students to include an Appendix with:
- The prompts or instructions submitted to the AI tool;
- The AI-generated outputs received;
- A written evaluation of the AI's accuracy, limitations, and potential biases.
Example: A student using AI to generate an organizational chart should assess whether the system overlooked HR compliance, generalized role responsibilities, or failed to account for cultural and legal contexts.
e) Ethics and bias awareness: Faculty should teach the responsible and ethical use of AI as part of academic integrity. Students are expected to reflect critically on how AI tools may reinforce biases, omit context, or misrepresent data, incorporating this reflection into their submitted work where appropriate.
f) Equity and access: Because AI tool use may be optional or dependent on paid access, instructors and programs should strive for equitable learning outcomes for all students, providing comparable non-paid alternatives when institutional licenses are unavailable. The University continues to evaluate institution-wide licensing or fee-based models to ensure consistent and fair access across programs.
g) Timeline flexibility: No fixed week-based trigger applies; sequencing of AI activities follows course design, studio cycles, clinical calendars, or lab schedules to maintain pedagogical alignment.
7. Data Protection and Approved Tools
This section is aligned with the existing Use of Generative AI in Employment Policy.
- Public-data training models: Faculty and students may use tools that train on public data, provided that training features are turned off or the system is configured not to store or reuse entered data. Disabling training features resolves the data-retention concern for most widely used AI tools.
- Prohibited uses: Uploading confidential, personal, or protected data into open models is not allowed. This includes information covered by FERPA (student identifiers, grades, advising records), HIPAA (protected health information), or export-controlled data.
- Non-public or regulated data: Must be confined to approved, institutionally procured systems or configurations that guarantee compliance.
- Open systems: May be used for coursework when input data is non-sensitive or synthetic.
- Program responsibility: Programs will identify discipline-specific secure tools and configurations required for coursework, clinics, or labs.
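Example (illustrative): a student analyzing survey responses in an open model should first strip names, email addresses, and other identifiers, or use synthetic data; work involving advising records or protected health information belongs only in an institutionally approved environment.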
8. Ethics and Misconduct
The purpose of this section is to reinforce learning and growth, not punishment. Faculty, staff, and students are strongly encouraged to engage with AI and emerging technologies wherever appropriate.
Unethical use includes:
Students
- Using AI where prohibited by an assignment.
- Not disclosing AI use when an assignment requires disclosure.
Employees
- Introducing non-public or protected data (e.g., FERPA- or HIPAA-covered information, confidential records) into unapproved systems.
- Note: Using AI (closed or open models) to evaluate student work that contains no personally identifying information or sensitive personal student data is permitted and does not constitute a breach of student privacy or academic standards.
- Misrepresenting generated content as independent work when disclosure or verification is required.
9. Faculty Development
Academic Affairs and the Lindenwood Learning Academy, in coordination with colleges and departments, will provide:
- role-appropriate workshops and exemplars for discipline-aligned tools;
- guidance on verification design and authorship checks;
- privacy and security training linked to approved configurations;
- shared resource banks for prompts, configurations, datasets, and rubrics.
Departments will mentor adjunct faculty and share templates to promote consistency without constraining disciplinary judgment. Faculty and staff development will emphasize creative risk-taking, pilot testing, and reflective learning from both successes and failures, reinforcing a growth-oriented culture of innovation in teaching and operations.
10. Governance, Monitoring, and Review
- Ownership: Academic Affairs holds policy ownership with consultation through the AI Strategy and Governance Committee and Faculty Council.
- Monitoring: Monitoring will include annual AI and Emerging Technology Map submissions for each academic program, sampling review of approximately ten percent of syllabi per cycle selected by deans, and periodic analysis of anonymized assessment artifacts for alignment with program plans. These activities will be conducted in coordination with each college's assessment unit and the AI Strategy and Governance Committee. In addition, employees should track participation in AI and emerging technology training, ensuring that professional development efforts across academic and administrative areas align with institutional policy, security standards, and continuous improvement goals.
- Review cadence: Biennial policy review to incorporate technological change, accreditation feedback, and assessment findings.
- Curricular process: New or substantially revised program maps that affect learning outcomes proceed through standard curriculum governance, including, when applicable, University Curriculum Committee approval.
11. Roles and Responsibilities
- Programs: Maintain maps; ensure distribution of touchpoints; collect exemplars; coordinate industry alignment.
- Faculty: Set course rules; design assessments; implement verification; practice horizon scanning within disciplines and share findings with colleagues.
- Students: Follow course rules; document allowed use; verify outputs; protect data; demonstrate unaided mastery when required.
- Academic Affairs and IT: Maintain approved tool lists and secure configurations; support procurement; operate training and knowledge bases.
Appendix
This implementation pathway represents the minimum institutional expectation for systematic adoption. Programs may advance more rapidly or incorporate additional discipline-specific enhancements, provided they meet or exceed the benchmarks established herein for comprehensive and sustainable integration.
Implementation and Phasing (1-Year)
- Phase 1: Pilot + Advisory Rollout (Spring–Fall 2026)
During Spring 2026, each college identifies one program to participate in an initial pilot exploring approaches to documenting AI and emerging technology touchpoints. These pilot programs submit early versions of their AI and Technology Maps as part of the June 15 assessment plan cycle. Programs may indicate where early-level and advanced-level student experiences could be incorporated, without prescribing specific course placement. Pilot results and examples are shared during a Summer Workshop Week to highlight emerging models, lessons learned, and discipline-appropriate practices. Beginning in Fall 2026, all programs are invited to draft AI and Technology Maps on a developmental, exploratory basis, with flexibility to determine appropriate scope, pacing, and depth. Colleges may choose how they monitor progress, share updates, or integrate discussions into existing meeting structures. Faculty training opportunities are made available throughout the year but are not tied to specific timelines or meeting requirements. The goal of this phase is to support experimentation, build capacity, and identify what works, not to establish uniform expectations or mandate specific implementation steps.
- Phase 2: Full Policy Status (AY 2026–2027)
By the start of Fall 2026, all pilot programs will have filed their maps. Compliance shifts from advisory to required by Spring 2027. Sampling reviews of syllabi and assessments are integrated into the assessment cycle rather than added separately. Findings from the pilot and advisory phases inform policy refinement during Summer 2027, resulting in a single year from pilot to full adoption.