Institutional Standards for AI Use
The University shall support the use of, training with, research on, and experimentation with AI. The University also recognizes that University faculty determine appropriate AI use within each of their courses or teaching contexts.
Students may encounter AI use in many different scenarios and are encouraged to use and experiment with AI through classwork, projects, and research opportunities. The University shall ensure that students have the opportunity to use and learn with AI throughout their academic careers.
To support these goals, the University has adopted this policy, ensuring that students can learn about and with AI in a structured environment with support from University personnel.
Colleges, departments, or programs may adopt policies that are more specific or impose more restrictive guidelines on appropriate AI use within their area or unit. Unit policies or guidelines must adhere to the established policy hierarchy.
Definitions for this policy:
AI – The capability of a computer system to perform tasks that typically require human intelligence, including learning, reasoning, language processing, and pattern recognition.
AI Generated – Content that is created entirely by an AI tool or system without direct human authorship; this includes text, images, and audio. See Guidance documents for more explicit definitions and examples.
AI Assisted – Content that is created by a human with the aid of an AI tool or tools. The human retains authorship and uses the AI for suggestions, edits, or partial content generation (such as autocomplete, suggestion features, and grammar assistance). See Guidance documents for more explicit definitions and examples.
AI Outputs – Content that has been either entirely or partially generated by an AI tool.
AI Tools – Software applications, platforms, web domains, or systems that use AI to perform tasks.
Ethical Use of AI
All use of AI must conform to the following ethical framework.
The inclusion of AI generated content shall be disclosed in all cases. Disclosure of AI assistance is encouraged, but not required, to promote transparency.
AI outputs require critical review before use or inclusion in a finished work, in order to assess potential bias, verify factual accuracy, and confirm sources.
The human author of AI assisted work retains authorship and intellectual property rights. Such work shall be treated in a manner consistent with all existing intellectual property policies, laws, regulations, and standards.
Where no specific guidelines are offered, the expectation is that individuals will follow industry and/or disciplinary standards for AI use and attribution.
Data Classification and Impact on Use
Minnesota State has established three data classification levels in Operating Instruction 5.23.2.1 Data Security Classification - highly restricted, restricted, and low.
Providing or using highly restricted or restricted data in any third-party application or service, including generative AI services, requires a contractual agreement with the third party that ensures adherence to data security and data sharing protocols.
Examples of highly restricted, restricted, and low data elements include:
- Highly Restricted – Social Security numbers, personal health/medical information, banking or credit card information, etc.
- Restricted – Student grades, transcripts, class schedules, employee personal contact information, individual demographics including age, race, ethnicity, gender, etc.
- Low – Data that by law is available to the public upon request.
Note: Consult ITS and/or the Institutional Review Board for clarification on the use of deidentified Restricted data.
No Misuse of AI
Misuse of AI means using AI in a way that circumvents or violates any University or Minnesota State policies, state or federal laws, regulations, or professional or academic standards. Misuse includes actions that are intentional, reckless, or in general disregard of established University and other applicable guidelines and AI use frameworks.
Misuse of AI will be addressed under the policy applicable to the violation (see Related Policies).
Examples of misuse of AI include, but are not limited to:
- Using AI tools to stalk, harass or otherwise cause harm to an individual.
- Using AI tools to generate fictional representations with the intent to harm, defraud or deceive.
- Using AI tools to engage in any criminal or illegal act.
- Failing to disclose use of AI when required.
- Providing non-public data to an AI tool without the required data protections enabled.
- Training an AI model or agent in a way that will intentionally cause harm, including failing to address bias or other intentional shortcomings in training data.
- Using AI generated output without proper attribution when required.
- Attempting to claim, or succeeding in claiming, wholly or primarily AI generated content as one's own unique scholarly work.
Regular Review
In recognition that AI is a fast-moving technology with rapidly evolving laws and standards, this policy shall be reviewed at the end of both the Spring and Fall semesters, for implementation in the next term. This review will be completed by the AI Working Group or its equivalent and submitted to the University Policy Committee to ensure that the policy reflects current technology, law, and best practices.
AI Resources
The University shall provide resources and training opportunities to the university community to assist in learning and applying the ethical use of AI (see Supporting Documents).
Resources shall include beginner-friendly and advanced documents, frameworks, and guides. These resources shall be reviewed and updated periodically to reflect emerging best practices and new technology.
Students, faculty, and staff are encouraged to provide feedback, recommend additional resources, or make other suggestions to support continuous improvement and equitable access.
Use of AI within a Course
The course instructor is responsible for determining whether, and within what parameters, AI use is allowed in each course.
Expectations for the use of AI shall be detailed in a syllabus statement, or otherwise published or distributed to students each semester, and shall include how the faculty member will respond to AI use outside of the established parameters.
The use or misuse of AI does not, by itself, require that an assignment be addressed through the Academic Integrity process. Work that fails to meet assignment requirements may, as always, be graded according to the instructor’s academic judgment.
Use of AI Detection Tools
Consistent with intellectual property rights and expectations, the use of AI detection tools not authorized by the University or Minnesota State is prohibited.
The use of AI detection software as the sole or primary basis for alleging an act of academic dishonesty is prohibited. Such tools have been demonstrated to produce biased and unreliable results[1].
An allegation of academic misconduct shall rely on documented evidence. The Academic Integrity Policy applies a preponderance of the evidence standard, and the evidence presented must be consistent with that standard (including but not limited to drafts, revision history, assignment inconsistencies, student communication, or other available evidence that supports the allegation). Allegations without such evidence shall not proceed.
Use of AI within Research
Student Research:
Student research conducted for a course shall be governed by the course policies in effect for that course.
Student research that is not otherwise governed by an applicable course policy must adhere to the Ethical Use of AI guidelines of this policy, in addition to applicable laws, policies, regulations, and discipline-, departmental-, or school-specific standards or restrictions.
All Other Research Uses:
All faculty and staff conducting research are expected to adhere to the requirements outlined in the Ethical Use of AI section above; beyond that requirement, this policy does not apply to faculty/staff research.
Researchers who use generative AI must clearly disclose its use in the methods, acknowledgements, or other relevant sections of their research and scholarly work.
Researchers must follow all applicable policies set by journals, funding agencies, and professional societies when disseminating their work.
Human and Animal Subject Considerations:
- Any research involving human subjects shall follow policy and review procedures of the university Institutional Review Board (IRB); https://www.stcloudstate.edu/irb/.
- Any research involving animal subjects shall follow policy and review procedures of the university Institutional Animal Care and Use Committee (IACUC); https://www.stcloudstate.edu/iacuc/.
[1] https://teachingsupport.umn.edu/what-faculty-should-know-about-genai-detectors