St. Cloud State University Policies & Procedures

AI Use at St. Cloud State University

Current Status: Approved

Policy Type: All University

Effective Date: 04/23/2026

Last Updated: 04/23/2026

Applies To: Students, Faculty, Staff, Contractors, Vendors

Responsible University Officer: President

Policy Owner: Provost/Vice President for Academic Affairs, Vice President for Technology Strategy & CIO

Policy Contact: Vice President for University Affairs and Advisor to the President, Vice President for Technology Strategy & CIO

Rationale

St. Cloud State University (SCSU, the University) recognizes the growing and transformative use of artificial intelligence (AI) technologies in higher education. The power of AI to break down barriers and assist students, faculty and staff at all levels and backgrounds is increasingly apparent. However, the potential misuse of AI is a concern for the entire University community. Recognizing the power of AI to assist the University community and the need to have strong guidelines around its use, St. Cloud State University adopts the following policy.

Disclosure: This policy was developed with the assistance of Copilot AI and Claude AI. No AI-generated content was directly incorporated into the policy. AI was used to help pinpoint gaps and generate topics for further research. Any sources cited in AI-generated content were accessed directly for verification.

 

Policy

Institutional Standards for AI Use  

The University shall support the use of, training in, research on, and experimentation with AI. The University also recognizes that faculty determine appropriate AI use within their own courses and teaching contexts.

Students may encounter AI in many different scenarios and are encouraged to use and experiment with AI through classwork, projects, and research opportunities. The University shall ensure that students have the opportunity to use and learn with AI throughout their academic careers.

To support these goals, the University shall ensure students have the opportunity to learn about and with AI in a structured environment with support from university personnel.

Colleges, departments, or programs may adopt policies that are more specific or more restrictive regarding appropriate AI use within their area or unit. Unit policies or guidelines must adhere to the established policy hierarchy.

Definitions for this policy:

AI – The capability of a computer system to perform tasks that typically require human intelligence, including learning, reasoning, language processing, and pattern recognition.

AI Generated – Content created entirely by an AI tool or system without direct human authorship; this includes text, images, and audio. See the Guidance documents for a more detailed definition and examples.

AI Assisted – Content created by a human with the aid of one or more AI tools. The human retains authorship and uses the AI for suggestions, edits, or partial content generation (such as autocomplete, suggestion features, and grammar assistance). See the Guidance documents for a more detailed definition and examples.

AI Outputs – Content that has been either entirely or partially generated by an AI tool.

AI Tools – Software applications, platforms, web domains, or systems that use AI to perform tasks.

 

Ethical Use of AI

All use of AI must conform to the following ethical framework.

The inclusion of AI-generated content shall be disclosed in all cases. Disclosure of AI assistance is encouraged, but not required, to promote transparency.

AI outputs require critical review prior to use or inclusion in a finished work, to assess potential bias, verify factual accuracy, and confirm sources.

The human author of AI-assisted work retains authorship and intellectual property rights. Such work shall be treated in a manner consistent with all existing intellectual property policies, laws, regulations, and standards.

Where no specific guidelines are offered, the expectation is that individuals will follow industry and/or disciplinary standards for AI use and attribution.

Data Classification and Impact on Use

Minnesota State has established three data classification levels in Operating Instruction 5.23.2.1 Data Security Classification - highly restricted, restricted, and low.

Providing or using highly restricted or restricted data in any third-party application or service, including generative AI services, requires a contractual agreement with the third party that ensures adherence to data security and data sharing protocols.

Examples of highly restricted, restricted, and low data elements include:

  • Highly Restricted – Social Security numbers, personal health/medical information, banking or credit card information, etc.
  • Restricted – Student grades, transcripts, class schedule, employee personal contact information, individual demographics including age, race, ethnicity, gender, etc.
  • Low – Data that by law is available to the public upon request.

Note: Consult ITS and/or the Institutional Review Board for clarification on the use of de-identified restricted data.

No Misuse of AI

Misuse of AI means using AI in a way that circumvents or violates any University or Minnesota State policies, state or federal laws, regulations, or professional or academic standards. Misuse includes actions that are intentional, reckless, or in general disregard of established University and other applicable guidelines and AI use frameworks.

Misuse of AI will be handled in a manner consistent with the violation as it relates to the applicable policy (See Related policies).

Examples of misuse of AI include, but are not limited to:

  • Using AI tools to stalk, harass or otherwise cause harm to an individual.
  • Using AI tools to generate fictional representations with the intent to harm, defraud or deceive.
  • Using AI tools to engage in any criminal or illegal act.
  • Failing to disclose use of AI when required.
  • Providing non-public data to an AI tool without the required data protections enabled.
  • Training an AI model or agent in a way that will intentionally cause harm, including failing to address bias or other intentional shortcomings in training data.
  • Using AI generated output without proper attribution when required.
  • Claiming, or attempting to claim, wholly or primarily AI-generated content as one's own unique scholarly work.

Regular Review

In recognition that this is a fast-moving technology with rapidly evolving laws and standards, this policy shall be reviewed at the end of both Spring and Fall semesters, for implementation in the next term. This review will be completed by the AI Working Group or equivalent and submitted to the University Policy Committee to ensure that the policy reflects current technology, law and best practices.

AI Resources

The University shall provide resources and training opportunities to the university community to assist in learning and applying the ethical use of AI (See Supporting Documents).

Resources shall include beginner-friendly and advanced documents, frameworks and guides. The resources shall be reviewed and updated periodically to reflect emerging best practices and new technology.

Students, faculty and staff are encouraged to provide feedback, recommend additional resources or make other suggestions to support continuous improvement and equitable access.

Use of AI within a Course

The course instructor is responsible for determining whether, and within what parameters, AI use is allowed in each course.

Expectations for the use of AI shall be detailed in a syllabus statement or otherwise published or distributed to students each semester, including how the faculty member will respond to AI use outside of the established parameters.

The use or misuse of AI does not, by itself, require that an assignment be addressed through the Academic Integrity process. Work that fails to meet assignment requirements may, as always, be graded according to the instructor’s academic judgment.

Use of AI Detection Tools

Consistent with intellectual property rights and expectations, the use of AI detection tools not authorized by the University or Minnesota State is prohibited.

The use of AI detection software as the sole or primary basis for alleging an act of academic dishonesty is prohibited. Such tools have been demonstrated to produce biased and unreliable results[1].

An allegation of academic misconduct shall rely on documented evidence. The Academic Integrity Policy calls for a preponderance-of-evidence standard, and the presented evidence must be consistent with that standard (including but not limited to drafts, revision history, assignment inconsistencies, student communication, or other available evidence that supports the allegation). Allegations without such evidence shall not proceed.

Use of AI within Research

Student Research:

Student research conducted within a course shall be governed by the course policies in effect for that course.

Student research that is not otherwise governed by an applicable course policy must adhere to the AI Ethical Use guidelines of this policy, in addition to applicable laws, policies, regulations and discipline, departmental or school specific standards or restrictions.

All Other Research Uses:

All faculty and staff conducting research are expected to adhere to the requirements outlined in the Ethical Use of AI section above; beyond that requirement, this policy does not apply to faculty/staff research.

Researchers who use Generative AI must clearly disclose its use in their methods, acknowledgements, or any other relevant sections of their research and scholarly work.

Researchers must follow all applicable policies set by journals, funding agencies, and professional societies when disseminating their work.

Human and Animal Subject Considerations:
  • Any research involving human subjects shall follow policy and review procedures of the university Institutional Review Board (IRB); https://www.stcloudstate.edu/irb/.  
  • Any research involving animal subjects shall follow policy and review procedures of the university Institutional Animal Care and Use Committee (IACUC); https://www.stcloudstate.edu/iacuc/.  

 

[1] https://teachingsupport.umn.edu/what-faculty-should-know-about-genai-detectors

 

Procedure

Procedure for Alleged Misuse

Within a Course

Faculty members may implement a grade reduction on an individual assignment on the basis of AI misuse instead of using the Academic Integrity process, provided that this grade reduction does not reduce the overall course grade by more than 10 percentage points.

If the faculty member feels a further reduction of grade or other formal sanction is appropriate, the alleged misuse must be addressed through the University’s Academic Integrity process.

Faculty who have identified potential AI misuse may engage students in informal conversations or educational interventions if no formal sanction is intended or in situations where the suspected misuse does not constitute an academic integrity violation.

To refer potential misuse for resolution under the procedures of the Academic Integrity Policy, a faculty member must include the AI use statement that was communicated to the student.

Allegations must be documented with evidence consistent with the Academic Integrity Policy.

Outside of Coursework

The use of AI does not create new violations not otherwise covered by existing policy nor does it serve as a defense for conduct that otherwise would violate policy, standard or law. AI is a tool and the use of any tool does not fundamentally change the existence of a violation or the pathways through which a violation is addressed. Existing standards of evidence, procedure and responsibility frameworks will remain unaltered regardless of AI use.

The misuse of AI outside of coursework does not constitute a separate or new category of misconduct. When AI is used as a tool to violate existing University policies, standards, or laws, those violations shall be addressed through the established procedures of the applicable policy.

The following examples are illustrative; this list is not exhaustive:

  • Using AI to stalk or harass would be addressed through the 1B.3.1 procedure as stated in the Sexual Violence, Relationship Violence and Stalking Policy.
  • Creating artificial representations of real people to defame or deceive shall be addressed through the Student Code of Community Standards or Respectful Workplace Policy as applicable.
  • Using AI tools to generate fictional representations with the intent to harm, defraud or deceive, including using AI tools to generate or falsify documents, will be dealt with under the existing University policies or standards, Minnesota State Board Policies, law or employment agreement as appropriate for the specific violation.
  • Using AI to violate data privacy shall be addressed under Ensuring Safety of Non-Public Data Policy.

Any conduct that violates University policy, Minnesota State Board Policy, or law will be addressed under the relevant authority without regard to whether AI was used in the commission of the violation.

Faculty and Staff Misuse:

AI misuse by faculty or staff is subject to all applicable employment policies, collective bargaining agreements, and Minnesota State procedures. The use of AI as a tool does not alter existing employment standards, disciplinary procedures, or contractual obligations.

 

Guidelines

Use of AI within Research and Scholarly Activities 

Entering or uploading confidential or unpublished information into an AI tool may allow the model to incorporate it into future responses, potentially exposing sensitive data, compromising privacy, or limiting future intellectual property protections.

Publication and Funding Agency Considerations:
  • Some funders, including the National Science Foundation and National Institutes of Health, explicitly prohibit uploading unpublished manuscripts or grant proposals into AI tools for peer-review activities.
  • Some journals and publishers limit or prohibit the use of AI-generated text, figures, images, and graphics in submitted manuscripts.
  • Additional guidance on the use of AI in sponsored research, scholarship, and creative works is available at <<RSP link coming soon>>.

 

Frequently Asked Questions

Q. Where can I get more specific information or guidance?

A. Please see the attached Guidance documents for your specific role, or consult the Knowledge Base article under Supporting URLs.

 

Supporting Documents (Forms, Instructions)


Related St. Cloud State University Policies

Supporting URLs

Websites, Related External Documents, Statutes


Contacts

Responsible University Officer
   Tomso, Gregory J.
   President
 
gregory.tomso@stcloudstate.edu
--
Owner
   Pattit, Katherina G.
   Provost/Vice President for Academic Affairs
 
katherina.pattit@stcloudstate.edu
320-308-3213
Owner
   Thorson, Philip J.
   Vice President for Technology Strategy & CIO
 
pthorson@StCloudState.edu
320-308-2065
Contact
   Siminoe, Judith P.
   Vice President for University Affairs and Advisor to the President
 
jpsiminoe@stcloudstate.edu
320-308-2124
Contact
   Thorson, Philip J.
   Vice President for Technology Strategy & CIO
 
pthorson@StCloudState.edu
320-308-2065

To make a comment or suggest changes to this policy:

St. Cloud State University Users: Login
Non-St. Cloud State Users: Email comments to policy@stcloudstate.edu