Artificial Intelligence Policies & Guidelines

Office of Information Technology

Using Generative AI at ND

Notre Dame encourages safe exploration and use of generative AI tools to further our teaching, learning, research, and other pursuits. Keep these guidelines in mind:

  • Work within current University policies.
  • Protect confidential, copyrighted, and personal data.
  • Any AI tool used with institutional data must follow approval processes.
  • Routinely revisit specific usage policies.
  • Act ethically with AI.
  • You are 100% responsible for the output you use.
  • Be transparent and document usage of AI.
  • When in doubt, reach out.

Full AI Policies and Guidelines

A&L Resources

A Hub for Concerns and Collaborations - A&L Office of Digital Strategy

The College recently created the Office of Digital Strategy to support the evolution of the College's use of digital technologies, including generative AI. John Behrens, the director, provides workshops and consultations on generative AI for departments, small groups, or individuals. In addition, he leads a student research group that tracks the latest developments in the application of AI and will be hosting a monthly webinar series on ThinkND. John also consults with each of the groups mentioned below. Please email John if you have any requests or suggestions about addressing the challenges or opportunities of generative AI, increasing communication and collaboration, or if you want to join an affinity group.

Assessment & Instruction - Notre Dame Learning

In collaboration with faculty from around the University (including Nathaniel Myers and Whitney James from the Writing Center and Katie Walden from American Studies), Notre Dame Learning is sponsoring a series of workshops and presentations throughout the semester. They also have a web page of instructionally related resources including links within and outside of Notre Dame. The resource page gives some examples of how you might word your syllabus to address working with generative AI.

Academic Integrity - Office of Academic Standards

During the Spring, the Director of the Office of Academic Standards, Ardea Russo, convened a group of technologists, learning specialists, and faculty from across campus to discuss the implications of generative AI (primarily ChatGPT) which produced an Official Statement Regarding Generative AI as well as AI Recommendations for Instructors. We recommend you read these two documents. The committee included a broad range of inputs from our faculty and from across the University.

In the week ahead, the Office of Academic Standards will also release a statement to students about generative AI that will include the following text: "With this in mind, remember that representing work that you did not produce as your own, including work generated or materially modified by AI, constitutes academic dishonesty. Use of generative AI in a way that violates an instructor’s articulated policy, or using it to complete coursework in a way not expressly permitted by the faculty member, will be considered a violation of the Honor Code." Please remember that Ardea can be contacted for any concerns related to academic dishonesty, whether or not they involve generative AI.

We want to emphasize that the Honor Code hinges on violating "an instructor's articulated policy, or using it to complete coursework in a way not expressly permitted by the faculty member." This means it is important for faculty to think through the range of possible uses of these technologies in completing class assignments. Text-based generative AI systems such as ChatGPT (and Bing, Bard, Claude, Llama 2, and others) are used for a wide range of activities, including web search (Bing, Bard), summarization of text (all of them), evaluation and editing of text (all), and brainstorming or background research. The more clearly you delineate what is and is not acceptable, the easier it will be for everyone to agree on what is appropriate.

Privacy, Security & Infrastructure - OIT

As is often the case with free products, free use is supported because the sponsoring organization benefits from the data captured during user interactions. Because ChatGPT uses the text you input, and your sequence of responses, to change its future behavior, your inputs may appear in someone else's outputs. While this is a general concern for individuals, it is an especially acute concern in FERPA-regulated environments such as the University. Jane Livingston, ND Vice President and Chief Technology Officer, is forming a task force to address the cross-enterprise issues related to these infrastructure, data, and behavioral risks and opportunities.

Some Things You Should Know


  • ChatGPT continues to evolve in quality and appropriateness of responses. However, how it responds to any individual prompt is difficult to predict and needs to be checked empirically. The quality of the work depends on the type of work requested and how it is requested.
  • Because the systems are probabilistic and cannot be perfectly predicted, the outputs must always be evaluated and accepted or modified by appropriately proficient humans.
  • Even AI experts who created ChatGPT continue to be surprised by some of its behavior.
  • While ChatGPT is the best-known text-generating AI system, there are others, including Bing, Bard, Llama 2, Claude, Bloom, and so on. When discussing these systems, it is generally best to refer to the class of "software-based text generation" rather than a specific product such as ChatGPT.
  • ChatGPT can be extended through plug-ins. A plug-in is a data linkage to another software program, such as a travel database (Kayak) or a math solver (Wolfram Alpha). By combining ChatGPT's text generation with the factual power of a database or solving system, the capability of both is dramatically increased.
  • In addition to text generators, there are also image, video, and sound generators that are now readily available.
  • Most text-generation software companies are currently in litigation, as the legal status of the documents used for training is in question in many cases. In addition, text generated primarily by these systems cannot currently be copyrighted, as a machine cannot hold a copyright.
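The plug-in idea above can be sketched in a few lines: a dispatcher routes a request either to an external tool (here, a toy calculator standing in for a solver like Wolfram Alpha) or to the text generator. This is a minimal illustration only; the function names and the `generate_text` stub are hypothetical and do not reflect any real ChatGPT plug-in API.

```python
import re

def solve_math(expression: str) -> str:
    """Toy stand-in for an external math tool."""
    # Whitelist digits and basic operators before evaluating.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))  # safe here: input was whitelisted

def generate_text(prompt: str) -> str:
    """Toy stand-in for the language model itself."""
    return f"[model answer to: {prompt}]"

def answer(prompt: str) -> str:
    """Dispatch: send arithmetic to the tool, everything else to the model."""
    match = re.fullmatch(r"\s*what is (.+?)\??\s*", prompt, re.IGNORECASE)
    if match:
        try:
            return solve_math(match.group(1))
        except ValueError:
            pass  # fall back to the model
    return generate_text(prompt)

print(answer("What is 2 + 3 * 4?"))        # prints "14"
print(answer("Summarize the honor code."))
```

The point of the pattern is the division of labor: the generator handles open-ended language, while factual or computational questions are delegated to a system that can actually be verified.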

A Note About Assessment

Students sometimes fail to realize that the work product we ask them to create is often not the goal itself; it is requested to force thinking and experience and to provide evidence of otherwise hidden processes. For example, the assessment purpose of an essay may not be to practice essay writing, but rather to generate the experience of struggling toward clear communication and to provide evidence of the thinking via the essay. Accordingly, some students may ask why they need to "learn to write" when computers will do that for them in the future. One response is to point out that the goal may not be writing per se, but rather to provide evidence of the thinking required to understand the world, whether or not that thinking is expressed in writing. A second response is that while these systems can act as writing aids, they cannot guarantee valuable work, so we will always need to be proficient enough to evaluate their output. A third response is that the world is changing quickly, and while text generation can be plugged into many processes, the world needs leaders, such as them, who understand those processes deeply enough to use the tools in the most valuable and most appropriate ways.

Related Links

Teaching in the Age of AI

Official Statement Regarding Generative AI

The Honor Code