Guidelines for using generative AI at Wayne State University

Definitions*

AI: Artificial intelligence (AI) applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take action.

Generative AI: AI techniques that learn a representation of artifacts from data, and use it to generate brand-new, unique artifacts that resemble but don’t repeat the original data. These artifacts can serve benign or nefarious purposes. Generative AI can produce novel content (including text, images, video, audio, and structures), computer code, synthetic data, workflows, and models of physical objects. Generative AI also can be used in art, drug discovery, or material design.

LLM: A large language model (LLM) is a specialized type of artificial intelligence (AI) that has been trained on vast amounts of text to understand existing content and generate original content.

Narrative

Generative AI systems use LLMs that are trained on the content available to the systems. For example, the ChatGPT products offered by OpenAI at no cost largely use information that is available on the public internet. Any data provided to these systems will also be incorporated into the model; this is meant to improve the AI's ability to respond to questions or prompts.

These guidelines are not intended to discourage the use of technology that contains AI functionality but to allow for the safe exploration of how these tools can enhance the academic experience or provide efficiencies for administrative tasks. The intent is to ensure that doing so does not have an adverse impact on the confidentiality of university data or the privacy of the WSU community.

Several initiatives are already underway at the university. For example, C&IT is currently testing AI features that have been incorporated into some of the more popular tools used on campus, including Zoom AI Companion, Microsoft Copilot, and upcoming enhancements to Canvas. Updates will be provided via tech.wayne.edu as these become available.

WSU is also exploring options to procure private generative AI tools that will provide the appropriate safeguards to protect university data. More information will be published as it becomes available.

Data security and privacy guidelines

AI is a quickly growing and changing technology. Some products have incorporated options to not use user-submitted data, or to allow user-submitted data to be removed from the underlying LLM. While these options may exist, they vary from product to product and should be validated with the vendor before using any generative AI tool. Even if those options are available, you should avoid using any non-public data (i.e., data that does not already exist on the public WSU website) with any product that has incorporated generative AI technology.

Examples

  • Any enterprise data that may have been generated from reporting tools (Cognos, Power BI) or directly from enterprise applications (Banner, ODS, STARS) even if you have been granted access to the data.
  • Any data associated with a student that would normally be considered FERPA protected, including when grading student assignments or attempting to identify student work that may have been generated by AI.
  • Any information containing PII, PHI, financial data, HR data, or research data that has not been fully sanitized or contains a cell size of less than ten.
  • Any information that contains copyrights or trademarks, including those of the University.
  • Any video or audio recordings, or AI-generated transcripts, that might contain discussions about sensitive data or have an impact on the privacy of those participating.

AI companions for online meetings

Zoom, Teams, and other online meeting tools now offer features that will summarize a meeting or provide other assistance. These tools also use LLMs and require additional consideration.

  • All AI tools must first be approved by C&IT; third-party AI bots are not approved for use at this time. Exceptions must follow the purchasing process for technology purchases.
  • While not always required, these tools are most useful when the meeting is recorded. Before recording a meeting, all policies and guidelines on recording meetings should be followed.
  • Respect the privacy of the individuals in attendance, including attendees outside the university who may be subject to different policies or rules.
  • AI is not perfect and often makes mistakes during transcription. For example, when a conference room of several people joins a meeting, attribution for the entire room may be given to a single participant. Most tools allow the AI-generated content to be reviewed and edited before publishing; the meeting organizer should always do this before publishing the results.
  • WSU cannot control the use of AI in meetings hosted by someone outside the University or with technology not provided by C&IT. If you are joining a third-party meeting that is using AI and confidential information may be discussed, or you do not feel comfortable with the use of the technology, first try to negotiate another solution with the meeting organizer. If a solution cannot be found, consult with your supervisor on the necessity of attending the meeting.
  • If you are hosting a meeting and do not wish to have AI bots join, use the waiting room feature to admit attendees and keep AI bots from participating.

Other security risks of AI technology

  • Generative AI is already being used in phishing attacks and online scams. As the technology advances, it will become easier to spoof identities and craft more convincing messages. These advances make identifying red flags more difficult; for example, messages may contain fewer grammatical errors or less of the unusual phrasing caused by language translation errors. Take extra care to identify other red flags, such as a manufactured sense of urgency, seasonal messages, or offers that are just too good to be true. Signs of AI use include misused slang or industry jargon and abnormal abbreviations.
  • Generative AI also extends to images and video. This technology, especially with video, is effective but still maturing. If you suspect that a participant in a video conference is AI-generated, signs to look for include artifacts during quick movement or the improper rendering of body parts (e.g., the back of someone's head).
  • Several generative AI tools now offer mobile apps through the Apple and Google app stores. Fake apps that mimic well-known tools, or apps intentionally designed to steal data, are becoming more prevalent. Always take care to install apps only from a trusted source, but the best way to avoid data theft through these apps is to follow the guidelines in this document.

If you ever suspect that you have been scammed through any technology in a way that has impacted university data or your WSU account, contact the Help Desk immediately.

Purchasing or signing up for accounts to access tools containing AI technology

Per APPM 9.8, technology purchases, including those that are considered free or open source, require approval by C&IT. OGC may also need to review legal terms and conditions.

AI components are increasingly being added to existing tools that may have already been purchased and gone through this review. It is important to work with the vendor to identify when these changes take place, as they often come with addenda to existing agreements. These terms may arrive as pop-ups or click-through agreements; please do not agree to them without review and approval. It is also important to regularly review release notes, which may identify these updates. New features and their terms should also be reviewed by C&IT and OGC.

If you are interested in purchasing a tool containing AI technology or have a new set of terms for using AI or other enhancements, please contact WSU procurement to start the process.

*Definitions from Gartner