Security and Privacy Statement on Artificial Intelligence

Artificial Intelligence (AI) technology is evolving rapidly, and the university is exploring AI tools that could be used in our educational, research and innovation, and health care endeavors. This guidance will be updated as new tools and vendors are brought into the university’s technical environment.

The use of generative AI tools, like ChatGPT, Bard, Copilot, and others, has increased rapidly over the past year and will continue to grow. While the university has already begun incorporating AI tools into our educational spaces, Digital Security and Trust within the Office of Technology and Digital Innovation (OTDI) is working to better understand how these systems protect the security and privacy of information they collect, especially as it pertains to institutional data. 

Institutional Data and AI Use

University community members should not enter institutional data categorized above the S1 (public) level into generative AI tools, except when using the protected environment of Copilot, meaning that you are logged in with your university credentials and see the green “Protected” button in the upper right-hand corner.

Even when using the protected version of Copilot, it is best practice to enter only S1 or S2 (internal) institutional data into the tool. S3 (private) and S4 (restricted) data should not be entered into any AI platform.

Ohio State’s institutional data is information created, collected, maintained, transmitted, or recorded by or for the university to conduct university operations. The Institutional Data Policy (IDP) establishes the need to protect institutional data and requires that all institutional data be assigned one of four security classification levels (S1–S4). More information about the IDP and what types of information fall into each institutional data category can be found on OTDI’s IDP webpage.

Privacy and Information Integrity

Those who wish to use the unprotected version of Copilot or another AI platform should think carefully about what happens to the information entered into these tools before engaging with them. Many AI companies state that they have access to all information entered into the system, including account information and any inputs used to generate a response. This data could be breached and used by cybercriminals to create malware, phishing email campaigns, or other cyber scams. Additionally, information entered in a prompt could be used to train the underlying large language model, meaning that data in the prompt could later be inadvertently exposed to another user.

When using any generative AI tool, including Copilot, be sure to cross-reference the information it gives you to make sure it is accurate, as these systems have been known to make up, or “hallucinate,” information. You should also consider whether the information you received from an AI tool is copyrighted and thus subject to restrictions on its use.

We are all responsible for keeping our institutional data secure, especially as AI becomes more widely available. If you ever have concerns about your Ohio State account being compromised, you can reach out to the IT Service Desk online or by calling (614) 688-4357 (HELP).

Learn More

Cybersecurity for You (C4U), the university’s cybersecurity awareness platform, has a special achievement focused on AI. The activities within this achievement will help faculty, staff, and students navigate everything from what to consider before using AI for work or course assignments to the effectiveness of AI detectors.

The Teaching and Learning Resource Center has created a comprehensive resource to guide AI use in our academic environments.

Questions about AI usage pertaining to information security and privacy considerations can be directed to the Digital Security and Trust team at otdi-dst@osu.edu.

Updated: February 19, 2024