Syracuse University Vendor AI Policy

Syracuse University is committed to ensuring the ethical and responsible use of artificial intelligence (AI) technologies across its campus. This policy establishes guidelines for vendors’ use of institutional data to train their AI models.

Scope: This policy applies to all vendors providing services to Syracuse University, including, but not limited to, those offering generative AI, machine learning, and predictive analytics tools.

1. Data Usage for AI Training

1.1. Explicit Consent Required: Vendors must obtain explicit written consent from the University’s Data Governance Committee before using institutional data to train their AI models.

1.2. Approved Use Cases: When consent is granted, vendors may use institutional data for AI training only in the pre-approved use cases specified in the agreement.

1.3. Data Minimization: Vendors must use only the minimum data necessary for the approved training purposes.
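As one possible illustration of the data-minimization requirement (the field names and the `minimize` helper below are hypothetical examples, not part of this policy), a vendor's ingest pipeline might strip every field that the approved use case does not require before any data leaves University systems:

```python
# Hypothetical sketch of data minimization: drop every field not
# explicitly approved for the training use case.
APPROVED_FIELDS = {"course_id", "term", "aggregate_grade"}  # example allow-list


def minimize(record: dict) -> dict:
    """Return a copy of the record limited to approved fields only."""
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}


raw = {
    "course_id": "CSE101",
    "term": "Fall 2024",
    "student_name": "Jane Doe",  # not approved: must not be retained
    "aggregate_grade": 3.4,
}
print(minimize(raw))
# {'course_id': 'CSE101', 'term': 'Fall 2024', 'aggregate_grade': 3.4}
```

An allow-list (rather than a block-list) is the safer design here: any field not explicitly approved is dropped by default.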

1.4. Data Security and Privacy: Vendors must comply with Syracuse University’s data security and privacy policies, which require that data be encrypted in transit and at rest. Vendors must also implement strict access controls and monitoring to prevent unauthorized access and data breaches.
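As a hedged illustration of the encryption-in-transit requirement (the TLS 1.2 floor shown is an assumption for the sketch, not a version mandated by this policy), a vendor client in Python could enforce certificate verification and a modern protocol minimum on every connection:

```python
import ssl


def make_transit_context() -> ssl.SSLContext:
    """Build a client-side TLS context for encrypting data in transit.

    Assumed floor of TLS 1.2 is illustrative; consult the University's
    data security policies for the actual required configuration.
    """
    ctx = ssl.create_default_context()  # verifies server certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols
    return ctx


ctx = make_transit_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate checks are enforced
```

A context built this way would then be passed to the vendor's HTTP or socket layer so that no institutional data traverses the network unencrypted.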

1.5. Data Usage Agreement: Vendors must sign a Data Usage Agreement (DUA) outlining the permitted uses of the data, the security measures in place, and the procedures for data deletion after the training period.

1.6. Periodic Audits: Syracuse University reserves the right to conduct periodic audits of vendors to ensure compliance with the terms of the DUA and the University’s data protection standards. During an audit, vendors must provide transparency into the decision-making processes of their AI models, permitting review of the algorithms used and the data that inform them.

1.7. Compliance with Future Regulations: Syracuse University is committed to maintaining the highest standards of data privacy and ethical AI practice. As such, vendors must agree to promptly adapt to and comply with any new laws, regulations, or standards enacted regarding artificial intelligence, data protection, and privacy. This includes, but is not limited to, changes in federal, state, and local laws and any new guidelines or best practices established by relevant governing bodies or industry groups.

2. Ethical AI Practices: Syracuse University is dedicated to the responsible and ethical use of artificial intelligence. Vendors are expected to adhere to the following principles when using institutional data for AI training:

2.1. Fairness: AI models must be designed to produce equitable outcomes and must not unlawfully discriminate against any individual or group on the basis of race, gender, sexual orientation, disability, or any other protected characteristic.

2.2. Accountability: Vendors must be accountable for the AI systems they develop and deploy. This includes ensuring that AI outputs can be explained and justified in human terms.

2.3. Transparency: Where applicable, the AI training process, including the datasets used, the algorithms applied, and the decision-making criteria, must be transparent.

2.4. Privacy: Vendors must prioritize individuals’ privacy and ensure that personal data is not misused or exposed during AI training or deployment.

2.5. Security: AI systems must be secure against unauthorized access and malicious use that could harm individuals or Syracuse University.

2.6. Sustainability: AI practices should promote environmental sustainability, minimizing the carbon footprint of AI training and operations.

2.7. Continuous Learning: Vendors must commit to continuous learning and improvement of their AI systems, incorporating feedback and adapting to new ethical insights and standards.

If you have questions or need more information about this policy, please contact the Syracuse University IT Security Office at itsecurity@syr.edu.