Legal, security and data protection
The use of generative AI raises several legal issues of which staff should be aware – if in doubt, further advice and guidance should be sought.
- Inputs: Be cautious about what data is input into AI. Uploading University-owned or paywalled content may breach licensing agreements
- Intellectual property (IP): Outputs may infringe third-party IP. Users are legally responsible for how they use AI outputs
- Reliability: AI outputs are not guaranteed to be accurate or up to date. Do not rely solely on them for critical decisions
- Data Protection: Do not upload personal data unless you can ensure compliance with the University’s Data Protection Policy and have completed a Data Protection Impact Assessment if required. For advice, please consult the Data Protection Officer
- Security: Use enterprise-protected AI tools (eg Copilot with the green shield icon) to ensure data is secure and not used for further AI training.
The University and its staff are all responsible for ensuring that we adhere to relevant data protection laws. In addition, all staff should ensure that they continue to adhere to wider, related policies – eg the IT Acceptable Use Policy (staff only). See also more information about our institutional platform, Copilot.
- When uploading personal data to AI tools, the University will not always have control over how the data is used, how it is kept secure and for how long it is retained
- Not all AI products will be compliant with relevant data protection laws, which limit the purposes for which personal data can be used – free-to-air (versions of) tools often take data and use it to train their own models, in breach of such purpose-limitation rules.
Data protection laws also require organisations to protect individuals from automated decision-making and to ensure the accuracy of any data. AI tools therefore carry risks when relied upon solely to make decisions about individuals – e.g. employment screening of candidates without human analysis.
Any use of generative AI must comply with the UK General Data Protection Regulation (UK GDPR). No personal information should be uploaded to or shared with any AI tool unless it is necessary and appropriate safeguarding and protection measures are in place. Users should always be aware of the information they provide to AI systems to generate material, and should not share any information that could identify individuals and/or potentially violate privacy rights. Others’ work should also not be uploaded to any AI platform, even if that work does not identify them.
Staff should not input any personal data into any AI tool or use AI to interpret or analyse personal data, unless they can ensure that any use will comply with the University’s Data Protection Policy and, where appropriate, complete a Data Protection Impact Assessment (staff only), and/or seek guidance from the University’s Data Protection Officer.