1. Classify data before choosing tools
- Define what data is public, internal, sensitive, and restricted before anyone uses AI tools.
- Match deployment options to data sensitivity: public tools for public data, enterprise or self-hosted for anything sensitive.
- Review data processing agreements and understand where prompts, outputs, and training data are stored.
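The matching of deployment options to data sensitivity can be sketched as a simple lookup table. The class names and deployment tiers below are illustrative assumptions, not a standard taxonomy:

```python
# Hypothetical data classes mapped to the deployment tiers approved for them.
# "public_tool" = consumer AI service; "enterprise" = contracted tier with a
# data processing agreement; "self_hosted" = run on your own infrastructure.
ALLOWED_DEPLOYMENTS = {
    "public":     {"public_tool", "enterprise", "self_hosted"},
    "internal":   {"enterprise", "self_hosted"},
    "sensitive":  {"enterprise", "self_hosted"},
    "restricted": {"self_hosted"},
}

def is_permitted(data_class: str, deployment: str) -> bool:
    """Return True if this deployment tier is approved for this data class.

    Unknown data classes are treated as unclassified and nothing is permitted,
    which forces classification to happen first.
    """
    return deployment in ALLOWED_DEPLOYMENTS.get(data_class, set())
```

Defaulting unknown classes to an empty set makes the policy fail closed: data that has not been classified cannot be used with any tool.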
2. Set clear acceptable use expectations
- Publish simple guidance on what AI tools staff can use and for what purposes.
- Provide role-specific examples: drafting emails is different from processing pupil data or financial records.
- Create an escalation path for edge cases so staff ask rather than guess.
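The "ask rather than guess" rule can be encoded directly: list the role-and-purpose combinations you have decided on, and route everything else to escalation by default. The roles and purposes here are hypothetical examples:

```python
# Hypothetical role/purpose matrix; anything not explicitly decided escalates.
APPROVED_USES = {
    ("teacher", "draft_email"):          "allowed",
    ("teacher", "process_pupil_data"):   "prohibited",
    ("finance", "draft_email"):          "allowed",
    ("finance", "summarise_records"):    "prohibited",
}

def check_use(role: str, purpose: str) -> str:
    """Return 'allowed', 'prohibited', or 'escalate' for an edge case."""
    return APPROVED_USES.get((role, purpose), "escalate")
```

The default value does the policy work: staff facing a combination no one has ruled on get a clear instruction to ask, not an implicit yes.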
3. Govern and iterate
- Log AI decisions and assumptions so governance stays coherent as tools evolve.
- Review policy quarterly and update when new tools are adopted or risks change.
- Train staff on both capabilities and limitations, especially hallucinations, bias, and data leakage.
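Logging decisions and assumptions works best with a fixed record shape, so entries stay comparable across quarterly reviews. This schema is an assumption, a minimal sketch of what such a record might contain:

```python
import json
from datetime import datetime, timezone

def log_decision(tool: str, decision: str, assumptions: list[str],
                 reviewer: str) -> str:
    """Serialise one governance decision as a JSON line (hypothetical schema).

    Capturing the assumptions alongside the decision is what lets a later
    review check whether the original reasoning still holds.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "decision": decision,       # e.g. "approved for internal data only"
        "assumptions": assumptions, # e.g. ["vendor DPA covers prompts"]
        "reviewer": reviewer,
        "review_cycle": "quarterly",
    }
    return json.dumps(entry)

# Example: one line per decision, appended to a shared log file.
record = log_decision(
    tool="ExampleChat (enterprise tier)",
    decision="approved for internal data only",
    assumptions=["vendor DPA covers prompts", "no training on our data"],
    reviewer="DPO",
)
```

One JSON line per decision keeps the log append-only and easy to filter when a tool or risk changes.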