Street Support Network AI Policy

Introduction

This policy sets out how Street Support Network uses artificial intelligence tools responsibly and effectively. It's designed for our team, trustees, partners, and anyone who wants to understand our approach to AI.

We've been using AI across our organisation for over a year, from meeting transcription to our virtual assistant that connects people to local support. This policy captures what we've learned about where AI helps and where human expertise is absolutely essential.

The goal is simple: give our team confidence to use AI safely and effectively, while protecting the people we serve and maintaining the trust we've built.

Quick Reference

Our Approach in Five Points

1. Human-led, AI-supported

We use AI to save time on routine tasks — never to replace human insight or relationships.

2. Fix before you speed

We audit the process first. No point making a broken system faster.

3. Be strategic, not just curious

We ask "Should we?" before "Can we?"

4. Personal data is off-limits

No identifiable info goes into consumer AI tools. Ever.

5. Humans sign off

Everything gets a human check before use — except for our virtual assistant, which follows a different process.

Traffic Light System

Green Light — Go ahead (Safe to do without checking)

  • Internal drafts using public info
  • Research from public sources
  • Summarising internal meetings
  • Strategic thinking, planning, and brainstorming
  • Admin tasks with non-sensitive data

Amber Light — Check first (Get a second opinion)

  • External-facing content
  • Aggregated or anonymised data
  • New AI tools or features
  • Operating system or browser features that can capture or interpret on-screen content (for example Windows Copilot Vision, Windows Recall, or similar "screen understanding" tools). These must stay disabled unless the Managing Director confirms that no personal or sensitive data could be visible.
  • Anything that feels even slightly grey

Red Light — Don't do it (Off-limits under this policy)

  • Real people's personal information
  • Financial data, passwords or access credentials
  • Sensitive or confidential internal material
  • Anything that could harm trust or safety

What We Never Do With AI

  • Use AI platforms to process personal data about people seeking support
  • Make decisions about individuals without human input
  • Replace human conversations, care, or direct support
  • Use AI to assess, triage, or respond to safeguarding concerns — any concerns about someone's safety must always go directly to a human
  • Let AI negotiate partnerships or service agreements
  • Rely on AI in emergencies or crisis situations

Our Virtual Assistant (IBM WatsonX)

  • Gives real-time recommendations about local homelessness services
  • Pulls only from verified, regularly updated service data
  • Doesn't store or collect personal data
  • Always recommends local organisations that provide broader human advice beyond Housing Options
  • Reviewed monthly for response quality and tone, with a quarterly accuracy assessment

Not Sure? Ask.

Contact the Managing Director right away for:

  • Anything involving personal data
  • A new tool or use case
  • Uncertainty about AI-generated content
  • Any incident or concern

👉 We'd always rather hear a "quick check" question than fix a preventable mistake later.

Executive Summary

We use AI to support our people, not replace them. This policy sets out how we use AI in line with our values of compassion, collaboration and practical support. It's here to give our team confidence, so we can experiment and innovate safely, knowing where the boundaries are.

What this policy does:

  • Sets clear principles for ethical AI use
  • Defines where human input is essential
  • Encourages safe experimentation
  • Protects people's personal information
  • Aligns with our strategy to build shared, digital-first infrastructure

Who We Are and Why This Matters

We're Street Support Network. We connect people seeking support to help, and help organisations work better together.

AI helps us do that more effectively, but only when it reflects our values, centres human connection, and protects people's dignity. This policy exists so our small team can use AI confidently, knowing what's okay, what's not, and why.

Our Core Principles

1. Audit Before Automate

Speed doesn't help if the process is broken. Before bringing in AI, we check that the underlying process actually works.

2. Be Selective and Strategic

We don't use AI just because we can. For each use case, we ask: Should we use AI here?

3. Preserve Human Expertise Where It Matters

Some things require lived experience, emotional intelligence, or ethical judgement. AI can support, but never replace, these areas.

4. Protect Personal Information

No identifiable information goes into public AI tools. That includes names, case notes, contact details or anything that could identify someone seeking support.

5. Human Review of AI Outputs

Everything AI generates is reviewed by a human before it's shared externally — except for our virtual assistant, which has built-in safeguards and oversight.

6. Lead With Human Value, Not AI Capability

We don't brag about using AI. We say: "Our people make this work. AI just helps us do more of what matters."

7. Grounded in IBM's Ethical AI Framework

Our approach draws on proven principles: explainability, fairness, robustness, transparency, and privacy. This isn't just good practice. It's how we build trust.

What We Never Use AI For

Clear boundaries protect everyone:

  • Processing personal data in consumer platforms
  • Making final decisions about individuals without human input
  • Replacing human conversations or direct support
  • Assessing, triaging, or responding to safeguarding concerns (any concerns about someone's safety must always go directly to a human)
  • Negotiating partnerships or service agreements
  • Managing emergencies or crisis situations
  • Anything involving vulnerable groups without proper human review

Where Human Expertise is Non-Negotiable

These areas always require people, not machines:

  • Direct support conversations with people seeking support
  • Safeguarding decisions and risk assessments
  • Complex case work requiring understanding of individual circumstances
  • Crisis response and emergency situations
  • Partnership-building and relationship management
  • Strategic service decisions and board-level choices
  • Final approval of all public communications
  • Quality assurance for our virtual assistant responses

Where AI Helps (With Oversight)

AI adds value in these areas, but always with a human in the loop:

Admin & Operations

  • Calendar scheduling and meeting coordination
  • Meeting transcription and summaries
  • Data entry and basic reporting
  • Email and template drafts

Content Creation

  • Drafting policies and internal documents
  • Social media and blog content (reviewed against our tone of voice)
  • Funding bid first drafts (extensively edited by humans)
  • Newsletter and communication templates

Analysis & Strategy

  • Meeting analysis and action point extraction
  • Research and information summarisation
  • Trend spotting from aggregated data
  • Strategic planning support and thinking partnerships

Virtual Assistant: Real-Time Help with Safeguards

Virtual Assistant Overview:

  • Tool: IBM WatsonX
  • Purpose: Real-time, localised service recommendations for people seeking support
  • Governance: Regular human review, no personal data processed
  • Human Support: Always recommends local organisations that provide broader advice beyond Housing Options

Given the higher-stakes nature of real-time advice, our virtual assistant follows different protocols:

  • Pulls from verified, regularly updated service databases
  • Doesn't collect or store personal information about users
  • Always includes recommendations to local organisations that provide broader human advice beyond Housing Options
  • Where such organisations don't exist locally, we work to establish them, or manage expectations about what the assistant can offer
  • Quarterly assessment of recommendation quality and human support pathways
  • Immediate incident reporting for concerning responses

A detailed governance framework for our Virtual Assistant is set out in our AI Governance Plan, which follows IBM's five pillars of responsible AI.

Data, Email & Cloud Safeguards

We've made deliberate choices about AI access to our systems:

Google Workspace Decision

We've disabled Google Workspace AI features (Smart Compose, Gemini) because they can access sensitive information across our email and file systems. This protects people seeking support from having their information processed by AI without their knowledge.

Email and Cloud Connectors

Consumer tools like ChatGPT, Claude or Gemini are great for drafting and ideas, but they're not safe places for anything involving real people's data, so we don't connect them to our email or cloud storage.

Enterprise AI tools might be appropriate, but only after:

  • Auditing our existing data permissions
  • Understanding exactly what the tool can access
  • Securing proper data processing agreements
  • Getting Managing Director approval

The Key Principle

If someone has access to sensitive data they shouldn't have, connecting an AI tool makes that data searchable through natural conversation. We fix permissions first, then consider AI connections.

GDPR Compliance

All AI use must comply with GDPR, particularly given our work with vulnerable populations: personal data of people seeking support requires the highest protection standards. Our no-personal-data approach is designed to keep our AI use within those requirements, and full compliance procedures are detailed in our Data Protection Policy.

If personal information accidentally gets into an AI tool, report it immediately to the Managing Director and follow our existing data breach procedures.

Oversight & Quality Control

Human Review Process

  • Everything AI produces gets reviewed before external use, following the approval processes in our Information, Communication and Social Media Policy
  • We fact-check claims, align with our tone of voice, and check for bias
  • We review AI-generated content for warmth, clarity and sensitivity, because the way we speak can make a real difference to how safe and supported someone feels
  • Content must sound authentically like Street Support Network
  • All outputs must reflect our values and brand guidelines

Virtual Assistant Monitoring

  • Monthly review of user interactions and response quality
  • Quarterly assessment of service recommendation accuracy
  • Regular testing with known scenarios
  • Clear escalation pathways for concerning responses

Governance Structure

  • Managing Director oversees all AI use and policy implementation
  • Report any concerns directly to Managing Director
  • Monthly team check-ins about AI effectiveness and challenges
  • Annual policy review incorporating new learning and tools

How We Talk About AI

We use a layered approach to transparency, based on research showing that blanket "created with AI" disclosures can damage trust:

1. Organisation-wide (website, reports)

"Our team's expertise drives everything we do. We use AI for drafting and admin tasks, so our people can focus on relationships, strategy, and direct support."

2. In Relationships (partners, funders)

We're open about our AI principles and safeguards when building partnerships — demonstrating thoughtfulness, not just disclosure.

3. Per Output (specific documents)

We disclose AI use only when legally required, relevant for verification, or directly asked.

Accessibility and Inclusion

We commit to reviewing all AI-generated content for:

  • Plain language and cognitive accessibility
  • Inclusion and sensitivity of tone
  • Freedom from harmful assumptions or bias
  • Alignment with our values of dignity and respect

Our communications should always be easy to understand and welcoming to everyone.

Incident Response

If personal information gets put into an AI tool, or if there are concerns about AI use:

  1. Report immediately to the Managing Director
  2. Document what happened — which tool, what information, when
  3. Take action to limit potential impact
  4. Follow our Data Protection Policy breach procedures if personal data is involved
  5. Learn and improve — update guidance based on what happened
  6. Support the person involved — no blame, we're all learning

Training and Support

For New Team Members

  • Clear AI guidance during induction
  • Examples of appropriate and inappropriate use
  • Connection to our tone of voice and brand guidelines
  • Ongoing support and mentoring

Ongoing Development

  • Monthly team discussions about emerging AI tools
  • Sharing what works (and what doesn't)
  • Learning from other organisations' approaches
  • Regular updates as technology evolves

Measuring Success

We'll know this policy is working when:

  • Team members feel confident using AI safely within clear boundaries
  • Routine tasks get done faster, freeing time for relationships
  • Our communications still sound authentically human
  • People seeking support receive better, more timely help
  • No incidents involving personal data
  • Our AI use consistently reflects our values

Implementation Plan

Immediately:

  • Share with all team members and discuss at a team meeting
  • Add to induction process and contractor briefings
  • Present summary to trustees for approval

Ongoing:

  • Monthly team reviews of AI use and effectiveness
  • Quarterly virtual assistant assessments
  • Annual policy review incorporating new learning
  • Regular incident response practice and team check-ins

Questions and Support

If you're unsure about anything, ask. We'd rather have too many questions than one preventable mistake.

This policy is a living document that grows with our experience. It's here to protect people and enable innovation — not block it.

This policy reflects our values of compassion, collaboration, and practical solutions. It aligns with our mission to connect people to help and help organisations work better together.

Version 2.0 — October 2025

Next review: October 2026
