Why This Matters Now
Since ChatGPT’s public debut, employee use of generative AI (GenAI) has exploded, and so have the slip-ups. One study found that a meaningful share of the text employees paste into ChatGPT contains confidential company data, and a separate review of real-world prompts showed that roughly one in ten GenAI requests risks exposing customer or payroll information. Even large vendors stumble: Scale AI recently left internal project documents about its work for Google and Meta exposed on the public internet.
The takeaway is simple: enthusiasm is outpacing controls. Before your staff connect Microsoft 365 Copilot, Google Gemini (formerly Bard), or OpenAI’s APIs to live business data, put firm guardrails in place. Here are six that belong in every small-to-mid-sized business (SMB) playbook.
1. Update the Acceptable-Use Policy for AI
Most AUPs cover email, web, and social media, but they stop short of GenAI. Add clear language that prohibits pasting:
- Customer or employee PII
- Regulated data (HIPAA, PCI, FERPA, etc.)
- Proprietary source code, designs, or legal agreements
Mandate internal tagging (e.g., [Public], [Internal], [Restricted]) so staff can quickly tell what may or may not be shared. A living AUP minimizes “I didn’t know” defenses when audits arrive.
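To make the tagging convention enforceable rather than aspirational, a lightweight pre-submission check can refuse prompts that lack a tag or carry a [Restricted] label. The snippet below is a minimal, hypothetical sketch of that idea (the function name and tag set are assumptions, not part of any vendor product):

```python
import re

# Tags defined in the AUP; [Restricted] content must never leave the tenant.
ALLOWED_TAGS = {"[Public]", "[Internal]"}
BLOCKED_TAGS = {"[Restricted]"}

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) based on the prompt's classification tag."""
    match = re.match(r"^\s*(\[\w+\])", prompt)
    if not match:
        return False, "Missing classification tag ([Public], [Internal], or [Restricted])."
    tag = match.group(1)
    if tag in BLOCKED_TAGS:
        return False, f"{tag} content may not be sent to external GenAI services."
    if tag in ALLOWED_TAGS:
        return True, "OK"
    return False, f"Unknown tag {tag}; ask IT before sharing."

# Example: this prompt would be stopped before it reaches a public model.
print(check_prompt("[Restricted] Paste of customer contract terms"))
```

Even a check this simple turns the AUP from a document people skim into a rule the tooling can actually apply.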
2. Enable Data-Loss Prevention (DLP) & Cloud Access Security Broker (CASB) Filters
Modern DLP and CASB tools can fingerprint sensitive data and block or redact it before it reaches OpenAI endpoints. SentinelOne warns that data travels through multiple networks and “the risk is high that it will be intercepted or exposed in transit” if left uninspected.
Start by placing GenAI domains into a dedicated CASB category and applying stricter upload rules. If you’re a Microsoft 365 shop, configure Endpoint DLP to watch for credit-card numbers or patient IDs in browser sessions.
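If you want to prototype the concept before rolling out a full DLP suite, a simple outbound filter can pattern-match the obvious identifiers and redact them before a request ever leaves your network. This is a minimal sketch of that idea, not Microsoft’s Endpoint DLP; the regex patterns and placeholder text are assumptions you would tune to your own data:

```python
import re

# Rough patterns for two common sensitive identifiers; real DLP engines use
# validated fingerprints (Luhn checks, dictionaries) rather than bare regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with a placeholder and report which categories fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, hits

clean, findings = redact("Customer card 4111 1111 1111 1111, SSN 123-45-6789")
if findings:
    print(f"Blocked or redacted categories: {findings}")
print(clean)
```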
3. Adopt Role-Based Access & Safe-Prompting Plug-Ins
Not every employee needs the same AI privileges:
- Marketing: redact live customer PII by default.
- Engineering: allow code snippets, block private keys.
- Finance/HR: restrict to an on-prem, fine-tuned model.
Controls such as Microsoft’s tenant restrictions or third-party safe-prompting browser plug-ins can inject redaction scripts or display banner warnings when high-risk text appears.
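One way to express those per-role rules in a form a proxy or plug-in could enforce is a simple policy map: each role gets its own blocked patterns and a default action. The sketch below is a hypothetical illustration of that structure (role names, patterns, and routing labels are assumptions, not a vendor feature):

```python
import re
from dataclasses import dataclass, field

@dataclass
class RolePolicy:
    blocked: list[re.Pattern] = field(default_factory=list)  # patterns that always stop a prompt
    redact_pii: bool = False           # marketing-style default: scrub customer PII first
    internal_model_only: bool = False  # finance/HR: never leave the private tenant

POLICIES = {
    "marketing":   RolePolicy(redact_pii=True),
    "engineering": RolePolicy(blocked=[re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----")]),
    "finance_hr":  RolePolicy(internal_model_only=True),
}

def route_prompt(role: str, prompt: str) -> str:
    # Unknown roles fall back to the strictest path.
    policy = POLICIES.get(role, RolePolicy(internal_model_only=True))
    if any(p.search(prompt) for p in policy.blocked):
        return "block"
    if policy.internal_model_only:
        return "send-to-private-model"
    if policy.redact_pii:
        return "redact-then-send"
    return "send"

print(route_prompt("engineering", "-----BEGIN RSA PRIVATE KEY-----\nMIIE..."))  # -> block
```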
4. Log, Monitor & Alert on GenAI Traffic
Treat GenAI requests like any other SaaS activity. Pipe proxy or DNS logs into your SIEM and set alerts for anomalous spikes, after-hours usage, or unusual export volumes. Harmonic Security found 27 percent of sensitive GenAI prompts relate to employee HR data—an indicator that insiders may be outsourcing policy decisions to bots.
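Even before a formal SIEM rule exists, the logic is easy to prototype against exported proxy logs: look for requests to known GenAI domains, then flag after-hours activity and large uploads. The sketch below assumes a simple list-of-dicts log format; the field names, domain list, and thresholds are illustrative and would map to whatever your proxy actually exports:

```python
from datetime import datetime

GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}
BUSINESS_HOURS = range(7, 19)   # 7:00-18:59 local; adjust to your site

# Hypothetical proxy log rows: timestamp, user, destination host, bytes uploaded.
logs = [
    {"ts": "2025-07-02T02:14:00", "user": "jdoe", "host": "chatgpt.com", "bytes_out": 48_000},
    {"ts": "2025-07-02T10:05:00", "user": "asmith", "host": "claude.ai", "bytes_out": 2_100},
]

alerts = []
for row in logs:
    if row["host"] not in GENAI_DOMAINS:
        continue
    hour = datetime.fromisoformat(row["ts"]).hour
    if hour not in BUSINESS_HOURS:
        alerts.append(f"After-hours GenAI use by {row['user']} at {row['ts']}")
    if row["bytes_out"] > 25_000:   # crude stand-in for an "unusual export volume" threshold
        alerts.append(f"Large upload ({row['bytes_out']} bytes) by {row['user']} to {row['host']}")

for alert in alerts:
    print(alert)
```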
5. Provide an Approved, Private Sandbox
Shadow AI grows when official channels are clunky. Deploy an internal chat interface that:
- Uses your own Azure OpenAI or Anthropic tenant
- Strips metadata, logs every prompt, and enforces retention limits
- Offers pre-vetted “recipes” (summarize a call, rewrite an email, generate code comments)
With a sanctioned alternative, employees are less tempted to gamble with public sites.
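A bare-bones version of that sandbox is a thin wrapper around your own Azure OpenAI deployment that logs every prompt before forwarding it. The sketch below uses the openai Python SDK’s AzureOpenAI client; the endpoint, deployment name, and log path are placeholders, and a production build would add authentication, metadata stripping, and retention enforcement:

```python
import json
import os
from datetime import datetime, timezone
from openai import AzureOpenAI

# Placeholder values; point these at your own tenant.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def sandbox_chat(user: str, prompt: str) -> str:
    # Log every prompt with a timestamp so usage is auditable and
    # retention limits can be enforced against the log later.
    with open("genai_audit.log", "a") as log:
        log.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
        }) + "\n")

    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",   # the Azure deployment name, not a raw model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(sandbox_chat("jdoe", "Summarize yesterday's support call notes."))
```

The pre-vetted “recipes” then become nothing more than saved prompt templates layered on top of this wrapper.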
6. Train Humans—Then Test Them
Widely cited research attributes as many as 88 percent of data breaches to human error. Build quarterly micro-trainings that cover:
- Real examples of leaked prompts (anonymized but relatable)
- The difference between public and private models
- Hands-on red-team exercises where staff spot policy violations in sample prompts
Follow up with phishing-style “unsafe prompt” simulations to measure adoption.
Next Steps with Affant
Rolling out GenAI securely doesn’t have to overwhelm a lean IT team. Ready to put guardrails around innovation? Schedule a no-cost “AI Health Check,” and get a one-page roadmap you can act on this quarter—whether you partner with us or not.