Audit Logging for AI Assistants
AI assistants influence pricing pages, security answers, and customer trust. Without audit logs, you can neither prove compliance nor debug incidents. Here is how to build a useful log stream.
Core events to capture
- Prompt changes: tenant_id, previous prompt hash, new prompt hash, editor, reason.
- Crawl jobs: domain, run_id, page counts, success/failure counts, and trigger source (schedule, manual, IndexNow).
- Threshold and policy edits: old value, new value, actor, justification.
- Admin impersonation: who impersonated whom, start/end time, actions taken.
- Billing updates: plan tier changes, quota overrides, grace period toggles.
- Kill switches: embed disablement, reason, scope (tenant vs global).
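The categories above map naturally onto a small, namespaced catalog of event types. A minimal sketch in Python; aside from prompt.update, crawl.run, and billing.grace.start (which appear later in this article), the specific names are illustrative:

```python
from enum import Enum

class AuditEventType(str, Enum):
    """Namespaced audit event types. Names other than prompt.update,
    crawl.run, and billing.grace.start are illustrative choices."""
    PROMPT_UPDATE       = "prompt.update"
    CRAWL_RUN           = "crawl.run"
    THRESHOLD_UPDATE    = "policy.threshold.update"
    IMPERSONATION_START = "admin.impersonation.start"
    IMPERSONATION_END   = "admin.impersonation.end"
    BILLING_GRACE_START = "billing.grace.start"
    KILL_SWITCH         = "embed.kill_switch"
```

Dot-separated namespaces make log queries cheap later: filtering on a `billing.` prefix returns every billing event without enumerating each type.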
Event structure
| Field | Description |
|---|---|
| event_id | Unique identifier for traceability |
| event_type | e.g., prompt.update, crawl.run, billing.grace.start |
| actor | User ID or service account |
| tenant_id | Scope of the action |
| payload | JSON with before/after values |
| timestamp | ISO 8601 string in UTC |
| source_ip | Optional; recorded for security-relevant events |
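Putting the table together for one concrete case, here is a sketch of a builder for a prompt-change event. Field names follow the schema above; the helper name and hash choice (SHA-256 of the prompt text) are assumptions:

```python
import hashlib
import uuid
from datetime import datetime, timezone

def build_prompt_update_event(tenant_id, editor, old_prompt, new_prompt, reason):
    """Assemble a prompt.update audit event with every field from the schema.
    Prompts are hashed rather than stored verbatim (illustrative choice)."""
    return {
        "event_id": str(uuid.uuid4()),      # unique identifier for traceability
        "event_type": "prompt.update",
        "actor": editor,                    # user ID or service account
        "tenant_id": tenant_id,             # scope of the action
        "payload": {                        # before/after values
            "previous_prompt_hash": hashlib.sha256(old_prompt.encode()).hexdigest(),
            "new_prompt_hash": hashlib.sha256(new_prompt.encode()).hexdigest(),
            "reason": reason,
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601, UTC
    }
```

Hashing before/after prompts keeps sensitive prompt text out of the log while still proving that a change happened and letting you match a hash against a known version.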
Storage and access
- Write to an append-only log table in your database or warehouse.
- Stream critical events to Google Cloud Logging or AWS CloudWatch.
- Provide export endpoints (CSV/JSON) for customers needing regular audits.
- Set retention to at least 365 days; allow longer retention per contract.
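A minimal sketch of the append-only table and a JSON export path, shown here with SQLite for brevity (a warehouse table works the same way); the schema mirrors the event structure above, and "append-only" is enforced by convention, with the application issuing only INSERTs:

```python
import json
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS audit_log (
    event_id   TEXT PRIMARY KEY,
    event_type TEXT NOT NULL,
    actor      TEXT NOT NULL,
    tenant_id  TEXT,
    payload    TEXT NOT NULL,   -- JSON with before/after values
    timestamp  TEXT NOT NULL    -- ISO 8601, UTC
)
"""

def append_event(conn, event):
    # Insert only: application code never issues UPDATE or DELETE,
    # so the table behaves as an append-only log.
    conn.execute(
        "INSERT INTO audit_log VALUES (?, ?, ?, ?, ?, ?)",
        (
            event["event_id"],
            event["event_type"],
            event["actor"],
            event.get("tenant_id"),
            json.dumps(event["payload"]),
            event["timestamp"],
        ),
    )
    conn.commit()

def export_json(conn, tenant_id):
    # Customer-facing export: one tenant's events as JSON lines.
    cols = ["event_id", "event_type", "actor", "tenant_id", "payload", "timestamp"]
    rows = conn.execute(
        "SELECT event_id, event_type, actor, tenant_id, payload, timestamp "
        "FROM audit_log WHERE tenant_id = ? ORDER BY timestamp",
        (tenant_id,),
    )
    return "\n".join(json.dumps(dict(zip(cols, r))) for r in rows)
```

In production you would add database-level guards as well (revoking UPDATE/DELETE from the application role), since convention alone does not satisfy an auditor.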
Alerting and review
- Trigger Google Chat alerts for high-risk actions: prompt updates, impersonations, and kill switches.
- Require dual approval for sensitive actions (e.g., disabling a tenant).
- Schedule monthly log reviews to satisfy SOC 2 or ISO audits.
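The alerting bullet above can be wired with the standard library alone. A sketch, assuming a Google Chat incoming webhook (which accepts a simple `{"text": ...}` body); the webhook URL and the exact high-risk set are deployment choices:

```python
import json
import urllib.request

# High-risk event types that should notify a human (tune per deployment).
HIGH_RISK = {"prompt.update", "admin.impersonation.start", "embed.kill_switch"}

def chat_message(event):
    """Format an audit event as a Google Chat webhook body."""
    return {
        "text": (
            f"High-risk action: {event['event_type']} "
            f"by {event['actor']} on tenant {event.get('tenant_id', 'global')} "
            f"at {event['timestamp']}"
        )
    }

def maybe_alert(webhook_url, event):
    """POST to a Google Chat incoming webhook if the event is high risk.
    Returns True if an alert was sent, False if the event was low risk."""
    if event["event_type"] not in HIGH_RISK:
        return False
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(chat_message(event)).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=UTF-8"},
    )
    urllib.request.urlopen(req)   # fire-and-forget; add retries in production
    return True
```

Keeping the risk classification in one place (the `HIGH_RISK` set) makes the dual-approval rule easy to bolt on: any event type in the set can also be required to carry a second approver in its payload.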
CrawlBot implementation
CrawlBot’s admin-ops service logs every action with actor IDs, timestamps, and payloads, and exports them to GCS/BigQuery. Mimic this approach so your assistant deployments remain auditable.