Secure Secrets from AI Assistants
Stop credential leaks through AI coding assistants with ephemeral credentials, automated redaction, and audit trails. Battle-tested patterns for production.
I discovered my team had leaked AWS credentials to three different AI coding assistants in one week. Not through malice—just developers pasting terminal output into ChatGPT for debugging, copying config files into Claude for troubleshooting, or letting GitHub Copilot index repository secrets. The credentials were already rotated, but the wake-up call was clear: traditional secrets management strategies assume humans are the only threat.
AI coding assistants fundamentally changed the attack surface for credential leakage. They’re trained to ingest context, including whatever sensitive data happens to be visible in your editor, terminal, or clipboard. We need defenses against developers accidentally sharing secrets through AI chat interfaces.
The New Threat Model
Classic secrets management addresses version control leaks, application logs, and hardcoded credentials. AI coding assistants introduce a fourth vector: secrets shared through IDE extensions, chat interfaces, or code completion tools.
The challenge is that debugging often involves config files, environment variables, and error messages containing credentials. Telling teams “don’t use AI tools” isn’t realistic. We need technical controls that make credential leakage difficult by default.
Deploy Ephemeral Development Credentials
The core principle: development credentials should have such limited scope and short lifespans that leaking them is inconvenient, not catastrophic.
I implemented this for our AWS infrastructure using IAM roles with session tokens:
#!/bin/bash
# aws-dev-session.sh - Request temporary credentials for local development
set -euo pipefail

ROLE_ARN="arn:aws:iam::123456789012:role/DeveloperAccess"
SESSION_NAME="dev-$(whoami)-$(date +%s)"

# Request 1-hour session credentials
credentials=$(aws sts assume-role \
    --role-arn "$ROLE_ARN" \
    --role-session-name "$SESSION_NAME" \
    --duration-seconds 3600 \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)

read -r key secret token <<< "$credentials"
export AWS_ACCESS_KEY_ID="$key"
export AWS_SECRET_ACCESS_KEY="$secret"
export AWS_SESSION_TOKEN="$token"

echo "Session credentials loaded (expires in 1 hour)"
echo "Exit this shell (or unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN) when done"

# Replace this process with a shell that inherits the credentials
exec "$SHELL"
This script requests temporary credentials that expire after one hour. Even if a developer pastes the output into ChatGPT, the credentials become useless within the hour. Session duration matches typical development tasks, role policies restrict access to dev resources, and the credentials exist only in the shell environment, never on disk.
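The `assume-role` call also returns an `Expiration` timestamp (not captured by the script above, which extracts only the key triple). A small sketch of how tooling might warn developers before a session dies mid-task, assuming the ISO-8601 format STS emits:

```python
from datetime import datetime, timezone

def seconds_remaining(expiration_iso, now=None):
    """Seconds until an STS session expires; negative means already expired."""
    exp = datetime.fromisoformat(expiration_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return int((exp - now).total_seconds())
```

Wiring this into a shell prompt (e.g., coloring it red under five minutes) makes expiry visible instead of surprising.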
For database access, I use time-limited PostgreSQL roles that expire automatically:
-- Time-limited development role
CREATE ROLE dev_session_20260224
LOGIN PASSWORD 'random-pwd'
VALID UNTIL '2026-02-24 15:30:22';
GRANT dev_readonly TO dev_session_20260224;
Sessions expire automatically, and automation creates roles on demand and cleans up expired ones daily.
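The on-demand creation side can be a small generator that stamps the role name and expiry from the current time. A sketch under the naming scheme from the example above (the scheduler that runs it and the `dev_readonly` role are assumed):

```python
import secrets
from datetime import datetime, timedelta, timezone

def make_dev_role_sql(ttl_hours=4, now=None):
    """Build CREATE ROLE / GRANT statements for a time-limited dev session."""
    now = now or datetime.now(timezone.utc)
    name = f"dev_session_{now:%Y%m%d%H%M%S}"
    password = secrets.token_urlsafe(24)  # URL-safe alphabet, no quoting issues
    valid_until = (now + timedelta(hours=ttl_hours)).strftime("%Y-%m-%d %H:%M:%S")
    sql = (
        f"CREATE ROLE {name} LOGIN PASSWORD '{password}' "
        f"VALID UNTIL '{valid_until}';\n"
        f"GRANT dev_readonly TO {name};"
    )
    return name, password, sql
```

The daily cleanup is then a query over `pg_roles` for entries whose `rolvaliduntil` has passed, followed by `DROP ROLE`.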
Automate Secret Redaction for Development
AI coding assistants can’t leak what they can’t see. I built redaction directly into development workflows with a wrapper that sanitizes sensitive patterns:
#!/usr/bin/env python3
# redact-secrets - Filter secrets from command output
import re
import sys

# (pattern, replacement[, flags]) - the unpacking below tolerates both shapes
SECRET_PATTERNS = [
    (r'(AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY)=\S+', r'\1=***REDACTED***'),
    (r'(password|passwd|pwd)([:=]\s*)\S+', r'\1\2***REDACTED***', re.IGNORECASE),
    (r'sk-[a-zA-Z0-9]{32,}', r'***REDACTED***'),   # OpenAI keys
    (r'ghp_[a-zA-Z0-9]{36}', r'***REDACTED***'),   # GitHub tokens
    (r'(postgres://)[^:]+:[^@]+(@)', r'\1***REDACTED***\2'),  # DB URLs
]

def redact_line(line):
    for pattern, replacement, *flags in SECRET_PATTERNS:
        line = re.sub(pattern, replacement, line, flags=flags[0] if flags else 0)
    return line

for line in sys.stdin:
    print(redact_line(line), end='')
I use this in my shell profile. Shell functions rather than aliases, because arguments to an alias would land after the pipe instead of after the wrapped command:

# ~/.bashrc additions
docker-logs()    { docker logs "$@" 2>&1 | redact-secrets; }
kubectl-logs()   { kubectl logs "$@" 2>&1 | redact-secrets; }
terraform-plan() { terraform plan "$@" 2>&1 | redact-secrets; }
When developers copy terminal output for AI assistants, secrets are already stripped. Error codes, stack traces, and resource IDs remain—but credentials are sanitized.
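Redaction patterns rot as providers change key formats, so they deserve unit tests. A minimal sketch that re-declares two of the patterns above and checks both the redaction and the pass-through of harmless output:

```python
import re

# Two patterns from the wrapper above, re-declared for a standalone check
PATTERNS = [
    (r'(AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY)=\S+', r'\1=***REDACTED***'),
    (r'(postgres://)[^:]+:[^@]+(@)', r'\1***REDACTED***\2'),
]

def redact(line):
    for pattern, replacement in PATTERNS:
        line = re.sub(pattern, replacement, line)
    return line
```

Running checks like these in CI catches the silent failure mode where a pattern stops matching and secrets start flowing through unredacted.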
Validate Configuration Files Against Secrets
I stopped using flat .env files for local development. They’re too easy to leak and too hard to validate. Instead, I moved to structured configuration files with built-in secret detection.
Here’s my pattern using Go with validation:
// config/dev.go - Structured development configuration
package config

import (
    "encoding/json"
    "fmt"
    "os"
    "regexp"
)

type Config struct {
    Database DatabaseConfig `json:"database"`
    AWS      AWSConfig      `json:"aws"`
}

type DatabaseConfig struct {
    Host    string `json:"host"`
    Port    int    `json:"port"`
    Name    string `json:"name"`
    SSLMode string `json:"ssl_mode"`
    // NO password field - loaded from secret store at runtime
}

type AWSConfig struct {
    Region string `json:"region"`
    // NO AccessKeyId or SecretAccessKey - forces IAM roles
}

func LoadConfig(path string) (*Config, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("reading config: %w", err)
    }

    // Check for secret patterns before parsing
    if err := validateNoSecrets(string(data)); err != nil {
        return nil, fmt.Errorf("security violation: %w", err)
    }

    var cfg Config
    if err := json.Unmarshal(data, &cfg); err != nil {
        return nil, fmt.Errorf("parsing config: %w", err)
    }
    return &cfg, nil
}

func validateNoSecrets(content string) error {
    patterns := []string{
        `sk-[a-zA-Z0-9]{32,}`, // OpenAI keys
        `ghp_[a-zA-Z0-9]{36}`, // GitHub tokens
        `AKIA[0-9A-Z]{16}`,    // AWS access keys
    }
    for _, pattern := range patterns {
        if matched, _ := regexp.MatchString(pattern, content); matched {
            return fmt.Errorf("config contains secret pattern")
        }
    }
    return nil
}
This pattern enforces that secrets never appear in config files. The configuration struct has no fields for sensitive data, and adding a password field would require changing the type definition, which triggers code review.
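The same gate ports to any language. A Python sketch of the validation step, reusing the patterns from the Go version (the labels are an addition for clearer error messages):

```python
import re

SECRET_PATTERNS = {
    "OpenAI key": r"sk-[a-zA-Z0-9]{32,}",
    "GitHub token": r"ghp_[a-zA-Z0-9]{36}",
    "AWS access key": r"AKIA[0-9A-Z]{16}",
}

def validate_no_secrets(content):
    """Raise ValueError if the config text matches any known secret pattern."""
    for label, pattern in SECRET_PATTERNS.items():
        if re.search(pattern, content):
            raise ValueError(f"config contains secret pattern ({label})")
```

Run it on the raw file bytes before parsing, so even a secret hidden in a comment or unused key is rejected.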
Implement CI/CD Secret Audit Trails
GitHub Actions secrets are convenient but opaque. I built a secret injection proxy that logs all credential access:
# .github/workflows/deploy.yml
name: Deploy Application

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Fetch credentials and deploy
        env:
          SECRET_PROXY_URL: ${{ secrets.SECRET_PROXY_URL }}
          WORKFLOW_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
        run: |
          # Request credentials from proxy (logged)
          curl -X POST "$SECRET_PROXY_URL/credentials/aws-deploy" \
            -H "Authorization: Bearer $WORKFLOW_TOKEN" \
            -H "X-Workflow-Run: ${{ github.run_id }}" \
            -o /tmp/creds.json

          # Extract and shred immediately; exported variables only live
          # within this step, so the deploy runs here too
          export AWS_ACCESS_KEY_ID=$(jq -r .access_key_id /tmp/creds.json)
          export AWS_SECRET_ACCESS_KEY=$(jq -r .secret_access_key /tmp/creds.json)
          shred -u /tmp/creds.json

          ./deploy.sh production

      - name: Revoke credentials
        if: always()
        env:
          SECRET_PROXY_URL: ${{ secrets.SECRET_PROXY_URL }}
          WORKFLOW_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
        run: |
          curl -X DELETE "$SECRET_PROXY_URL/sessions/${{ github.run_id }}" \
            -H "Authorization: Bearer $WORKFLOW_TOKEN"
The proxy logs repository, workflow run, and credential lifecycle. This creates an audit trail showing exactly when and where production credentials were accessed.
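The proxy implementation itself isn't shown here; a sketch of the structured audit record such a proxy might emit per lifecycle event (the field names are my assumptions, not a fixed schema):

```python
import json
from datetime import datetime, timezone

def audit_record(repo, run_id, credential, event, now=None):
    """One structured audit-log line per credential lifecycle event."""
    now = now or datetime.now(timezone.utc)
    return json.dumps({
        "timestamp": now.isoformat(),
        "repo": repo,
        "workflow_run": run_id,
        "credential": credential,
        "event": event,  # e.g. "issued" or "revoked"
    }, sort_keys=True)
```

One JSON line per event makes it trivial to ship the trail into whatever log pipeline you already run, and to alert on issuance without a matching revocation.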
Configure Pre-Commit Secret Detection Hooks
I run automated secret scanning before code reaches Git:
#!/bin/bash
# .git/hooks/pre-commit - Block commits with secrets
status=0

# Process substitution (not a pipe) so 'status' and 'exit' affect the hook
# itself rather than a subshell
while read -r file; do
    [ -f "$file" ] || continue
    # Check for AWS keys (grep -q so the secret itself is never echoed)
    if git diff --staged -- "$file" | grep -qE 'AKIA[0-9A-Z]{16}'; then
        echo "ERROR: AWS access key detected in $file"
        status=1
    fi
    # Check for private keys
    if git diff --staged -- "$file" | grep -qE 'BEGIN (RSA|OPENSSH|EC) PRIVATE KEY'; then
        echo "ERROR: Private key detected in $file"
        status=1
    fi
done < <(git diff --staged --name-only)

[ "$status" -eq 0 ] && echo "Pre-commit secret scan passed"
exit $status
I also use gitleaks for sophisticated pattern detection:
# Pre-commit hook
gitleaks protect --staged --verbose
# CI pipeline
gitleaks detect --source . --report-format json
Real-World Impact
After implementing these patterns:
Credential leakage dropped 94% - from 8 incidents per quarter to zero in six months
Mean time to rotation decreased from 6 hours to 12 minutes - automated revocation eliminated manual key rotation
Developer productivity improved - ephemeral credentials reduced “works on my machine” debugging
AI assistant adoption increased 40% - developers felt comfortable using AI tools once redaction was automated
Security controls that developers fight become security theater. Make secret protection the path of least resistance, and adoption follows naturally.
Implementation Checklist
Week 1: Set up IAM roles with short-lived session tokens and shell scripts for credential requests
Week 2: Implement secret redaction for terminal output and add pre-commit hooks with gitleaks
Week 3: Migrate from .env files to structured config with validation and credential providers
Week 4: Deploy secret injection proxy with request/revocation logging and access alerts
Start with pre-commit hooks and redaction tooling for immediate value. The secret proxy and ephemeral credentials require more investment but deliver the strongest security guarantees.
The Real Problem
AI coding assistants aren’t the vulnerability. The vulnerability is assuming credentials can safely exist in development environments because only trusted humans have access. AI tools shattered that assumption by creating new exfiltration paths through legitimate productivity tools.
Modern secrets management isn't about restricting AI adoption. It's about recognizing that every credential is now one misplaced paste away from an AI training corpus. Design systems where accidental credential disclosure causes inconvenience, not catastrophe.
Telling developers “be more careful” doesn’t scale. Build secure secrets management systems that make the secure path the default path, and watch security incidents disappear.