Using LLM APIs but worried about sending client data? Built a proxy for that.
OpenAI-compatible proxy that masks personal data and secrets before sending to your provider.
Mask Mode (default):
You send: "Email sarah.chen@hospital.org about meeting Dr. Miller"
LLM receives: "Email <EMAIL_1> about meeting <PERSON_1>"
You get back: Original names restored in response
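Conceptually, mask mode is a reversible substitution: detected entities are swapped for numbered placeholders on the way out, and the map is replayed over the response on the way back. A toy sketch of that round-trip (naive regexes for illustration only; the proxy itself uses Presidio's detectors, and these function names are hypothetical):

```python
import re

def mask(text):
    """Replace emails and 'Dr. <Name>' mentions with numbered placeholders.
    Toy patterns for illustration; the real proxy uses Presidio's NER."""
    mapping = {}
    counters = {}

    def sub(kind, pattern, s):
        def repl(m):
            counters[kind] = counters.get(kind, 0) + 1
            token = f"<{kind}_{counters[kind]}>"
            mapping[token] = m.group(0)
            return token
        return pattern.sub(repl, s)

    text = sub("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), text)
    text = sub("PERSON", re.compile(r"\bDr\. [A-Z][a-z]+"), text)
    return text, mapping

def unmask(text, mapping):
    """Restore the originals in the LLM's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask("Email sarah.chen@hospital.org about meeting Dr. Miller")
# masked == "Email <EMAIL_1> about meeting <PERSON_1>"
```

The key property is that the mapping never leaves your side of the proxy; the provider only ever sees the placeholders.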
Route Mode (if you run a local LLM):
Requests with PII → Local LLM
Everything else → Cloud
What it catches: PII and secrets detected via Microsoft Presidio. ~500MB RAM, 10-50ms per request.
Works with Cursor, Open WebUI, LangChain, or any OpenAI-compatible client.
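Since the proxy speaks the OpenAI API, pointing a client at it is just a base-URL change. A minimal stdlib sketch of what that request looks like (the localhost port and path here are assumptions, not the proxy's actual defaults; check the docs for those):

```python
import json
import urllib.request

# Hypothetical proxy address; any OpenAI-compatible client is configured
# the same way, by swapping its base URL for the proxy's.
PROXY_BASE = "http://localhost:8080/v1"

body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{
        "role": "user",
        "content": "Email sarah.chen@hospital.org about meeting Dr. Miller",
    }],
}).encode()

req = urllib.request.Request(
    f"{PROXY_BASE}/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-...",  # your provider key, passed through
    },
)
# urllib.request.urlopen(req) would send it: the proxy masks the PII,
# forwards the request upstream, and restores the originals in the reply.
```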
Docs: https://pasteguard.com/docs
Feedback on edge cases welcome.