Stop Rebuilding the Same Integration Plumbing in Every AI App
OAuth, secrets, MCP — four SDK calls, one registration step.
Every AI app we built rewrote the same plumbing. The OAuth dance for Slack. Encrypted storage for an API key. Refresh-token logic for the third call that finally fails after an hour. Wiring up an MCP client to a server that lives behind a bearer token someone pasted into a Notion page.
We'd write it, copy-paste it into the next app, watch it rot. Each app built by a different teammate, slightly differently, with slightly different bugs. We were a small team and the integration code became most of the code.
The pattern under all of it
Strip away the providers and the AI-specific bits, and every app needed the same four things from the platform:
- Env vars — a database URL, a Stripe key, the boring stuff. Not in a `.env` file in a Docker image. Not in a CI secret. Somewhere the app can ask for them at runtime.
- Pre-built integrations — Gmail, Calendar, Drive. The user logs in once on the platform; every app gets typed access on their behalf.
- Custom OAuth — the providers we don't pre-build. Slack, Notion, the company's SSO. The customer holds the `client_id`/secret; their app shouldn't.
- Custom MCP — internal MCP servers, third-party MCPs. The customer holds the URL and the bearer token; their app shouldn't.
That's the spine of the SDK. Four primitives: every app you deploy uses some of them, and none of them requires integration code in your app.
Register once at the org level
The flip is registration. The company (the org owner on Leash) registers its things once on the dashboard:
- Drop a Slack `client_id` + `client_secret` into the “Custom OAuth providers” card. Encrypted with the org's KMS key. The app never sees it.
- Drop the URL of an internal MCP server + a bearer token into the “Custom MCP servers” card. Same treatment.
- Connect Doppler / 1Password / GCP Secret Manager as a secret source — or just type secrets into the dashboard.
Now every app you deploy in that org gets typed access through four SDK calls.
The four calls
```ts
import { LeashIntegrations } from '@leash/sdk/integrations'

const client = new LeashIntegrations({ apiKey: process.env.LEASH_API_KEY })
// LEASH_API_KEY: created on the org page (/dashboard/organization), one per org.

// 1. Env vars — pulled at runtime, no rebuild needed when they change
const dbUrl = await client.getEnv('DATABASE_URL')

// 2. Pre-built integration — Gmail/Calendar/Drive
const messages = await client.gmail.listMessages({ maxResults: 5 })

// 3. Custom OAuth provider — registered at the org level
// Returns the user's fresh access token; refresh handled for you.
const slackToken = await client.getAccessToken('slack')

// 4. Custom MCP server — registered at the org level
// Returns { url, headers } including the bearer Authorization.
const mcp = await client.getCustomMcpConfig('acme-tools')
```

That's it. No `client_secret` in your code. No refresh logic. No MCP boilerplate. The same four calls work in TypeScript, Python, Go, Ruby, Rust, and Java.
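The token from call 3 is a plain bearer token, so you can spend it directly against Slack's Web API. A minimal sketch, continuing from the block above (`auth.test` is Slack's identity-check endpoint, not part of Leash):

```ts
// Spend the token immediately; Leash returned it fresh, so there's
// no refresh dance to handle on this side.
const res = await fetch('https://slack.com/api/auth.test', {
  method: 'POST',
  headers: { Authorization: `Bearer ${slackToken}` },
})
const identity = await res.json()
console.log(identity.ok, identity.user)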
Your `.env` collapses to one line
The thing we noticed only after living with it for a while: once you're using Leash, the only secret your app's `.env` actually needs is `LEASH_API_KEY`. Everything else — `DATABASE_URL`, the Stripe key, the OAuth client secret, the bearer token for an internal MCP — comes through the SDK at runtime.

```bash
# .env (yes, this is the whole thing)
LEASH_API_KEY=lsk_live_...
```

No more `.env.example` drift. No more “did we set `DATABASE_URL` in staging?” debugging at 11pm. Rotation happens at the source; no rebuild, no redeploy.
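Here's what "rotation without a redeploy" can look like in app code: a minimal sketch, continuing with the `client` from the four-calls block. The 60-second cache and its shape are our own illustrative choices, not Leash APIs:

```ts
// Resolve DATABASE_URL through Leash at use time, behind a short-lived
// cache. A value rotated at the source is picked up within a minute,
// with no restart and no redeploy. (Cache TTL is an arbitrary example.)
let cached: { value: string; fetchedAt: number } | undefined

async function databaseUrl(): Promise<string> {
  if (!cached || Date.now() - cached.fetchedAt > 60_000) {
    cached = { value: await client.getEnv('DATABASE_URL'), fetchedAt: Date.now() }
  }
  return cached.value
}
```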
What it deliberately doesn't do
- Doesn't proxy MCP traffic. We hand you the URL and the auth headers; your app calls the MCP directly (see the sketch after this list). Leash isn't in the request path. That keeps latency honest and means we're not a bottleneck for the LLM's tool calls.
- Doesn't require you to use Leash for secrets. If you'd rather hold them in Doppler or 1Password, point Leash at your existing source. `getEnv` resolves through whichever source the org configured.
- Doesn't pretend to handle every cloud. Single-region GCP today. Customers running on Leash are betting on a small surface area, not a multi-cloud promise.
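Here's the sketch mentioned above: wiring the config from `getCustomMcpConfig` into an MCP client. We're assuming the official `@modelcontextprotocol/sdk` package and a server that speaks Streamable HTTP; the only thing Leash contributed is the `{ url, headers }` value.

```ts
import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js'

// mcp is the { url, headers } config from getCustomMcpConfig('acme-tools');
// the bearer Authorization header is already inside it.
const transport = new StreamableHTTPClientTransport(new URL(mcp.url), {
  requestInit: { headers: mcp.headers },
})

const mcpClient = new Client({ name: 'my-app', version: '1.0.0' })
await mcpClient.connect(transport)

// Every tool call from here goes straight from the app to the MCP server.
const tools = await mcpClient.listTools()
```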
Why this shape
The shape comes from a constraint we kept hitting: customer apps can't hold credentials. Their AI agent runs on someone's laptop, in CI, on a Cloud Run revision someone's about to redeploy. Putting `client_secret` in the app means rotating it everywhere whenever it leaks. So we put the credential in one place and gave the app a thin retrieval call instead.
The same logic applies to MCP. The bearer token for a customer's internal tool server isn't something we want their AI app to know. The app gets a config dictionary right before it calls the MCP; that's the longest the credential ever lives anywhere near user code.
The four-primitive surface area is small on purpose. Anything else (token caching, retries, pagination on Gmail, etc.) lives in the SDK or in the customer's code, not in the platform contract. We'd rather grow the SDK than the API.
Where to look
- SDK overview — the four pillars, with deeper code samples per language.
- Env vars · Pre-built integrations · Custom OAuth · Custom MCP
- Pricing — built-ins are free; custom OAuth and custom MCP are gated to the Growth plan.
Try it.
```bash
curl -fsSL https://leash.build/install.sh | sh
leash login
leash deploy
```
Or just create an org on the dashboard, register a Slack app or an internal MCP, and call the SDK from any project.