
Protecting API Keys in Serverless Functions

October 5, 2024 · 5 min read | By Redshift Team

If you've deployed a Lambda or a Vercel function, you've had the moment: where do I put the API key? You can't just drop a .env file on a filesystem that doesn't persist. Hardcoding it is obviously out. And every platform has its own way of handling this, which means the "right" answer changes depending on where you're deploying. Here's what we've seen work.

Why Serverless Makes This Harder

Regular servers are straightforward -- put secrets in env vars, config files, or pull from a vault at boot. Serverless blows that up in a few ways. There's no persistent disk, so anything you write is gone next invocation. Cold starts mean your function might need to re-fetch secrets from scratch. You usually can't install whatever you want in the runtime. And if you're deploying across Lambda, Vercel, and Workers, congratulations -- you now get to learn three different secret storage systems.

None of these are unsolvable, but they do mean you have to think about when secrets enter the picture, not just how.

Pattern 1: Build-Time Injection

The simplest approach: pull secrets during your build step and bake them into the deployment as environment variables. This is what most teams start with, and honestly it works fine for a lot of cases.

# In your build script or CI
$ redshift run -e production -- npm run build

# Or download and pass to your bundler
$ export $(redshift secrets download -e production --format env)
$ npm run build

The tradeoff is that your secrets end up in the deployment artifact. If someone gets access to the built bundle, they get the secrets too. For many internal tools this is acceptable risk. For anything handling payment keys or PII, you probably want one of the other patterns.
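To make build-time injection concrete, here's a minimal sketch of how you might turn an allowlist of downloaded secrets into a bundler "define" map, so that references like process.env.API_KEY get replaced with string literals during the build. The helper name buildDefineMap and the esbuild usage in the comment are illustrative assumptions, not part of the Redshift CLI.

```typescript
// Sketch: build a bundler "define" map from an allowlist of env vars,
// so `process.env.API_KEY` is inlined as a string literal at build time.
// `buildDefineMap` is a hypothetical helper, not a Redshift API.
function buildDefineMap(
  env: Record<string, string | undefined>,
  allowlist: string[],
): Record<string, string> {
  const defines: Record<string, string> = {};
  for (const key of allowlist) {
    const value = env[key];
    if (value === undefined) {
      throw new Error(`Missing required secret: ${key}`);
    }
    // JSON.stringify quotes the value so the bundler emits a string literal.
    defines[`process.env.${key}`] = JSON.stringify(value);
  }
  return defines;
}

// Assumed usage with esbuild's `define` option:
//   await esbuild.build({ entryPoints: ['handler.ts'], bundle: true,
//     define: buildDefineMap(process.env, ['API_KEY']) });
```

The allowlist matters: injecting all of process.env would bake CI-internal variables into your bundle along with the secrets you actually meant to ship.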

Pattern 2: Platform Secret Storage

This is the most common pattern we see in production: keep Redshift as the source of truth, but sync secrets into whatever native storage your platform provides. Your functions read secrets the way the platform expects, and you avoid any runtime dependency on external services.

Vercel

# Sync Redshift secrets to Vercel
$ redshift secrets download -e production --format env | while IFS='=' read -r key value; do
    vercel env add "$key" production <<< "$value"
done

AWS Lambda

# Sync to AWS Secrets Manager
$ redshift secrets download -e production --format json | \
    aws secretsmanager put-secret-value \
    --secret-id my-app/production \
    --secret-string file:///dev/stdin

Cloudflare Workers

# Sync to Cloudflare secrets
$ redshift secrets download -e production --format env | while IFS='=' read -r key value; do
    echo "$value" | wrangler secret put "$key"
done

The downside is obvious: your secrets now live in two places. You need a sync step in CI, and if someone updates a secret on the platform directly, it'll get overwritten next deploy. Discipline helps. Making Redshift the only place anyone edits secrets helps more.
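One way to keep the two copies honest is a drift check in CI: download both sides and diff them before deploying. A minimal sketch, assuming you've already fetched both sets of key/value pairs (how you read them back from the platform varies):

```typescript
// Sketch: compare Redshift (source of truth) against the platform's copy.
interface DriftReport {
  missingOnPlatform: string[]; // in Redshift, not yet synced
  extraOnPlatform: string[];   // added or edited directly on the platform
  changed: string[];           // present in both, values differ
}

function diffSecrets(
  source: Record<string, string>,
  platform: Record<string, string>,
): DriftReport {
  const report: DriftReport = {
    missingOnPlatform: [],
    extraOnPlatform: [],
    changed: [],
  };
  for (const key of Object.keys(source)) {
    if (!(key in platform)) report.missingOnPlatform.push(key);
    else if (platform[key] !== source[key]) report.changed.push(key);
  }
  for (const key of Object.keys(platform)) {
    if (!(key in source)) report.extraOnPlatform.push(key);
  }
  return report;
}
```

Failing the pipeline when extraOnPlatform is non-empty catches the "someone edited it in the dashboard" case before the next sync silently overwrites it.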

Pattern 3: Runtime Fetching

Instead of baking secrets in at build or syncing them ahead of time, you fetch them when the function starts up. This means secrets are always current -- no stale values sitting in platform storage from a deploy three months ago. The cost is cold start latency and a network dependency.

// Inside your serverless function
import { fetchSecrets } from './redshift-client';

let secrets: Record<string, string> | null = null;

export async function handler(event: unknown) {
    // Fetch once per cold start
    if (!secrets) {
        secrets = await fetchSecrets('my-project', 'production');
    }

    const apiKey = secrets.API_KEY;
    // ... use secrets
}

We cache outside the handler so warm invocations skip the fetch. On a Lambda with a ~500ms cold start, adding a relay query might push that to 700-800ms. Whether that matters depends entirely on your use case. For a webhook handler, nobody cares. For a user-facing API, maybe test it first.
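One refinement worth considering: warm containers can live for hours, so a bare "fetch once" cache can hold a rotated-away secret for a long time. A sketch of a TTL-based cache (makeSecretCache and the fetcher signature are assumptions, not a Redshift API):

```typescript
// Sketch: cache secrets per cold start, but expire the cache so long-lived
// warm containers eventually pick up rotated values.
function makeSecretCache(
  fetchSecrets: () => Promise<Record<string, string>>,
  ttlMs = 5 * 60 * 1000, // refetch at most every 5 minutes
) {
  let cached: Record<string, string> | null = null;
  let fetchedAt = 0;
  return async function getSecrets(): Promise<Record<string, string>> {
    const now = Date.now();
    if (!cached || now - fetchedAt > ttlMs) {
      cached = await fetchSecrets(); // cold start or TTL expired
      fetchedAt = now;
    }
    return cached;
  };
}
```

Warm invocations inside the TTL window still skip the network round trip, so the latency cost stays confined to cold starts and the occasional refresh.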

Security Best Practices

Never Log Secrets

This one bites people constantly. Serverless logs tend to go to centralized logging services where retention policies are generous and access controls are lax. One console.log(process.env) during debugging and your keys are sitting in CloudWatch for the next 90 days.

// Bad - secrets might appear in logs
console.log('Config:', process.env);

// Good - log only what you need
console.log('Function initialized');
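If you genuinely need to log configuration while debugging, redact before you log. A minimal sketch; the key patterns are an assumption you should tune to your own naming conventions:

```typescript
// Sketch: mask secret-looking env vars before logging. The pattern is
// illustrative -- adjust it to match how your keys are actually named.
const SECRET_PATTERN = /(KEY|SECRET|TOKEN|PASSWORD)/i;

function redactEnv(
  env: Record<string, string | undefined>,
): Record<string, string> {
  const safe: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    if (value === undefined) continue;
    safe[key] = SECRET_PATTERN.test(key) ? '[REDACTED]' : value;
  }
  return safe;
}

// console.log('Config:', redactEnv(process.env));
```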

Use Least Privilege

Scope your secrets per function. The function that sends welcome emails doesn't need your Stripe secret key. This is annoying to set up and absolutely worth it when something goes wrong.

Rotate Regularly

Serverless functions have a sneaky property: they can run unchanged for months because nobody redeploys them. If you rotate a key but don't redeploy a function that uses build-time injection, production keeps the stale secret while Redshift holds the fresh one, and nothing flags the mismatch until something breaks.
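One way to surface that mismatch is to bake a fingerprint of the secrets into the deployment at build time, then compare it against the source of truth in a scheduled check. A sketch of the fingerprint half (the function name is an assumption; wiring it into a cron check is left out):

```typescript
// Sketch: deterministic fingerprint of a secret set. Compare the value
// baked in at build time against a freshly computed one to detect a
// rotation that hasn't been redeployed.
import { createHash } from 'node:crypto';

function secretsFingerprint(secrets: Record<string, string>): string {
  const canonical = Object.keys(secrets)
    .sort() // order-independent: same secrets -> same fingerprint
    .map((key) => `${key}=${secrets[key]}`)
    .join('\n');
  return createHash('sha256').update(canonical).digest('hex');
}
```

Comparing hashes rather than values means the check itself never has to handle or log the plaintext secrets.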

Encrypt in Transit

If you're fetching secrets at runtime, make sure the connection is encrypted. HTTPS for REST APIs, WSS for Nostr relay connections. This should be the default, but verify it -- especially in local development where it's tempting to skip TLS.
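A cheap way to enforce this is a guard that rejects unencrypted endpoints before any fetch happens, so a plain-http URL from a local config can't slip through. A minimal sketch:

```typescript
// Sketch: refuse to fetch secrets over an unencrypted transport.
// Allows https: for REST and wss: for Nostr relay connections.
function assertEncryptedTransport(endpoint: string): void {
  const { protocol } = new URL(endpoint);
  if (protocol !== 'https:' && protocol !== 'wss:') {
    throw new Error(
      `Refusing to fetch secrets over ${protocol} -- use https:// or wss://`,
    );
  }
}
```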

Platform-Specific Notes

AWS Lambda

  • Secrets Manager and Parameter Store both work; Parameter Store is cheaper for static secrets
  • Lambda extensions can pre-fetch secrets before your handler even runs, which is nice for cold starts
  • IAM roles are how you control which functions can access which secrets -- use them, don't just give everything secretsmanager:GetSecretValue on *
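For reference, a least-privilege policy for a single function might look like the sketch below. The account ID, region, and secret name are placeholders; the trailing -* accounts for the random suffix Secrets Manager appends to secret ARNs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/production-*"
    }
  ]
}
```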

Vercel

  • Environment variables are encrypted at rest, which is good
  • You can set different values for preview vs. production, which helps prevent the "tested in staging, deployed prod credentials to preview" mistake
  • Edge functions run in a more restricted environment than serverless functions -- check what APIs are available before assuming your secret-fetching code will work there

Cloudflare Workers

  • Workers Secrets are the right choice for sensitive values -- don't put secrets in KV, which is readable through the API
  • The V8 isolate model means less risk of cross-request leakage compared to traditional containers
  • The wrangler secret put CLI is straightforward but doesn't support bulk import, hence the loop in the sync script above

What Most Teams End Up Doing

In practice, most teams we've talked to land on some version of pattern 2: Redshift as the canonical store, platform secrets synced during CI/CD, platform-native access at runtime. It's not the most elegant architecture on a whiteboard, but it works reliably and doesn't add latency to function invocations.

Some teams add runtime fetching for secrets that change frequently (API keys that get rotated weekly, feature flags). Some skip the platform sync entirely and do build-time injection because their deploys are frequent enough that staleness isn't a concern. There's no single right answer -- it depends on how often your secrets change, how sensitive they are, and how much cold start latency you can tolerate.

Get Started

If you want to try any of these patterns, the quickstart will get you from zero to synced secrets in a few minutes. The CLI docs cover the secrets download command and its format options in detail.

Ready to try Redshift?

Own your secrets with decentralized, censorship-resistant secret management.