
Worker

Learn how to deploy, configure, and manage Cloudflare Workers using Alchemy for serverless functions at the edge.

A Cloudflare Worker is a serverless function that runs on Cloudflare’s global network.

Deploy a minimal Worker with a single HTTP handler:

import { Worker } from "alchemy/cloudflare";

export const worker = await Worker("api", {
  name: "api-worker",
  entrypoint: "./src/api.ts",
});

The simplest possible Worker is just a function that returns a Response:

./src/api.ts
export default {
  async fetch(request: Request): Promise<Response> {
    return new Response("OK");
  },
};

See the Local Development documentation for more details.

Use bindings to attach KV, R2, Durable Objects, secrets, and other resources to the Worker. Configure Worker properties for compatibility, observability, and runtime limits:

alchemy.run.ts
import alchemy from "alchemy";
import { Worker, KVNamespace, R2Bucket, DispatchNamespace } from "alchemy/cloudflare";

const cache = await KVNamespace("cache", { title: "cache-store" });
const storage = await R2Bucket("storage", { name: "user-storage" });

export const worker = await Worker("api", {
  name: "api-worker", // Worker name (defaults to ${app}-${stage}-${id})
  url: true, // Enable workers.dev preview URL (defaults to false; created unless using a dispatch namespace)
  cwd: "./apps/my-api", // Project root directory (defaults to process.cwd())
  entrypoint: "./src/api.ts", // Main entrypoint for bundling
  format: "esm", // Module format: 'esm' (default) or 'cjs'
  compatibilityDate: "2025-09-13", // Workers runtime version (defaults to the SDK's pinned DEFAULT_COMPATIBILITY_DATE; override only when you need a specific runtime)
  compatibilityFlags: ["nodejs_compat"], // Low-level runtime flags; consider using the `compatibility` preset instead
  compatibility: "node", // Compatibility preset (optional); expands flags for common use-cases like Node.js compatibility
  adopt: true, // Adopt an existing Worker if present; falls back to scope.adopt when undefined
  bindings: {
    CACHE: cache, // KV namespace binding
    STORAGE: storage, // R2 bucket binding
    API_KEY: alchemy.secret("secret-value"), // Secret binding
    // Other binding types include: DurableObjectNamespace, Queue, Secret, Assets, Container, Workflow
  },
  observability: {
    enabled: true, // Enable Worker logs / observability (default: true)
  },
  sourceMap: true, // Upload source maps to improve stack traces (set to false to disable)
  limits: {
    cpu_ms: 50_000, // Max CPU time in ms (default: 30_000); increase only when monitoring shows it is needed
  },
  // Other advanced 🧪 & commonly used options
  placement: { mode: "smart" }, // Smart Placement to optimize network placement for latency
  assets: { path: "./public", run_worker_first: false }, // Static assets configuration
  crons: ["0 0 * * *"], // Scheduled cron triggers (standard cron syntax)
  eventSources: [ // Background event sources (queues, streams); `taskQueue` is a Queue defined elsewhere
    { queue: taskQueue, settings: { batchSize: 15, maxConcurrency: 3 } },
  ],
  routes: ["api.example.com/*"], // Route patterns to bind the Worker
  domains: ["example.com"], // Custom domains to bind (can be objects with zoneId/adopt)
  version: "pr-123", // Publish as a preview version with this label
  dev: {
    port: 8787, // Hard-code the port used by the local development server (default: one is derived)
    tunnel: true, // Create a Cloudflare Tunnel and proxy requests to the local development server (default: false)
    remote: true, // Run the Worker in Cloudflare instead of locally (default: false)
  },
  namespace: await DispatchNamespace("my-dispatch-ns", { // Deploy to a dispatch namespace (string | DispatchNamespace)
    namespace: "production-dispatch",
  }),
  noBundle: true, // Disable bundling (default: false)
  rules: [{ globs: ["**/*.wasm"] }], // Additional bundle rules
  logpush: true, // Enable Workers LogPush for trace event export
});

Bind custom domains directly to your worker for a simpler routing setup:

import { Worker } from "alchemy/cloudflare";

const worker = await Worker("api", {
  name: "api-worker",
  entrypoint: "./src/api.ts",
  domains: ["api.example.com", "admin.example.com"],
});

// Access the created domains
console.log(worker.domains); // Array of created CustomDomain resources

Create a worker and its routes in a single declaration:

import { Worker, Zone } from "alchemy/cloudflare";

const zone = await Zone("example-zone", {
  name: "example.com",
  type: "full",
});

const worker = await Worker("api", {
  name: "api-worker",
  entrypoint: "./src/api.ts",
  routes: [
    "backend.example.com/*",
    {
      pattern: "api.example.com/*",
      zoneId: zone.id,
    },
    {
      pattern: "admin.example.com/*",
      // zoneId will be inferred from `admin.example.com/*` with an API lookup
    },
  ],
});

Alchemy doesn’t use code-generation. Instead, your Worker’s environment types can be inferred from the infrastructure configuration in your alchemy.run.ts script.

There are two ways to infer the environment types:

1. Specify the type of env in your worker (simplest):

src/worker.ts
import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request, env: typeof worker.Env) {
    await env.CACHE.get("key");
    return new Response("OK");
  },
};

2. Cast env from the cloudflare:workers module:


If you need access to the env at the top level, create a file called env.ts that re-exports env cast to the Worker’s environment type:

./src/env.ts
import type { worker } from "../alchemy.run.ts";
import { env } from "cloudflare:workers";

export const Env = env as typeof worker.Env;
export type Env = typeof worker.Env;

Then import Env in your Worker and use it to initialize your clients, etc.:

src/worker.ts
import { Env } from "./env.ts";
const myClient = new MyClient(Env.DB);

Durable Objects enable stateful serverless applications with coordination capabilities.

Create a DurableObjectNamespace in your alchemy.run.ts script and bind it to your Worker:

alchemy.run.ts
import { Worker, DurableObjectNamespace } from "alchemy/cloudflare";

const counter = DurableObjectNamespace("counter", {
  className: "Counter",
  sqlite: true, // Enable SQLite storage
});

export const worker = await Worker("api", {
  entrypoint: "./src/worker.ts",
  bindings: {
    COUNTER: counter,
  },
});

Then export a class named Counter (matching the className of the DurableObjectNamespace you created) that extends DurableObject:

./src/worker.ts
import { DurableObject } from "cloudflare:workers";

export class Counter extends DurableObject {
  async increment(): Promise<number> {
    let count = (await this.ctx.storage.get<number>("count")) ?? 0;
    count++;
    await this.ctx.storage.put("count", count);
    return count;
  }

  async fetch(request: Request): Promise<Response> {
    const count = await this.increment();
    return Response.json({ count });
  }
}

Finally, in your fetch handler, you can get a Durable Object instance via the COUNTER binding:

import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    const id = env.COUNTER.idFromName("global-counter");
    const obj = env.COUNTER.get(id);
    return obj.fetch(request);
  },
};
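
Because Counter extends DurableObject, you can also call its methods directly over RPC instead of going through fetch. A minimal sketch, assuming a compatibility date recent enough for Durable Object RPC and a stub typed with the Counter class:

import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    const id = env.COUNTER.idFromName("global-counter");
    const stub = env.COUNTER.get(id);
    // Invoke the class method on the stub via RPC (no Request/Response plumbing)
    const count = await stub.increment();
    return Response.json({ count });
  },
};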

Workflows enable orchestration and automation of long-running tasks with built-in state management and retries.

Create a Workflow in your alchemy.run.ts script and bind it to your Worker:

alchemy.run.ts
import { Worker, Workflow } from "alchemy/cloudflare";

const orderProcessor = Workflow("order-processor", {
  workflowName: "order-processing",
  className: "OrderProcessor",
});

export const worker = await Worker("api", {
  entrypoint: "./src/worker.ts",
  bindings: {
    ORDER_PROCESSOR: orderProcessor,
  },
});

Then define the workflow class in ./src/workflow.ts and re-export it from your Worker entrypoint (shown below):

./src/workflow.ts
import { WorkflowEntrypoint, type WorkflowEvent, type WorkflowStep } from "cloudflare:workers";

export class OrderProcessor extends WorkflowEntrypoint {
  async run(event: WorkflowEvent<any>, step: WorkflowStep) {
    // validateOrder, chargeCustomer, and fulfillOrder are app-specific helpers
    const order = await step.do("validate", async () => {
      return validateOrder(event.payload);
    });
    await step.do("charge", async () => {
      return chargeCustomer(order);
    });
    await step.do("fulfill", async () => {
      return fulfillOrder(order);
    });
  }
}

Access the workflow in your fetch handler:

src/worker.ts
// make sure the Workflow class is exported from the root of your Worker script
export * from "./workflow.ts";

import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    const instance = await env.ORDER_PROCESSOR.create();
    return Response.json({ id: instance.id });
  },
};
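
Workflow instances can also be looked up and inspected later. A hedged sketch extending the handler above, assuming the standard Cloudflare Workflows binding API (get() returns an instance whose status() reports progress):

src/worker.ts
import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    const id = new URL(request.url).searchParams.get("id");
    if (id) {
      // Look up an existing instance by id and report its current status
      const instance = await env.ORDER_PROCESSOR.get(id);
      return Response.json({ id: instance.id, status: await instance.status() });
    }
    const instance = await env.ORDER_PROCESSOR.create();
    return Response.json({ id: instance.id });
  },
};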

Load and execute workers dynamically at runtime using WorkerLoader:

alchemy.run.ts
import { Worker, WorkerLoader } from "alchemy/cloudflare";

export const worker = await Worker("dynamic-loader", {
  entrypoint: "./src/worker.ts",
  bindings: {
    LOADER: WorkerLoader(),
  },
});

Then use the loader to create workers on-demand:

./src/worker.ts
import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    const dynamicWorker = env.LOADER.get(
      "my-dynamic-worker",
      async () => ({
        compatibilityDate: "2025-06-01",
        mainModule: "index.js",
        modules: {
          "index.js": `
            export default {
              async fetch(request) {
                return new Response('Hello from dynamic worker!');
              }
            }
          `,
        },
      }),
    );
    const entrypoint = dynamicWorker.getEntrypoint();
    return entrypoint.fetch(new URL(request.url));
  },
};

Configure Workers to consume messages from queues with automatic retries, batching, and dead letter queues for reliable background processing:

import { Worker, Queue } from "alchemy/cloudflare";

const taskQueue = await Queue("task-queue", {
  name: "task-processing",
});

const failedQueue = await Queue("failed-tasks", {
  name: "failed-tasks",
});

export const processor = await Worker("processor", {
  entrypoint: "./src/processor.ts",
  bindings: {
    TASK_QUEUE: taskQueue, // Producer - bind the queue for sending messages
  },
  eventSources: [{ // Consumer - configure processing settings
    queue: taskQueue,
    settings: {
      batchSize: 15,
      maxConcurrency: 3,
      maxRetries: 5,
      maxWaitTimeMs: 2500,
      retryDelay: 60,
      deadLetterQueue: failedQueue,
    },
  }],
});

Consumer implementation:

./src/processor.ts
import type { MessageBatch } from "@cloudflare/workers-types";

export default {
  async queue(batch: MessageBatch, env: Env) {
    for (const message of batch.messages) {
      try {
        const data = message.body;
        await processTask(data);
        message.ack(); // Acknowledge successful processing
      } catch (error) {
        message.retry(); // Retry on failure - respects maxRetries
      }
    }
  },
};

Queue Consumer Settings:

Setting          Purpose                                  Default    Example
batchSize        Messages processed per batch             10         15
maxConcurrency   Concurrent Worker invocations            2          3
maxRetries       Retry attempts for failed messages       3          5
maxWaitTimeMs    Max wait time to fill a batch (ms)       500        2500
retryDelay       Delay between retries (seconds)          30         120
deadLetterQueue  Queue for permanently failed messages    undefined  failedQueue

When to use: Background job processing, webhook handling, email processing, image optimization, data synchronization.
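
Messages that exhaust maxRetries are moved to the deadLetterQueue. One way to keep an eye on them is to attach that queue as an event source on a separate Worker; a sketch, assuming the failedQueue defined above and a hypothetical ./src/dlq.ts handler:

alchemy.run.ts
export const dlqMonitor = await Worker("dlq-monitor", {
  entrypoint: "./src/dlq.ts",
  eventSources: [{ queue: failedQueue }], // consume permanently failed messages
});
./src/dlq.ts
import type { MessageBatch } from "@cloudflare/workers-types";

export default {
  async queue(batch: MessageBatch) {
    for (const message of batch.messages) {
      console.error("Dead-lettered task", message.body); // log or alert for manual inspection
      message.ack(); // acknowledge so the message is not redelivered
    }
  },
};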

To publish messages to a Queue from a Worker, bind it to the Worker:

alchemy.run.ts
import { Worker, Queue } from "alchemy/cloudflare";

export const taskQueue = await Queue("task-queue");

export const producer = await Worker("producer", {
  entrypoint: "./src/producer.ts",
  bindings: {
    TASK_QUEUE: taskQueue,
  },
});

Then, in your producer worker, you can send messages to the queue:

./src/producer.ts
import type { producer } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof producer.Env) {
    await env.TASK_QUEUE.send({
      name: "John Doe",
      email: "john.doe@example.com",
    });
    return new Response("Ok");
  },
};

Schedule tasks with cron expressions:

alchemy.run.ts
import { Worker } from "alchemy/cloudflare";

export const cronWorker = await Worker("cron-tasks", {
  entrypoint: "./src/cron.ts",
  crons: [
    "0 0 * * *", // Run daily at midnight UTC
    "0 */6 * * *", // Run every 6 hours
    "0 12 * * MON", // Run Mondays at noon UTC
  ],
});
./src/cron.ts
import type { ScheduledEvent } from "@cloudflare/workers-types";

export default {
  async scheduled(event: ScheduledEvent, env: Env) {
    const cron = event.cron;
    switch (cron) {
      case "0 0 * * *":
        await dailyCleanup(env);
        break;
      case "0 */6 * * *":
        await syncData(env);
        break;
    }
  },
};

Serve static assets alongside a Worker by binding an Assets resource:

alchemy.run.ts
import { Worker, Assets } from "alchemy/cloudflare";

// Static assets serving
const assets = await Assets({ path: "./public" });

export const frontend = await Worker("frontend", {
  entrypoint: "./src/worker.ts",
  bindings: { ASSETS: assets },
});

Common cron patterns:

  • "0 0 * * *" - Daily at midnight UTC
  • "0 */12 * * *" - Every 12 hours
  • "0 9 * * MON-FRI" - Weekdays at 9 AM UTC
  • "*/15 * * * *" - Every 15 minutes

Enable Workers LogPush to send trace events to configured destinations:

const worker = await Worker("api", {
  entrypoint: "./src/api.ts",
  logpush: true,
});

Important: Setting logpush: true only enables trace event collection. You must separately create a LogPush job using the Cloudflare API to specify where logs should be sent.

LogPush jobs are created via the Cloudflare API. Here’s an example using R2:

Terminal window
curl -X POST "https://api.cloudflare.com/client/v4/accounts/{account_id}/logpush/jobs" \
  -H "Authorization: Bearer {api_token}" \
  -H "Content-Type: application/json" \
  --data '{
    "name": "workers-logpush",
    "dataset": "workers_trace_events",
    "destination_conf": "r2://{bucket}?account-id={account_id}&access-key-id={key}&secret-access-key={secret}",
    "output_options": {
      "field_names": ["Event", "EventTimestampMs", "Outcome", "ScriptName", "Logs"]
    },
    "enabled": true
  }'

For more information, see the Cloudflare LogPush documentation.

Enable preview URLs for testing and configure production routing with domains and routes:

alchemy.run.ts
// Preview Worker for testing
const preview = await Worker("preview", {
  name: "my-worker",
  entrypoint: "./src/worker.ts",
  version: "pr-123",
  url: true,
});
console.log(preview.url);

You should initialize clients at the top-level of your Worker script to reduce cold start times:

src/worker.ts
import MyExpensiveApiClient from "example-api-client";

// Initialize the client at module scope so it is created once per isolate
const apiClient = new MyExpensiveApiClient();

export default {
  async fetch(request: Request, env: Env) {
    // Configure with the API key during request handling
    apiClient.setApiKey(env.API_KEY);
    return new Response("Configured with API key");
  },
};

Build distributed systems using Worker-to-Worker communication, RPC patterns, and Durable Object sharing:

Enable Workers to reference themselves:

alchemy.run.ts
import { Worker, Self } from "alchemy/cloudflare";

export const service = await Worker("auth-service", {
  entrypoint: "./src/auth.ts",
  bindings: { SELF: Self },
});

Break circular dependencies with WorkerStub:

alchemy.run.ts
import { Worker, WorkerStub } from "alchemy/cloudflare";

const authStub = WorkerStub("auth-stub", { name: "auth-service" });

export const apiWorker = await Worker("api", {
  entrypoint: "./src/api.ts",
  bindings: { AUTH: authStub },
});

export const authWorker = await Worker("auth", {
  entrypoint: "./src/auth.ts",
  bindings: { API: apiWorker },
});

Other alternatives:

  • Event-driven communication via Queues or Durable Objects
  • Shared state using Durable Objects as coordination layer
  • API Gateway pattern with unidirectional data flow

Smart Placement (performance optimization)


Enable automatic network optimization by configuring smart placement to reduce latency and improve performance:

alchemy.run.ts
export const optimizedWorker = await Worker("api", {
  entrypoint: "./src/api.ts",
  placement: {
    mode: "smart", // Automatically optimize placement for performance
  },
});

Benefits:

  • ✅ Automatic network optimization based on performance metrics
  • ✅ Reduced latency for global users
  • ✅ No infrastructure management required
  • ✅ Seamless scaling across Cloudflare’s network

When to use: Global applications, latency-sensitive APIs, high-traffic workloads, applications with geographically distributed users.

Deploy workers to dispatch namespaces for multi-tenant architectures using Cloudflare’s Workers for Platforms:

import { Worker, DispatchNamespace } from "alchemy/cloudflare";

// Create a dispatch namespace
const tenants = await DispatchNamespace("tenants", {
  namespace: "customer-workers",
});

// Deploy a worker to the dispatch namespace
const tenantWorker = await Worker("tenant-app", {
  name: "tenant-app-worker",
  entrypoint: "./src/tenant.ts",
  namespace: tenants,
});

// Create a router that binds to the dispatch namespace
export const router = await Worker("platform-router", {
  name: "main-router",
  entrypoint: "./src/router.ts",
  bindings: {
    TENANT_WORKERS: tenants,
  },
});

In your ./src/router.ts, you can dynamically route to tenant workers:

src/router.ts
import type { router } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof router.Env) {
    const url = new URL(request.url);
    const tenantId = url.hostname.split(".")[0];
    // Get the tenant's worker from the dispatch namespace
    const tenantWorker = env.TENANT_WORKERS.get(tenantId);
    // Forward the request to the tenant's worker
    return await tenantWorker.fetch(request);
  },
};
A related pattern is an API gateway: a single gateway Worker routes requests to dedicated service Workers bound to it (authWorker and paymentService here are illustrative Workers defined like the others):

alchemy.run.ts
// Individual service Workers (declared before the gateway that binds them)
const authWorker = await Worker("auth", { entrypoint: "./src/auth.ts" });
const paymentService = await Worker("payments", { entrypoint: "./src/payments.ts" });
const userService = await Worker("users", {
  entrypoint: "./src/users.ts",
  bindings: { AUTH: authWorker },
});
const orderService = await Worker("orders", {
  entrypoint: "./src/orders.ts",
  bindings: {
    USERS: userService,
    PAYMENTS: paymentService,
  },
});
// Gateway Worker coordinates requests
export const gateway = await Worker("gateway", {
  entrypoint: "./src/gateway.ts",
  bindings: {
    USER_SERVICE: userService,
    ORDER_SERVICE: orderService,
    PAYMENT_SERVICE: paymentService,
  },
  routes: ["api.example.com/*"],
});

Benefits:

  • ✅ Distributed system architecture
  • ✅ Service isolation and independent scaling
  • ✅ Fault tolerance and graceful degradation
  • ✅ Clear separation of concerns

If you’re using Workers RPC, you can set the rpc property on the Worker to define its interface when inferring the environment types.

For example, say you have an RPC Worker that exports a class extending WorkerEntrypoint:

./src/rpc.ts
import { WorkerEntrypoint } from "cloudflare:workers";

export default class MyRPC extends WorkerEntrypoint {
  async getData(id: string): Promise<{ id: string }> {
    return { id };
  }
}

If you don’t specify the rpc property, the binding’s inferred type won’t include the RPC methods, so calling them is a type error:

src/worker.ts
import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    // ❌ Type error: the RPC method is not on the inferred binding type
    await env.RPC.getData("123");
  },
};

In your alchemy.run.ts script, import the type MyRPC and set it as the rpc property on the worker:

alchemy.run.ts
import { Worker, type } from "alchemy/cloudflare";
import type MyRPC from "./src/rpc.ts";

export const rpcWorker = await Worker("rpc", {
  entrypoint: "./src/rpc.ts",
  rpc: type<MyRPC>,
});

export const worker = await Worker("worker", {
  entrypoint: "./src/worker.ts",
  bindings: {
    RPC: rpcWorker,
  },
});

Now, when you access the rpc binding in your worker, it will have the correct types:

src/worker.ts
import type { worker } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof worker.Env) {
    const result = await env.RPC.getData("123"); // ✅ typed as { id: string }
    return Response.json(result);
  },
};

You can access a Durable Object from another Worker by using the bindings property:

alchemy.run.ts
import { Worker, DurableObjectNamespace } from "alchemy/cloudflare";

const data = await Worker("data", {
  entrypoint: "./src/data.ts",
  bindings: {
    STORAGE: DurableObjectNamespace("storage", {
      className: "DataStorage",
    }),
  },
});

export const api = await Worker("api", {
  entrypoint: "./src/api.ts",
  bindings: {
    // ✅ Access the STORAGE DO hosted in the `data` worker
    STORAGE: data.bindings.STORAGE,
  },
});
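
At runtime, the shared namespace works like any other Durable Object binding, even though the DataStorage class is hosted by the data Worker. A sketch of a hypothetical ./src/api.ts for the api Worker above:

./src/api.ts
import type { api } from "../alchemy.run.ts";

export default {
  async fetch(request: Request, env: typeof api.Env) {
    const id = env.STORAGE.idFromName("tenant-42");
    const stub = env.STORAGE.get(id);
    // The request is handled by the DataStorage class running in the `data` Worker
    return stub.fetch(request);
  },
};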

When to use url: true

  • Use for: CI previews, demos, short-lived feature previews, and sharing a build with reviewers.
  • Do not use for production: Route production traffic via domains or routes to ensure DNS/TLS control and SLAs.

Define routes and domains alongside the Worker to keep routing policies readable:

alchemy.run.ts
import { Worker, Zone } from "alchemy/cloudflare";

const zone = await Zone("example", { name: "example.com", type: "full" });

export const api = await Worker("api", {
  entrypoint: "./src/api.ts",
  routes: [
    "backend.example.com/*",
    { pattern: "api.example.com/*", zoneId: zone.id },
  ],
  domains: ["admin.example.com"],
});

As noted above, reserve url: true for CI previews, demos, and short-lived feature previews; route production traffic through domains or routes for DNS/TLS control.

You can generate a wrangler.json from your Worker with the WranglerJson resource:

alchemy.run.ts
import { WranglerJson } from "alchemy/cloudflare";

await WranglerJson({
  worker: api,
  transform: {
    wrangler: (spec) => ({
      ...spec,
      vars: { ...spec.vars, CUSTOM_VAR: "value" },
    }),
  },
});

Binding Resolution Errors: Ensure bindings are configured in the Worker definition.

alchemy.run.ts
// ✅ Created a KV namespace...
const cache = await KVNamespace("cache");
// ❌ ...but forgot to bind it to the Worker
export const worker = await Worker("api", { entrypoint: "./src/worker.ts" });
src/worker.ts
// ❌ Missing binding configuration
await env.CACHE.get("key"); // Runtime error: env.CACHE is undefined
alchemy.run.ts
// ✅ Correctly configured
const cache = await KVNamespace("cache");
export const worker = await Worker("api", {
  entrypoint: "./src/worker.ts",
  bindings: { CACHE: cache },
});

Queue Consumer Failures: Always acknowledge or retry messages.

src/worker.ts
// ❌ Missing acknowledgment
export default {
  async queue(batch: MessageBatch) {
    for (const message of batch.messages) {
      await processTask(message.body);
      // Missing: message.ack() or message.retry()
    }
  },
};

// ✅ Proper message handling
export default {
  async queue(batch: MessageBatch) {
    for (const message of batch.messages) {
      try {
        await processTask(message.body);
        message.ack();
      } catch (error) {
        // Explicitly retry, optionally with a delay before redelivery
        message.retry({ delaySeconds: 30 });
      }
    }
  },
};

Preview URL Limitations: Use routes/domains for Durable Objects and production traffic. See Cloudflare Docs - Previews Limitations for details.

Performance Issues: Enable Smart Placement, increase CPU limits, initialize clients at global scope, cache data in KV, and minimize cold starts.
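
As a starting point for the performance items above, the relevant knobs all live on the Worker definition. A hedged sketch combining them (values are illustrative; tune after monitoring):

alchemy.run.ts
import { Worker, KVNamespace } from "alchemy/cloudflare";

const cache = await KVNamespace("cache", { title: "api-cache" });

export const api = await Worker("api", {
  entrypoint: "./src/api.ts",
  placement: { mode: "smart" }, // Smart Placement for latency-sensitive, globally distributed traffic
  limits: { cpu_ms: 50_000 }, // raise the CPU limit only when monitoring shows it is needed
  bindings: { CACHE: cache }, // cache expensive lookups in KV instead of recomputing per request
});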