Cloudflare Workers
KV, D1 (SQLite), R2 (S3 without egress fees), Durable Objects (stateful WebSockets), Workers AI (Llama, SDXL, Whisper), and Pages Functions.
Six Cloudflare products: the edge computing stack
Workers, KV, D1, R2, Durable Objects, and Workers AI: type, storage, and when to use each.
| Product | Type | Storage | When to use |
|---|---|---|---|
| Workers | Serverless functions | V8 isolates (not Node.js) | APIs, middleware, edge logic |
| Workers KV | Key-value store | Eventually consistent, global | Sessions, config, rate limits, cache |
| D1 | SQLite database | Replicated SQLite | Relational data, SQL queries |
| R2 | Object storage | S3-compatible, zero egress | Files, images, static assets |
| Durable Objects | Stateful edge | Per-instance persistent storage | WebSocket rooms, rate limiting, CRDTs |
| Workers AI | AI inference | GPUs at the edge | Llama, SDXL, Whisper without your own GPUs |
Frequently asked questions
What are Cloudflare Workers and how do they run on the edge?
Cloudflare Workers run serverless JavaScript at the edge in V8 isolates (not Node.js processes) across 300+ locations worldwide. Cold starts are sub-millisecond, so Lambda's cold-start problem does not apply.

Installation:

```bash
npm install wrangler@latest
npx wrangler init my-worker
```

A basic Worker:

```typescript
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === '/api/hello') {
      return Response.json({ message: 'Hello from edge!' });
    }
    return new Response('Not found', { status: 404 });
  },
};
```

`wrangler dev` runs the Worker locally; `wrangler deploy` ships it to the edge. Minimal `wrangler.toml`:

```toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"
```

Hono pairs well with Workers (`npm install hono`):

```typescript
import { Hono } from 'hono';

const app = new Hono();
app.get('/api/users', async (c) => c.json({ users: [/* ... */] }));

export default app;
```

Limitations: no Node.js APIs (no `fs`; use Web Crypto instead of Node's `crypto`); 128 MB of memory per request; CPU time of 10 ms per request on the Free plan and 30 s by default on Paid (the legacy Workers Unbound model lifted that cap); 100,000 requests/day on the free tier. Request and Response are the standard Web APIs, so the runtime is Edge Runtime compatible.
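Because a Worker's fetch handler is a plain async function over the Web-standard Request and Response types, it can be unit-tested in Node 18+ (or Deno, Bun) without the Workers runtime. A minimal sketch; the route and message mirror the example above, and the `Handler` type alias is just illustrative:

```typescript
// A Worker-style fetch handler is just a function Request -> Promise<Response>,
// so it can be exercised directly in any runtime with Web-standard types
// without deploying to the edge.

type Handler = (request: Request) => Promise<Response>;

export const handler: Handler = async (request) => {
  const url = new URL(request.url);
  if (url.pathname === '/api/hello') {
    // Response.json sets Content-Type: application/json automatically
    return Response.json({ message: 'Hello from edge!' });
  }
  return new Response('Not found', { status: 404 });
};
```

Keeping routing logic in a function like this makes it trivial to test routes and status codes before `wrangler deploy`.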
How do KV, D1, and R2 provide storage in Cloudflare Workers?
Workers KV is an eventually consistent key-value store with global replication and a 25 MB value limit. It is a good fit for session data, config, and rate limits. In `wrangler.toml`:

```toml
[[kv_namespaces]]
binding = "MY_KV"
id = "abc123"
```

Using the binding in a Worker:

```typescript
await env.MY_KV.get('key');
await env.MY_KV.put('key', 'value', { expirationTtl: 3600 });
await env.MY_KV.delete('key');
await env.MY_KV.list({ prefix: 'user:' });
```

D1 is Cloudflare-managed SQLite at the edge, with read replication across regions (beta), full SQL queries, and Prisma compatibility via a driver adapter. In `wrangler.toml`:

```toml
[[d1_databases]]
binding = "DB"
database_name = "my-db"
```

In a Worker:

```typescript
const user = await env.DB.prepare('SELECT * FROM users WHERE id = ?')
  .bind(userId)
  .first();
await env.DB.prepare('INSERT INTO users (name) VALUES (?)').bind(name).run();
```

Migrations: `wrangler d1 migrations create my-db init`, then `wrangler d1 migrations apply my-db`.

R2 is S3-compatible object storage with no egress fees. With a Wrangler binding:

```typescript
await env.MY_BUCKET.put(key, body);
const object = await env.MY_BUCKET.get(key);
await env.MY_BUCKET.delete(key);
```

The S3 SDK (`@aws-sdk/client-s3`) works against an R2 endpoint, and presigned URLs can be generated through Workers.

KV vs D1 vs R2 in short: KV for config, cache, and sessions; D1 for relational data and SQL; R2 for files, images, and assets.
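The session-data use case for KV boils down to get/put with a TTL. A sketch of that access pattern against a minimal interface, backed by an in-memory stub so it runs outside the Workers runtime; `KVLike`, the `session:` key prefix, and the helper names are assumptions for illustration, not Cloudflare APIs:

```typescript
// Subset of the KV binding surface used for session storage.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
  delete(key: string): Promise<void>;
}

// Hypothetical session helpers over a KV binding.
async function saveSession(kv: KVLike, id: string, data: object): Promise<void> {
  // 1-hour TTL: KV evicts the key automatically after expiration
  await kv.put(`session:${id}`, JSON.stringify(data), { expirationTtl: 3600 });
}

async function loadSession(kv: KVLike, id: string): Promise<object | null> {
  const raw = await kv.get(`session:${id}`);
  return raw ? JSON.parse(raw) : null;
}

// In-memory stub for local testing (TTL is ignored here for brevity).
function memoryKV(): KVLike {
  const store = new Map<string, string>();
  return {
    async get(k) { return store.get(k) ?? null; },
    async put(k, v) { store.set(k, v); },
    async delete(k) { store.delete(k); },
  };
}
```

In production the same helpers would receive `env.MY_KV` instead of the stub; since KV is eventually consistent, a just-written session may briefly be invisible from other locations.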
What are Durable Objects and how do they enable stateful edge computing?
Durable Objects are stateful servers on the edge: a single instance per ID, globally addressable, with WebSocket hibernation. A DO has an ID (e.g. `chat-room-123`); every request to that ID routes to the same instance, which keeps state in memory or in DO storage.

Defining a DO:

```typescript
export class ChatRoom {
  storage: DurableObjectStorage;
  sessions = new Map<WebSocket, unknown>();

  constructor(state: DurableObjectState, env: Env) {
    this.storage = state.storage;
  }

  async fetch(request: Request): Promise<Response> {
    if (request.headers.get('Upgrade') === 'websocket') {
      const { 0: client, 1: server } = new WebSocketPair();
      server.accept();
      this.handleSession(server);
      return new Response(null, { status: 101, webSocket: client });
    }
    return new Response('Expected WebSocket', { status: 400 });
  }

  handleSession(ws: WebSocket) {
    this.sessions.set(ws, {});
    ws.addEventListener('message', ({ data }) => this.broadcast(String(data)));
    ws.addEventListener('close', () => this.sessions.delete(ws));
  }

  broadcast(message: string) {
    this.sessions.forEach((_, ws) => ws.send(message));
  }
}
```

Binding in `wrangler.toml`:

```toml
[[durable_objects.bindings]]
name = "CHAT_ROOM"
class_name = "ChatRoom"
```

DO storage is transactional per object:

```typescript
const count = (await this.storage.get<number>('count')) ?? 0;
await this.storage.put('count', count + 1);
```

WebSocket hibernation (`state.acceptWebSocket(server)`) holds sockets open without keeping the process active, which makes millions of DOs (one per room) feasible. Other patterns: a global per-IP rate limiter, CRDT state, distributed locking.
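The per-IP rate limiter pattern works because all requests for one key reach the same DO instance, which serializes access to its counter; the counting logic itself is plain code. A sketch of a fixed-window counter under assumed parameters (the limit, window size, and class name are illustrative; in a real DO the counters would live in `this.storage` rather than a local map):

```typescript
// Fixed-window rate limiter logic. Inside a Durable Object this state is
// naturally race-free, because a single instance handles all requests for
// a given key (e.g. one DO per client IP).

interface Window { start: number; count: number; }

class FixedWindowLimiter {
  private windows = new Map<string, Window>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if over the limit.
  allow(key: string, now: number): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      // New window: reset the counter.
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    if (w.count >= this.limit) return false;
    w.count++;
    return true;
  }
}
```

A DO wrapping this would call `allow(ip, Date.now())` in its `fetch` handler and return 429 on `false`.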
What do Workers AI and AI Gateway offer for AI at the edge?
Workers AI runs inference on Cloudflare GPUs (Llama 3, Mistral, SDXL, Whisper) with no GPU infrastructure of your own, via the `env.AI` binding. The older `@cloudflare/ai` package wraps that binding (`import { Ai } from '@cloudflare/ai'; const ai = new Ai(env.AI);`), but models can also be called directly.

Text generation:

```typescript
const response = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
  prompt: 'Hello, how are you?',
  max_tokens: 500,
});
// response.response holds the generated text
```

Streaming:

```typescript
const stream = await env.AI.run('@cf/meta/llama-3-8b-instruct', { prompt, stream: true });
return new Response(stream, { headers: { 'Content-Type': 'text/event-stream' } });
```

Embeddings:

```typescript
const { data } = await env.AI.run('@cf/baai/bge-base-en-v1.5', { text: ['Hello world'] });
// data[0] is the embedding array
```

Other models: image classification (`@cf/microsoft/resnet-50`), Stable Diffusion XL (`@cf/stabilityai/stable-diffusion-xl-base-1.0`), and Whisper speech-to-text (`@cf/openai/whisper`).

AI Gateway is a proxy for external AI APIs (OpenAI, Anthropic, Hugging Face) that adds request caching, rate limiting, logging and analytics, and fallback between providers.

Workers AI pricing: $0.011 per 1,000 neurons (Cloudflare's token-like unit), with a free tier of 10,000 neurons/day.

Vectorize is Cloudflare's managed vector database (pgvector-like) with insert and query operations; combine it with Workers AI embeddings for RAG, and use KV to cache embeddings.
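The RAG combination above (embeddings plus Vectorize) ultimately ranks stored vectors against a query vector by a similarity metric such as cosine similarity. Vectorize computes this server-side; the sketch below only illustrates the math on plain arrays:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// 1 means identical direction, 0 means orthogonal (unrelated) vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In a RAG flow you would embed the user query with `@cf/baai/bge-base-en-v1.5`, rank document embeddings by this metric, and feed the top matches to the LLM as context.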
What are Pages Functions and the advanced Workers patterns?
Cloudflare Pages hosts static sites plus Functions, with Git-based deployment and automatic PR preview deployments. Functions are Workers attached to a Pages project: `functions/_middleware.ts` is middleware, and `functions/api/hello.ts` serves `/api/hello`. `wrangler dev` runs Pages Functions locally.

Middleware in Workers is a chain of functions with context propagation: `export const onRequest = [auth, handler];`

Caching with the Cache API:

```typescript
const cache = caches.default;
let response = await cache.match(request);
if (!response) {
  response = await fetch(origin);
  await cache.put(request, response.clone());
}
return response;
```

`Cache-Control` headers and `cf.cacheTtl` control Cloudflare's cache. `ctx.waitUntil()` runs background tasks without blocking the response:

```typescript
ctx.waitUntil(analytics.record(request));
```

Scheduled Workers (cron) are configured in `wrangler.toml` and handled by a `scheduled` export, then deployed with `wrangler deploy` as usual:

```toml
[triggers]
crons = ["0 * * * *"]
```

```typescript
export default {
  async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext) {
    await cleanupExpiredData(env.DB);
  },
};
```

Email Routing can target a Worker: the email arrives as an event, and the Worker parses its headers and forwards or responds. Logpush streams logs to R2, Datadog, or Splunk; Workers analytics is built in.

Architecture: Pages + Workers + D1 + R2 + KV form a complete serverless stack with near-zero cold starts and global distribution. Limits: 1 MB compressed Worker size (10 MB on Unbound), 128 MB of memory. Workers get a `worker.username.workers.dev` subdomain or a custom domain, with automatic SSL.
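The Cache API snippet above is a read-through cache: serve from cache on a hit, otherwise fetch from origin and store the result. The control flow can be sketched against a minimal interface so it runs outside the Workers runtime; `CacheLike`, `readThrough`, and `memoryCache` are illustrative stand-ins for `caches.default` and `fetch`, not Cloudflare APIs:

```typescript
// Read-through caching: match -> fetch origin -> put, mirroring the
// Cache API control flow with a pluggable cache interface.

interface CacheLike {
  match(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

async function readThrough(
  cache: CacheLike,
  key: string,
  fetchOrigin: () => Promise<string>,
): Promise<{ value: string; hit: boolean }> {
  const cached = await cache.match(key);
  if (cached !== undefined) return { value: cached, hit: true };
  const fresh = await fetchOrigin();
  await cache.put(key, fresh);
  return { value: fresh, hit: false };
}

// In-memory stand-in for caches.default, for local testing.
function memoryCache(): CacheLike {
  const store = new Map<string, string>();
  return {
    async match(k) { return store.get(k); },
    async put(k, v) { store.set(k, v); },
  };
}
```

The same shape applies in a Worker with `Request`/`Response` values; remember to `clone()` the response before `put`, since a Response body can only be read once.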