Shipstone case study

Measured AI, small architecture.

Shipstone turns official AI and mobile engineering sources into concise public readers and private social drafts. The point is not AI everywhere. The point is a narrow system where model calls are traced, summaries are checked, and new Cloudflare primitives are added only when they remove a real problem.

  • 30m smart check
  • 2 public readers
  • D1 run archive
  • KV edge fallback

One Worker owns fetch, relevance, summarization, storage, and rendering.

The workload is deliberately small, so a cron-driven Worker is easier to reason about than a queue mesh. The current work keeps AI Gateway traceability, D1 run history, and offline evals close to the pipeline before adding heavier primitives.
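The cron-driven shape is small enough to sketch in one handler. This is a minimal sketch, assuming hypothetical stage names and a `runPipeline` helper; the real stage code is private to Shipstone.

```typescript
// Minimal sketch of the single cron-driven Worker. Stage names and the
// runPipeline helper are illustrative, not Shipstone's actual code.
type Stage = (ctx: RunContext) => Promise<void>;

interface RunContext {
  runId: string;
  log: string[]; // stage names, in execution order
}

interface ExecutionContext {
  waitUntil(promise: Promise<unknown>): void;
}

// Stages run strictly in order; a thrown error aborts the run so partial
// state never reaches the public readers.
async function runPipeline(stages: [string, Stage][], runId: string): Promise<RunContext> {
  const ctx: RunContext = { runId, log: [] };
  for (const [name, stage] of stages) {
    await stage(ctx);
    ctx.log.push(name);
  }
  return ctx;
}

// The 30-minute cron trigger (configured in wrangler.toml) lands here.
// In the real Worker this object is the default export.
const worker = {
  async scheduled(_event: unknown, _env: unknown, ctx: ExecutionContext) {
    const runId = `run-${Date.now()}`;
    ctx.waitUntil(
      runPipeline(
        [
          ["fetch", async () => { /* pull official sources */ }],
          ["gate", async () => { /* relevance + dedupe */ }],
          ["summarize", async () => { /* Workers AI */ }],
          ["store", async () => { /* D1 + KV */ }],
        ],
        runId,
      ),
    );
  },
};
```

One sequential loop keeps failure semantics obvious: a broken stage stops the run instead of leaving half-processed items behind.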

01

Official sources

RSS and HTML feeds from AI, Android, Kotlin, Swift, iOS, and tooling teams.

02

Fetchers

Normalize title, canonical URL, published date, source id, priority, and content hash.
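As a sketch, step 02 might look like this. The field names come from the list above; the FNV-1a hash and the `normalize` helper are stand-ins for whatever Shipstone actually uses.

```typescript
// Hypothetical normalization step for fetched feed items.
interface NormalizedItem {
  title: string;
  canonicalUrl: string;
  publishedAt: string; // ISO 8601
  sourceId: string;
  priority: number;
  contentHash: string;
}

// FNV-1a over identity-defining fields, so a re-fetch of the same post
// produces the same hash and gets skipped downstream. Illustrative only.
function contentHash(text: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < text.length; i++) {
    h ^= text.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, "0");
}

function normalize(
  raw: { title?: string; link?: string; pubDate?: string },
  sourceId: string,
  priority = 0,
): NormalizedItem {
  const title = (raw.title ?? "").trim();
  const canonicalUrl = (raw.link ?? "").replace(/[?#].*$/, ""); // drop tracking params
  return {
    title,
    canonicalUrl,
    publishedAt: raw.pubDate ? new Date(raw.pubDate).toISOString() : new Date(0).toISOString(),
    sourceId,
    priority,
    contentHash: contentHash(`${sourceId}|${canonicalUrl}|${title}`),
  };
}
```

Hashing the canonical URL rather than the raw link means the same article shared with different UTM parameters dedupes to one item.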

03

Relevance gate

Skip weak topic matches before any model call. Cap summaries per topic and skip duplicate content hashes.
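The gate can be sketched as one pure function. The keyword check, field names, and cap shape below are assumptions, not Shipstone's real heuristics.

```typescript
// Hypothetical relevance gate: drop weak topic matches before spending a
// model call, skip hashes already summarized, and cap candidates per topic.
interface Candidate {
  topic: string;
  title: string;
  contentHash: string;
}

function gateCandidates(
  items: Candidate[],
  opts: { keywords: Record<string, string[]>; seenHashes: Set<string>; maxPerTopic: number },
): Candidate[] {
  const seen = new Set(opts.seenHashes); // copy so in-run duplicates also dedupe
  const perTopic = new Map<string, number>();
  const out: Candidate[] = [];
  for (const item of items) {
    if (seen.has(item.contentHash)) continue; // already summarized
    const words = opts.keywords[item.topic] ?? [];
    const title = item.title.toLowerCase();
    if (!words.some((w) => title.includes(w))) continue; // weak topic match
    const used = perTopic.get(item.topic) ?? 0;
    if (used >= opts.maxPerTopic) continue; // topic cap reached
    perTopic.set(item.topic, used + 1);
    seen.add(item.contentHash);
    out.push(item);
  }
  return out;
}
```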

04

Workers AI

Generate short summaries and social drafts with prompt versions and Gateway metadata.
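The summarization call can be sketched against the public Workers AI binding. The model id is a real Workers AI model, but the gateway id, prompt text, and metadata values here are illustrative, not Shipstone's production values.

```typescript
// Hypothetical request builder for the summarization step. Keeping it pure
// makes the prompt version and Gateway metadata easy to test offline.
interface SummaryRequest {
  model: string;
  inputs: { messages: { role: string; content: string }[] };
  options: { gateway: { id: string; metadata: Record<string, string> } };
}

function buildSummaryRequest(
  item: { title: string; sourceId: string; topic: string },
  runId: string,
): SummaryRequest {
  return {
    model: "@cf/meta/llama-3.1-8b-instruct",
    inputs: {
      messages: [
        { role: "system", content: "Summarize in two short sentences. [summary-v1]" },
        { role: "user", content: item.title },
      ],
    },
    options: {
      gateway: {
        id: "shipstone-gateway", // hypothetical gateway id
        metadata: {
          topic: item.topic,
          sourceId: item.sourceId,
          stage: "summary",
          promptVersion: "summary-v1",
          pipelineRunId: runId,
        },
      },
    },
  };
}

// Inside the Worker, with an AI binding:
//   const req = buildSummaryRequest(item, runId);
//   const res = await env.AI.run(req.model, req.inputs, req.options);
```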

05

Storage

D1 stores run history and summaries. KV remains an edge fallback for reader continuity.
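The D1-plus-KV split implies a small merge at read time: D1 rows win, KV fills any gap, newest first. A sketch, with assumed field names:

```typescript
// Hypothetical reader-side merge of D1 (source of truth) and KV (fallback).
interface SummaryRow {
  id: string;
  publishedAt: string; // ISO 8601, so string compare sorts correctly
  summary: string;
}

function mergeReaderRows(d1Rows: SummaryRow[], kvRows: SummaryRow[]): SummaryRow[] {
  const byId = new Map<string, SummaryRow>();
  for (const row of kvRows) byId.set(row.id, row); // fallback first
  for (const row of d1Rows) byId.set(row.id, row); // D1 overwrites on conflict
  return [...byId.values()].sort((a, b) => b.publishedAt.localeCompare(a.publishedAt));
}
```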

06

Surfaces

Public readers for portfolio signal. Shipstone stays private for drafts and operations.

Trace first, migrate storage second.

New summaries carry source id, prompt version, pipeline run id, model name, and Gateway log id when available. D1 now archives runs and summaries, while reader queries merge D1 with KV fallback so migration gaps do not blank the public feed.

{ topic: "ai-news", sourceId: "openai-blog", stage: "summary", promptVersion: "summary-v1", pipelineRunId: "run-20260430-..." }

Gateway metadata is capped to five scalar fields. Full audit data belongs in Shipstone storage, not inside Gateway log tags.
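The five-scalar cap can be enforced mechanically before the Gateway call. A sketch, assuming a hypothetical `capGatewayMetadata` helper:

```typescript
// Keep only string/number/boolean values, and at most five of them, so
// Gateway log tags stay small while the full audit record goes to D1.
function capGatewayMetadata(
  meta: Record<string, unknown>,
  limit = 5,
): Record<string, string | number | boolean> {
  const out: Record<string, string | number | boolean> = {};
  for (const [key, value] of Object.entries(meta)) {
    if (Object.keys(out).length >= limit) break; // scalar cap reached
    if (["string", "number", "boolean"].includes(typeof value)) {
      out[key] = value as string | number | boolean;
    }
  }
  return out;
}
```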

Trace tags: AI Gateway, Workers AI, promptVersion, pipelineRunId, eval:summary

Built, next, and intentionally deferred.

The portfolio signal comes from restraint: a useful product, measured model calls, durable history, and clear reasons for not adding heavier primitives yet.

Live now

Worker routes, source fetching, topic readers, Workers AI summaries, D1 archive reads, KV fallback, social drafts, and 30-minute smart checks.

  • /ai-news
  • /mobile
  • shipstone private console

In motion

AI Gateway wrapping, prompt trace fields, summary eval fixtures, D1 backfill, and this architecture page.

  • AI_GATEWAY_ID
  • summary-v1
  • npm run eval:summary

Next

Use the run archive to tune model latency, add run detail drill-downs, and decide whether the cron needs queues or Workflows.

  • run detail view
  • model latency checks
  • daily brief pages
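A model-latency check over the run archive could start as a small percentile helper in the Worker; the D1 query shape in the comment is hypothetical.

```typescript
// Nearest-rank percentile over run durations pulled from the D1 archive.
function percentile(valuesMs: number[], p: number): number {
  if (valuesMs.length === 0) return 0;
  const sorted = [...valuesMs].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// e.g. p50/p95 over durations from a query like
//   SELECT duration_ms FROM runs WHERE started_at > ?  -- hypothetical schema
```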

The system has a bias against architecture theater.

These features stay out until there is measured pressure or a concrete workflow that needs them.

deferred

Queues

Thirty-minute checks over capped candidates do not need fan-out complexity yet.

deferred

Workflows

Useful later if daily briefs need approval pauses, retries, and durable state.

deferred

Vectorize

Semantic search is valuable only after enough clean D1 content exists.

deferred

Public chat

Citation grounding, abuse handling, and rate limits need to come first.