Remove the Promise.race timeout wrapper and let the relay fetch use its
natural timeouts (7s for the article, 5s for the profile). Remove the
background-refresh complexity entirely.
The flow is now simple:
1. Check Redis cache
2. If miss, fetch from relays (up to 7s)
3. Cache result
4. Subsequent requests hit cache
The first request may take 7-8 seconds, but after that the result is cached
and responses are fast. Much simpler and more reliable.
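A minimal sketch of the resulting handler, reusing the service names (ogStore, articleMeta, ogHtml) introduced further down this log; the exact signatures are assumptions:

```ts
import type { VercelRequest, VercelResponse } from '@vercel/node';
// Service helpers as named elsewhere in this log; exact signatures are assumed
import { getCached, cacheMeta } from './services/ogStore.js';
import { fetchArticleFromRelays } from './services/articleMeta.js';
import { generateHtml, defaultMeta } from './services/ogHtml.js';

export default async function handler(req: VercelRequest, res: VercelResponse) {
  const naddr = String(req.query.naddr ?? '');

  // 1. Check the Redis cache
  const cached = await getCached(naddr);
  if (cached) return res.status(200).send(generateHtml(cached));

  // 2. On a miss, fetch from relays and rely on their natural timeouts
  //    (7s for the article, 5s for the profile); no Promise.race wrapper
  const meta = await fetchArticleFromRelays(naddr);

  // 3. Cache the result so subsequent requests hit the cache
  if (meta) await cacheMeta(naddr, meta);

  res.status(200).send(generateHtml(meta ?? defaultMeta(naddr)));
}
```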
Add more detailed logging to diagnose background refresh issues:
- Log the exact URL being called
- Log whether secret is present
- Better error handling for non-JSON responses
- More detailed error messages
This will help identify whether the refresh endpoint is being called
and why we aren't seeing logs from it.
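A sketch of what these diagnostics might look like around the trigger, as a fragment from inside the handler; refreshUrl and the REFRESH_SECRET variable name are assumptions:

```ts
const baseUrl = process.env.VERCEL_URL
  ? `https://${process.env.VERCEL_URL}`
  : 'http://localhost:3000';
const refreshUrl = `${baseUrl}/api/article-og-refresh?naddr=${encodeURIComponent(naddr)}`;

console.log('[og] triggering refresh:', refreshUrl);
console.log('[og] refresh secret present:', Boolean(process.env.REFRESH_SECRET));

try {
  const resp = await fetch(refreshUrl);
  // The endpoint may return an HTML error page, so don't assume JSON
  const text = await resp.text();
  let body: unknown = text;
  try {
    body = JSON.parse(text);
  } catch {
    // non-JSON response; keep the raw text for the log
  }
  console.log('[og] refresh response:', resp.status, body);
} catch (err) {
  console.error('[og] refresh fetch failed:', err instanceof Error ? err.message : err);
}
```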
Relays can be slow, especially on first connection. Increase timeout
to 5 seconds to give relays more time to respond before falling back
to default metadata.
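At this stage the fetch was still raced against a timer (the wrapper removed in the newest commit above); a sketch, with hypothetical helper names:

```ts
// Resolve to null if the relay fetch doesn't finish within `ms`
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T | null> {
  return Promise.race([
    promise,
    new Promise<null>((resolve) => setTimeout(() => resolve(null), ms)),
  ]);
}

// Was 3000ms; relays can be slow on first connection
const profile = await withTimeout(fetchProfileFromRelays(pubkey), 5000);
if (!profile) {
  return defaultMetadata; // relays didn't answer in time
}
```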
Add detailed logging to verify background refresh is working:
- Log when background refresh is triggered
- Log the response status and result from refresh endpoint
- Log errors if refresh fetch fails
- Add logging in refresh endpoint to track:
- When refresh starts
- When metadata is found
- When metadata is cached
- Any errors during refresh
This will help diagnose whether the background refresh is actually
populating the Redis cache after timeouts.
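A sketch of the endpoint-side logging, assuming the service helpers named elsewhere in this log:

```ts
import type { VercelRequest, VercelResponse } from '@vercel/node';
import { fetchArticleFromRelays } from './services/articleMeta.js';
import { cacheMeta } from './services/ogStore.js';

export default async function handler(req: VercelRequest, res: VercelResponse) {
  const naddr = String(req.query.naddr ?? '');
  console.log('[og-refresh] start:', naddr);
  try {
    const meta = await fetchArticleFromRelays(naddr);
    if (!meta) {
      console.log('[og-refresh] no metadata found:', naddr);
      return res.status(404).json({ ok: false });
    }
    console.log('[og-refresh] metadata found:', meta.title);
    await cacheMeta(naddr, meta);
    console.log('[og-refresh] cached:', naddr);
    res.status(200).json({ ok: true });
  } catch (err) {
    console.error('[og-refresh] error:', err);
    res.status(500).json({ ok: false });
  }
}
```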
- Remove gateway HTTP fetch entirely
- Fetch directly from relays on cache miss with 3s timeout
- If relay fetch succeeds: cache and return article metadata
- If relay fetch fails/times out: return default OG and trigger background refresh
- Update logging to reflect relay-first strategy
- Keep background refresh endpoint for eventual consistency
This simplifies the code and removes the dependency on the unreliable gateway,
while ensuring fast responses and eventual correctness via background refresh.
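A sketch of the cache-miss path, reusing the helpers from the sketches above:

```ts
const cached = await getCached(naddr);
if (cached) return res.status(200).send(generateHtml(cached));

// Give the relays a short 3s budget on the request path
const meta = await withTimeout(fetchArticleFromRelays(naddr), 3000);
if (meta) {
  await cacheMeta(naddr, meta);
  return res.status(200).send(generateHtml(meta));
}

// Too slow: respond immediately with default OG tags and let a
// fire-and-forget refresh populate the cache for the next crawler
void fetch(`${baseUrl}/api/article-og-refresh?naddr=${naddr}`).catch(() => {});
res.status(200).send(generateHtml(defaultMeta(naddr)));
```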
- Support both KV_* and UPSTASH_* env var names for Redis
- Add error handling for Redis get operations
- Add console logging to track cache hits/misses and gateway fetches
- Fix syntax error in background refresh trigger
This will help diagnose why article metadata isn't being returned correctly.
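A sketch of the fallback and the guarded read; KV_REST_API_* and UPSTASH_REDIS_REST_* are the standard Vercel KV and Upstash variable names, but treat the exact pairing as an assumption:

```ts
import { Redis } from '@upstash/redis';

// Accept either the Vercel KV (KV_*) or Upstash (UPSTASH_*) variable names
const redis = new Redis({
  url: process.env.KV_REST_API_URL ?? process.env.UPSTASH_REDIS_REST_URL ?? '',
  token: process.env.KV_REST_API_TOKEN ?? process.env.UPSTASH_REDIS_REST_TOKEN ?? '',
});

export async function getCached(key: string): Promise<unknown> {
  try {
    const hit = await redis.get(key);
    console.log(hit ? '[og] cache hit:' : '[og] cache miss:', key);
    return hit;
  } catch (err) {
    // Degrade to a cache miss instead of failing the whole request
    console.error('[og] redis get failed:', err);
    return null;
  }
}
```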
ESM requires explicit file extensions in import paths. Add .js
extensions to all relative imports in API files and services,
even though source files are .ts (they compile to .js).
This fixes ERR_MODULE_NOT_FOUND errors on Vercel.
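For example (ogStore is one of the service files referenced below):

```ts
// Source file is ogStore.ts, but Node's ESM resolver sees the compiled output:
import { getCached } from './services/ogStore.js'; // ok
// import { getCached } from './services/ogStore'; // ERR_MODULE_NOT_FOUND at runtime
```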
- Move ogStore, ogHtml, and articleMeta from src/services/ to api/services/
- Update imports in article-og.ts and article-og-refresh.ts
- Update import paths in articleMeta.ts (lib/profile and src/config/relays)
- Remove old files from src/services/
- Clean up ESLint config to only reference api/**/*.ts
This fixes the ERR_MODULE_NOT_FOUND error on Vercel by ensuring
serverless functions can access the service modules.
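The import change is just the relative path; the old path shown here is an assumption about the previous layout:

```ts
// api/article-og.ts
import { getCached, cacheMeta } from './services/ogStore.js'; // now under api/services/
// was (assumed): import { getCached, cacheMeta } from '../src/services/ogStore.js';
```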
- Add ogStore service for Redis get/set operations
- Extract shared logic: ogHtml (generateHtml, escapeHtml) and articleMeta (relay/gateway fetching)
- Refactor article-og endpoint to read from Redis, try gateway on miss, trigger background refresh
- Add article-og-refresh endpoint for background relay fetching and caching
- Update vercel.json with refresh function config
- Remove WebSocket dependencies from main OG endpoint for faster crawler responses
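A sketch of the ogStore service; the key prefix, TTL, and ArticleMeta shape are assumptions:

```ts
import { Redis } from '@upstash/redis';

export interface ArticleMeta {
  title: string;
  description: string;
  image?: string;
}

const redis = Redis.fromEnv();
const TTL_SECONDS = 60 * 60 * 24; // assumed 24h expiry

export async function getCached(naddr: string): Promise<ArticleMeta | null> {
  return (await redis.get<ArticleMeta>(`og:${naddr}`)) ?? null;
}

export async function cacheMeta(naddr: string, meta: ArticleMeta): Promise<void> {
  await redis.set(`og:${naddr}`, meta, { ex: TTL_SECONDS });
}
```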
- Update HTML title and meta tags in index.html
- Update PWA manifest in public/manifest.webmanifest
- Update Vite PWA manifest in vite.config.ts
- Update article OG title fallback in api/article-og.ts
- Reflects the core functionality: reading, highlighting, and exploring content
Use named-parameter syntax in the Vercel rewrite and add a <base href="/"> tag so assets load correctly from the root when index.html is served through the API.
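A sketch of the rewrite in vercel.json; the destination is an assumption based on the endpoints named above, and the <base href="/"> tag goes in index.html's <head>:

```json
{
  "rewrites": [
    { "source": "/a/:naddr", "destination": "/api/article-og?naddr=:naddr" }
  ]
}
```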
Instead of redirecting, serve the static index.html file directly. The Vercel rewrite preserves the /a/{naddr} URL, allowing client-side SPA routing to work correctly.
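A sketch of serving the shell from the function, assuming index.html is available to the function at runtime:

```ts
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

// Load once per cold start
const indexHtml = readFileSync(join(process.cwd(), 'index.html'), 'utf8');

// Inside the handler, for regular browsers:
res.setHeader('content-type', 'text/html; charset=utf-8');
res.status(200).send(indexHtml);
// The rewrite keeps /a/{naddr} in the address bar, so the SPA router
// sees the original URL when it boots
```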
Browsers get a 302 redirect to /, where the SPA handles routing client-side with the original /a/{naddr} URL preserved. Crawlers/bots get the full HTML with OG meta tags.
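A sketch of the crawler/browser split; the user-agent list is an assumption covering common link-preview bots:

```ts
const BOT_UA = /bot|crawler|spider|facebookexternalhit|twitterbot|slackbot|telegrambot|whatsapp|discord/i;

const ua = String(req.headers['user-agent'] ?? '');
if (BOT_UA.test(ua)) {
  // Crawlers get the full HTML with OG meta tags
  res.status(200).send(generateHtml(meta));
} else {
  // Browsers get the 302; the SPA takes over routing client-side
  res.setHeader('location', '/');
  res.status(302).end();
}
```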