Building 311monitor.com — real-time city service request alerts
open311 · python · fastapi · self-hosting · claude-code

If you have been following this blog, you might have noticed a theme: I have a few posts about Open311 APIs and tools I have built around them. 311monitor.com is the latest and most ambitious of those projects: a real-time alerting platform that watches 311 service requests across 13 US cities and notifies you when something matches your rules. I built it over a weekend (March 14-15) using Claude Code, going from zero to a production deployment with 894 tests and 97% code coverage.
Why I built this
I have been pulling data from Open311 APIs since 2020. My earlier projects on this blog were simple proximity tools — paste your coordinates, see nearby reports. They worked, but they were manual. You had to go check them. I wanted something that would come to me.
The use case that motivated this: I wanted to know the moment someone filed a 311 report near my property. Not tomorrow, not when I remember to check — immediately. Property managers, landlords, and anyone who cares about what is happening on their block would want the same thing.
What it does
You create alert rules that define what you care about, and 311monitor.com watches for matching reports and sends you a notification. The rule types are:
- Geo radius — anything filed within X feet of an address
- Address watch — tight monitoring on a specific property
- Keyword — reports mentioning a street name, business, or any text you choose
- License plate — reports that mention your plate number (best-effort, since not all reports include plates)
- Geo box — draw a polygon on the map and watch everything inside it
When a rule matches, you get notified through Discord, email, or a generic webhook you configure. Each rule can have multiple endpoints, daily caps, quiet hours, and a digest mode that batches alerts into a single daily email instead of sending them one at a time.
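Quiet hours are one of those features that is easy to get subtly wrong when the window wraps past midnight. The post does not show the actual implementation, so here is a minimal sketch of how such a check could look; the function name and the 24-hour-integer representation are my assumptions:

```python
def in_quiet_hours(hour: int, start: int, end: int) -> bool:
    """Hypothetical helper: True if `hour` (0-23) is inside the quiet
    window [start, end). The window may wrap midnight, e.g. start=22,
    end=7 suppresses alerts from 10pm through 6:59am."""
    if start == end:           # zero-length window: never quiet
        return False
    if start < end:            # same-day window, e.g. 9 -> 17
        return start <= hour < end
    return hour >= start or hour < end  # wraps past midnight
```

The wrap-around branch is the whole reason to test this: `start <= hour < end` alone silently breaks any overnight window.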
There is also a rule preview feature — before you save a rule, you can see what it would have matched in the last 30 days. The preview shows matched reports as pins on a map with your radius or polygon overlaid, so you can tune the rule before it goes live.
Supported cities
The platform currently pulls from 13 cities:
Real-time (Open311 / SeeClickFix): Boston, Cambridge, Brookline, Malden, Medford, Watertown, Framingham, Natick, Needham, Worcester
Near real-time via Socrata (~1-2 day lag): New York City, Dallas, Seattle
Adding a new SeeClickFix city is one dictionary entry in the source registry with the organization ID. Open311 and Socrata cities need a bit more client work depending on how they deviate from the spec, but the client architecture makes it straightforward. I wrote an earlier post about finding SeeClickFix organization IDs that turned out to be directly useful here.
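The post does not show the registry itself, but based on the description it could be as simple as a module-level dict keyed by city slug. Everything below (names, fields, and the org IDs) is an illustrative guess, not the actual schema:

```python
# Hypothetical source registry: adding a SeeClickFix city is one entry.
# The org_id values here are placeholders, not real SeeClickFix IDs.
SEECLICKFIX_SOURCES: dict[str, dict] = {
    "boston": {"org_id": 12345, "display_name": "Boston, MA"},
    "cambridge": {"org_id": 67890, "display_name": "Cambridge, MA"},
}

def register_city(slug: str, org_id: int, display_name: str) -> None:
    """Adding a city = one dictionary entry, no new client code."""
    SEECLICKFIX_SOURCES[slug] = {"org_id": org_id, "display_name": display_name}
```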
How it works
The core loop:
- Poll — Every 3 minutes, the app fetches new service requests from each city’s API. It tracks where it left off per source so it does not re-fetch or miss anything.
- Match — Each new report is evaluated against all active rules. Geo rules use a bounding box SQL pre-filter before running the Haversine distance calculation. Geo box rules use ray-casting point-in-polygon. Keyword and plate rules do substring matching (no regex, to prevent ReDoS).
- Deliver — Matched reports are dispatched to the user’s configured endpoints. A unique constraint on (rule, endpoint, source, request ID) prevents duplicate notifications. Failed deliveries are logged and can be retried.
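Ray-casting point-in-polygon, as used for the geo box rules, is a standard algorithm. Here is a self-contained sketch of the technique (not the app's actual code): cast a horizontal ray from the point and count edge crossings; an odd count means the point is inside.

```python
def point_in_polygon(lat: float, lng: float,
                     polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test. `polygon` is a list of (lat, lng) vertices;
    the closing edge back to the first vertex is implicit."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lng_i = polygon[i]
        lat_j, lng_j = polygon[j]
        # Does this edge straddle the point's latitude?
        if (lat_i > lat) != (lat_j > lat):
            # Longitude where the edge crosses that latitude
            cross = (lng_j - lng_i) * (lat - lat_i) / (lat_j - lat_i) + lng_i
            if lng < cross:
                inside = not inside
        j = i
    return inside
```

At city scale, treating lat/lng as a flat plane like this is a reasonable approximation; it would drift for polygons spanning large distances or crossing the antimeridian.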
There is no separate worker process — the poller runs inside the FastAPI app via APScheduler. The whole thing is a single process, which keeps deployment simple.
Tech stack
- Python 3.11+, FastAPI, uvicorn — async web framework
- SQLite in WAL mode — WAL gives concurrent reads during writes, which is all this workload needs. No Postgres, no external database server
- SQLAlchemy + Alembic — ORM and migrations
- Jinja2 + Bootstrap 5 — server-rendered templates, no JS framework, no build step
- Leaflet + markercluster — maps for rule preview, browse reports, and nearby property views
- Chart.js — interactive activity charts on the public stats page
- APScheduler — embedded background scheduler for polling
No external caches, no message queues, no Redis. The whole application runs on a single $5/month DigitalOcean droplet (1 vCPU, 1 GB RAM) with room to spare.
Building it with Claude Code in a weekend
The entire project — from initial FastAPI scaffold to production deployment with 13 cities, 894 tests, and 97% code coverage — was built over a weekend using Claude Code with Anthropic’s Opus 4.6 model. The version history shows v1.0 on March 13 through v1.31 by March 19. That includes the full matching engine, three notification channels (Discord, email, webhook), multi-source polling, admin panel, public browse and stats pages, daily digests, and the security hardening described below.
The key to making this work was not just prompting well — it was setting up the right guardrails so that Claude Code could move fast without breaking things. I treated the project the same way I would treat a junior engineer who writes code quickly but needs strong automated checks before anything reaches production.
Code quality tooling
The first thing I set up was comprehensive automated quality enforcement. Every tool runs both locally (via pre-commit hooks and Makefile targets) and in CI:
- black — code formatting
- ruff — linting with rules for PEP 8, pyflakes, isort, pyupgrade, code simplification, McCabe complexity (max 15), and flake8-bandit security checks
- mypy — type checking
- bandit — dedicated security scanner
- vulture — dead code detection
- jscpd — copy-paste detection (8% threshold)
- pip-audit — dependency vulnerability scanning
- npm audit — for vendored JS dependencies
- pytest-cov — coverage enforcement (tests fail if coverage drops below threshold)
All of these run through make targets (make lint, make security, make quality, make audit, make test). The CLAUDE.md file instructs Claude Code to always use the Makefile — never run pytest or ruff directly.
CI/CD pipeline
The GitHub Actions workflow runs three jobs in parallel on every push and PR:
- Lint & type check — black, ruff, mypy
- Security & quality audit — bandit, vulture, jscpd, pip-audit, npm audit
- Tests — 894 tests in parallel via pytest-xdist (~5 seconds)
Deploy only runs on merge to main, gated behind all three jobs passing. The deploy step backs up the SQLite database to Backblaze B2, runs Alembic migrations, restarts the systemd service, and smoke-tests the /healthz endpoint with retries.
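A smoke test with retries could look like the sketch below. The post confirms the check covers status and version at /healthz; the field names, retry counts, and function shape are my assumptions (`fetch` is injectable purely so the logic is testable without a network):

```python
import json
import time
import urllib.request

def smoke_test(url: str, expected_version: str, retries: int = 5,
               delay_s: float = 3.0, fetch=None) -> bool:
    """Poll the health endpoint until it reports the expected version and
    a healthy status, retrying to ride out the systemd restart window."""
    fetch = fetch or (lambda u: json.load(urllib.request.urlopen(u, timeout=5)))
    for attempt in range(retries):
        try:
            data = fetch(url)
            if data.get("status") == "healthy" and data.get("version") == expected_version:
                return True
        except OSError:
            pass  # service still restarting; fall through to retry
        if attempt < retries - 1:
            time.sleep(delay_s)
    return False
```

Checking the version, not just the status, is the important part: it catches the failure mode where the old process is still serving traffic after a deploy.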
CLAUDE.md as a forcing function
The most important piece was the CLAUDE.md file — a 476-line document that acts as instructions for Claude Code. It defines explicit PR requirements that must all pass before merging:
- `make test` must exit 0, coverage cannot decrease
- `make lint` (black + ruff + mypy) must pass
- `make security` (bandit) must pass
- `make quality` (vulture + jscpd) must pass
- `make audit` (pip-audit + npm audit) must pass
- Every PR must bump the version in `pyproject.toml` (semantic versioning)
- After merging, verify the deploy by curling `/healthz` and checking the version matches, status is healthy, and all sources show recent poll times
- Scan the codebase for lint suppression comments (`# noqa`, `# type: ignore`, `# nosec`) and verify each one still applies — remove stale suppressions, document reasons inline
This last rule is important. When an LLM hits a type error or lint warning, its instinct is to add a suppression comment and move on. Requiring an audit of every suppression on every PR forces it to fix the underlying issue instead. The project currently has 19 inline suppressions, all documented with the reason they are genuine false positives.
The combination of pre-commit hooks catching issues at commit time, CI catching anything that slips through, and the CLAUDE.md checklist enforcing verification after deploy means that bad code has to get past multiple independent layers to reach production.
Security
This project required more thorough security hardening than a typical personal project, since it accepts user-supplied webhook URLs and handles authentication:
- 5 layers of rate limiting — nginx connection limits, LLM bot interception (returning 403 to known AI crawlers), a sliding-window auto-ban that permanently blocks IPs exceeding 60 req/min, slowapi rate limiting on auth and API routes, and fail2ban watching for 429 responses
- CSRF protection on all form submissions, tied to IP and User-Agent
- SSRF protection on webhooks — blocks RFC 1918, loopback, link-local, and CGNAT ranges, with DNS rebinding protection
- HMAC-SHA256 webhook signing so consumers can verify payloads
- Content-Security-Policy with `script-src 'self'` (no unsafe-inline), all static assets vendored locally
- Account lockout after 10 failed login attempts
- Systemd hardening — `NoNewPrivileges`, `PrivateTmp`, `ProtectSystem=strict`, `ProtectHome`, `MemoryMax=512M`
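HMAC-SHA256 webhook signing can be sketched with the stdlib as below. The header name and hex encoding are assumptions on my part; the actual wire contract may differ:

```python
import hashlib
import hmac

def sign_payload(secret: str, body: bytes) -> str:
    """Signature the server would attach to an outgoing webhook,
    e.g. in an X-311Monitor-Signature header (header name hypothetical)."""
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

def verify_payload(secret: str, body: bytes, signature: str) -> bool:
    """Consumer-side check. compare_digest runs in constant time,
    which avoids leaking the signature through timing differences."""
    expected = sign_payload(secret, body)
    return hmac.compare_digest(expected, signature)
```

A consumer recomputes the HMAC over the raw request body with the shared secret and rejects the delivery on mismatch, which defeats both spoofed and tampered payloads.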
API keys use a b311_ prefix and are stored as SHA-256 hashes. The full key is shown once on creation and never again.
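That show-once pattern is straightforward with the stdlib. The `b311_` prefix and SHA-256 storage come from the post; the key length and helper names below are assumed:

```python
import hashlib
import secrets

def create_api_key() -> tuple[str, str]:
    """Return (full_key, stored_hash). Only the hash is persisted;
    the full key is shown to the user once and never again."""
    full_key = "b311_" + secrets.token_urlsafe(32)
    stored_hash = hashlib.sha256(full_key.encode()).hexdigest()
    return full_key, stored_hash

def verify_api_key(presented: str, stored_hash: str) -> bool:
    """Hash the presented key and compare in constant time."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(digest, stored_hash)
```

A recognizable prefix like `b311_` also lets secret scanners (and the owner) identify a leaked key at a glance.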
What I learned
SQLite handled everything I needed. I started this project expecting to migrate to Postgres eventually and kept the SQLAlchemy abstraction clean for that. After 860K+ ingested reports, the WAL mode concurrent read/write pattern handles polling and serving requests without issue. The operational simplicity of a single database file — gzipped daily and pushed to Backblaze B2 with rclone — is a meaningful advantage over running a separate database server on a $5 droplet.
Bounding box pre-filters are essential for geo matching at scale. Running Haversine distance calculations against every rule for every new report does not scale. A SQL bounding box check first (comparing lat/lng against a precomputed rectangle around the rule’s center point) eliminates the majority of candidates before the expensive trigonometry runs. The same principle applies to geobox polygon rules — computing the bounding box of the polygon’s vertices and filtering in SQL before running the ray-casting algorithm.
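To make the two-stage check concrete, here is a sketch of both stages. The constants are rough approximations (one degree of latitude is about 69 miles), which is fine because the bbox only needs to over-select; the Haversine pass does the exact cut:

```python
import math

EARTH_RADIUS_FT = 20_902_231   # mean Earth radius (~6371 km) in feet
FT_PER_DEG_LAT = 364_000       # ~69 miles per degree of latitude

def bounding_box(lat: float, lng: float, radius_ft: float):
    """Cheap, SQL-friendly pre-filter: a rectangle guaranteed to
    contain the circle of radius_ft around (lat, lng)."""
    dlat = radius_ft / FT_PER_DEG_LAT
    # Longitude degrees shrink with latitude, so widen accordingly
    dlng = radius_ft / (FT_PER_DEG_LAT * math.cos(math.radians(lat)))
    return lat - dlat, lat + dlat, lng - dlng, lng + dlng

def haversine_ft(lat1, lng1, lat2, lng2) -> float:
    """Great-circle distance in feet, run only on bbox survivors."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_FT * math.asin(math.sqrt(a))
```

In SQL the bbox stage is four comparisons against indexed lat/lng columns, so the expensive trigonometry runs on a handful of candidates instead of every report.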
Setting up quality tooling before writing features paid for itself immediately. With 8 linters, a security scanner, dead code detection, copy-paste detection, and enforced coverage thresholds all running in CI, I could let Claude Code generate large amounts of code with confidence that anything problematic would be caught before merging. The pre-commit hooks caught formatting and type issues at commit time, CI caught anything that slipped through, and the post-deploy health check verified the production system was actually working. Without this infrastructure, the speed of LLM-generated code would have been a liability rather than an asset.
The CLAUDE.md suppression audit rule was the highest-leverage guardrail. LLMs will add # type: ignore or # noqa comments to make warnings go away. Requiring an explicit audit of every suppression comment on every PR — with documented justification for each one — forced the underlying issues to be fixed rather than silenced. 19 suppressions across the entire codebase, all with inline explanations, is a number I am comfortable with.
SeeClickFix organization IDs unlock rapid city expansion. A large number of US cities use SeeClickFix as their 311 backend. Once I had a generic SeeClickFix client, adding a new city was a single dictionary entry with the org ID — no new code, no migration. Ten of the thirteen supported cities use this path.
What is next
- SMS notifications via a paid channel
- More cities — the infrastructure supports it, but I want more user testing on the current set before scaling up
- Open sourcing — the repo is private for now, but I may open it up if there is interest
The site is live at 311monitor.com.