Job Radar: From Side Project Framing to Operating System Framing
Job Radar was not built as a novelty bot. It was designed as a live monitoring
and decision-support system: structured ingestion, controlled state, human
approvals, deterministic checks, and reliable notifications for time-sensitive
job discovery work.
Executive Summary
The problem was not "can a bot send messages?" The real problem was building a
system that could monitor multiple hiring sources, avoid duplicate noise, cope
with changing source behavior, preserve operator trust, and keep state consistent
across repeated runs. The architecture was shaped around reliability, not novelty.
State Ownership
SQLite was used as a deliberate state boundary for lifecycle tracking, not just as a quick local cache.
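A minimal sketch of what that state boundary can look like: a lifecycle table with an idempotent upsert, so a repeated scan refreshes the sighting without resetting status. The table and column names here are illustrative assumptions, not the actual Job Radar schema.

```python
import sqlite3

# Illustrative lifecycle table; names are assumptions, not the real schema.
SCHEMA = """
CREATE TABLE IF NOT EXISTS jobs (
    job_key     TEXT PRIMARY KEY,   -- normalized (source, external id)
    title       TEXT NOT NULL,
    source      TEXT NOT NULL,
    status      TEXT NOT NULL DEFAULT 'discovered'
                CHECK (status IN ('discovered','notified','approved','applied','closed')),
    first_seen  TEXT NOT NULL,
    last_seen   TEXT NOT NULL
);
"""

def record_sighting(conn: sqlite3.Connection, job_key: str, title: str,
                    source: str, now: str) -> None:
    """Idempotent upsert: seeing the same posting again updates last_seen
    but never resets lifecycle status or first_seen."""
    with conn:  # one short write transaction
        conn.execute(
            """
            INSERT INTO jobs (job_key, title, source, first_seen, last_seen)
            VALUES (?, ?, ?, ?, ?)
            ON CONFLICT(job_key) DO UPDATE SET last_seen = excluded.last_seen
            """,
            (job_key, title, source, now, now),
        )
```

Because the write is idempotent, a crashed or re-run scan can replay its sightings safely, which is what makes SQLite a state boundary rather than a cache.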
Concurrency Discipline
Scan locking, short write patterns, and retry-friendly behavior were used to reduce contention and duplicate work.
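One way to sketch that locking discipline is a single-writer lock row in SQLite with a TTL, so a crashed scan cannot hold the lock forever. The table name, TTL, and API shape are assumptions for illustration, not the production mechanism.

```python
import sqlite3
import time

def try_acquire_scan_lock(conn: sqlite3.Connection, source: str,
                          ttl_seconds: int = 600) -> bool:
    """Claim a per-source scan lock in one short write transaction.
    A lock older than the TTL is treated as abandoned and stolen,
    which keeps an interrupted scan from blocking later runs."""
    now = time.time()
    conn.execute(
        "CREATE TABLE IF NOT EXISTS scan_locks (source TEXT PRIMARY KEY, acquired_at REAL)"
    )
    with conn:  # short write, released immediately
        cur = conn.execute(
            """
            INSERT INTO scan_locks (source, acquired_at) VALUES (?, ?)
            ON CONFLICT(source) DO UPDATE SET acquired_at = excluded.acquired_at
            WHERE scan_locks.acquired_at < ?
            """,
            (source, now, now - ttl_seconds),
        )
        return cur.rowcount == 1  # 1 row changed = we hold the lock

def release_scan_lock(conn: sqlite3.Connection, source: str) -> None:
    with conn:
        conn.execute("DELETE FROM scan_locks WHERE source = ?", (source,))
```

The key property is that acquisition is a single atomic statement: two concurrent scans of the same source cannot both see a changed row, so duplicate work is refused rather than detected after the fact.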
Delivery Trust
Deduplication and explicit triage reduce alert fatigue, which is critical in any notification-driven system.
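Deduplication only works if records from different sources normalize to the same key. A minimal sketch, assuming the comparable fields are company, title, and location (the real normalization rules may differ):

```python
import hashlib
import re

def dedup_key(company: str, title: str, location: str) -> str:
    """Collapse cosmetic differences (case, whitespace, punctuation)
    before hashing, so the same posting seen via an ATS feed and a
    careers page maps to one record. Rules here are illustrative."""
    def norm(s: str) -> str:
        s = s.lower().strip()
        s = re.sub(r"[^\w\s]", "", s)   # drop punctuation
        s = re.sub(r"\s+", " ", s)      # collapse whitespace
        return s

    canonical = "|".join(norm(f) for f in (company, title, location))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Any record whose key already exists in state is suppressed before it reaches Telegram, which is what keeps repeated scans from turning into alert fatigue.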
Operator Control
Human approval remains part of the system contract, so automation can accelerate work without silently drifting.
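That contract can be enforced mechanically rather than by convention: encode the lifecycle transitions and make the approval step refuse to proceed without an explicit human flag. The stage names and transition table below are assumptions for illustration.

```python
from enum import Enum

class Stage(str, Enum):
    DISCOVERED = "discovered"
    NOTIFIED = "notified"
    APPROVED = "approved"
    APPLIED = "applied"

# Automation may advance a job to NOTIFIED on its own, but the move to
# APPROVED is a human-only checkpoint; tailoring/apply requires APPROVED.
ALLOWED = {
    (Stage.DISCOVERED, Stage.NOTIFIED),
    (Stage.NOTIFIED, Stage.APPROVED),   # human-only
    (Stage.APPROVED, Stage.APPLIED),
}

def advance(current: Stage, target: Stage, human_approved: bool = False) -> Stage:
    """Gate every lifecycle transition; silent drift becomes an error."""
    if (current, target) not in ALLOWED:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    if target is Stage.APPROVED and not human_approved:
        raise PermissionError("approval checkpoint: human sign-off required")
    return target
```

Making the checkpoint a hard failure rather than a convention means automation can be extended later without quietly bypassing the operator.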
Operational Constraints Addressed
Rate limits: the system assumes external sources may throttle, change, or partially fail, so scan behavior must be bounded and retry-aware.
Duplicate detection: multiple sources and repeated scans create noise unless records are normalized and compared consistently.
State drift: if lifecycle state is loose, downstream triage and tailoring become unreliable.
Crash recovery: scan locks, SQLite writes, task logs, and idempotent execution claims reduce duplicate work after refreshes, retries, or interrupted runs.
Notification quality: a "working" alert system is still bad if it sends too much, too often, or without enough confidence.
Auditability: logs, scan reports, and outcome labels matter because they make later tuning evidence-based instead of guesswork.
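The rate-limit and retry constraints above can be sketched as bounded, jittered backoff around any source fetch. This is an illustrative helper, not the actual scanner; the attempt count, delays, and injectable `sleep`/`rng` parameters are assumptions:

```python
import random
import time

def fetch_with_backoff(fetch, max_attempts: int = 4, base_delay: float = 1.0,
                       sleep=time.sleep, rng=random.random):
    """Retry-aware, bounded polling: a throttled or flaky source gets a
    few exponentially spaced retries with jitter, then the failure is
    surfaced instead of hammering the endpoint indefinitely."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception as exc:  # in production, catch only transient errors
            last_error = exc
            if attempt < max_attempts - 1:
                # exponential backoff with jitter: base * 2^attempt * [0.5, 1.0)
                sleep(base_delay * (2 ** attempt) * (0.5 + rng() / 2))
    raise last_error
```

Bounding the attempts is the point: a source that stays down produces one logged failure in the scan report rather than an unbounded retry loop, which keeps the audit trail honest.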
Runtime Today, Scale Later
Today the desktop machine is the active node, which is why the host needs to stay
on for scheduled scans and Telegram interaction. The architecture is already shaped
so it can move to a dedicated always-on device later, where multiple agents can run
in parallel without changing the core operating model.
How To Talk About It
The strongest framing is architectural, not hobbyist. This is the concise
version I would use in recruiter-facing contexts:
Architected a Telegram-first ATS monitoring and decision-support system using
SQLite for state management, lifecycle tracking, and deduplication across a
27-source watchlist spanning Greenhouse, Lever, Ashby, and direct careers-page
discovery. Designed around scheduled and on-demand Telegram scans, retry-aware
polling, human approval checkpoints, and template-driven resume/cover-letter
generation, so the workflow reduced search latency without becoming noisy or brittle.