i Automated My Job Hunt Because Suffering is Optional
There is a certain kind of despair unique to the modern job hunt. Not the despair of not finding work - that is a more ancient suffering - but the despair of the process itself. The nightmare of opening seventeen browser tabs, copy-pasting the same resume into a text box that will strip all its formatting anyway, writing a cover letter that begins “Dear Hiring Manager” because you don’t know who you’re writing to and honestly neither do they, and then refreshing your email for the next two weeks to learn, via a boilerplate rejection, that you were not “the right fit at this time.”
So. i built jobhuntr.
The short version: jobhuntr is a Go web application that scrapes Google Jobs on an hourly schedule, sends push notifications to your phone, lets you approve or reject listings from a dashboard, and then automatically generates a tailored resume and cover letter using Claude. It handles OAuth, multi-user isolation, easy deployment on Render, and it does all of this without you having to look at a single job board.
1. The Problem (or: Why i Built This Instead of Just Applying Somewhere)
Job hunting is a grind not because finding work is inherently hard, but because the surface area of modern job hunting is enormous. You need to monitor multiple sources, triage dozens of listings, and then produce personalised application materials for the handful worth your time, all while also, presumably, working at your actual job.
Most of the tedium is mechanical. The search itself is just a query with some filters. The triage is just a binary decision per listing. The resume tailoring is a document transformation problem that takes a fixed input (your base resume) and a variable input (the job description) and produces an output that emphasises the right parts.
None of this requires human ingenuity. It requires repetition. And since 1971, with the introduction of the Kenbak-1, repetition is what computers are for.
i built the first version as a personal tool - a single-user script that ran locally, hit SerpAPI, and dropped job listings into a database i could query. It worked. It was ugly. i showed it to a friend over coffee, and he looked at it the way people look at someone else’s slightly-too-complicated home automation setup: impressed, but also clearly wanting one.
“Can i use it?”
And there it was. The moment every side project either dies quietly or becomes something real. He wanted an account. His own search filters. His own resume stored in the system. His own phone notifications. His own view of the dashboard, isolated from mine.
That single question - can i use it? - is what forced the jump to multi-user architecture. Everything you’ll read about below: per-user isolation in the database, OAuth, the users table, the scheduler iterating user IDs, the user_id column threading its way through every query - all of it traces back to that moment.
2. What It Does
The pipeline, in plain English:
Search - A scheduler runs on a configurable interval (default: hourly) and calls the SerpAPI Google Jobs engine for each of your saved search filters (keywords, location, salary range, title).
Filter - Before persisting results, the scheduler checks your per-user banned terms list. Jobs matching any banned keyword or company name are silently dropped. (Yes, you can ban specific companies. You’re welcome.)
Persist & Notify - New jobs that pass the filter get written to PostgreSQL and immediately trigger a push notification to your phone via ntfy.sh. Each user has their own private topic. The notification is a tap-to-open link straight to the job detail page.
Triage - On the dashboard, you approve or reject jobs. Rejection is final. Approval puts the job in a queue.
Generate - A background worker polls the queue and picks up approved jobs. For each one, it fetches the job owner’s stored resume from the database, constructs a prompt, and sends it to Claude. The model returns a tailored resume and cover letter in both Markdown and HTML.
PDF - If Chromium is available, go-rod renders the HTML to A4 PDF. This step is gracefully skipped if no browser is found - the Markdown and DOCX versions are always produced.
Download - The job detail page shows a preview of the generated resume and cover letter, with download links in .md, .docx, and .pdf formats.
A stats page at /stats summarises your pipeline: how many jobs were discovered, approved, applied to, and so on, with a weekly trend chart so you can see whether your filters are actually catching what you want.
3. The Tech Stack
Go was an easy choice. The application is a collection of long-running goroutines (HTTP server, scheduler, background worker), and Go’s concurrency model makes that kind of architecture feel natural rather than bolted on. The standard net/http package handles most of what you’d need from a web framework; chi adds routing and middleware without much ceremony.
PostgreSQL holds everything: users, jobs, search filters, banned terms, scrape run logs. The schema is plain relational SQL - no ORM, just database/sql with pgx as the driver. Migrations run at startup via an embedded migrations/*.sql directory, each file applied in order with a schema_migrations tracking table. Twelve migrations at the time of writing, each one a surgical ALTER TABLE rather than a schema reconstruction.
The Claude API drives two distinct workflows:
- Summarization - When a job is first discovered, Claude Haiku produces a 1–2 sentence summary and attempts to extract the salary from the description. This runs inline in the scheduler so the dashboard shows a readable summary instead of raw job description prose.
- Generation - When a job is approved, Claude (the full model, configurable) generates a tailored resume and cover letter in both Markdown and HTML. More on this in section 4.
SerpAPI provides the Google Jobs data. The alternative, directly scraping Google Jobs, is too fragile, and a different project entirely. SerpAPI is optional; without a key the scheduler simply has no source to query, unless you implement a source of your own.
ntfy.sh handles push notifications. It’s a free, open-source pub/sub notification service with excellent mobile apps. Each user picks their own private topic name; the server posts to https://ntfy.sh/<topic> with a JSON payload that includes the job title, company, location, and a deep-link back to the dashboard. No Firebase, no APNS, no account required beyond “install the app and subscribe to a topic.”
go-rod wraps Chromium’s DevTools Protocol to convert the generated HTML resume to PDF. The converter opens a headless browser page, sets the document content directly (no file system round-trip), calls the Page.printToPDF DevTools method with A4 dimensions and 1.5cm margins, and writes the stream to disk. If Chromium isn’t on $PATH, the converter simply doesn’t initialise, and the worker skips PDF creation without failing.
DOCX export is handled by godocx. The exporter parses a Markdown subset (ATX headings, bold, italic, unordered lists, paragraphs) and constructs the Word document programmatically. It’s not a full Markdown renderer - it handles the subset that actually appears in resumes and cover letters.
OAuth (GitHub and Google) is the primary authentication mechanism, via golang.org/x/oauth2. Email/password authentication also exists, with verification tokens, reset flows, and bcrypt hashing. The system supports both simultaneously - a user can sign in with GitHub or create a local account. Sessions are stored in a signed cookie via gorilla/sessions.
4. The AI Bit
There are two Claude integrations, and they serve different purposes at different points in the pipeline.
Summarization (fast path, cheap model)
When the scheduler discovers a new job, it immediately calls AnthropicSummarizer.Summarize(). The system prompt is deliberately minimal:
```
You summarize job listings. Given a job title, company, and description,
respond with exactly two lines:
LINE 1: A 1-2 sentence summary of the role and key requirements.
LINE 2: The salary or compensation range if mentioned anywhere in the
description, or "N/A" if not found.
```
The response is parsed by splitting on newline. If Claude returns “N/A” for salary, that field is stored as empty. This runs on Claude Haiku - fast, cheap, and perfectly adequate for a sentence or two.
The summary appears in the dashboard so you can make approve/reject decisions without reading the full description. Salary extraction is genuinely useful because SerpAPI’s structured detected_extensions.salary field is often absent even when the description mentions compensation clearly.
Generation (slow path, full model)
When a job is approved, AnthropicGenerator.Generate() gets called with the full job record and the user’s base resume (loaded from users.resume_markdown). The system prompt establishes the task:
```
You are an expert resume writer and career coach.
Given a job listing and a base resume in Markdown, produce four sections
in this exact format with no extra text:
---RESUME_MD---
[tailored resume in Markdown]
---RESUME_HTML---
[tailored resume as self-contained HTML with inline CSS for PDF printing]
---COVER_MD---
[professional cover letter in Markdown]
---COVER_HTML---
[cover letter as self-contained HTML with inline CSS for PDF printing]
```
The user prompt is the job title, company, location, salary, description, and the base resume verbatim. Claude returns all four sections delimited by the separator constants, and the generator parses them out with simple strings.Index extraction.
A few things worth noting about this design:
Self-contained HTML - The HTML variants include inline CSS specifically for PDF rendering. This matters because go-rod renders the page without any external stylesheets; everything needed to produce a readable PDF has to be in the document.
Four formats from one API call - Markdown, HTML, DOCX (derived from Markdown by the exporter), and PDF (derived from HTML by go-rod) all flow from a single Claude call. One request, four download formats.
Resume stored per user in the DB - The base resume lives in users.resume_markdown. There’s no file upload, no S3, no filesystem path - just a TEXT column. You paste your resume into the settings page, it’s saved to the database, and the worker reads it from there at generation time.
5. Running It Yourself
Why open source?
i already have people telling me i should take this to an accelerator or pitch party or whatever the hell people are calling “money please” meetings in the valley these days. i’m not going to do that; i’m open-sourcing the thing. In fact, if you use the live version, you’re spending my own money - money i earn writing code. Why? Because i’m tired of every last thing being commodified. It’s hard enough to live in this hellscape, but setting a tool in front of people that makes things easier and then charging for it - especially since it took me less than a week to get this into a beta state - feels wrong.
Quick start (Docker Compose)
```sh
git clone https://github.com/whinchman/jobhuntr
cd jobhuntr
cp .env.example .env
cp config.yaml.example config.yaml
make dev
```
Fill in SESSION_SECRET (generate one with openssl rand -hex 32) and GITHUB_CLIENT_ID/GITHUB_CLIENT_SECRET at minimum. The app starts at http://localhost:8080 with hot-reload via air.
The optional pieces are genuinely optional:
| Variable | What it enables |
|---|---|
| SERPAPI_KEY | Job scraping. Without it, the scheduler runs but finds nothing. |
| ANTHROPIC_API_KEY | Resume generation and summarisation. Without it, approving a job does nothing interesting. |
| GOOGLE_CLIENT_ID/SECRET | Google OAuth as a second login option. GitHub OAuth is sufficient. |
Chromium is detected at startup from $PATH (as chromium, chromium-browser, or google-chrome). If absent, go-rod will attempt to download it automatically; if that fails, PDF generation is skipped silently.
Deploy to Render
The repository includes a render.yaml Blueprint. Point Render at the repo, set the environment variables in the service dashboard, update base_url in config.yaml to your Render URL, and register that URL as an authorised redirect URI in your GitHub (and optionally Google) OAuth app. First deploy runs migrations automatically. Subsequent deployments rebuild the Docker image and restart the service.
6. What’s Next
The backlog is currently empty in the formal sense, but the obvious future directions are:
- More job sources - SerpAPI is the only source right now. LinkedIn, Indeed, and direct ATS APIs are the natural next additions. The Source interface in the scraper package is already defined for exactly this purpose.
- Application tracking - The data model already has application_status fields (applied, interviewing, won, lost) and timestamp columns. The natural extension is a Kanban-style pipeline view.
- Resume versioning - Right now there’s one resume per user. Multiple resume templates (e.g. “engineering IC”, “engineering manager”) would let the generator pick the right starting point.
- Smarter scheduling - The current hourly tick runs for all users simultaneously. A user-configurable schedule (or a smarter backoff when no new jobs are found) would reduce unnecessary SerpAPI calls.
- Email notifications - ntfy.sh is great if you’re willing to install another app. SMTP support is partially plumbed (internal/mailer exists and is wired for email auth); extending it to job notifications is a small step.
7. Closing
If you’ve made it this far, you now know more about jobhuntr than most people who use it. It is not a startup. It is not trying to disrupt recruitment. It is a tool built to solve a specific, tedious problem in the life of exactly one person, which was then extended to work for exactly two people, and which now, apparently, works for an arbitrary number of people via OAuth and a well-indexed user_id column.
The universe is under no obligation to make job hunting pleasant. But it turns out it’s under no obligation to make it entirely manual either.
The code is at github.com/whinchman/jobhuntr.