Building Web Tools For Job Applications

Written July 10, 2024

When Job Applications Are A Slog... Automate!

Edit July 30: The site has evolved a lot!

----------------

Any reasonable developer will get sick of tedium, and job searches are full of it: managing copies of resumes and templates, searching multiple sites, tracking correspondence, scheduling, and all manner of button clicking, find/replace, and copy/paste.

I have generally thought that there was too much human involvement required and not enough automatable process to merit the work. That changed over the last few months, with a tough job market and some extra time on my hands. I wanted a tool that would help me juggle a few key operations:

There were a lot of great ideas early on too:

I'll share some screenshots throughout the post for those who are curious. While it's not useful to anyone but me, it is open for others to explore and give feedback here: Maker Consulting JobTools

Edit: I didn't stop developing after this was written, so some screenshots will differ.

Building Blocks

I knew I would likely want to integrate with the Google API to manage files in Drive, since that's where I keep them today. I also knew I'd need to work with Jobscan.co's API, which is not documented. I do pay for Jobscan.co, and presumably the total number of scans I run before accepting an offer stays about the same; automation just makes them burstier. I wrote example code for both APIs in Python at first. It's comfortable and I use it during technical screens.

The Google API examples are straight from their documentation, but you can check mine out here. This gave me both anxiety and relief as I learned the hard way that GitHub and Google are very good at detecting and reporting potentially leaked credentials. Protect your secrets, folks!
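Once the site existed, those same calls ended up as plain fetch requests against the Drive REST API. Here's a rough sketch of the kind of export call involved; the file ID and access token are placeholders supplied by the caller, not the site's actual code:

```javascript
// Sketch: export a Google Doc as plain text via the Drive v3 REST API.
// fileId and accessToken are placeholders; token acquisition (e.g. Google
// Identity Services in the browser) is handled elsewhere.
async function exportDocAsText(fileId, accessToken) {
  const url = `https://www.googleapis.com/drive/v3/files/${fileId}/export?mimeType=text/plain`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`Drive export failed: ${response.status}`);
  }
  return response.text(); // the document body as plain text
}
```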

The Jobscan endpoints are not documented, but a small amount of inspection with Chrome's developer tools gave me all the headers and payload I would require. The response from the scan endpoint is just JSON containing all the relevant information for the normal scan results page. Here's an example in Python.
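The general pattern in the browser looks something like the sketch below. Because the API is undocumented, the URL, headers, and payload field names here are stand-ins, not the real endpoint; dev-tools inspection of your own authenticated session is what fills them in.

```javascript
// Illustrative only: the Jobscan endpoint is undocumented, so the URL and
// payload fields below are hypothetical placeholders for whatever dev-tools
// inspection shows for your own account.
async function runScan(resumeText, jobDescription, authToken) {
  const response = await fetch("https://www.jobscan.co/SOME_SCAN_ENDPOINT", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${authToken}`, // copied from an authenticated browser session
    },
    body: JSON.stringify({ resume: resumeText, jobDescription }),
  });
  return response.json(); // the same JSON that backs the normal scan results page
}
```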

I had some vague thoughts about how to do find/replace actions in the documents and how to extract job description fields with ChatGPT, but realistically, I had no idea if they would work beyond some quick playing around with prompts. I felt the site would still be useful with only partial automation, if only to make the process more fun for myself.

The start page lets you open multiple useful links in separate tabs or copy common search queries to your clipboard.

Setting the Groundwork

The simplest requirements can lead to major changes. I wanted to run my code on multiple machines without worrying about syncing them, even if they were in GitHub. I wanted to share them with others, get feedback, and generalize them for mass use (maybe, someday). I wanted to create more dynamic and interactive workflows outside of a command line or an IDE. So, I reached for web apps.

I deviated briefly to explore some JavaScript frameworks. I thought I would need a fancy framework like Angular, React, or Vue, but there's a lot of setup and time commitment to learning these tools. I did Angular's Tour of Heroes in the past and tried React's Tic Tac Toe game, but really, these tools were getting in the way of me starting my site. Angular provides great type checking and conventional routing, React provides fast and simple state management for interactive apps, and Vue provides progressive design, but I didn't really need these things. I needed a barebones app and some help in the design department.

Making Progress

So instead, I started with a single blank HTML page and added elements and JavaScript as I went. At first, it was all inputs, labels, and buttons to trigger calls to Google or Jobscan. Eventually, I got tired of reloading credentials and incorporated session storage to preserve details over page reloads. Each secret was its own input, so I replaced them all with a file picker to hold credentials between sessions. I learned how to do seemingly obvious tasks like creating "copy to clipboard" buttons and pop-under links (yes, I know, shame on me), wiring up event listeners, handling promises, deferring script loading, and styling buttons with CSS. I have dabbled in most of this on other projects before, but this was a real test to bring it all together.
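For flavor, here's a minimal sketch of those patterns together: reading a picked credentials file into session storage so it survives reloads, and a copy-to-clipboard button. The element IDs, storage key, and example query are illustrative, not the site's actual code.

```javascript
// Persist credentials across page reloads and wire up a clipboard button.
const CREDS_KEY = "jobtools.credentials"; // illustrative storage key

document.querySelector("#credsFile").addEventListener("change", async (event) => {
  const file = event.target.files[0];
  if (!file) return;
  const text = await file.text();          // the credentials file as JSON text
  sessionStorage.setItem(CREDS_KEY, text);  // survives reloads, not browser restarts
});

function loadCredentials() {
  const raw = sessionStorage.getItem(CREDS_KEY);
  return raw ? JSON.parse(raw) : null;
}

document.querySelector("#copyQuery").addEventListener("click", async () => {
  // Example search query; the real site offers several common ones.
  await navigator.clipboard.writeText('"staff engineer" remote site:lever.co');
});
```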


Details are extracted automatically from the job description including the company name, job title, company description, role description and duties, basic requirements, and preferred requirements. These are used later to provide more granular ATS scan results and resume tailoring options.

Life Lessons

I learned some cool things about UI/UX too. I realized just how strange it can be to interact with web elements that don't behave as expected, and that when I embraced conventions of coloring, styling, and behavior, my site became a pleasure to use. The more I did it, the more obvious the decisions became. This was surprising for someone who has always felt that HTML and CSS were tedious, alien things to fear and avoid. However, simple patterns like habitually using IDs and classes on your HTML elements make life so much easier. Learning and leveraging the differences between let, var, and const is also incredibly helpful for managing state in your application.
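As a small illustration of why that last point matters (not code from the site), the classic gotcha is attaching event listeners inside a loop:

```javascript
// var is function-scoped, so every handler below sees the final value of i;
// let gives each iteration its own binding, which is what you usually want.
const buttons = document.querySelectorAll(".subpage-button");

for (var i = 0; i < buttons.length; i++) {
  buttons[i].addEventListener("click", () => console.log("var sees:", i)); // always buttons.length
}

for (let j = 0; j < buttons.length; j++) {
  buttons[j].addEventListener("click", () => console.log("let sees:", j)); // the index you expect
}
```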

It's also fascinating to watch a web site become progressively more automated. Suddenly, the questions around managing a given activity (like authenticating against a third-party API), such as when to reload the client, what to do when it fails, how to display it to the user, and how to manage credentials, just ... disappear. At least, they disappear into neat JavaScript functions. Compared to CLI or library development, the ergonomics of the interface are extremely transparent and obviate a lot of design choices for the programmer.

One of the biggest challenges of moving into web development is fairly universal to software development: it is very easy to move too fast and not build on stable foundations. This becomes very clear when you don't entirely know what a stable foundation looks like. In many languages, a stable foundation is well-organized and documented code with tests, repeatable builds, repeatable deployments, and a good review process. Building a complete site with HTML, CSS, JavaScript, and multiple third-party APIs can make it hard to know where and how to apply these good ideas. I think the same thing can be said about major movements in the industry: cloud, microservices, analytics engineering, and of course, machine learning.

Almost Done... Or Is It?

The web site really started feeling close to "done" after I implemented a workflow navigation bar. Rather than one big page to scroll up and down, I could neatly compartmentalize activities into sub pages contained by <div> tags. I could click the workflow buttons and show/hide whole sections of the page to create the illusion of a sub page. This highlighted so many flaws in my thinking about how I might use the site, from getting the sub pages' names wrong, to their ordering, to their contents. It was a pretty good reminder that modeling a problem space is iterative and our first impressions tend to be wrong, regardless of whether you're making an API, an object-oriented model, a data model, or a web site.
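The show/hide trick itself is only a few lines. A minimal sketch of the idea, with illustrative class names and IDs rather than the site's real markup:

```javascript
// Each sub page is a <div class="subpage" id="...">; a nav button carries a
// data-subpage attribute naming the div it should reveal.
function showSubpage(id) {
  document.querySelectorAll(".subpage").forEach((div) => {
    div.style.display = div.id === id ? "block" : "none";
  });
}

document.querySelectorAll("[data-subpage]").forEach((button) => {
  button.addEventListener("click", () => showSubpage(button.dataset.subpage));
});

showSubpage("start"); // default view on load
```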

Shortly after, I assessed my to-do list. There were (and still are) lots of ergonomic improvements to make. There's still weirdness in the auth buttons. There's still unclear behavior around which buttons are enabled under which circumstances. There's code to clean up, tests to write, things to do automatically during initialization or with event listeners, and missing tidbits of information hidden on subpages. There were also a couple of big outliers: splitting up job descriptions and automating find/replace.

The scan & tailor page lets you generate company-specific documents stored in Google Drive. The latest copy is fetched with each scan, so all of your edits are included without copy/pasting into Jobscan. Progressively inclusive scans are performed so you can see how you line up against the minimum requirements, against the whole job description, and everything in between.

Here Comes ChatGPT

I had had enough of the Google API after wrestling with the semantics of gets versus exports, which file types were supported, and how to manage the raw content in JavaScript. I also realized that many of the features I was imagining could easily be enhanced by ChatGPT, and that knowing more about that particular API might be relevant to my career these days. So, I caved, put $10 on my ChatGPT account, and started running examples. I updated my credential file picker to include the GPT API key, wrote some prompts, and wrote a helper function to wrap the API. This turns out to be a great use of ChatGPT. It's usually accurate enough to get it right, and my involvement has gone from data entry to supervisory.
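The wrapper is thin. A sketch of roughly that shape, using the documented chat completions endpoint; the model name, prompt, and function name are placeholders, and the API key comes from the credentials file picker:

```javascript
// Minimal wrapper around the OpenAI chat completions API.
async function askGpt(apiKey, prompt) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // placeholder model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Example use: extract structured fields from a pasted job description.
// const fields = await askGpt(key,
//   `Extract the company name, job title, basic requirements, and preferred
//    requirements from this job description as JSON:\n${jobDescription}`);
```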

With that out of the way, I can move on to the final bits of find/replace automation. My plan at the moment is to retrieve my docs in their native format (Google Docs exports to Office Open XML) and do string substitution. If I avoid manipulating the XML headers and formatting information, it should be safe to write back to the cloud before exporting it as a PDF.
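For reference, there's also a documented route to the same effect that skips the raw XML entirely: the Google Docs API has a replaceAllText batch update that swaps placeholder strings inside a document while preserving formatting. A sketch of that alternative, with the document ID, token handling, and placeholder names all assumed for illustration:

```javascript
// Replace placeholder strings in a Google Doc via the Docs API, then export a PDF.
async function tailorAndExport(documentId, accessToken, replacements) {
  const requests = Object.entries(replacements).map(([placeholder, value]) => ({
    replaceAllText: {
      containsText: { text: placeholder, matchCase: true },
      replaceText: value,
    },
  }));

  await fetch(`https://docs.googleapis.com/v1/documents/${documentId}:batchUpdate`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ requests }),
  });

  // Export the tailored document as a PDF via the Drive API.
  const pdf = await fetch(
    `https://www.googleapis.com/drive/v3/files/${documentId}/export?mimeType=application/pdf`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  return pdf.blob();
}
```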

Application forms on sites like Workday and Greenhouse are notoriously clunky. This page keeps all of your downloads and URLs in one place so you don't have to manage bookmarks or click around to your profile or to export PDFs.

What Comes Next

The thing about biting the bullet and automating something is what you learn and what you can now imagine. I learned a ton about web app development, but also about my own process and needs, like habits I needed to keep or discard. I can also now picture so much more. For example, with the boilerplate out of the way, I can think about adding more personal branding and flair to my cover letters, or more sophisticated ways to tailor my resume.

The configuration section is mostly useful to developers. If you set up a Jobscan account, Google API project, and ChatGPT project, feel free to test it out yourself.

Did you find this useful?