How We Screened 124 Writer Applications in 4 Hours

Published September 22, 2025 by Jahdunsin Osho

AI search is fast becoming the primary way people find information, and one of the criteria for your content to get cited is that it offers unique insights.

So, as a startup looking to scale its technical content strategy, you need writers who have a strong grasp of their domain, can write well, and have interesting insights or takes to share. However, these writers are challenging to find.

You might have hundreds of applicants for a single job posting, but only a few will have the skills you need. And with AI making it easy to create content, the flood of applications keeps growing.

Reviewing them takes a lot of time you don’t have, but you also can’t ignore them because among those applications are writers who could bring the value you need to scale your technical content strategy. So what do you do?

Well, to solve this problem for our client, we built an automated screening system that reduced what could have been weeks of manual review to just four hours.

The Problem

Finding Contributors With Real Authority in a Crowded Market

Our client runs a community writing program and received 124 applications, with 372 articles to review between them.

It might not seem like much, but they were a small team, and manually reviewing every article would have stretched into weeks. Not to mention the qualified candidates who could be overlooked once reviewer fatigue set in.

The goal was clear: quickly identify contributors from a relatively large pool of applicants who could share insights beyond the basics and communicate them well.

Here’s how we solved the problem by engineering an evaluation prompt and integrating it into their application flow.

The Solution

We solved the problem in four steps.

  1. Defining content criteria
  2. Gathering data
  3. Prompt engineering and testing
  4. Integration

Step 1: Defining Content Criteria

Our client wanted writers who could share insights from their experience and write with authority.

So, we broke down the authority criteria into three types: experiential authority, research-based authority, and implementation authority.

Experiential Authority: Identifies writers who have actually implemented what they discuss, shown through specific scenarios and lessons learned.

Research-Based Authority: Separates writers who understand the broader context from those rehashing basic concepts.

Implementation Authority: Distinguishes between those who have built real systems versus those who have only read about them.
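
To make these criteria usable by an automated evaluator, they eventually have to be written down in a machine-readable form. Here is a minimal sketch of how the three authority types could be encoded as structured criteria to feed into an evaluation prompt. The indicator lists below are illustrative, not the client's actual rubric.

```python
# Illustrative encoding of the three authority types as structured
# criteria. The descriptions and indicators are placeholders, not the
# client's real rubric.
AUTHORITY_CRITERIA = {
    "experiential": {
        "description": "The writer has actually implemented what they discuss.",
        "indicators": [
            "specific scenarios drawn from the writer's own work",
            "lessons learned, including what went wrong",
        ],
    },
    "research_based": {
        "description": "The writer understands the broader context.",
        "indicators": [
            "comparisons between approaches, with trade-offs",
            "references that go beyond introductory material",
        ],
    },
    "implementation": {
        "description": "The writer has built real systems, not just read about them.",
        "indicators": [
            "concrete tools, configurations, and constraints named",
            "descriptions of production systems rather than toy examples",
        ],
    },
}
```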

After deciding on the criteria, we set out to create a dataset of articles: examples that met our standards and examples that didn’t. This would teach our evaluation system what “good” and “bad” looked like.

Step 2: Gathering Data

To ensure our AI system could accurately identify these authority types, we needed concrete examples of what good and bad articles looked like.

We manually sorted through existing articles to create a dataset of clear examples that demonstrated strong authority versus those that appeared knowledgeable but lacked real expertise.

Our goal was to produce reliable evaluations. Without these examples, our prompts would be theoretical guidelines that the AI couldn’t reliably apply. The AI model required reference points to comprehend subjective concepts such as “authority” and “expertise.”

The manual sorting process also helped us identify subtle patterns that distinguished truly authoritative content from surface-level knowledge.
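
As a rough illustration, labeled examples like these can be stored in a simple JSONL file that later gets embedded in the evaluation prompt. The file name, fields, labels, and URLs below are placeholders, not the client's actual dataset.

```python
# Minimal sketch of a labeled-example dataset written as JSONL.
# In practice the labels came from the manual sorting described above.
import json

examples = [
    {
        "url": "https://example.com/scaling-postgres-at-acme",  # placeholder URL
        "label": "strong_authority",
        "notes": "Names the tools used, the failure encountered, and the fix shipped.",
    },
    {
        "url": "https://example.com/what-is-a-database",  # placeholder URL
        "label": "weak_authority",
        "notes": "Generic tutorial content with no personal context.",
    },
]

with open("labeled_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```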

Step 3: Prompt Engineering and Testing

Based on our defined criteria, we created a rubric and prompt that included concrete examples of what constituted strong versus weak authority indicators.

For instance, strong experiential authority was characterized by articles that included specific tools used, problems encountered, and solutions implemented, whereas weak authority meant generic advice without personal context.

We created disqualification criteria that would automatically filter out basic tutorial content and articles lacking practical experience indicators. The rubric provided clear scoring guidelines, allowing the AI model to evaluate content consistently.

We deliberately started with a lenient rubric to avoid false negatives, so we wouldn’t miss qualified candidates, and then tuned it when we observed unqualified articles passing the assessment.
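
For illustration, here is a minimal sketch of what a single evaluation call could look like, assuming the OpenAI Python SDK with JSON-mode output. The rubric text, model name, and response fields are placeholders; the real prompt was much longer and embedded the concrete examples gathered in Step 2.

```python
# Minimal sketch of one article evaluation, assuming the OpenAI Python
# SDK. Model name and rubric text are placeholders.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC_PROMPT = """You are screening articles for a technical writing program.
Score the article from 1-5 on each axis: experiential, research_based,
implementation. Disqualify the article if it is a basic tutorial or shows
no indicators of practical experience.
Return JSON: {"scores": {...}, "disqualified": bool, "justification": str}"""

def evaluate_article(article_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": article_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```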

Step 4: Integration

We built the automation workflow using n8n, integrating it with Google Forms, which they used to accept applications.

When a new application was submitted, the workflow evaluated the author’s submitted articles and sent the assessment to the content team via Slack. The justification behind each assessment was included, so the team could validate the reasoning.
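
The actual workflow lived in n8n, but the Slack step boils down to something like the following sketch, assuming the slack_sdk client. The channel name, token variable, and message format are placeholders.

```python
# Minimal sketch of the Slack notification step. In the real workflow
# this ran inside n8n rather than as standalone Python.
import os
from slack_sdk import WebClient

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def notify_team(applicant_name: str, assessment: dict) -> None:
    verdict = "disqualified" if assessment["disqualified"] else "passed"
    slack.chat_postMessage(
        channel="#content-screening",  # placeholder channel
        text=(
            f"*{applicant_name}* {verdict}\n"
            f"Scores: {assessment['scores']}\n"
            f"Justification: {assessment['justification']}"
        ),
    )
```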

The Result

We screened all 124 applications in 4 hours, versus the 3–4 days manual review would have required. Out of those 124 applications, only 4 candidates met our authority standards.

Imagine if the client reviewed all 124 manually, only to get 4 candidates. The automated screening system also revealed that inbound applications weren’t the best source of quality contributors, validating a shift toward outbound recruitment.

Instead of spending days reviewing unsuitable applications, our client could invest that time in reaching out and building relationships with writers more likely to meet the publication’s requirements.

TinyRocket – Content Compliance Partner

Onboarding authors is just one part of executing a technical content strategy.

After onboarding, you’ll need to manage and review the content to ensure it meets your quality standards. This takes time that could be spent on distribution, making sure your content reaches your target audience.

That’s why we help technical startups build content compliance systems that integrate into their existing workflows so they never have to worry about quality.

If you’d like to scale your technical content strategy without increasing overhead, book a call and let’s have a chat.

Frequently Asked Questions

1. Could we have just used ChatGPT directly instead of building a custom system?

Using ChatGPT to review each article against the client’s criteria might sound like a solution, but it would still be slow and unreliable. We would have had to paste in 372 articles across 124 applications one at a time, which alone would have taken hours.

The bigger issue is consistency. As more articles are pasted into a ChatGPT conversation, the context fills up and the model becomes less reliable at following specific requirements. By the time dozens of articles have been processed, it may have lost the thread of the instructions, and the results would no longer be reliable.
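
Processing one article per API call sidesteps that problem, because every evaluation starts from a fresh context containing only the rubric and the article being scored. A minimal sketch, with the evaluator passed in (for example, the evaluate_article function from the Step 3 sketch):

```python
# Minimal sketch of per-article screening with a fresh context per call.
from typing import Callable

def screen_application(
    article_texts: list[str],
    evaluate: Callable[[str], dict],  # e.g. evaluate_article from the Step 3 sketch
) -> list[dict]:
    # One API call per article: the rubric is the only other thing in
    # the model's context, so the instructions never get crowded out.
    return [evaluate(text) for text in article_texts]
```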

2. How do you ensure the automated system doesn’t miss qualified candidates that a human would catch?

Our three-authority evaluation criteria were designed based on extensive analysis of what distinguishes good candidates from poor ones. Rather than trying to identify everything we wanted (which is subjective), we focused on clear indicators of real expertise versus theoretical knowledge.

Processing articles individually against a consistent rubric also means the evaluation doesn’t drift over time the way manual review does. In addition, our iterative refinement process helped us handle edge cases systematically.

3. Can this approach work for other types of hiring beyond content creators?

Yes. The same approach (defining clear authority signals, building an example dataset, creating a rubric, and integrating the evaluation into your intake workflow) can be adapted to other roles where demonstrated experience matters.