<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Witarist Hire</title>
    <description>The latest articles on DEV Community by Witarist Hire (@witaristhire).</description>
    <link>https://dev.to/witaristhire</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3929830%2F0c4c81ee-5ca7-4316-a3f5-fc12ae1c9193.png</url>
      <title>DEV Community: Witarist Hire</title>
      <link>https://dev.to/witaristhire</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/witaristhire"/>
    <language>en</language>
    <item>
      <title>Writing async engineering specs that remote teams actually follow</title>
      <dc:creator>Witarist Hire</dc:creator>
      <pubDate>Wed, 13 May 2026 17:29:01 +0000</pubDate>
      <link>https://dev.to/witaristhire/writing-async-engineering-specs-that-remote-teams-actually-follow-1l9p</link>
      <guid>https://dev.to/witaristhire/writing-async-engineering-specs-that-remote-teams-actually-follow-1l9p</guid>
      <description>&lt;p&gt;One of the biggest friction points in distributed engineering teams isn't technical — it's the spec. Here's the format that reduced our back-and-forth significantly.&lt;/p&gt;

&lt;h2&gt;The problem with most specs&lt;/h2&gt;

&lt;p&gt;Most engineering specs are either too vague ("build a notification system") or too prescriptive ("use Redis pub/sub with these exact keys"). Neither works well.&lt;/p&gt;

&lt;p&gt;The first requires endless clarification before starting. The second removes the engineering judgment that makes a senior developer worth hiring.&lt;/p&gt;

&lt;h2&gt;The 6-section format&lt;/h2&gt;

&lt;h3&gt;1. Context (2–4 sentences)&lt;/h3&gt;

&lt;p&gt;Why does this need to exist? What breaks without it?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example:&lt;/em&gt; "Our billing webhook handler processes all events synchronously. When Stripe sends a burst, the endpoint times out and Stripe retries, causing duplicate processing. This caused 3 support tickets this week."&lt;/p&gt;

&lt;h3&gt;2. Success criteria&lt;/h3&gt;

&lt;p&gt;Observable behaviors, not implementation steps.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Stripe webhooks return 200 within 200ms
- Duplicate events are idempotent (same event ID = no side effect)
- Failures are retried with backoff (3 attempts: 5min / 15min / 60min)
- Failed events after retries are stored for manual review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
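
&lt;p&gt;As a rough sketch (hypothetical names; an in-memory &lt;code&gt;Set&lt;/code&gt; stands in for whatever dedup store you actually choose), the idempotency and retry criteria above might translate to:&lt;/p&gt;

```typescript
// Hypothetical sketch of the success criteria above, not from the post.
// An in-memory Set stands in for a real dedup store (e.g. a DB unique key).
const processedIds = new Set();

// Idempotency: the same Stripe event ID causes at most one side effect.
function handleEvent(eventId: string, sideEffect: () => void): boolean {
  if (processedIds.has(eventId)) {
    return false; // duplicate delivery: acknowledge it, run nothing
  }
  processedIds.add(eventId);
  sideEffect();
  return true;
}

// Retry schedule from the spec: 3 attempts at 5 / 15 / 60 minutes.
// Returns null once attempts are exhausted (store the event for review).
function backoffMinutes(attempt: number): number | null {
  const schedule = [5, 15, 60];
  return schedule[attempt - 1] ?? null;
}
```

&lt;p&gt;The useful property: each line of the success criteria maps onto a small, testable function, which is exactly what makes this section reviewable without a call.&lt;/p&gt;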



&lt;h3&gt;3. Constraints&lt;/h3&gt;

&lt;p&gt;What can't change? What's the technology boundary?&lt;/p&gt;

&lt;h3&gt;4. Open questions&lt;/h3&gt;

&lt;p&gt;What decisions is the engineer expected to make?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example:&lt;/em&gt; "Queue implementation is your call — evaluate BullMQ vs Inngest and document the tradeoff in the PR."&lt;/p&gt;

&lt;h3&gt;5. Out of scope&lt;/h3&gt;

&lt;p&gt;What explicitly doesn't need to be solved here.&lt;/p&gt;

&lt;h3&gt;6. Dependencies / resources&lt;/h3&gt;

&lt;p&gt;Links to relevant code, docs, or people to consult.&lt;/p&gt;

&lt;h2&gt;Why "open questions" is the most important section&lt;/h2&gt;

&lt;p&gt;When you skip it, engineers either make decisions silently (you find out at review) or block waiting for approval. Writing "here are the decisions I'm delegating to you" gives senior engineers the autonomy they want while keeping you in the loop on design surface.&lt;/p&gt;

&lt;h2&gt;Anti-patterns&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The spec-by-wireframe&lt;/strong&gt; — a Figma screenshot with no written context. Engineers can't build from UI designs alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The spec with a hidden solution&lt;/strong&gt; — "We need a caching layer" when you actually mean "add Redis here." If you've decided, just say so.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The spec that changes mid-implementation&lt;/strong&gt; — if requirements change, write an addendum. Don't edit the original. Engineers need to track what changed.&lt;/p&gt;




&lt;p&gt;What sections do you include that I haven't mentioned? Comments welcome.&lt;/p&gt;

</description>
      <category>career</category>
    </item>
    <item>
      <title>How I structured a technical assessment for hiring remote developers (and what actually worked)</title>
      <dc:creator>Witarist Hire</dc:creator>
      <pubDate>Wed, 13 May 2026 17:28:13 +0000</pubDate>
      <link>https://dev.to/witaristhire/how-i-structured-a-technical-assessment-for-hiring-remote-developers-and-what-actually-worked-2l9p</link>
      <guid>https://dev.to/witaristhire/how-i-structured-a-technical-assessment-for-hiring-remote-developers-and-what-actually-worked-2l9p</guid>
      <description>&lt;p&gt;Running a 3-person startup means every engineering hire is high-stakes. Here's the assessment framework we landed on after several failed attempts.&lt;/p&gt;

&lt;h2&gt;What didn't work&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live coding in a video call&lt;/strong&gt; — candidates froze up, output didn't reflect their actual ability. Senior devs especially hated being watched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full take-home projects (3–5 hours)&lt;/strong&gt; — good candidates dropped out because their time is valuable and they had options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Just asking about past projects in an interview&lt;/strong&gt; — too easy to embellish, no signal on actual implementation thinking.&lt;/p&gt;

&lt;h2&gt;What worked: a focused 90-minute async task&lt;/h2&gt;

&lt;p&gt;The format we converged on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A real, scoped problem&lt;/strong&gt; from our actual codebase context (not a LeetCode puzzle)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;90-minute time limit&lt;/strong&gt;, explicitly communicated upfront&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous&lt;/strong&gt; — they pick a 2-hour window, we review async&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Evaluated on reasoning, not perfection&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We send a single file: a TypeScript module with a bug, a missing feature, and some intentionally messy code. The task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fix the bug (tests are provided, they should pass)&lt;/li&gt;
&lt;li&gt;Add the missing feature (spec is included)&lt;/li&gt;
&lt;li&gt;Optionally: clean up anything that bothers you&lt;/li&gt;
&lt;/ul&gt;
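
&lt;p&gt;&lt;em&gt;For illustration only&lt;/em&gt; (hypothetical file and function names, not the real task): a toy version of that single-file shape might look like this, shown after the fixes a candidate would make.&lt;/p&gt;

```typescript
// Hypothetical illustration of the task-file shape described above,
// not the author's actual assessment. Shown here already fixed.
type Item = { price: number; qty: number };

// Part 1, the bug (fixed): imagine the original multiplied price by the
// array index instead of qty, so the provided test failed until corrected.
function sumCart(items: Item[]): number {
  return items.reduce((total, item) => total + item.price * item.qty, 0);
}

// Part 2, the missing feature (its spec would be included in the file):
// apply a percentage discount to a cart total.
function applyDiscount(total: number, percent: number): number {
  return total * (1 - percent / 100);
}
```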

&lt;h2&gt;What we evaluate&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bug fix correctness&lt;/strong&gt;: Do they actually read the failing test?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature implementation&lt;/strong&gt;: Do they handle edge cases?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code style choices&lt;/strong&gt;: Consistent, readable, no over-engineering&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional cleanup&lt;/strong&gt;: Shows initiative and code taste&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time management&lt;/strong&gt;: Did they scope appropriately in 90 min?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What we explicitly don't evaluate&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Whether they used the same patterns we use (we want their instincts)&lt;/li&gt;
&lt;li&gt;Speed (90 minutes is enough to see real work, not a rush)&lt;/li&gt;
&lt;li&gt;Perfect syntax (they can Google things)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The follow-up conversation&lt;/h2&gt;

&lt;p&gt;After the assessment, we do a 30-minute async written Q&amp;amp;A (not a call). We ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Walk me through your fix for the bug"&lt;/li&gt;
&lt;li&gt;"What would you change if you had another hour?"&lt;/li&gt;
&lt;li&gt;"What bothered you most about the original code?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The written format gives introverted engineers a chance to shine and reveals how well they communicate technical decisions in text — which matters enormously for async remote teams.&lt;/p&gt;

&lt;h2&gt;Results&lt;/h2&gt;

&lt;p&gt;We've run this format with roughly 40 candidates. Key observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drop-off rate is much lower than take-homes (candidates respect the 90-min scope)&lt;/li&gt;
&lt;li&gt;Strong signal on seniority without a 5-round process&lt;/li&gt;
&lt;li&gt;Candidates who do well here almost always perform well in the role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to share the actual task template if there's interest — drop a comment.&lt;/p&gt;

</description>
      <category>typescript</category>
    </item>
  </channel>
</rss>
