<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Warren de Leon</title>
    <description>The latest articles on DEV Community by Warren de Leon (@warrendeleon).</description>
    <link>https://dev.to/warrendeleon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3849498%2F78ff4b91-2b34-4678-85ae-19aaf1642e95.png</url>
      <title>DEV Community: Warren de Leon</title>
      <link>https://dev.to/warrendeleon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/warrendeleon"/>
    <language>en</language>
    <item>
      <title>How I designed a tech test scorecard that works from Graduate to Senior</title>
      <dc:creator>Warren de Leon</dc:creator>
      <pubDate>Mon, 13 Apr 2026 07:30:10 +0000</pubDate>
      <link>https://dev.to/warrendeleon/how-i-designed-a-tech-test-scorecard-that-works-from-graduate-to-senior-97</link>
      <guid>https://dev.to/warrendeleon/how-i-designed-a-tech-test-scorecard-that-works-from-graduate-to-senior-97</guid>
      <description>&lt;h2&gt;
  
  
  The problem with "is this a 3 or a 4?"
&lt;/h2&gt;

&lt;p&gt;When I started building the hiring process for my squad at Hargreaves Lansdown, I knew I wanted a structured scorecard from day one. I wrote about the tech test itself in &lt;a href="https://warrendeleon.com/blog/why-i-redesigned-our-react-native-tech-test-in-my-first-week/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=tech-test-scorecard" rel="noopener noreferrer"&gt;an earlier post&lt;/a&gt;. The test worked. The scoring didn't. At least, not the way I first designed it.&lt;/p&gt;

&lt;p&gt;My first scorecard used a 1–5 scale for each criterion. "TypeScript usage: score 1 to 5." "State management: score 1 to 5." Each criterion had a rubric describing what each score meant. It looked thorough on paper.&lt;/p&gt;

&lt;p&gt;Then I tried to use it.&lt;/p&gt;

&lt;p&gt;Two people reviewed the same submission. One scored the TypeScript a 3 ("types are there but not strict"). The other scored it a 4 ("clean types throughout, good use of typed hooks"). They were both looking at the same code. They just interpreted the rubric differently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; If two reasonable people can disagree on the score, the rubric isn't specific enough. The problem isn't the reviewers. It's the tool.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Checklists over rubrics
&lt;/h2&gt;

&lt;p&gt;The fix was embarrassingly simple: replace every subjective score with a &lt;strong&gt;yes/no checklist&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here's a single criterion, before and after, using TypeScript usage as the example:&lt;/p&gt;

&lt;h3&gt;
  
  
  Before: subjective rubric
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Strong typing throughout, strict mode, generics where appropriate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Clean types, minimal &lt;code&gt;any&lt;/code&gt;, props and navigation typed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Types for main structures, some &lt;code&gt;any&lt;/code&gt; leakage, works but not strict&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;TypeScript used poorly, frequent &lt;code&gt;any&lt;/code&gt;, adds little safety&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;any&lt;/code&gt; everywhere, effectively JavaScript with &lt;code&gt;.tsx&lt;/code&gt; extensions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The problem: "clean types" and "types for main structures" are both reasonable descriptions of the same code. One reviewer sees a 3, another sees a 4. Both are right.&lt;/p&gt;

&lt;h3&gt;
  
  
  After: observable checklist
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;✅ Source files use .ts/.tsx extensions
✅ Interfaces or types exist for API data, state shape, and component props
✅ Navigation params are typed
✅ Zero any in production code
☐  Typed hooks used (useAppSelector, useAppDispatch)
☐  Strict TypeScript enabled
☐  Zod or Yup schemas for validation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same criterion. Seven checks. Each one is a fact you can verify by looking at the code. Two reviewers will tick the same boxes because there's nothing to interpret.&lt;/p&gt;

&lt;p&gt;The first four checks are baseline (any competent candidate will have these in a 4–6 hour submission). The last three are signals of deeper experience. &lt;strong&gt;The ordering does the levelling for you.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I did this for every criterion across four sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Functionality&lt;/strong&gt;: does the app work?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Layer &amp;amp; API&lt;/strong&gt;: how does it fetch and manage data?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Quality&lt;/strong&gt;: is the code well-written and well-organised?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt;: is it tested, and how?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;100 checks. 100 points. One point each.&lt;/strong&gt;&lt;/p&gt;
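
&lt;p&gt;The scoring mechanics can be sketched in a few lines. This is an illustration, not the actual scorecard; the check names and ticks below are made up:&lt;/p&gt;

```typescript
// Minimal sketch of the scoring model: every check is a yes/no fact,
// and the total is simply the number of ticked boxes.
interface Check {
  description: string;
  ticked: boolean;
}

function scoreSubmission(checks: Check[]): number {
  // One point per ticked check. No weighting, no partial credit.
  return checks.filter((c) => c.ticked).length;
}

// Illustrative data: a baseline-only TypeScript submission.
const typescriptChecks: Check[] = [
  { description: "Source files use .ts/.tsx extensions", ticked: true },
  { description: "Types exist for API data, state, and props", ticked: true },
  { description: "Navigation params are typed", ticked: true },
  { description: "Zero any in production code", ticked: true },
  { description: "Typed hooks used", ticked: false },
  { description: "Strict TypeScript enabled", ticked: false },
  { description: "Zod or Yup schemas for validation", ticked: false },
];

console.log(scoreSubmission(typescriptChecks)); // 4
```

&lt;p&gt;No weighting and no partial credit is the point: the total is just a count of facts, which is what keeps two reviewers' totals identical.&lt;/p&gt;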

&lt;h2&gt;
  
  
  Same test, different ceiling
&lt;/h2&gt;

&lt;p&gt;This is the part I'm most excited about. The checks are ordered by how much investment they represent.&lt;/p&gt;

&lt;p&gt;The first few checks in each criterion are things any competent candidate will achieve in &lt;strong&gt;4–6 hours&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the FlatList render items?&lt;/li&gt;
&lt;li&gt;Does pagination work?&lt;/li&gt;
&lt;li&gt;Does the party screen have an empty state?&lt;/li&gt;
&lt;li&gt;Are there types for the main data structures?&lt;/li&gt;
&lt;li&gt;Is there at least one test file?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the baseline. If you built the thing the brief asked for, you pass these.&lt;/p&gt;

&lt;p&gt;The later checks require more time, deeper experience, or both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GraphQL instead of REST&lt;/li&gt;
&lt;li&gt;Runtime response validation with Zod&lt;/li&gt;
&lt;li&gt;MSW for HTTP mocking in tests&lt;/li&gt;
&lt;li&gt;Feature-first project structure&lt;/li&gt;
&lt;li&gt;BDD with Cucumber&lt;/li&gt;
&lt;li&gt;Coverage thresholds enforced&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't things you do in a weekend. They're patterns you've learnt from building real production apps.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Key insight:&lt;/strong&gt; A candidate investing 4–6 hours scores in the 50–65 range. A candidate investing a full week with years of experience might score 85–95. &lt;strong&gt;The brief is the same. The expectations scale with the score.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How the levels map
&lt;/h2&gt;

&lt;p&gt;The total score maps directly to a level:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Code review score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Graduate&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;20–45&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Associate&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;46–64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Software Engineer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;65–88&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Senior&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;89–100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The code review score isn't the whole picture. The walkthrough call adds more signal. But the code review is the foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Respecting the time constraint
&lt;/h2&gt;

&lt;p&gt;A tech test is &lt;strong&gt;not a production app&lt;/strong&gt;. Candidates have jobs, families, lives. They're giving you their evening or their weekend. Penalising someone for not implementing a caching layer or not co-locating their styles would be like marking down a timed essay for not having footnotes.&lt;/p&gt;

&lt;p&gt;That's why the baseline checks matter. Getting all of them right scores you around &lt;strong&gt;50–65 out of 100&lt;/strong&gt;. That's Associate to Software Engineer territory. On my old rubric, a "3 out of 5" &lt;em&gt;sounded&lt;/em&gt; like a consolation prize. 55 out of 100 on the checklist is a positive result with a clear path to the next level.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "above baseline" looks like
&lt;/h2&gt;

&lt;p&gt;The later checks are where candidates differentiate themselves. These aren't requirements. They're &lt;strong&gt;signals&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A candidate who adds &lt;strong&gt;Detox E2E tests&lt;/strong&gt; with extracted helpers is telling me something about their testing culture.&lt;/p&gt;

&lt;p&gt;A candidate who implements &lt;strong&gt;GraphQL with Apollo&lt;/strong&gt; is telling me something about their API thinking.&lt;/p&gt;

&lt;p&gt;A candidate who sets up &lt;strong&gt;MSW with multiple handler sets&lt;/strong&gt; (success, error, 401, timeout, offline) is telling me they've debugged production API failures before.&lt;/p&gt;

&lt;p&gt;None of these are required. &lt;strong&gt;All of them are noticed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The stretch goals sit on top of the 100 points as bonuses: search, dark mode, accessibility, i18n, feature-first structure, Storybook, ErrorBoundary. These are the marks of someone who had time and chose to invest it wisely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The walkthrough changes everything
&lt;/h2&gt;

&lt;p&gt;The code review gives me a number. The walkthrough gives me &lt;strong&gt;context&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A candidate who scores 65 on the code review might jump to 85 after the walkthrough if they can articulate every trade-off, explain what they'd change with more time, and navigate their codebase from memory. The number measures what they built. The conversation measures how they think.&lt;/p&gt;

&lt;p&gt;I designed the walkthrough as a set of &lt;strong&gt;question tables&lt;/strong&gt;. Each question has five signal descriptions, from "can't find the code" to "explains it from memory with edge cases." The interviewer ticks one row per question. No more "was that walkthrough a 3 or a 4?"&lt;/p&gt;

&lt;p&gt;For Senior candidates, there's an additional &lt;strong&gt;system design section&lt;/strong&gt; in the same call. No separate interview. The last 15–20 minutes shift from "show me your code" to "how would you design this for a team of 20 engineers?" The same question tables, the same tick-one-row format.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learnt building this
&lt;/h2&gt;

&lt;p&gt;Building this scorecard taught me more about hiring design than anything I've read about it. The lessons that stuck:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with checklists, not rubrics.&lt;/strong&gt; Every time I wrote a rubric ("5 = excellent, 3 = good, 1 = poor"), it turned into a debate about what "good" means. Checklists end the debate. Either the thing exists in the code or it doesn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Order the checks by investment, not importance.&lt;/strong&gt; The first checks aren't more important than the last. They're just more achievable in 4–6 hours. A Senior candidate who skips check 3 but nails check 7 isn't penalised for the skip because the total still reflects their level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separate what you can see from what you need to ask.&lt;/strong&gt; The code review scorecard is 100% observable from the code. No "is the architecture clean?" questions. The walkthrough is 100% conversational. No code-reading during the call. Each document has one job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Respect the time constraint.&lt;/strong&gt; If a check would require more than 6 hours of work from a competent Software Engineer, it belongs in the upper half of the checklist, not the baseline. I kept catching myself writing baseline checks that were really Senior expectations. The question I kept asking: &lt;em&gt;"Would I expect this from someone doing this test after work on a Wednesday evening?"&lt;/em&gt; If the answer was no, it moved up.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's still evolving
&lt;/h2&gt;

&lt;p&gt;I've used this scorecard for our first round of React Native hiring at HL. My peer EM reviewed it and adopted it for his squad's hires too. That's the test of a good system: &lt;strong&gt;someone else can pick it up and use it without you in the room.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm not pretending it's perfect. The levels might need recalibrating after more candidates go through. Some checks might turn out to be too easy or too hard. The stretch goals might need rebalancing.&lt;/p&gt;

&lt;p&gt;The structure is right though:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Checklists, not rubrics&lt;/li&gt;
&lt;li&gt;✅ Observable facts, not opinions&lt;/li&gt;
&lt;li&gt;✅ Ordered by investment&lt;/li&gt;
&lt;li&gt;✅ Same test for everyone&lt;/li&gt;
&lt;li&gt;✅ Different ceiling for different levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building a hiring process and your interviewers keep disagreeing on scores, try replacing your rubric with a checklist. You might be surprised how much agreement you get when you stop asking &lt;em&gt;"how good is this?"&lt;/em&gt; and start asking &lt;em&gt;"is this here?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want the candidate's perspective on what this scorecard evaluates, I wrote a companion post: &lt;a href="https://warrendeleon.com/blog/how-to-pass-a-react-native-tech-test/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=tech-test-scorecard" rel="noopener noreferrer"&gt;How to pass a React Native tech test&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The best scoring systems don't measure how you feel about the code. They measure what's in the code.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;We're hiring!&lt;/strong&gt; We're looking for React Native engineers to join the Mobile Platform team at Hargreaves Lansdown. &lt;a href="https://warrendeleon.com/hiring/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=tech-test-scorecard" rel="noopener noreferrer"&gt;View open roles&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>engineeringmanagement</category>
      <category>hiring</category>
      <category>techinterviews</category>
    </item>
    <item>
      <title>How to pass a React Native tech test</title>
      <dc:creator>Warren de Leon</dc:creator>
      <pubDate>Mon, 06 Apr 2026 07:30:11 +0000</pubDate>
      <link>https://dev.to/warrendeleon/how-to-pass-a-react-native-tech-test-4642</link>
      <guid>https://dev.to/warrendeleon/how-to-pass-a-react-native-tech-test-4642</guid>
      <description>&lt;h2&gt;
  
  
  This is from the other side of the table
&lt;/h2&gt;

&lt;p&gt;I review React Native tech test submissions. I've seen what gets people hired and what gets them rejected. Most of the rejections aren't because the candidate can't code. They're because the candidate didn't show the right things.&lt;/p&gt;

&lt;p&gt;This post is the advice I'd give a friend before they submitted a take-home tech test. Not theory. Specific, practical things that move you from "maybe" to "yes."&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I wrote about why I redesigned a tech test from the hiring manager's perspective in &lt;a href="https://warrendeleon.com/blog/why-i-redesigned-our-react-native-tech-test-in-my-first-week/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=pass-rn-tech-test" rel="noopener noreferrer"&gt;a separate post&lt;/a&gt;. This one is the other side: how to pass one.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Read the brief twice. Then read it again.
&lt;/h2&gt;

&lt;p&gt;Sounds obvious. It's the most common mistake.&lt;/p&gt;

&lt;p&gt;If the brief says "build three screens with navigation," don't build two. If it says "use TypeScript," don't use JavaScript. If it says "manage a list of up to 6 items," make sure adding a 7th is handled gracefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reviewers check requirements like a checklist.&lt;/strong&gt; Every missing requirement is points dropped. Not because we're pedantic, but because following a spec is part of the job. If you miss requirements in a tech test with a clear brief, what happens with a vague Jira ticket?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; Read the brief before you start. Read it again halfway through. Read it one final time before you submit.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Project structure matters more than you think
&lt;/h2&gt;

&lt;p&gt;The first thing I do when I open a submission is look at the folder structure. Before I read a single line of code, the structure tells me how you think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type-first structure&lt;/strong&gt; (screens/, components/, hooks/, services/):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/
  components/
  hooks/
  screens/
  services/
  types/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Feature-first structure&lt;/strong&gt; (each feature is self-contained):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/
  features/
    product-list/
    product-detail/
    favourites/
  shared/
    components/
    hooks/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Neither is wrong. But feature-first shows you've thought about how the app scales. If I ask "what happens when 5 teams work on this codebase?" and your structure already answers that question, you're ahead.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🚩 &lt;strong&gt;Red flag:&lt;/strong&gt; Everything in a flat &lt;code&gt;src/&lt;/code&gt; folder with no organisation. It suggests the coding started before the architecture was planned.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  TypeScript is not optional
&lt;/h2&gt;

&lt;p&gt;Even if the brief says "TypeScript preferred," treat it as required. Submitting plain JavaScript in 2026 is an automatic downgrade.&lt;/p&gt;

&lt;p&gt;But it's not enough to just use TypeScript. Use it &lt;em&gt;well&lt;/em&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Do this&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Type your props&lt;/td&gt;
&lt;td&gt;Every component should have a typed props interface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Type your API responses&lt;/td&gt;
&lt;td&gt;Don't use &lt;code&gt;any&lt;/code&gt; for data from the server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Type your navigation params&lt;/td&gt;
&lt;td&gt;React Navigation has excellent TypeScript support&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The one &lt;code&gt;any&lt;/code&gt; I'll forgive: complex third-party library types that would take an hour to figure out. Acknowledge it in a comment. &lt;em&gt;"// TODO: type this properly — ran out of time"&lt;/em&gt; is better than pretending it doesn't exist.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🚩 &lt;strong&gt;Red flag:&lt;/strong&gt; &lt;code&gt;any&lt;/code&gt; scattered throughout the codebase with no acknowledgment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  State management: pick something and own it
&lt;/h2&gt;

&lt;p&gt;I don't care whether you use Redux Toolkit, Zustand, React Context, or Jotai. I care that you picked it deliberately and can explain why.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Choice&lt;/th&gt;
&lt;th&gt;What it signals&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Context&lt;/strong&gt; for a three-screen app&lt;/td&gt;
&lt;td&gt;Perfectly reasonable. Lightweight, no dependencies.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Redux Toolkit&lt;/strong&gt; for a three-screen app&lt;/td&gt;
&lt;td&gt;Fine, but I'll ask why. "It's what I know best" is an honest answer.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Zustand&lt;/strong&gt; with a clean store&lt;/td&gt;
&lt;td&gt;Shows you're current with the ecosystem.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you go with Redux, &lt;strong&gt;use Redux Toolkit&lt;/strong&gt;. Not the old &lt;code&gt;switch/case&lt;/code&gt; reducer pattern. If I see &lt;code&gt;createStore&lt;/code&gt; instead of &lt;code&gt;configureStore&lt;/code&gt;, or manual action type constants instead of &lt;code&gt;createSlice&lt;/code&gt;, it suggests the Redux knowledge might need refreshing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What actually matters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ State logic separated from the UI&lt;/li&gt;
&lt;li&gt;✅ Actions, reducers, and selectors in their own files&lt;/li&gt;
&lt;li&gt;✅ Business rules (like max party size) enforced in the state layer&lt;/li&gt;
&lt;li&gt;✅ Updates are predictable&lt;/li&gt;
&lt;li&gt;❌ Business logic living inside components&lt;/li&gt;
&lt;li&gt;❌ State scattered across &lt;code&gt;useState&lt;/code&gt; calls with no pattern&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Don't dispatch a fetch every time a screen mounts.&lt;/strong&gt; If I navigate to a detail screen, go back, and navigate to the same detail screen, I shouldn't see a loading spinner again. A simple &lt;code&gt;if (!data[id])&lt;/code&gt; check before your &lt;code&gt;dispatch(fetchDetails(id))&lt;/code&gt; is enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tests: quality over coverage
&lt;/h2&gt;

&lt;p&gt;You don't need 90% coverage. You need &lt;em&gt;meaningful&lt;/em&gt; tests. Three good tests beat twenty snapshot tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I want to see:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test type&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Business logic&lt;/td&gt;
&lt;td&gt;If there's a rule (max 6 in a list, no duplicates), test it. Reducers and selectors are the highest-value tests.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User interactions&lt;/td&gt;
&lt;td&gt;Render a component with RNTL, press a button, check the result. Use &lt;code&gt;render&lt;/code&gt;, &lt;code&gt;fireEvent&lt;/code&gt;, &lt;code&gt;waitFor&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Edge cases&lt;/td&gt;
&lt;td&gt;What happens when you add a duplicate? When the list is empty? At the pagination boundary?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Passing tests&lt;/td&gt;
&lt;td&gt;Run them before you submit. Failing tests signal unfinished work.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What I don't want to see:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ &lt;strong&gt;Snapshot tests everywhere.&lt;/strong&gt; They break on every UI change and prove nothing about behaviour.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Tests that mock everything.&lt;/strong&gt; If your test mocks the function it's testing, it's testing the mock.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;No tests at all.&lt;/strong&gt; This is a hard one to recover from in the walkthrough.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; 5–10 focused tests covering the critical paths. Reducers, selectors, key interactions. That's enough.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Handle loading, errors, and empty states
&lt;/h2&gt;

&lt;p&gt;This is where candidates stand out. Anyone can build the happy path. The question is: what happens when things go wrong?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;State&lt;/th&gt;
&lt;th&gt;What to do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Loading&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Show a spinner or skeleton on first load. Show a subtle indicator during pagination. Don't flash a full-screen spinner for 100ms.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Error&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;If the API fails, tell the user. A retry button is better than nothing. An informative message is better than "Something went wrong."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Empty&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;If the list is empty or there are no saved items, show something useful. Not a blank screen.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;🚩 &lt;strong&gt;Red flag:&lt;/strong&gt; The app crashes on a slow network. No loading state, no error handling. The reviewer opens DevTools, throttles the network, and the app falls apart.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The API call matters
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GraphQL vs REST:&lt;/strong&gt; if the brief offers both, GraphQL is the stronger choice. It shows you can work with modern API patterns. But a well-implemented REST client beats a messy GraphQL setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use FlatList or FlashList. Never ScrollView for lists.&lt;/strong&gt; &lt;code&gt;ScrollView&lt;/code&gt; renders every item at once. With 100+ items, you'll see frame drops, memory spikes, and eventual crashes. &lt;code&gt;FlatList&lt;/code&gt; virtualises the list, only rendering what's on screen. If I see a &lt;code&gt;ScrollView&lt;/code&gt; wrapping a &lt;code&gt;.map()&lt;/code&gt; for a data list, it suggests a gap in understanding React Native's rendering model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other things that get noticed:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Caching: don't refetch data you already have&lt;/li&gt;
&lt;li&gt;✅ Pagination: don't fetch 1000 items on first load&lt;/li&gt;
&lt;li&gt;✅ ErrorBoundary: catches JavaScript errors and shows a fallback instead of a white screen&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Edge cases are where you stand out
&lt;/h2&gt;

&lt;p&gt;The happy path is the minimum. What separates a Software Engineer submission from a Senior one is edge case handling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full list?&lt;/strong&gt; What happens when someone tries to add a 7th item? A toast, a disabled button, a modal. Anything except silently failing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Empty list?&lt;/strong&gt; Show a meaningful empty state, not a blank screen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid taps?&lt;/strong&gt; Does pressing "add" five times fast cause duplicates or crashes?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Back navigation?&lt;/strong&gt; When I go from detail back to the list, is my scroll position preserved?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End of list?&lt;/strong&gt; Does pagination stop cleanly when there's no more data?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't need to handle all of these. But handling &lt;em&gt;some&lt;/em&gt; of them shows you think about real users, not just passing requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  The README is part of the test
&lt;/h2&gt;

&lt;p&gt;Write a README. Not a novel. A short document that covers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Section&lt;/th&gt;
&lt;th&gt;What to write&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;How to run it&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;yarn install&lt;/code&gt;, &lt;code&gt;yarn ios&lt;/code&gt;, done. Extra steps documented.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;What you built&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;One paragraph summary.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Decisions you made&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Why this state management? Why this folder structure? Two sentences each.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;What you'd improve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;This is the most important section. It shows self-awareness.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;The "what I'd improve" section is a cheat code.&lt;/strong&gt; It lets you acknowledge shortcuts without the reviewer discovering them as flaws. &lt;em&gt;"With more time, I'd add E2E tests with Detox and implement proper caching"&lt;/em&gt; turns a missing feature into a demonstration of judgement.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The walkthrough: this is where jobs are won
&lt;/h2&gt;

&lt;p&gt;If the test has a walkthrough call, prepare for it. The code got you into the room. The walkthrough gets you the offer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Know your code.&lt;/strong&gt; If I say "show me where you handle the API response," you should navigate there in under 5 seconds. If you hesitate, it can raise questions about how well you know the codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explain your trade-offs.&lt;/strong&gt; Don't wait for me to ask. When you show a section of code, say &lt;em&gt;"I chose this approach because X, but I know the trade-off is Y."&lt;/em&gt; That's the answer I'm looking for before I even ask the question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be honest about shortcuts.&lt;/strong&gt; &lt;em&gt;"I used Context here because it was faster, but in a production app I'd move to Zustand once the state got more complex."&lt;/em&gt; That's a strong answer. &lt;em&gt;"I think Context is the best approach"&lt;/em&gt; is a weaker one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have a list of improvements.&lt;/strong&gt; When I ask "what would you change with more time?" the worst answer is "nothing, I'm happy with it." The best answer is a prioritised list: &lt;em&gt;"First I'd add caching, then E2E tests, then refactor to feature-first folders."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask questions back.&lt;/strong&gt; The best walkthroughs are conversations, not presentations. Ask about the team's architecture, their testing approach, their deployment process. It shows you're evaluating the role too, not just hoping to pass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stretch goals: do them, but do them well
&lt;/h2&gt;

&lt;p&gt;If the brief mentions optional extras, pick one or two that you can do &lt;em&gt;well&lt;/em&gt;. Don't try to do all of them poorly.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Worth picking&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Search/filter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Quick to implement, immediately visible, shows UX thinking.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Accessibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Labels, roles, contrast. Most candidates skip this. Even basic accessibility makes you stand out.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Error/offline handling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A retry button when the network fails. Shows real-world thinking.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
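&lt;p&gt;For the error/offline row above, the logic behind a retry button can stay small. A minimal TypeScript sketch (the helper name and the default of three attempts are assumptions for illustration, not from any specific brief):&lt;/p&gt;

```typescript
// Generic retry helper that a "Retry" button handler might call.
// The name and the default of 3 attempts are illustrative assumptions.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(); // success: return the result immediately
    } catch (err) {
      lastError = err; // remember the failure and try the next attempt
    }
  }
  throw lastError; // all attempts failed: let the UI show an error state
}
```

&lt;p&gt;In a React Native screen this would wrap the API call, with the caught error driving an error view whose retry button simply calls it again.&lt;/p&gt;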

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Avoid unless you can do them properly&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Animations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Half-finished animations look worse than no animations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dark mode&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;If it's not consistent across every screen, it's a liability.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;One well-executed stretch goal is worth more than three half-finished ones.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The mistakes that actually cost people the job
&lt;/h2&gt;

&lt;p&gt;These aren't about code quality. They're about signals.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mistake&lt;/th&gt;
&lt;th&gt;Why it hurts&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Not reading the brief properly&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Missing a core requirement. Building two screens when the brief says three.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;No tests at all&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Even two or three tests show you care about quality. Zero is a strong negative signal.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI-generated code you can't explain&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Using AI to help is fine. Submitting code you don't understand is not. This becomes apparent during the walkthrough.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Overengineering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A tech test doesn't need a design system and a micro-frontend architecture. Build what the brief asks for, well.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Submitting late without communicating&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;If you need more time, ask. Going silent and submitting three days late is a red flag.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
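&lt;p&gt;On the "no tests at all" row: two or three focused tests on a real business rule are enough to send the signal. A framework-free sketch (the &lt;code&gt;formatBalance&lt;/code&gt; helper is invented for illustration; in a real submission these checks would be Jest tests):&lt;/p&gt;

```typescript
// An invented business rule worth testing: format a pence amount as a
// pounds string with thousands separators, e.g. 123456 -> "£1,234.56".
// (The helper and its name are illustrative, not from any real brief.)
function formatBalance(pence: number): string {
  const sign = pence < 0 ? "-" : "";
  const abs = Math.abs(pence);
  const pounds = Math.floor(abs / 100)
    .toString()
    .replace(/\B(?=(\d{3})+(?!\d))/g, ","); // insert thousands separators
  const remainder = (abs % 100).toString().padStart(2, "0");
  return `${sign}£${pounds}.${remainder}`;
}

// Three focused checks on the edge cases that matter. In a real
// submission these would be Jest tests, but the idea is the same.
console.assert(formatBalance(0) === "£0.00");
console.assert(formatBalance(5) === "£0.05");
console.assert(formatBalance(123456) === "£1,234.56");
```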

&lt;h2&gt;
  
  
  The one thing that matters most
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Show that you think.&lt;/strong&gt; Not just that you code.&lt;/p&gt;

&lt;p&gt;Anyone can build screens. The candidates who get hired are the ones who demonstrate judgement: why they chose this approach, what they'd do differently, where the code would break at scale, what tests actually matter.&lt;/p&gt;

&lt;p&gt;The tech test isn't testing whether you can write React Native. It's testing whether you can make good decisions and communicate them clearly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Build something clean, test the important parts, document your thinking, and be ready to talk about it honestly. That's it. That's the whole secret.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;We're hiring!&lt;/strong&gt; We're looking for React Native engineers to join the Mobile Platform team at Hargreaves Lansdown. &lt;a href="https://warrendeleon.com/hiring/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=pass-rn-tech-test" rel="noopener noreferrer"&gt;View open roles&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>reactnative</category>
      <category>hiring</category>
      <category>careeradvice</category>
    </item>
    <item>
      <title>Why I redesigned our React Native tech test in my first week</title>
      <dc:creator>Warren de Leon</dc:creator>
      <pubDate>Sun, 29 Mar 2026 14:48:16 +0000</pubDate>
      <link>https://dev.to/warrendeleon/why-i-redesigned-our-react-native-tech-test-in-my-first-week-hhk</link>
      <guid>https://dev.to/warrendeleon/why-i-redesigned-our-react-native-tech-test-in-my-first-week-hhk</guid>
      <description>&lt;h2&gt;
  
  
  A test built for a different time
&lt;/h2&gt;

&lt;p&gt;Four days before I officially started at Hargreaves Lansdown, I went into the office for a passport check. While I was there, my manager mentioned I'd be hiring a team. My first question was whether I could change the interview process. He said yes. &lt;em&gt;I hadn't even had my first day yet.&lt;/em&gt; By the time I started on the 23rd, I was already building the new test.&lt;/p&gt;

&lt;p&gt;I'm the new Engineering Manager for the &lt;strong&gt;Mobile Platform&lt;/strong&gt; squad. We're rebuilding HL's mobile app in React Native, a brownfield migration from the existing native iOS and Android apps. I need engineers who can work at the platform level.&lt;/p&gt;

&lt;p&gt;I didn't need to ask to see the tech test. I'd been through it myself just weeks earlier. It's how HL hired &lt;em&gt;me&lt;/em&gt;: a live coding exercise where you build a small app in about an hour with the interviewer watching, followed by technical questions from a questionnaire. The whole interview ran about 90 minutes.&lt;/p&gt;

&lt;p&gt;The test made sense for its original context. When the team was smaller and hiring for different roles, it was a reasonable way to screen candidates quickly. But our needs had changed. We weren't hiring someone to build simple screens anymore. We were hiring &lt;strong&gt;platform engineers&lt;/strong&gt; who'd own the architecture that every other mobile team at HL would ship through.&lt;/p&gt;

&lt;p&gt;I needed the test to answer different questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can they structure a &lt;strong&gt;multi-screen app&lt;/strong&gt; with navigation that doesn't fall apart?&lt;/li&gt;
&lt;li&gt;Can they call a &lt;strong&gt;real API&lt;/strong&gt; and handle what happens when the network fails?&lt;/li&gt;
&lt;li&gt;Do they write &lt;strong&gt;tests&lt;/strong&gt; because they care about working software, or because someone told them to?&lt;/li&gt;
&lt;li&gt;Can they sit across from me and explain &lt;em&gt;why&lt;/em&gt; they built it that way?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The existing test was designed for different questions. I needed to build something around ours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The limits of live coding
&lt;/h2&gt;

&lt;p&gt;Live coding can tell you whether someone codes comfortably under observation. For some roles, that matters. For ours, I needed to see something different.&lt;/p&gt;

&lt;p&gt;I've been on both sides. As recently as January this year, I bombed a live coding exercise for a role I was perfectly qualified for. The problem was simple. I knew how to solve it. But with someone watching my every keystroke, my mind went blank. &lt;em&gt;I didn't pass.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As an interviewer, I've watched the same thing happen to candidates. Strong engineers who freeze on problems they'd solve in five minutes at their own desk. Live coding measures composure under observation. That's a valid signal for some roles, but it wasn't the signal I needed.&lt;/p&gt;

&lt;p&gt;For a platform engineering role, where the work is architecture decisions, design system components, and CI/CD pipelines, I wanted to see how candidates approach problems with time and context. &lt;strong&gt;The kind of thinking the job actually requires.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Showing vs telling
&lt;/h2&gt;

&lt;p&gt;The previous process also included a technical questionnaire. The interviewer would pick questions from a reference sheet covering React Native architecture, state management, testing strategies, and platform differences, then compare answers against expected responses. Sometimes candidates would naturally cover the topics during the live coding, and the interviewer would skip those questions.&lt;/p&gt;

&lt;p&gt;These are all valid topics. They're &lt;em&gt;exactly&lt;/em&gt; the things I want my engineers to understand. Asking someone to explain a concept tells you whether they understand the theory. Seeing how they apply it in their own code gives you a different kind of signal.&lt;/p&gt;

&lt;p&gt;The new process tests the same topics through the candidate's own code. Instead of asking &lt;em&gt;"how would you structure navigation in a complex app?"&lt;/em&gt;, I can open their submission and see how they approached it, then have a richer conversation about the choices they made. The walkthrough still covers architecture, trade-offs, and technical depth, but it's grounded in something the candidate &lt;em&gt;built&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built instead
&lt;/h2&gt;

&lt;p&gt;I designed a take-home assessment. A small but real app: multiple screens, a public API, navigation, state management with actual business rules, TypeScript throughout. Not a toy. Not a weekend project either. Something that requires &lt;strong&gt;genuine architectural thinking&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Four principles guided the design:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mirror the actual job.&lt;/strong&gt; The test should feel like the work. If a candidate can build this app, they can contribute to our codebase on day one. If they can't, that's useful information too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove the boilerplate tax.&lt;/strong&gt; I give candidates a fully configured starter project. TypeScript, ESLint, Prettier, Jest, React Native Testing Library, path aliases. &lt;em&gt;All set up.&lt;/em&gt; I don't care whether someone can configure a bundler. I care whether they can write application code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be clear about what, not how.&lt;/strong&gt; The brief explains what the app should do. It never says which state management library to use, how to structure the folders, or which API client to pick. Those decisions are the most revealing part of the submission. A candidate who picks Redux Toolkit for a three-screen app tells me something different from one who picks Zustand or React Context. Neither is wrong. &lt;em&gt;Both are interesting.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Respect people's time.&lt;/strong&gt; Candidates get a week. The work should take 4 to 6 hours. People have jobs, families, lives. No one should have to take a day off to do a tech test for a company that might not hire them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The walkthrough is where the magic happens
&lt;/h2&gt;

&lt;p&gt;The take-home code is half the evaluation. The other half is a walkthrough call: the candidate &lt;strong&gt;demos the app&lt;/strong&gt;, runs their tests live, and walks through the code.&lt;/p&gt;

&lt;p&gt;This is where you learn how deeply someone understands what they built. In the age of AI-assisted development, that understanding matters more than ever.&lt;/p&gt;

&lt;p&gt;Three things I'm looking for:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ownership.&lt;/strong&gt; &lt;em&gt;"Navigate to the file where you handle the API response."&lt;/em&gt; If they wrote it, they'll jump straight there. If they're not fully comfortable with the codebase, that becomes clear quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trade-off thinking.&lt;/strong&gt; I ask about every significant decision. &lt;em&gt;"Why this state management approach?"&lt;/em&gt; The answer I want isn't "because it's the best." The answer I want is &lt;em&gt;"because it fits this scope, but here's where it would break down, and here's what I'd move to."&lt;/em&gt; Engineers who think in trade-offs build better systems than engineers who think in absolutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-awareness.&lt;/strong&gt; &lt;em&gt;"What would you change if you had more time?"&lt;/em&gt; Strong candidates light up at this question. They have a list. They know where they cut corners. They know what's fragile. They've been thinking about improvements since they submitted. Less experienced candidates tend to say &lt;em&gt;"I'm happy with it"&lt;/em&gt; and move on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structured scoring
&lt;/h2&gt;

&lt;p&gt;One thing I wanted from day one was a &lt;strong&gt;structured scorecard&lt;/strong&gt;. When you're scaling a team and multiple people are involved in hiring, everyone needs to evaluate the same things in the same way. Without that, two interviewers can review the same candidate and reach different conclusions because they're weighting different things.&lt;/p&gt;

&lt;p&gt;I built a scorecard that breaks the evaluation into weighted sections: does the app work, is the data layer sound, is the code well-structured, are there tests, and can the candidate explain it all in the walkthrough. Each section has specific criteria on a consistent scale. &lt;strong&gt;Every interviewer evaluates the same things in the same order.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The scorecard also maps scores to levels. The total tells you whether someone is performing at Graduate, Associate, Software Engineer, or Senior level. This removes ambiguity from the levelling conversation. The rubric does the thinking. The humans verify it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Senior candidates get a harder round
&lt;/h2&gt;

&lt;p&gt;For senior hires, there's an additional &lt;strong&gt;system design&lt;/strong&gt; conversation. No whiteboard. No &lt;em&gt;"design Twitter in 45 minutes."&lt;/em&gt; We talk through real scenarios relevant to the platform we're building. What changes when 20 teams build on the same mobile platform? How do you handle shared dependencies? What's your approach to backwards compatibility?&lt;/p&gt;

&lt;p&gt;It's a conversation between two engineers, not a performance for an audience. The best candidates &lt;strong&gt;push back&lt;/strong&gt; on my assumptions and ask clarifying questions. That's exactly the behaviour I want from a senior on the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Early days
&lt;/h2&gt;

&lt;p&gt;In my first week at HL, I hired a Senior Engineer through the existing process (that happened on day two, before the new test was ready). Going forward, the new process is the standard for all React Native hiring across the UCX-Core tribe. My peer EM, who runs another squad, reviewed the test and the scorecard and agreed to adopt it for his team's hires too. That's the advantage of a well-documented system: &lt;strong&gt;it scales beyond one manager's squad.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm about to hire two Software Engineers using the new process. Every candidate will get the same test, the same starter project, the same evaluation criteria, and the same scoring rubric. The bias surface area shrinks when you standardise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The lesson
&lt;/h2&gt;

&lt;p&gt;If you're joining a new team as an engineering manager, &lt;strong&gt;look at the hiring process early&lt;/strong&gt;. Don't wait until you've "learned the codebase" or "understood the culture." Hiring is one of the highest-leverage activities you have. Every person you bring on shapes the team for years.&lt;/p&gt;

&lt;p&gt;And if your tech test no longer matches what you're hiring for, it's worth revisiting. The best hiring processes evolve alongside the team's needs.&lt;/p&gt;

&lt;p&gt;Design a test that mirrors the actual job. Give candidates a starter project so you're testing &lt;em&gt;engineering&lt;/em&gt;, not &lt;em&gt;configuration&lt;/em&gt;. Make the requirements clear but let them make their own decisions. Then sit across from them and ask &lt;strong&gt;&lt;em&gt;why&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The combination of thoughtful take-home code and a structured walkthrough gives you more signal in two hours than any live coding exercise gives you in two days.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;If you're preparing for a React Native tech test, I wrote a companion post with practical advice: &lt;a href="https://warrendeleon.com/blog/how-to-pass-a-react-native-tech-test/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=rn-tech-test-redesign" rel="noopener noreferrer"&gt;How to pass a React Native tech test&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;We're hiring!&lt;/strong&gt; We're looking for React Native engineers to join the Mobile Platform team at Hargreaves Lansdown. &lt;a href="https://warrendeleon.com/hiring/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=rn-tech-test-redesign" rel="noopener noreferrer"&gt;View open roles&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>engineeringmanagement</category>
      <category>hiring</category>
      <category>reactnative</category>
    </item>
  </channel>
</rss>
