TL;DR
The OWASP Mobile Top 10 isn't abstract theory — it's the exact list pen testers use to fail your app. Here's the cheat sheet: M1 — stop storing tokens in AsyncStorage, use Keychain/Keystore. M2 — audit your node_modules before it audits you. M3 — biometric gates belong on every sensitive screen, not just login. M4 — parameterize your SQLite queries and validate deep links. M5 — SSL pin your public keys, not your certs, and always have a backup pin. M6 — your crash reporter is exfiltrating PII right now. M7 — enable Hermes, strip source maps, turn off debug mode. M8 — detect jailbreak/root but let your developers bypass it. M9 — encrypt local data with MMKV + Keychain-stored keys. M10 — Math.random() is not random; use expo-crypto.
Why This Matters Now
The OWASP Mobile Top 10 was updated in 2024 with a significant restructuring. "Insecure Data Storage" and "Insecure Communication" are still there, but the new list adds "Inadequate Supply Chain Security" (M2) — which feels like it was written specifically for the npm ecosystem — and "Insufficient Binary Protections" (M7), which targets exactly the kind of JavaScript-bundle-in-a-native-shell architecture that React Native uses.
If you're building a fintech app in React Native, you're a target. RN apps ship a JavaScript bundle that can be extracted, decompiled, and analyzed in minutes. Unlike Swift or Kotlin binaries, there's no compilation step that obfuscates your logic by default. An attacker with a rooted Android device can pull your APK, unzip it, and read your business logic in index.android.bundle — including any hardcoded API keys, endpoint URLs, or validation logic you thought was "server-side."
What does a failed security audit actually cost? I've seen three outcomes: App Store rejection during review (Apple has gotten aggressive about checking for jailbreak detection and SSL pinning in fintech apps), a pen test report with 15+ critical findings that delays launch by 2-3 months, or worse — a compliance failure that means you can't process payments at all. PCI DSS, SOC 2, and regional banking regulations all reference OWASP controls.
And the breaches are real. In 2018, British Airways lost data on 400,000 customers due to poor authentication and unsecured third-party scripts — the ICO fined them £20 million. In 2021, ParkMobile exposed 21 million users' data through a vulnerability in a third-party component. These aren't theoretical risks — they're financial and regulatory consequences that hit real companies.
The gap I keep seeing: everyone knows the names of these vulnerabilities. Blog posts explain what "insecure data storage" means. But almost nobody shows the actual TypeScript code. This article is the code.
The companion repository has every file referenced below: github.com/FastheDeveloper/owasp-rn-fintech
Every section includes screenshots from the interactive demo app so you can see these controls in action — not just read about them.
Project Setup
npx create-expo-app@latest owasp-rn-fintech --template blank-typescript
cd owasp-rn-fintech
# Secure storage & crypto
npx expo install expo-secure-store@15.x expo-crypto@15.x
# Biometrics
npx expo install expo-local-authentication@17.x
# SSL pinning
npm install react-native-ssl-public-key-pinning@1.2.x
# Encrypted local storage
npm install react-native-mmkv@4.x
# Local database
npx expo install expo-sqlite@16.x
# Jailbreak detection
npm install jail-monkey@3.x
Warning:
`react-native-ssl-public-key-pinning`, `react-native-mmkv`, and `jail-monkey` require native modules. They won't work in Expo Go — you'll need a development build (`npx expo prebuild` + `npx expo run:ios`).
M1: Improper Credential Usage
What it means in React Native: Hardcoded API keys in your source code. Auth tokens stored in AsyncStorage (plaintext JSON files on disk). Secrets committed to your repo via .env files that aren't gitignored. Refresh tokens that never expire.
Real-World Incident: In 2016, Uber's breach exposed 57 million users' and drivers' personal data — names, emails, phone numbers, and 600,000 driver's license numbers. The root cause: Uber engineers had hardcoded AWS credentials in a private GitHub repository. Attackers used those credentials to access an S3 bucket containing the user database. Uber then paid the attackers $100,000 through their bug bounty program to delete the data and keep quiet — and didn't disclose the breach for over a year. Hardcoded credentials in source code are the #1 credential misuse pattern, and it applies equally to mobile apps: API keys in your React Native bundle, tokens in `.env` files committed to git, or secrets baked into your build config.
I audited a production fintech app recently and found this exact pattern:
// ❌ WRONG — This is what most RN apps do
// store/store.ts (actual production code)
import AsyncStorage from "@react-native-async-storage/async-storage";
import { persistReducer, persistStore } from "redux-persist";
const persistConfig = {
key: "root",
storage: AsyncStorage, // ← PLAINTEXT on disk
whitelist: ["persisted", "auth"], // ← auth tokens persisted in plaintext
};
On a rooted Android device, that data lives at /data/data/com.yourapp/files/RCTAsyncLocalStorage/manifest.json. Every token, every piece of user state — plaintext. Even on iOS, if a user backs up their device unencrypted, those tokens are in the backup.
The fix has two parts: store credentials in Keychain/Keystore (hardware-encrypted), and implement session expiry so stolen tokens have a shelf life.
// src/security/secure-storage.ts
import * as SecureStore from "expo-secure-store";
const SECURE_STORE_OPTIONS: SecureStore.SecureStoreOptions = {
keychainAccessible: SecureStore.WHEN_UNLOCKED_THIS_DEVICE_ONLY,
};
export const CredentialStore = {
async setToken(key: string, value: string): Promise<void> {
await SecureStore.setItemAsync(key, value, SECURE_STORE_OPTIONS);
},
async getToken(key: string): Promise<string | null> {
return SecureStore.getItemAsync(key, SECURE_STORE_OPTIONS);
},
async deleteToken(key: string): Promise<void> {
await SecureStore.deleteItemAsync(key);
},
async clearAll(keys: string[]): Promise<void> {
await Promise.all(keys.map((key) => SecureStore.deleteItemAsync(key)));
},
};
export const CREDENTIAL_KEYS = {
ACCESS_TOKEN: "fintech_access_token",
REFRESH_TOKEN: "fintech_refresh_token",
DEVICE_ID: "fintech_device_id",
ENCRYPTION_KEY: "fintech_encryption_key",
BIOMETRIC_ENROLLED: "fintech_biometric_enrolled",
} as const;
For session management, the token in Redux (in-memory) is fine for the session lifetime. But you need active timeout enforcement:
// src/security/session-manager.ts
import { AppState, AppStateStatus } from "react-native";
import { CredentialStore, CREDENTIAL_KEYS } from "./secure-storage";
export interface SessionConfig {
inactivityTimeoutMs: number; // 5 minutes for banking apps
backgroundTimeoutMs: number; // 2 minutes in background
heartbeatIntervalMs: number; // Check every 30 seconds
onSessionExpired: () => void;
onTokenRefreshNeeded: () => Promise<boolean>;
}
class SessionManager {
  private lastActivityTimestamp: number = Date.now();
  private backgroundTimestamp: number | null = null;
  private heartbeatTimer: ReturnType<typeof setInterval> | null = null;
  private appStateSubscription: { remove: () => void } | null = null;
  // Config is injected once at construction — referenced as this.config below
  constructor(private config: SessionConfig) {}
  start(tokenExpiryMs?: number): void {
this.lastActivityTimestamp = Date.now();
// Monitor app state changes (foreground/background)
this.appStateSubscription = AppState.addEventListener(
"change",
this.handleAppStateChange,
);
// Periodic heartbeat to check session validity
this.heartbeatTimer = setInterval(
() => this.checkSession(),
this.config.heartbeatIntervalMs,
);
}
recordActivity(): void {
this.lastActivityTimestamp = Date.now();
}
private handleAppStateChange = (nextState: AppStateStatus): void => {
if (nextState === "background" || nextState === "inactive") {
this.backgroundTimestamp = Date.now();
} else if (nextState === "active") {
if (this.backgroundTimestamp) {
const backgroundDuration = Date.now() - this.backgroundTimestamp;
if (backgroundDuration > this.config.backgroundTimeoutMs) {
this.expireSession("background_timeout");
return;
}
}
this.checkSession();
}
};
  // Heartbeat check: expire the session after too long without user activity
  private checkSession(): void {
    const idleMs = Date.now() - this.lastActivityTimestamp;
    if (idleMs > this.config.inactivityTimeoutMs) {
      this.expireSession("inactivity_timeout");
    }
  }
  // Tear down timers and listeners
  private stop(): void {
    if (this.heartbeatTimer) clearInterval(this.heartbeatTimer);
    this.appStateSubscription?.remove();
    this.heartbeatTimer = null;
    this.appStateSubscription = null;
  }
  private async expireSession(reason: string): Promise<void> {
this.stop();
await CredentialStore.clearAll([
CREDENTIAL_KEYS.ACCESS_TOKEN,
CREDENTIAL_KEYS.REFRESH_TOKEN,
]);
this.config.onSessionExpired();
}
}
Production gotcha: The fintech app I audited stored `LAST_ACTIVE_KEY` in AsyncStorage for session tracking. That means a sophisticated attacker could modify the timestamp to prevent session expiry. Store session timestamps in memory only — if the app restarts, the session should be treated as expired anyway.
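The expiry decision itself is pure logic you can keep in memory and unit-test in isolation. A minimal sketch (the timeout values mirror the `SessionConfig` example above; the function name is mine):

```typescript
// Pure, side-effect-free session expiry check — trivially unit-testable,
// and nothing an attacker can tamper with on disk.
interface ExpiryCheck {
  expired: boolean;
  reason?: "inactivity_timeout" | "background_timeout";
}

function checkExpiry(
  now: number,
  lastActivity: number,
  backgroundedAt: number | null,
  inactivityTimeoutMs: number, // e.g. 5 * 60 * 1000 for banking
  backgroundTimeoutMs: number, // e.g. 2 * 60 * 1000
): ExpiryCheck {
  if (backgroundedAt !== null && now - backgroundedAt > backgroundTimeoutMs) {
    return { expired: true, reason: "background_timeout" };
  }
  if (now - lastActivity > inactivityTimeoutMs) {
    return { expired: true, reason: "inactivity_timeout" };
  }
  return { expired: false };
}
```

Because the check takes `now` as a parameter, tests don't need timers or mocks — you just pass timestamps.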
In the Demo App
The Secure Storage screen shows the difference visually. On the left: AsyncStorage storing your token as plaintext JSON. On the right: SecureStore encrypting it via the iOS Keychain, with keys protected in hardware (the Secure Enclave on supported devices). The live demo lets you store, retrieve, and delete a secret — proving the round-trip works and the Keychain entry is fully wiped on delete.
Checklist
- [ ] Auth tokens stored in `expo-secure-store` or `react-native-keychain`, never AsyncStorage
- [ ] Session timeout enforced (5 min inactivity, 2 min background for banking)
- [ ] Tokens cleared on logout, session expiry, and excessive failed auth attempts
- [ ] No API keys, secrets, or tokens hardcoded in source — use server-side config
M2: Inadequate Supply Chain Security
The RN-specific risk: Your node_modules folder contains hundreds of packages, each maintained by different people with different security practices. A single compromised dependency can exfiltrate user data. This isn't theoretical — event-stream, ua-parser-js, and colors all had real supply chain attacks.
Real-World Incident: In 2021, ParkMobile's breach exposed 21 million users' personal data — names, emails, phone numbers, license plates, and hashed passwords — all because of a vulnerability in a third-party integration. The app itself wasn't directly compromised; a dependency was. In the npm ecosystem, this risk is amplified: your `package-lock.json` contains hundreds of transitive dependencies you never explicitly chose, each one a potential attack surface.

The `event-stream` incident (2018) is the textbook example. A user named `right9ctrl` social-engineered their way into maintainer access of the popular `event-stream` package (used by 3,931 other packages including `@vue/cli-ui`, `vscode`, and `nodemon`). After a series of innocent commits to build trust, they added a new dependency called `flatmap-stream` — which contained an encrypted malicious payload hidden in its minified source code. The payload was surgically targeted: it would only decrypt and execute when built as part of Copay, a Bitcoin wallet app, using the app's own `npm_package_description` as the decryption key. Once active, it harvested Bitcoin wallet private keys and balances above 100 BTC. The attack was downloaded 8 million times and went undetected for over a month — only discovered because the payload's use of a deprecated crypto API triggered a warning in `nodemon`.
React Native adds extra risk because many packages include native modules (Objective-C, Java/Kotlin) that most JavaScript developers never audit. A native module has full device access — it can read the filesystem, make network calls, and access hardware without any JavaScript-visible API surface.
// package.json — lock your dependencies
{
"dependencies": {
"expo-secure-store": "15.0.8",
"react-native-mmkv": "4.3.1",
"react-native-ssl-public-key-pinning": "1.2.6"
}
}
Use exact versions (not ^ or ~) for security-critical packages. Add these to your CI pipeline:
# CI audit command
npm audit --audit-level=high
npx better-npm-audit audit --level high
# Check for known vulnerabilities in native dependencies
npx expo-doctor
# Verify package provenance (npm v9.5+)
npm audit signatures
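You can enforce the exact-version rule mechanically rather than by code review. A small sketch that could run as a CI step (the list of security-critical package names is an assumption — adapt it to your app):

```typescript
// CI gate: fail the build if any security-critical dependency
// uses a version range instead of an exact pin.
const SECURITY_CRITICAL = [
  "expo-secure-store",
  "react-native-mmkv",
  "react-native-ssl-public-key-pinning",
  "jail-monkey",
];

function findRangedDeps(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([name]) => SECURITY_CRITICAL.includes(name))
    // ^, ~, >, <, *, x and "latest" all allow silent upgrades
    .filter(([, version]) => /[\^~><*x]|latest/.test(version))
    .map(([name]) => name);
}

// In CI:
//   const pkg = require("./package.json");
//   const bad = findRangedDeps(pkg.dependencies ?? {});
//   if (bad.length) { console.error("Unpinned:", bad); process.exit(1); }
```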
What to watch for:
- Packages with no updates in 12+ months (abandonware)
- Packages with a single maintainer and recent ownership transfer
- Native modules that request permissions your app doesn't need
- Transitive dependencies pulling in unexpected native code
Checklist
- [ ] `npm audit` runs in CI and blocks on high/critical vulnerabilities
- [ ] Security-critical packages use exact versions, not ranges
- [ ] Review native module permissions before adding new dependencies
- [ ] package-lock.json is committed and reviewed in PRs
M3: Insecure Authentication and Authorization
The real issue in RN fintech: Most apps only require biometric auth at login. But a pen tester will check whether you can access the payment screen, view account balances, or initiate transfers without re-authenticating. If you authenticate once at login and then trust the session for everything, a stolen unlocked device = full access.
Real-World Incident: In 2018, Facebook's photo API bug exposed 6.8 million users' private photos — including photos they had uploaded but never posted. The vulnerability was an insufficient authorization check: third-party apps that were granted permission to view a user's timeline photos could also access their unpublished, Marketplace, and Stories photos. The API simply didn't enforce granular access control on the resource. In a fintech context, this is the equivalent of a payment confirmation endpoint that doesn't re-verify whether the authenticated user is the account owner — one missing authorization check and any authenticated user can access any account.
The fix: biometric gates on sensitive screens, not just login.
// src/security/biometric-auth.ts
import * as LocalAuthentication from "expo-local-authentication";
import { Platform } from "react-native";

// Result shape returned by every auth call in this module.
// (checkBiometricCapability and getBiometricErrorMessage are defined
// alongside this function — see the companion repo.)
export interface BiometricAuthResult {
  success: boolean;
  error?: string;
  userCancelled: boolean;
}

export async function authenticateWithBiometrics(options: {
  reason: string;
  allowDeviceFallback?: boolean;
}): Promise<BiometricAuthResult> {
const capability = await checkBiometricCapability();
if (!capability.isAvailable || !capability.isEnrolled) {
return {
success: false,
error:
Platform.OS === "ios"
? "Please set up Face ID or Touch ID in Settings."
: "Please set up fingerprint in Settings.",
userCancelled: false,
};
}
const result = await LocalAuthentication.authenticateAsync({
promptMessage: options.reason,
disableDeviceFallback: !options.allowDeviceFallback,
fallbackLabel: options.allowDeviceFallback ? "Use Passcode" : "",
});
if (result.success) {
return { success: true, userCancelled: false };
}
const userCancelled =
result.error === "user_cancel" || result.error === "system_cancel";
return {
success: false,
error: getBiometricErrorMessage(result.error),
userCancelled,
};
}
The BiometricGateScreen wraps any sensitive screen. The child component doesn't even mount until auth succeeds — preventing any mount-time data fetching:
// src/screens/BiometricGateScreen.tsx
export function BiometricGateScreen({
children,
reason = "Authenticate to continue",
allowFallback = false,
onAuthFailure,
}: BiometricGateScreenProps) {
const { capability, isEnrolled, authenticate, isLoading } = useBiometrics();
const [isAuthenticated, setIsAuthenticated] = useState(false);
const [attempts, setAttempts] = useState(0);
const MAX_ATTEMPTS = 3;
useEffect(() => {
if (!isLoading && capability?.isAvailable && isEnrolled) {
handleAuthenticate();
}
}, [isLoading]);
const handleAuthenticate = async () => {
if (attempts >= MAX_ATTEMPTS) {
onAuthFailure?.();
return;
}
const result = await authenticate(reason, {
allowDeviceFallback: allowFallback,
});
if (result.success) {
setIsAuthenticated(true);
} else if (!result.userCancelled) {
setAttempts((prev) => prev + 1);
}
};
if (isAuthenticated) {
return <>{children}</>;
}
// ... render authentication prompt UI
}
Simulator gotcha: `LocalAuthentication.authenticateAsync()` always succeeds on iOS Simulator when Face ID is "enrolled" (Features > Face ID > Enrolled). On Android emulators, behavior varies by version. Always test biometric flows on a real device before your security audit. I've seen teams ship with biometric "protection" that they only ever tested on simulators.

Physical spoofing risk: Face ID has been bypassed with 3D-printed masks, and fingerprint sensors with lifted prints. For high-value transactions (payments, withdrawals), always combine biometrics with a secondary factor — device passcode fallback at minimum, or a server-validated OTP for transactions above a threshold. Biometrics alone is a convenience feature, not a security guarantee.
In the Demo App
The Biometric Auth screen shows your device's capability (hardware, enrollment, security level, supported types), then lets you trigger two auth flows: a login prompt with passcode fallback, and a strict payment prompt with biometric-only (no fallback). The result box turns green on success, red on failure.
Checklist
- [ ] Biometric auth gates on payment, balance view, and settings screens — not just login
- [ ] Maximum attempt limit with lockout after failures
- [ ] Tested on real devices, not just simulators
- [ ] Fallback strategy defined (device passcode for low-risk, no fallback for payments)
M4: Insufficient Input/Output Validation
SQL injection in local SQLite: Yes, it happens in mobile apps. If you build SQLite queries with string concatenation, you're vulnerable.
Real-World Context: Input validation failures aren't just about SQL injection. WhatsApp's buffer overflow vulnerability (CVE-2019-3568) allowed attackers to install spyware with a single VoIP call — the target didn't even need to answer. The root cause was insufficient validation of incoming data in the call signaling protocol. In a React Native fintech context, your deep link handler and payment input fields are the most exposed surfaces — they accept data from outside your trust boundary (other apps, URLs, clipboard) and feed it directly into your business logic.
// ❌ WRONG — SQL injection in local database
const query = `SELECT * FROM transactions WHERE account_id = '${accountId}'`;
db.execAsync(query);
// ✅ RIGHT — Parameterized query
const result = db.getAllAsync(
"SELECT * FROM transactions WHERE account_id = ?",
[accountId],
);
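To make the risk concrete, trace what the concatenated version produces when the input is hostile — a pure-string demonstration, no database needed (the malicious value is a hypothetical example):

```typescript
// Simulating the vulnerable concatenation pattern with attacker input,
// e.g. an account_id arriving from a deep link parameter.
const maliciousAccountId = "x' OR '1'='1";

// ❌ The concatenated query the database actually receives:
const unsafeQuery = `SELECT * FROM transactions WHERE account_id = '${maliciousAccountId}'`;
// → SELECT * FROM transactions WHERE account_id = 'x' OR '1'='1'
// The WHERE clause is now always true — every row in the table is returned.

// ✅ With a parameterized query, the driver binds the value as data,
// so the quote characters never reach the SQL parser at all.
```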
Deep link hijacking: React Native apps register URL schemes (yourfintech://) that can be invoked by any app on the device. If you don't validate deep link parameters, an attacker can craft a link that redirects to a phishing page or triggers unintended actions.
// src/security/certificate-check.ts
export function validateDeepLink(url: string): {
valid: boolean;
sanitizedUrl?: string;
reason?: string;
} {
try {
const parsed = new URL(url);
// Only allow your app's registered schemes
const allowedSchemes = ["yourfintech", "https"];
if (!allowedSchemes.includes(parsed.protocol.replace(":", ""))) {
return { valid: false, reason: `Disallowed scheme: ${parsed.protocol}` };
}
// For https links, only allow your domains
if (parsed.protocol === "https:") {
const allowedHosts = ["yourfintech.com", "app.yourfintech.com"];
if (!allowedHosts.includes(parsed.hostname)) {
return { valid: false, reason: `Disallowed host: ${parsed.hostname}` };
}
}
// Reject script injection in parameters
for (const [key, value] of parsed.searchParams) {
if (/<script/i.test(value) || /javascript:/i.test(value)) {
return { valid: false, reason: `Suspicious parameter in ${key}` };
}
}
return { valid: true, sanitizedUrl: parsed.toString() };
} catch {
return { valid: false, reason: "Malformed URL" };
}
}
Amount validation for payments — this is fintech-specific and gets missed:
// src/screens/SecurePaymentScreen.tsx
const validateAmount = (value: string): string | null => {
const num = parseFloat(value);
if (isNaN(num)) return "Please enter a valid amount.";
if (num <= 0) return "Amount must be positive.";
if (num > 50000) return "Amount exceeds single transaction limit.";
// Precision attack prevention (e.g., 0.001 to drain funds via rounding)
if (value.includes(".") && value.split(".")[1].length > 2) {
return "Amount cannot have more than 2 decimal places.";
}
return null;
};
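A related fintech habit worth pairing with this validator: once input passes validation, convert it to integer minor units (cents) so no later arithmetic can reintroduce float rounding. A sketch under the same 2-decimal assumption as the validator above (function name is mine):

```typescript
// Convert a validated, positive decimal string to integer cents.
// Integer math sidesteps classic float bugs like 0.1 + 0.2 !== 0.3.
function toMinorUnits(value: string): number {
  const [whole, fraction = ""] = value.split(".");
  if (fraction.length > 2) {
    throw new Error("More than 2 decimal places — validate before converting.");
  }
  // padEnd normalizes "5" → 500 and "0.1" → 10
  const cents = parseInt(whole, 10) * 100 + parseInt(fraction.padEnd(2, "0"), 10);
  if (!Number.isSafeInteger(cents)) throw new Error("Amount out of range.");
  return cents;
}
```

Send `cents` to your API and only format back to a decimal string at the display layer.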
In the Demo App
The Deep Link Validation screen has a live URL input — type or paste any URL and it validates in real time. The input border turns green for allowed URLs and red for blocked ones. Tap any of the preset buttons (phishing domain, XSS payload, JS injection, wrong scheme) to see them rejected instantly with a reason.
Checklist
- [ ] All SQLite queries use parameterized statements, never string concatenation
- [ ] Deep links validated against an allowlist of schemes and hosts
- [ ] Payment amounts validated for range, precision, and format
- [ ] TextInput fields use `textContentType="none"` and `autoComplete="off"` for sensitive data
M5: Insecure Communication
SSL pinning is mandatory for fintech. Without it, anyone on the same WiFi with a proxy (Charles, mitmproxy) and a self-signed CA installed on the device can read all your API traffic — including auth tokens, account numbers, and transaction data.
Real-World Incident: The British Airways breach in 2018 is the definitive example of what happens without communication security. Attackers injected a malicious script (just 22 lines of JavaScript) into the BA website and mobile app that skimmed payment card data — names, card numbers, CVVs, and expiry dates — as customers typed them. The data was exfiltrated to a look-alike domain (`baways.com`) in real time. More than 400,000 customers were affected and BA was fined £20 million by the ICO. The attack worked because there was no integrity check on the scripts being loaded and no pinning to verify the communication channel. In a React Native app without SSL pinning, an attacker on the same WiFi running mitmproxy achieves the same result — intercepting every API call, including payment data, in plaintext.
Hash pinning vs. certificate pinning: Use hash pinning. When you pin a certificate, you pin the entire cert — including its expiry date. When it rotates (every 90 days with Let's Encrypt), your pins break. Hash pinning pins the public key's SPKI hash, which survives cert renewals if the same key pair is reused.
How to extract your pin hash:
openssl s_client -connect api.yourfintech.com:443 \
-servername api.yourfintech.com < /dev/null 2>/dev/null \
| openssl x509 -pubkey -noout \
| openssl pkey -pubin -outform DER \
| openssl dgst -sha256 -binary \
| base64
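If you'd rather script this check than run the openssl pipeline by hand, Node's crypto module computes the same SPKI pin — useful for a CI job that verifies the server's pin hasn't drifted from the ones shipped in the app. A sketch (the key pair here is freshly generated purely to demonstrate the math; in practice you'd use the DER public key from your server's certificate):

```typescript
import { createHash, generateKeyPairSync } from "node:crypto";

// SPKI pin = base64( SHA-256( DER-encoded SubjectPublicKeyInfo ) )
// — the same bytes the openssl pipeline above produces.
function spkiPin(publicKeyDer: Buffer): string {
  return createHash("sha256").update(publicKeyDer).digest("base64");
}

// Demo: generate a key pair and compute its pin.
const { publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });
const der = publicKey.export({ type: "spki", format: "der" }) as Buffer;
const pin = spkiPin(der);
// A valid pin is 32 bytes of SHA-256, base64-encoded → 44 characters.
```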
// src/security/ssl-pinning.ts
import {
initializeSslPinning,
isSslPinningAvailable,
} from "react-native-ssl-public-key-pinning";
// Initialize once at app startup — after this, all fetch() calls
// are automatically pinned for the configured domains.
const PINNING_CONFIG = {
"api.yourfintech.com": {
includeSubdomains: true,
publicKeyHashes: [
// Primary: current certificate's public key hash
"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
// Backup: pre-generated backup key pair (CRITICAL — iOS requires 2+)
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=",
// Disaster recovery: CA intermediate key hash
"CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC=",
],
// Safety valve: after this date, pinning degrades to normal TLS
expirationDate: "2026-12-31",
},
};
// Call this in your app's entry point
if (isSslPinningAvailable()) {
initializeSslPinning(PINNING_CONFIG);
}
// After initialization, standard fetch() is automatically pinned.
The #1 production gotcha — cert rotation: If you only pin one hash and your certificate rotates with a new key pair, every user is locked out until they update through the App Store. This is not a hypothetical — I've seen it happen. Mitigations:
- Always pin at least 2 hashes (current + backup key pair)
- Set `expirationDate` — after this date, pinning disables gracefully
- Monitor cert expiry server-side and push updated pins via OTA (expo-updates)
- Have a kill switch — a remote config flag to disable pinning in emergencies
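The expiration safety valve and the kill switch both reduce to one decision at startup: should pinning be initialized at all? A sketch of that gate (the remote-config flag and function names are hypothetical):

```typescript
// Decide at startup whether to call initializeSslPinning().
// remoteKillSwitchEnabled would come from your remote config service;
// expirationDate mirrors the pinning library's own safety valve.
function shouldEnablePinning(opts: {
  remoteKillSwitchEnabled: boolean; // true = pinning forced OFF (emergency)
  expirationDate: string;           // e.g. "2026-12-31"
  now?: Date;                       // injectable for tests
}): boolean {
  if (opts.remoteKillSwitchEnabled) return false;
  const expiry = new Date(`${opts.expirationDate}T23:59:59Z`);
  // Past the expiration date, degrade to normal TLS rather than lock users out
  return (opts.now ?? new Date()).getTime() <= expiry.getTime();
}
```

The point of the injectable `now` is that the lockout scenario — the one you never want to discover in production — is testable.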
The secure API client wraps all of this into a clean interface:
// src/api/secureClient.ts
export function createSecureClient(config: SecureClientConfig) {
// After initializeSslPinning(), the standard fetch() is automatically pinned.
// No special fetch import needed — just use global fetch.
async function request<T>(
method: string,
path: string,
options?: RequestOptions,
) {
const headers: Record<string, string> = {
"Content-Type": "application/json",
};
// Inject auth token from SecureStore (not Redux, not AsyncStorage)
if (!options?.skipAuth) {
const token = await CredentialStore.getToken(
CREDENTIAL_KEYS.ACCESS_TOKEN,
);
if (token) headers["Authorization"] = `Bearer ${token}`;
}
// Anti-replay: nonce + timestamp
const nonce = await generateSecureHex(16);
headers["X-Request-Nonce"] = nonce;
headers["X-Request-Timestamp"] = Date.now().toString();
// Request signing for tamper detection
if (config.requestSigningKey) {
const payload = `${method}:${path}:${headers["X-Request-Timestamp"]}:${nonce}`;
headers["X-Request-Signature"] = await hmacSha256(
payload,
config.requestSigningKey,
);
}
    // Assemble URL and body (a baseUrl field on the config is assumed here;
    // the full SecureClientConfig lives in the repo)
    const url = `${config.baseUrl}${path}`;
    const body = options?.body ? JSON.stringify(options.body) : undefined;
    try {
      const response = await fetch(url, { method, headers, body });
      return response;
} catch (error) {
if (error instanceof SSLPinningError) {
// Potential MITM — log to security monitoring
throw createApiError("Secure connection failed.", { code: "SSL_PIN" });
}
throw error;
}
  }
  return { request };
}
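The signing scheme is easy to sanity-check outside the app. A sketch using Node's crypto as a stand-in for the `hmacSha256` helper above (payload format copied from the client; server-side you'd compare with a timing-safe equality such as `crypto.timingSafeEqual`):

```typescript
import { createHmac, randomBytes } from "node:crypto";

// HMAC-SHA256 over "METHOD:path:timestamp:nonce" — the server recomputes
// this with the shared key and rejects any mismatch (tampered request).
function signRequest(
  method: string,
  path: string,
  timestamp: string,
  nonce: string,
  signingKey: string,
): string {
  const payload = `${method}:${path}:${timestamp}:${nonce}`;
  return createHmac("sha256", signingKey).update(payload).digest("hex");
}

// Client side
const nonce = randomBytes(16).toString("hex");
const ts = Date.now().toString();
const signature = signRequest("POST", "/v1/transfer", ts, nonce, "demo-key");

// Server side: recompute with the same inputs and shared key
const expected = signRequest("POST", "/v1/transfer", ts, nonce, "demo-key");
// signature === expected → request untampered; any change to method,
// path, timestamp, or nonce produces a different signature.
```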
In the Demo App
The SSL Pinning screen makes real HTTPS requests to demonstrate the concept. First, a normal fetch to jsonplaceholder.typicode.com succeeds — proving the network works. Then, the same request with a deliberately wrong pin hash gets BLOCKED before any data is sent. An ASCII diagram shows how a proxy attack works with and without pinning.
Checklist
- [ ] SSL public key pinning enabled for all API endpoints
- [ ] Minimum 2 pin hashes configured (current + backup)
- [ ] `expirationDate` set as a safety valve
- [ ] Cert rotation plan documented and tested
- [ ] Pin validation tested on a real device through a proxy (should fail)
M6: Inadequate Privacy Controls
What data you're accidentally logging: Flipper (the React Native debugger) logs every network request by default — including auth headers, request bodies with PII, and response bodies with account data. Sentry and Crashlytics capture breadcrumbs that often include screen names ("PaymentScreen"), user actions ("entered amount: 5000"), and even full stack traces with variable values.
Real-World Incident: In 2018, the MyFitnessPal breach exposed 150 million accounts — Under Armour disclosed that usernames, email addresses, and hashed passwords were stolen. Forensic analysis revealed that sensitive data was accessible in plaintext in the app's cache directory at `/data/data/com.myfitnesspal/cache/` — extractable with one `adb shell` command on a rooted device. The data wasn't just in the database — it was in log files and cached API responses that nobody thought to encrypt. Your Sentry breadcrumbs and Flipper network logs are the React Native equivalent: unencrypted records of everything your user does, stored on disk.
// src/api/secureClient.ts
const SENSITIVE_FIELDS = [
"token",
"accessToken",
"refreshToken",
"password",
"pin",
"ssn",
"accountNumber",
"routingNumber",
"cardNumber",
"cvv",
];
export function maskSensitiveData(
  data: Record<string, any>,
): Record<string, any> {
  const masked = { ...data };
  for (const key of Object.keys(masked)) {
    if (
      SENSITIVE_FIELDS.some((f) => key.toLowerCase().includes(f.toLowerCase()))
    ) {
      const value = masked[key];
      masked[key] =
        typeof value === "string" && value.length > 4
          ? `***${value.slice(-4)}`
          : "***";
    } else if (Array.isArray(masked[key])) {
      // Recurse into arrays without flattening them into plain objects
      masked[key] = masked[key].map((item: any) =>
        typeof item === "object" && item !== null ? maskSensitiveData(item) : item,
      );
    } else if (typeof masked[key] === "object" && masked[key] !== null) {
      masked[key] = maskSensitiveData(masked[key]);
    }
  }
  return masked;
}
export function secureLog(label: string, data?: any): void {
  if (!__DEV__) return; // No logging in production
  if (data && typeof data === "object") {
    console.log(`[SECURE] ${label}`, maskSensitiveData(data));
  } else {
    // Label-only calls (e.g. screenshot-attempt events) still get logged
    console.log(`[SECURE] ${label}`);
  }
}
Screenshot & screen recording prevention (iOS + Android): expo-screen-capture handles both platforms with a single API. On Android, it sets FLAG_SECURE (blocks screenshots, recordings, AND blanks the app switcher preview). On iOS 13+, it uses the UITextField.isSecureTextEntry trick to block screenshots and recordings, plus a separate blur overlay for the app switcher.
npx expo install expo-screen-capture
// src/security/screenshot-prevention.ts
import * as ScreenCapture from "expo-screen-capture";
import { Platform } from "react-native";
import { useEffect, useRef } from "react";
/**
* Prevents screenshots + recordings while the component is mounted.
* Cleanup is automatic on unmount — other screens aren't affected.
*/
export function usePreventScreenCapture(key: string = "default"): void {
useEffect(() => {
ScreenCapture.preventScreenCaptureAsync(key);
return () => {
ScreenCapture.allowScreenCaptureAsync(key);
};
}, [key]);
}
/**
* Maximum protection: blocks capture + blurs app switcher preview.
* Use on payment, card details, OTP, and PIN screens.
*/
export function useSecureScreen(key: string = "default"): void {
useEffect(() => {
ScreenCapture.preventScreenCaptureAsync(key);
if (Platform.OS === "ios") {
ScreenCapture.enableAppSwitcherProtectionAsync(0.8);
}
return () => {
ScreenCapture.allowScreenCaptureAsync(key);
if (Platform.OS === "ios") {
ScreenCapture.disableAppSwitcherProtectionAsync();
}
};
}, [key]);
}
/**
* Detect screenshot attempts for monitoring/logging.
* Does NOT prevent — use alongside usePreventScreenCapture.
*/
export function useScreenshotDetection(onScreenshot: () => void): void {
const callbackRef = useRef(onScreenshot);
callbackRef.current = onScreenshot;
useEffect(() => {
const sub = ScreenCapture.addScreenshotListener(() =>
callbackRef.current(),
);
return () => sub.remove();
}, []);
}
Use it on any sensitive screen — one line:
// src/screens/SecurePaymentScreen.tsx
function PaymentForm() {
// Blocks screenshots, recordings, and app switcher preview (iOS + Android)
useSecureScreen("payment-screen");
// Log screenshot attempts to your SIEM
useScreenshotDetection(() => {
secureLog("Screenshot attempt on PaymentScreen");
Alert.alert(
"Screenshot Detected",
"For your security, please avoid capturing screenshots of payment information.",
);
});
// ... rest of the screen
}
Platform coverage matrix:
| Threat | Android | iOS |
| --- | --- | --- |
| Screenshots | `FLAG_SECURE` | `isSecureTextEntry` trick (iOS 13+) |
| Screen recordings | `FLAG_SECURE` | Same mechanism |
| App switcher preview | `FLAG_SECURE` (automatic) | `enableAppSwitcherProtectionAsync()` (blur) |
| AirPlay / screen mirror | N/A | Blocked by capture prevention |

Why per-screen, not app-wide: Blocking screenshots globally is hostile UX. Users should be able to screenshot their transaction history or support chat. Only block on screens showing card details, CVV, balances, payment confirmation, OTP entry, and PIN entry.
In the Demo App
Screenshot Prevention: Toggle protection on/off and take screenshots to see the difference. With protection disabled, your screenshot captures the full card number, CVV, and balance. Enable it, screenshot again — the capture is black. The detection counter increments each time. Background the app to see the app switcher blur.
PII Masking: The demo shows raw sensitive data (red block) next to the masked output (green block), field by field. Tokens, passwords, SSNs, and card numbers are replaced with *** plus last 4 characters. Non-sensitive fields like userName and email pass through unchanged.
Sentry/Crashlytics configuration for fintech:
// Don't send PII in crash reports
Sentry.init({
beforeSend(event) {
// Strip user data
if (event.user) {
delete event.user.email;
delete event.user.ip_address;
}
// Strip sensitive breadcrumb data
event.breadcrumbs = event.breadcrumbs?.map((b) => ({
...b,
data: b.data ? maskSensitiveData(b.data) : undefined,
}));
return event;
},
beforeBreadcrumb(breadcrumb) {
// Don't log network request bodies
if (breadcrumb.category === "fetch" || breadcrumb.category === "xhr") {
delete breadcrumb.data?.body;
delete breadcrumb.data?.response_body;
}
return breadcrumb;
},
});
Checklist
- [ ] PII masking applied to all log outputs (console, Sentry, Crashlytics)
- [ ] Network request/response bodies stripped from crash reports
- [ ] `useSecureScreen()` on payment, card, OTP, and PIN screens (iOS + Android)
- [ ] App switcher blur enabled on iOS via `enableAppSwitcherProtectionAsync()`
- [ ] Flipper and Reactotron disabled in production builds
M7: Insufficient Binary Protections
React Native's specific weakness: Your entire business logic ships as a JavaScript bundle (index.android.bundle / main.jsbundle). Without protections, anyone can extract and read it.
Real-World Incident: In 2022, the Lapsus$ group breached Samsung and leaked 190GB of source code — including device authentication logic, DRM modules, and Knox security platform source code. The leaked code was extracted, decompiled, and analyzed by attackers worldwide. Samsung confirmed the breach affected "source code relating to the operation of Galaxy devices" but stated no customer data was compromised. The damage, however, was done: proprietary security logic was now public, making targeted attacks on Samsung devices significantly easier. React Native apps face an even lower bar — your JavaScript bundle is essentially readable source code that can be extracted from any APK with
unzip and a text editor, no sophisticated decompilation required.
Hermes bytecode compilation is the first line of defense. Hermes compiles JavaScript to bytecode at build time, making casual reverse engineering harder (not impossible, but significantly harder than reading plain JavaScript):
// app.json
{
"expo": {
"jsEngine": "hermes"
}
}
ProGuard/R8 for Android obfuscates the native Java/Kotlin layer:
// android/app/build.gradle
def enableProguardInReleaseBuilds = true
android {
buildTypes {
release {
minifyEnabled enableProguardInReleaseBuilds
shrinkResources true
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
'proguard-rules.pro'
}
}
}
Strip source maps and console output in production: Source maps let anyone map your minified bundle back to the original TypeScript. Upload them to your crash reporting service, but never ship them in the production bundle. The Metro config below also strips console.log calls, which routinely leak tokens and request payloads:
// metro.config.js
const { getDefaultConfig } = require("expo/metro-config");
const config = getDefaultConfig(__dirname);
if (process.env.NODE_ENV === "production") {
config.transformer = {
...config.transformer,
minifierConfig: {
compress: {
drop_console: true, // Remove console.log statements
},
},
};
}
module.exports = config;
Disable debug mode in production: __DEV__ must be false in production builds. Metro sets this automatically, but verify it:
if (__DEV__) {
// This entire block is stripped from production bundles by Metro
console.log("Debug mode active");
}
Checklist
- [ ] Hermes engine enabled (bytecode compilation)
- [ ] ProGuard/R8 enabled for Android release builds
- [ ] Source maps uploaded to crash reporting service but NOT bundled in the app
- [ ] console.log statements stripped in production (drop_console: true)
- [ ] Debug mode verified as disabled in release builds
M8: Security Misconfiguration
Jailbreak/root detection without false positives. Here's the problem: if you just check isJailBroken() and block the app, your Android developers with unlocked bootloaders can't test the app. So they add if (__DEV__) return false and forget to remove it. Or worse, they disable the check entirely.
The solution: a proper developer bypass that's architecturally impossible to trigger in production.
// src/security/jailbreak-detection.ts
import { Platform } from "react-native";
import * as SecureStore from "expo-secure-store";

export interface DeviceSecurityStatus {
  isCompromised: boolean;
  isJailbroken: boolean;
  isRooted: boolean;
  isDebugged: boolean;
  reasons: string[];
  devBypassed: boolean;
}

// jail-monkey provides native jailbreak/root detection
let JailMonkey: any = null;
try {
  JailMonkey = require("jail-monkey");
} catch {
  // Expected in Expo Go — works in production builds
}
export async function checkDeviceSecurity(): Promise<DeviceSecurityStatus> {
const status: DeviceSecurityStatus = {
isCompromised: false,
isJailbroken: false,
isRooted: false,
isDebugged: false,
reasons: [],
devBypassed: false,
};
// Developer bypass — ONLY in __DEV__ builds (stripped by Metro in production)
if (__DEV__ && (await isDevBypassEnabled())) {
status.devBypassed = true;
return status;
}
if (!JailMonkey) {
if (!__DEV__) {
status.isCompromised = true;
status.reasons.push("Security module unavailable");
}
return status;
}
if (JailMonkey.isJailBroken()) {
status.isCompromised = true;
status.reasons.push(
Platform.OS === "ios"
? "Device appears to be jailbroken"
: "Device appears to be rooted",
);
}
if ((await JailMonkey.isDebuggedMode()) && !__DEV__) {
status.isCompromised = true;
status.reasons.push("Debugger detected");
}
return status;
}
The developer bypass system: The bypass token is stored in SecureStore and only checked when __DEV__ is true. Since __DEV__ is a compile-time constant that Metro sets to false in production builds, the entire bypass code path is dead-code-eliminated. Even if an attacker extracts and sets the bypass token, it has no effect in a production binary.
// src/security/jailbreak-detection.ts
const DEV_BYPASS_KEY = "fintech_dev_jailbreak_bypass";
export async function enableDevBypass(): Promise<void> {
  if (!__DEV__) {
    throw new Error("Dev bypass cannot be enabled in production");
  }
  await SecureStore.setItemAsync(DEV_BYPASS_KEY, "DEV_SECURITY_BYPASS_ACTIVE");
}
export async function isDevBypassEnabled(): Promise<boolean> {
  if (!__DEV__) return false; // defense in depth; callers also guard on __DEV__
  const token = await SecureStore.getItemAsync(DEV_BYPASS_KEY);
  return token === "DEV_SECURITY_BYPASS_ACTIVE";
}
// Call from your dev menu:
// enableDevBypass().then(() => console.log('Bypass enabled'));
Response strategy — don't just block: Different contexts warrant different responses:
export function getCompromisedAction(
status: DeviceSecurityStatus,
context: "payment" | "sensitive_data" | "app_launch" | "general",
): "block" | "warn" | "limit" | "allow" {
if (status.devBypassed) return "allow";
if (!status.isCompromised) return "allow";
switch (context) {
case "payment":
return "block"; // Never allow payments on compromised devices
case "sensitive_data":
return "block"; // Block financial data viewing
case "app_launch":
return "warn"; // Warn but let user see non-sensitive features
default:
return "warn";
}
}
OTA update security: If you use expo-updates, verify that updates are signed and come from your server. A compromised OTA update can inject malicious code without going through App Store review.
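With expo-updates, that verification is configured through code signing in your app config. A sketch (the certificate path, project URL, and keyid are placeholders for your own values):

```json
{
  "expo": {
    "updates": {
      "url": "https://u.expo.dev/your-project-id",
      "codeSigningCertificate": "./code-signing/certificate.pem",
      "codeSigningMetadata": {
        "keyid": "main",
        "alg": "rsa-v1_5-sha256"
      }
    }
  }
}
```

With this in place, the client rejects any update manifest that isn't signed by your private key, even if an attacker controls the update server.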
In the Demo App
Tap "Scan Device" and watch each check run: native module loaded, jailbreak/root status, debugger detection, __DEV__ guard, and the resulting payment action (ALLOWED or BLOCKED). On a clean simulator you'll see all green. On a jailbroken device, the jailbreak row turns red and payments are blocked.
Checklist
- [ ] Jailbreak/root detection active in production builds
- [ ] Developer bypass uses __DEV__ guard (dead-code-eliminated in production)
- [ ] Graduated response: block payments, warn on general use
- [ ] OTA updates configured with code signing verification
M9: Insecure Data Storage
The AsyncStorage trap. I cannot stress this enough: AsyncStorage is not encrypted. It stores data as plaintext. On Android, it's a SQLite database in the app's data directory. On iOS, it's files in the app's Library/Application Support directory. A rooted device, an unencrypted backup, or a forensic extraction tool can read everything.
Here's a tiered storage strategy:
Tier 1 — Credentials (Keychain/Keystore): Use expo-secure-store for tokens, API keys, PII, and encryption keys. Backed by hardware encryption on devices with Secure Enclave (iOS) or StrongBox (Android).
Tier 2 — App data (encrypted MMKV): Use react-native-mmkv with an encryption key stored in Tier 1. 30x faster than AsyncStorage, and encrypted at rest.
// src/security/secure-storage.ts
import { MMKV } from "react-native-mmkv";
import * as SecureStore from "expo-secure-store";
import * as Crypto from "expo-crypto";
async function getOrCreateEncryptionKey(): Promise<string> {
let key = await SecureStore.getItemAsync("fintech_encryption_key");
if (!key) {
// Generate 256-bit key with CSPRNG
const randomBytes = await Crypto.getRandomBytesAsync(32);
key = Array.from(new Uint8Array(randomBytes))
.map((b) => b.toString(16).padStart(2, "0"))
.join("");
await SecureStore.setItemAsync("fintech_encryption_key", key, {
keychainAccessible: SecureStore.WHEN_UNLOCKED_THIS_DEVICE_ONLY,
});
}
return key;
}
export async function getEncryptedStorage(): Promise<MMKV> {
const encryptionKey = await getOrCreateEncryptionKey();
return new MMKV({
id: "fintech-encrypted-storage",
encryptionKey,
});
}
Tier 3 — Structured data (encrypted SQLite): For transaction history, contacts, etc., use expo-sqlite with column-level encryption. Encrypt sensitive fields before writing:
import { encryptForStorage } from "./crypto-utils";
// Encrypt sensitive columns before storage
const encryptedName = await encryptForStorage(accountName, encryptionKey);
await db.runAsync(
  "INSERT INTO accounts (id, name_encrypted, type) VALUES (?, ?, ?)",
  [accountId, encryptedName, accountType],
);
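The encryptForStorage helper lives in the repo's crypto-utils; as a reference point, here is a hedged sketch using the Node-style crypto API, which react-native-quick-crypto mirrors on-device. AES-256-GCM is my assumption (it gives both confidentiality and tamper detection), and it's shown synchronously for clarity, so awaiting it still works:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Sketch only; the repo's cipher choice and encoding may differ.
// hexKey is the 64-char hex key from getOrCreateEncryptionKey().
export function encryptForStorage(plaintext: string, hexKey: string): string {
  const key = Buffer.from(hexKey, "hex"); // 32 bytes for AES-256
  const iv = randomBytes(12); // fresh 96-bit IV per value
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  // Persist IV and auth tag with the ciphertext; all three are needed to decrypt
  return [iv, cipher.getAuthTag(), ciphertext]
    .map((b) => b.toString("hex"))
    .join(":");
}

export function decryptFromStorage(blob: string, hexKey: string): string {
  const [ivHex, tagHex, ctHex] = blob.split(":");
  const decipher = createDecipheriv(
    "aes-256-gcm",
    Buffer.from(hexKey, "hex"),
    Buffer.from(ivHex, "hex"),
  );
  decipher.setAuthTag(Buffer.from(tagHex, "hex")); // GCM rejects tampered data
  return Buffer.concat([
    decipher.update(Buffer.from(ctHex, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```

The auth tag matters: with GCM, a modified name_encrypted column fails decryption loudly instead of silently producing garbage.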
What stays in AsyncStorage: Theme preference. Language selection. Whether the user has seen the onboarding carousel. That's it.
Checklist
- [ ] Tokens and PII in expo-secure-store (Keychain/Keystore)
- [ ] App data in encrypted MMKV with Keychain-stored encryption key
- [ ] Sensitive SQLite columns encrypted before storage
- [ ] AsyncStorage used ONLY for non-sensitive preferences
- [ ] redux-persist storage backend changed from AsyncStorage to SecureStore for auth state
M10: Insufficient Cryptography
Math.random() is not cryptographically secure. It uses a PRNG (Pseudo-Random Number Generator) seeded from system time. If you use it for transaction IDs, OTP generation, or anything security-sensitive, an attacker can predict the sequence.
Real-World Incident: In 2018, Twitter disclosed that it had been storing passwords in plaintext in an internal log — affecting all 330 million users. The passwords were written to logs before the hashing function was applied, meaning anyone with access to the log system could read them. Twitter urged all users to change their passwords. The root cause: the hashing step was in the wrong place in the pipeline. In the same vein, Adobe's 2013 breach initially reported 2.9 million credit card records stolen, but later analysis revealed 153 million user accounts were affected. The passwords were encrypted with 3DES in ECB mode — which preserves patterns, meaning identical passwords produced identical ciphertext — instead of properly hashed with bcrypt or Argon2. Security researchers could cross-reference the password hints with the repeated ciphertext blocks to crack millions of passwords without even breaking the encryption. Weak cryptography isn't just using the wrong algorithm; it's using the right algorithm in the wrong way.
Math.random() for transaction IDs, SHA-1 for password hashing, or AES with a hardcoded key are all variations of the same mistake: cryptography that looks secure but isn't.
// ❌ WRONG — Predictable "random" IDs
const transactionId = Math.random().toString(36).substring(2);
const otp = Math.floor(Math.random() * 1000000)
.toString()
.padStart(6, "0");
// ✅ RIGHT — Cryptographically secure random
import * as Crypto from "expo-crypto";
export async function generateSecureHex(
byteCount: number = 16,
): Promise<string> {
const bytes = await Crypto.getRandomBytesAsync(byteCount);
return Array.from(new Uint8Array(bytes))
.map((b) => b.toString(16).padStart(2, "0"))
.join("");
}
export async function generateSecureId(length: number = 32): Promise<string> {
const bytes = await Crypto.getRandomBytesAsync(length);
const charset =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
return Array.from(new Uint8Array(bytes))
.map((b) => charset[b % charset.length])
.join("");
}
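For OTPs specifically, drawing a uint32 and taking it mod 10^6 carries a small modulo bias; rejection sampling removes it. This helper is a hypothetical addition (not from the repo) that does the conversion as a pure function, so you can feed it bytes from Crypto.getRandomBytesAsync(4) and redraw on null:

```typescript
// Hypothetical helper: convert 4 CSPRNG bytes into an unbiased n-digit OTP.
// Returns null when the value falls in the biased tail; the caller redraws.
export function bytesToOtp(bytes: Uint8Array, digits: number = 6): string | null {
  const max = 10 ** digits;
  // Largest multiple of `max` that fits in a uint32; values at or above it are rejected
  const limit = Math.floor(0xffffffff / max) * max;
  const value =
    bytes[0] * 0x1000000 + bytes[1] * 0x10000 + bytes[2] * 0x100 + bytes[3];
  if (value >= limit) return null;
  return (value % max).toString().padStart(digits, "0");
}
```

The rejection rate is tiny (under 0.03% for 6 digits), so the retry loop almost never runs twice.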
Key derivation — PBKDF2, not bare hashing:
// ❌ WRONG — No stretching, vulnerable to brute force
const key = await sha256(userPin); // 4-digit PIN = 10,000 attempts
// ✅ RIGHT — Iterated key derivation with salt
export async function deriveKey(
input: string,
salt: string,
iterations: number = 10000,
): Promise<string> {
let derived = `${salt}:${input}`;
for (let i = 0; i < iterations; i++) {
derived = await Crypto.digestStringAsync(
Crypto.CryptoDigestAlgorithm.SHA256,
derived,
);
}
return derived;
}
Production note: The iterated SHA-256 approach above is a PBKDF2 approximation for environments where native PBKDF2 isn't available. For production, use
react-native-quick-crypto, which provides proper PBKDF2 with configurable iterations. OWASP 2024 recommends 600,000 iterations for PBKDF2-SHA256. In a pure expo-crypto environment, 10,000 iterations of SHA-256 is a reasonable floor.
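Because react-native-quick-crypto aims to mirror Node's crypto API, the production version is close to a one-liner. This sketch uses Node's built-in crypto with the same signature (function names beyond pbkdf2Sync/randomBytes are my own; in the app, swap the import for react-native-quick-crypto):

```typescript
import { pbkdf2Sync, randomBytes } from "crypto";

// OWASP 2024 baseline for PBKDF2-HMAC-SHA256
const ITERATIONS = 600_000;

export function newSaltHex(): string {
  return randomBytes(16).toString("hex"); // store alongside the derived key
}

export function deriveKeyPbkdf2(pin: string, saltHex: string): string {
  // 32-byte output = a 256-bit key, returned as hex
  return pbkdf2Sync(pin, Buffer.from(saltHex, "hex"), ITERATIONS, 32, "sha256")
    .toString("hex");
}
```

Each derivation takes a noticeable fraction of a second, which is the point: a 4-digit PIN space becomes expensive to brute-force offline.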
In the Demo App
The Crypto Utils screen puts Math.random() and CSPRNG side by side. Tap "Generate 5 Random Values" — the left column (red) shows predictable PRNG output, the right column (green) shows hardware-backed random bytes. Below that, type any text and hash it with SHA-256 in real time. The key derivation demo shows the intentional delay — tap "Derive Key from PIN 1234" and watch it take 100+ milliseconds for 1000 iterations.
Checklist
- [ ] Math.random() never used for security-sensitive values
- [ ] All random generation uses expo-crypto CSPRNG
- [ ] Key derivation uses iterated hashing with salt (PBKDF2 or equivalent)
- [ ] No MD5 or SHA1 usage anywhere in the codebase
- [ ] Encryption keys generated with CSPRNG and stored in Keychain/Keystore
The Full Checklist
Print this. Tape it to your monitor. Check every item before submitting for security review.
| # | Control | Action | Status |
|---|---|---|---|
| M1 | Credential Storage | Tokens in Keychain/Keystore via expo-secure-store, not AsyncStorage | |
| M1 | Session Timeout | 5-min inactivity + 2-min background timeout enforced | |
| M2 | Supply Chain | npm audit in CI, exact versions for security packages | |
| M2 | Dependency Review | Native module permissions reviewed before adding packages | |
| M3 | Biometric Auth | Gates on all sensitive screens, not just login | |
| M3 | Auth Fallback | Max attempt limits, tested on real devices | |
| M4 | SQL Injection | Parameterized queries for all SQLite operations | |
| M4 | Deep Links | Validated against scheme/host allowlist | |
| M4 | Input Validation | Amount range, precision, and format checks for payments | |
| M5 | SSL Pinning | Public key hash pinning with 2+ pins (current + backup) | |
| M5 | Cert Rotation | expirationDate safety valve + rotation plan documented | |
| M6 | PII in Logs | Sensitive fields masked in all log outputs | |
| M6 | Crash Reports | Request/response bodies stripped from Sentry/Crashlytics | |
| M6 | Screenshots | useSecureScreen() via expo-screen-capture on iOS + Android | |
| M7 | Hermes | Bytecode compilation enabled | |
| M7 | ProGuard | R8/ProGuard enabled for Android release builds | |
| M7 | Source Maps | Stripped from production bundle, uploaded to crash service only | |
| M7 | Debug Mode | console.log stripped, __DEV__ verified false in release | |
| M8 | Jailbreak | Detection active with developer bypass via __DEV__ guard | |
| M8 | Response Strategy | Graduated: block payments, warn on general use | |
| M8 | OTA Security | Code signing verification for expo-updates | |
| M9 | Tier 1 Storage | Credentials in Keychain/Keystore | |
| M9 | Tier 2 Storage | App data in encrypted MMKV | |
| M9 | Tier 3 Storage | SQLite with column-level encryption | |
| M9 | AsyncStorage | Used ONLY for non-sensitive preferences | |
| M10 | CSPRNG | expo-crypto for all random generation | |
| M10 | Key Derivation | Iterated SHA-256 with salt (or PBKDF2 via quick-crypto) | |
| M10 | No Weak Crypto | No MD5, no SHA1, no Math.random() for security | |
What a Security Audit Actually Looks For
I've worked with pen testing teams on RN apps. Here's what they check first — in order:
1. Network traffic interception. They install a proxy (Burp Suite, mitmproxy) on a rooted device with a custom CA certificate. If they can see plaintext API traffic, that's a critical finding. SSL pinning stops this — it's the first thing they test.
2. Local data extraction. They pull the app's data directory from a rooted device and search for tokens, PII, and financial data in plaintext. AsyncStorage is the first place they look. Shared preferences is second. If your tokens are in either, that's another critical.
3. Binary analysis. They extract the APK/IPA, unzip it, and examine the JavaScript bundle. They search for hardcoded API keys, endpoint URLs, secret keys, and validation logic. Hermes bytecode makes this harder but not impossible — tools like hermes-dec can decompile Hermes bytecode back to readable JavaScript.
4. Runtime manipulation. On a jailbroken device, they use Frida or Objection to hook into your app at runtime. They can bypass biometric checks, modify function return values, and intercept encrypted data before it's encrypted. Jailbreak detection catches this scenario — or at least makes it harder and shows your auditors you're trying.
5. Debug artifacts and extraneous functionality. They look for hidden API endpoints, debug parameters (?isAdmin=true — see the Peloton incident above), console logging in production, and leftover test screens. They'll also check if Flipper, Reactotron, or the React Native dev menu is accessible in your release build.
Tools they'll use against your app:
| Tool | What It Does | What It Finds |
|---|---|---|
| MobSF | Static + dynamic analysis | Exported components, hardcoded keys, insecure storage |
| Frida | Runtime hooking | Bypass biometrics, modify function returns, intercept decrypted data |
| Objection | Runtime exploration (built on Frida) | Dump Keychain, bypass SSL pinning, explore the filesystem |
| Burp Suite | HTTP proxy + scanner | API vulnerabilities, missing pinning, auth bypass |
| Jadx / hermes-dec | Decompiler | Hardcoded secrets, business logic, API endpoints in your bundle |
| Drozer | Android IPC testing | Exposed Intents, Content Providers, Broadcast Receivers |
Beyond build-time protection — RASP: Consider deploying Runtime Application Self-Protection (RASP) for your highest-security flows. Unlike jailbreak detection which checks once at launch, RASP monitors continuously for code injection, function hooking (Frida), debugger attachment, and memory tampering. Tools like Talsec freeRASP provide a React Native SDK. This doesn't replace the controls in this article — it adds a real-time detection layer on top of them.
The common thread: pen testers assume a rooted/jailbroken device. They assume the attacker has physical access. They assume the network is hostile. Your security controls need to hold up under all three assumptions simultaneously.
Conclusion
Security in React Native fintech isn't one big thing — it's 30 small things done right. No single control is sufficient. SSL pinning without secure storage is pointless (the attacker reads the token from disk). Secure storage without session timeout is pointless (a stolen token works forever). Biometric auth without jailbreak detection is pointless (Frida bypasses it in 5 minutes).
The code in this article is real, tested, and production-ready. Every file exists in the companion repo with full implementations. Clone it, adapt it to your domain, and use the checklist above as your pre-audit scorecard.
If you're about to go through a security audit for your RN fintech app, start with M1 (credential storage) and M5 (SSL pinning). Those two controls address the most common critical findings. Then work through the rest of the list. Your pen testers will thank you — or at least, they'll have to work harder to find something.
Companion repo: github.com/FastheDeveloper/owasp-rn-fintech
Built with expo-secure-store@15.x, expo-crypto@15.x, expo-local-authentication@17.x, react-native-ssl-public-key-pinning@1.2.x, react-native-mmkv@4.x, and jail-monkey@3.x.



