Password managers often get discussed at product level: sync, UX, pricing, recovery, browser extensions.
What interested me more in OneRule was lower in the stack:
- what "offline-first" actually means in implementation
- how local key derivation is handled
- where encryption boundaries sit
- how Flutter and native Android divide responsibility
OneRule is an open-source Android password manager built with Flutter. It does not require an account or backend for core vault operations. The interesting part is not only that it works offline, but that the architecture is intentionally shaped around that constraint.
This post is a technical walkthrough of the current implementation: the storage model, key flow, backup format, panic mode behavior, and the Android Autofill boundary.
Architectural Shape
At the code level, OneRule follows a fairly direct layered structure:
- Flutter UI for screens, forms, and interaction flows
- Provider-based state for app-level reactive data
- service and facade classes for storage, crypto, backup, auth, clipboard, and platform integration
- native Android code where platform APIs are necessary, especially Autofill
In practical terms, the flow looks like this:
UI/screens
-> Provider state
-> service/facade layer
-> SQLCipher / secure storage / crypto / platform channels
This is not an especially exotic architecture, but it fits security-sensitive mobile software well. UI code stays mostly unaware of encryption details. Providers coordinate visible app state. Services own the mechanics of vault initialization, field encryption, backup creation, and unlock behavior.
For a Flutter app, that separation matters because it reduces chances that sensitive logic leaks into widget code and becomes harder to reason about.
The startup path is also explicit about local protections. On launch, the app initializes error handling and reminder services, then enables screen-capture protections before rendering UI:
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await LocalLogService.instance.initializeGlobalErrorHandlers();
  await BackupReminderService.instance.initialize();
  if (Platform.isAndroid || Platform.isIOS) {
    await ScreenProtector.preventScreenshotOn();
    await ScreenProtector.protectDataLeakageOn();
    await ScreenProtector.protectDataLeakageWithBlur();
  }
  runApp(/* providers + app */);
}
That is small, but it shows something important about OneRule: operational security controls are part of app bootstrap, not optional polish added later.
Data-at-Rest Model
The vault is not stored as plaintext records in local SQLite.
OneRule uses SQLCipher-backed SQLite as the primary vault store. The database file is encrypted at rest, and the SQLCipher password is derived from the active in-memory session key.
But the design does not stop there.
The password field is also encrypted again at the application level using AES-256-GCM. So the current storage model is effectively:
- encrypted database file via SQLCipher
- encrypted sensitive field payloads inside that database
That second layer is important. It means the most sensitive credential field is stored and serialized as an authenticated encrypted envelope, not only as a row protected by database-level encryption. When the Flutter app needs to render or analyze entries, it decrypts that field at read time.
The current field payload format is versioned and structured like this:
or1:v2:gcm:<nonce_b64url>:<cipherText_b64url>:<tag_b64url>
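A parser for this layout is only a few lines. Here is a minimal sketch in Python (the app itself is Dart, and the function and field names here are my own, not OneRule's):

```python
import base64


def parse_envelope(payload: str) -> dict:
    """Parse an or1:v2:gcm field envelope into its components.

    Raises ValueError on anything that does not match the expected
    versioned layout, so callers can fall back to legacy readers.
    """
    parts = payload.split(":")
    if len(parts) != 6 or parts[:3] != ["or1", "v2", "gcm"]:
        raise ValueError("not a v2 GCM envelope")

    def b64url_decode(s: str) -> bytes:
        # base64url segments may arrive unpadded; restore padding first
        return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

    nonce, ciphertext, tag = (b64url_decode(p) for p in parts[3:6])
    return {"nonce": nonce, "ciphertext": ciphertext, "tag": tag}
```

The strict prefix check is the point: anything that fails it is, by definition, a legacy payload and gets routed to older readers instead.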
Using a versioned envelope instead of opaque bytes makes future migrations much easier. It also gives the app a clean way to distinguish between:
- current AES-GCM records
- legacy AES-CBC records
- pre-envelope data from older versions
That matters because OneRule includes migration logic and does not assume the vault will always start in the latest format.
For backups, the code takes the same approach: explicit envelope, explicit metadata, explicit version.
Key Derivation and Session Model
The master PIN is not used directly as an encryption key.
Instead, OneRule derives key material with PBKDF2-HMAC-SHA256. The current implementation uses:
- 100,000 iterations for interactive PIN unlock/session derivation
- 256-bit output
- 16-byte random salts
Backup encryption uses a separate derivation profile:
- PBKDF2-HMAC-SHA256
- 200,000 iterations
- 256-bit output
- 16-byte random salt embedded in backup payload
That separation is a good design choice. Unlock flows need acceptable latency on-device. Backup export and import can afford a higher work factor because they are less frequent and less latency-sensitive.
The session model is also worth calling out:
- the active vault session key is kept in memory
- SQLCipher password is derived from that key
- secure storage holds salts, verifiers, and biometric-restorable material
- session key is cleared on logout and lock paths
So the app has a fairly explicit distinction between:
- persistent verification or recovery metadata
- active decrypted session state
That is small but important. A lot of local-first security design is really state-lifecycle design.
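That lifecycle distinction can be sketched in a few lines (hypothetical class and method names, Python rather than Dart): persistent verification metadata lives elsewhere, while the session key exists only between unlock and lock.

```python
class VaultSession:
    """Minimal model of the unlock/lock lifecycle: the derived key
    lives only in memory while the vault is open."""

    def __init__(self):
        self._session_key: bytes | None = None  # never persisted

    def unlock(self, derived_key: bytes) -> None:
        self._session_key = derived_key

    def lock(self) -> None:
        # Clearing the reference is the critical invariant: every
        # logout and auto-lock path must end here.
        self._session_key = None

    @property
    def is_unlocked(self) -> bool:
        return self._session_key is not None
```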
The backup derivation path is straightforward and readable in code:
Future<SecretKey> _deriveKey(String passphrase, List<int> salt) async {
  final pbkdf2 = Pbkdf2(
    macAlgorithm: Hmac.sha256(),
    iterations: _pbkdf2Iterations,
    bits: 256,
  );
  return pbkdf2.deriveKey(
    secretKey: SecretKey(utf8.encode(passphrase)),
    nonce: salt,
  );
}
Then the payload is encrypted with AES-GCM using a fresh salt and nonce:
final salt = _randomBytes(16);
final nonce = _randomBytes(12);
final key = await _deriveKey(passphrase, salt);
final secretBox = await _gcm.encrypt(
  utf8.encode(plaintext),
  secretKey: key,
  nonce: nonce,
);
Why SQLCipher Plus Field-Level AES-GCM?
A reasonable question is: if SQLCipher already encrypts the database file at rest, why encrypt the password field again?
The answer is separation of concerns.
SQLCipher protects the database as a container. Field-level AES-GCM protects the most sensitive payloads as application-managed cryptographic objects with explicit integrity tags and versioned envelopes.
That gives OneRule a few advantages:
- clean migration path from old formats to new ones
- explicit tamper detection at field level
- easier reuse of encrypted credential payloads outside normal Flutter list rendering
- tighter control over what can be serialized or handed to native layers
The Android Autofill integration benefits directly from this. More on that later.
Vault Migration Strategy
OneRule still contains migration support for older storage models.
There are two migration stories in the current codebase:
1. Legacy vault backend migration
Older vault data could exist in Hive-based storage. Current architecture uses SQLCipher, and the app includes a one-time migration reader that imports legacy records into encrypted SQLite, then removes old box data and marks migration complete.
2. Legacy cipher format migration
Password payloads and backup payloads have moved from older layouts into AES-GCM-based envelopes.
For vault records, the app can still read:
- legacy CBC envelope format
- older marker-based GCM payloads
- older plaintext-style legacy values
On unlock, rows are scanned and migrated into the current or1:v2:gcm format. Migration is designed to be non-lossy:
- read legacy payload
- decrypt or parse
- re-encrypt in latest format
- persist migrated value
If crypto validation fails, migration hard-fails rather than overwriting questionable data.
That is exactly what you want in security migration code. Silent best-effort conversion is dangerous when integrity matters.
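The read-decrypt-re-encrypt loop above can be sketched as a classifier plus a migration step. This is an illustrative Python sketch, not OneRule's code: the or1:v2:gcm prefix is real, but the legacy branch conditions and the injected encrypt/decrypt callables are placeholders.

```python
def classify_payload(value: str) -> str:
    """Decide which reader a stored password payload needs."""
    if value.startswith("or1:v2:gcm:"):
        return "current"
    if value.startswith("or1:"):      # older versioned envelope (e.g. CBC)
        return "legacy-envelope"
    return "legacy-plain"             # pre-envelope data from older versions


def migrate(value: str, decrypt_legacy, encrypt_current) -> str:
    """Re-encrypt a legacy payload into the current format.

    decrypt_legacy must raise on validation failure: the migration
    hard-fails rather than overwriting questionable data.
    """
    if classify_payload(value) == "current":
        return value                       # already in the latest format
    plaintext = decrypt_legacy(value)      # raises if crypto validation fails
    return encrypt_current(plaintext)      # persist only a verified re-encryption
```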
Backup Format and Cryptographic Container
Offline-first apps still need a portability story. Otherwise "no sync" quickly turns into "single-device risk."
OneRule handles that with encrypted backup export and import. The current export format is a JSON-based .enc container with schema version v: 3.
The backup pipeline looks like this:
- collect vault records
- serialize plaintext JSON
- derive encryption key from backup passphrase using PBKDF2-HMAC-SHA256
- encrypt with AES-256-GCM using random nonce
- emit structured JSON container with metadata
The container includes:
- KDF metadata
- cipher metadata
- salt
- authenticated encryption envelope
- creation timestamp
- item count
That is a much better design than "just encrypt some bytes and hope future versions remember how." Explicit metadata makes forward migration, compatibility handling, and debugging safer.
The current backup writer emits a structured object like:
return {
  'v': _backupSchemaVersion,
  'envelope': {
    'format': 'onerule-backup',
    'version': 'v3',
    'algorithm': _gcmAlgorithmName,
    'nonce': base64UrlEncode(secretBox.nonce),
    'cipherText': base64UrlEncode(secretBox.cipherText),
    'tag': base64UrlEncode(secretBox.mac.bytes),
  },
  'salt': base64UrlEncode(salt),
  'kdf': {
    'algorithm': 'PBKDF2-HMAC-SHA256',
    'iterations': _pbkdf2Iterations,
    'keyBits': 256,
  },
};
That snippet captures the design philosophy well: crypto parameters travel with the container, not only in developer memory.
The import path is also backward-compatible with:
- current .enc encrypted backups
- the legacy .onerule extension
- previous schema v: 2
- previous schema v: 1, including the legacy CBC layout
Again, the pattern here is versioned crypto container plus explicit readers, not hidden magic.
Panic PIN and Privacy Flow
Panic mode is one of the more interesting product decisions because it crosses UX, threat modeling, and implementation details.
OneRule supports a separate Panic PIN verifier. When this flow is used, the app avoids exposing the real vault and enters a privacy-focused decoy mode.
There are two implementation details worth noting:
- provider state sets panic mode and suppresses real vault data
- home screen renders decoy credential entries for that mode
That means panic mode is not implemented as "delete everything" or "rename list to empty." It is an explicit alternate UI and state flow.
This is a subtle but meaningful distinction. Security-sensitive UX should not depend on the user remembering whether a blank screen is real, broken, or intentional.
The app also warns users that panic mode is a decoy flow and can be confusing if triggered accidentally. That kind of warning is good product engineering. Safety features that are easy to misunderstand become liabilities.
Android Autofill Boundary
Password managers live or die on ergonomics, and Android Autofill is one place where architecture decisions show up clearly.
OneRule uses a native Android AutofillService for API 26+ and bridges it from Flutter over a platform channel. The relevant boundary is important:
- Flutter owns normal vault UX and decrypted entry access after unlock
- Flutter syncs an encrypted credential snapshot to native storage
- Flutter also provides a session key to native layer for active unlock window
- native Autofill service performs decryption only when it needs to fill a request
That means Autofill is not backed by a permanently stored plaintext cache. Instead, encrypted envelopes are synced to native side, and usable decryption depends on active session state.
The platform methods reflect that design:
- syncAutofillCredentialSnapshot
- setAutofillSessionKey
- clearAutofillSessionKey
If the session key is cleared on lock/logout, Autofill stops returning datasets.
The Flutter-side bridge is intentionally thin:
Future<void> syncCredentialSnapshot({
  required String payload,
  required String sessionKeyBase64,
}) async {
  final state = await availability();
  if (state != AutofillMvpAvailability.available) {
    return;
  }
  await _bridge.setSessionKey(sessionKeyBase64);
  await _bridge.syncCredentialSnapshot(payload);
}
That sequence matters. The native side receives the encrypted snapshot plus the active session key only when the feature is available and the vault is in a usable state.
This is a strong boundary design because it minimizes plaintext residency and ties Autofill usefulness to vault unlock state rather than treating Autofill as a separate always-on data channel.
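The gating logic on the native side reduces to a simple invariant: no active session key, no datasets. A hedged Python sketch of that invariant (the function name and injected decrypt callable are illustrative; the real service is native Android code):

```python
def build_datasets(snapshot: list, session_key, decrypt) -> list:
    """Return fillable credentials only while a session key is present.

    snapshot holds encrypted envelopes synced from Flutter; decrypt is
    whatever authenticated decryption routine the native side uses.
    """
    if session_key is None:
        return []  # locked vault -> Autofill offers nothing to fill
    return [decrypt(item, session_key) for item in snapshot]
```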
Locking, Clipboard, and Leakage Controls
Security work is often ruined by small post-decrypt leaks rather than broken cryptography.
OneRule includes several controls in this category:
- screenshot prevention and screen-capture leakage protection on mobile
- auto-lock timeout based on app lifecycle transitions
- clipboard auto-clear after configurable delay
- optional application of clipboard policy to usernames as well
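The clipboard auto-clear control above amounts to scheduling a deferred wipe when a value is copied. A minimal Python sketch of that pattern, with the platform clipboard calls injected as callables (names and structure are mine, not OneRule's):

```python
import threading


def copy_with_auto_clear(set_clipboard, clear_clipboard,
                         delay_seconds: float, value: str) -> threading.Timer:
    """Place a value on the clipboard and schedule its removal.

    set_clipboard/clear_clipboard are platform hooks; delay_seconds
    corresponds to the app's configurable auto-clear delay.
    """
    set_clipboard(value)
    timer = threading.Timer(delay_seconds, clear_clipboard)
    timer.daemon = True  # never keep the process alive just to clear
    timer.start()
    return timer
```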
The lifecycle lock path is especially relevant in Flutter apps, where it is easy to focus on route state and forget process state. OneRule tracks pause/resume transitions and clears session state when idle time exceeds configured threshold.
The relevant branch in the lifecycle observer is simple and effective:
if (diff.inSeconds > timeoutSeconds) {
  storage.clearSessionKey();
  unawaited(PlatformCredentialProvider().onVaultLocked());
  unawaited(context.read<PasswordProvider>().lockForSession());
  Navigator.of(context).pushAndRemoveUntil(
    MaterialPageRoute(builder: (_) => const LoginScreen()),
    (_) => false,
  );
}
That is exactly the kind of code I like seeing in security-sensitive mobile apps: short, direct, and tied to explicit state invalidation.
That is not glamorous, but it is real security work.
Vault Health Analysis
OneRule also includes a local "Vault Health" analyzer. This is not cryptography, but it is useful security-adjacent logic.
The current report model scores the vault from 0 to 100 based on:
- weak passwords
- exact-match duplicate passwords
- stale credentials
Duplicate detection uses local hash-based comparison rather than remote checks, which keeps the analysis offline and aligned with the rest of the system design.
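A local analyzer of this kind is easy to sketch with nothing but the standard library. This Python sketch keeps the 0-100 scale from the report model, but the weak-password heuristic and penalty weights are illustrative, not OneRule's actual scoring:

```python
import hashlib
from collections import Counter


def vault_health_score(passwords: list, weak_threshold: int = 8) -> int:
    """Score a vault 0-100 from weak and exact-duplicate passwords."""
    if not passwords:
        return 100
    # Duplicate detection via local hashes: no password leaves the device
    digests = [hashlib.sha256(p.encode("utf-8")).hexdigest() for p in passwords]
    duplicates = sum(c - 1 for c in Counter(digests).values() if c > 1)
    # Illustrative weakness heuristic: shorter than the threshold
    weak = sum(1 for p in passwords if len(p) < weak_threshold)
    penalty = (duplicates + weak) * 100 // len(passwords)
    return max(0, 100 - penalty)
```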
This is a good example of how offline-first does not need to mean feature-light. You can still deliver useful security insights locally if the data model and processing path are designed for it.
Why Flutter Works Here
Security software is often assumed to require a fully native implementation, but OneRule is a good example of when Flutter is enough for most of the stack and native code is reserved for true platform edges.
Flutter handles:
- application UI
- state management
- settings and backup flows
- vault rendering and search
- generator and health screens
Native Android handles:
- Autofill service
- platform-specific storage and bridge responsibilities
- native crash logging hooks
That is a practical split.
Flutter gives fast iteration and a unified codebase. Native code stays focused on surfaces where Android APIs are non-negotiable. For a solo-built security product, that balance can make a big difference in maintainability.
Threat Model Boundaries
Good security writeups should include limits, not only protections.
OneRule is designed to defend against:
- offline access to copied app data without the derived key
- theft or exfiltration of encrypted database files
- tampering of AES-GCM protected vault fields and backups
It is not designed to fully defend against:
- rooted or fully compromised operating systems
- malware with runtime access while vault is unlocked
- PIN disclosure by user coercion, phishing, or keylogging
- clipboard interception before auto-clear executes
That is a reasonable and honest scope. Offline encryption can do a lot. It cannot fix a hostile runtime after the trust boundary is already lost.
Why This Project Is Interesting
A lot of apps claim privacy because they do not have obvious ads or trackers. OneRule is more interesting than that because privacy is expressed in structure:
- no backend dependency for core vault flows
- explicit key derivation
- versioned encrypted payloads
- layered storage protection
- local-only backup generation
- native Autofill integration without persistent plaintext snapshot model
This is what I like about the project technically. The product promise and implementation choices mostly line up.
That is rarer than it should be.
Closing
If you want to study a Flutter security app that takes local-first architecture seriously, OneRule is worth reading through.
It is not trying to out-feature every cloud password manager. It is solving a narrower problem: how to build a usable password vault where local storage, transparent cryptographic boundaries, and offline operation are first-class constraints rather than marketing copy.
Project links:
- GitHub: https://github.com/seralifatih/OneRule
- Google Play: https://play.google.com/store/apps/details?id=com.fidevelopment.onerule
If you build security software in Flutter, this kind of project is a useful reminder that "mobile-first" and "security-conscious" do not have to be opposites. With careful boundaries, versioned crypto payloads, and disciplined state handling, a local-first password manager can stay practical without outsourcing trust to infrastructure.