I was going through the Updraft Smart Contract Security course, and I realized I had been doing some things the wrong way.
Here are a couple of things I started doing differently.
I am basing this article on the PasswordStore repo here: 3-passwordstore-audit.
Definitely check it out. It is only about 20 lines of contract code; that figure comes from nSLOC/CLOC, metrics that tell you how much actual code is in the codebase.
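To make "actual code" concrete, here is a rough sketch of what an nSLOC counter does: drop blank lines and comments, count the rest. This is a simplified illustration in Python, not the real cloc tool, and the sample contract snippet is made up for the example.

```python
def nsloc(source: str) -> int:
    # Count non-blank, non-comment lines (a rough nSLOC;
    # block comments are handled only in the simple cases).
    count = 0
    in_block = False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block:
            if "*/" in stripped:
                in_block = False
            continue
        if not stripped or stripped.startswith("//"):
            continue
        if stripped.startswith("/*"):
            if "*/" not in stripped:
                in_block = True
            continue
        count += 1
    return count

contract = """// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

contract PasswordStore {
    string private s_password; // hidden? not really
}
"""
print(nsloc(contract))  # 4
```

Real counters handle more edge cases (inline block comments, strings containing `//`), but the idea is the same: comments and whitespace do not count as code.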
Step 1: Trace Trust Boundaries
Ask these questions:
- Who can write the value?
- Who can read the value?
- What does the constructor assume?
- What does the contract pretend is hidden?
- What breaks if the value is public?
In PasswordStore.sol, the owner is set in the constructor, and the setter checks msg.sender.
That gives the shape of the access control.
function setPassword(string memory newPassword) external {
    if (msg.sender != s_owner) {
        revert PasswordStore__NotOwner();
    }
    s_password = newPassword;
}
The guard is there. The function is protected. So far so good.
Step 2: What the repo shows
The repo points to two separate issues:
Issue 1: Missing access control on getPassword.
The getter has no msg.sender check, so anyone can call it and retrieve the password directly through the ABI.
Issue 2: The password is exposed in storage regardless.
Here is why.
Every state variable in a Solidity contract occupies a storage slot. s_password sits at slot 1. The private keyword only prevents other contracts from reading it through the ABI. It does nothing to stop an off-chain read.
// the 'private' variable
string private s_password;
You can retrieve the saved password with a single Foundry cast command:
cast parse-bytes32-string $(cast storage <CONTRACT_ADDRESS> 1)
We fetched the raw bytes at storage slot 1 and decoded it to get the plaintext password.
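What parse-bytes32-string does under the hood is worth seeing once. Solidity packs strings of 31 bytes or fewer directly into the storage slot: the string bytes are left-aligned, and the lowest-order byte stores length * 2. Here is a minimal Python sketch of that decoding (the function name and sample slot are mine, for illustration):

```python
def decode_short_string(slot: bytes) -> str:
    # Solidity short-string encoding: for strings of 31 bytes or
    # fewer, the data is left-aligned in the 32-byte slot and the
    # lowest-order byte holds length * 2.
    assert len(slot) == 32
    length = slot[-1] // 2
    return slot[:length].decode("utf-8")

# Build the slot exactly as Solidity would store "supersecret":
# 11 data bytes, zero padding, then the length byte (11 * 2 = 22).
slot = b"supersecret".ljust(31, b"\x00") + bytes([22])
print(decode_short_string(slot))  # supersecret
```

Longer strings (32 bytes and up) are stored differently: the slot holds length * 2 + 1, and the data lives at keccak256(slot index). The short-string case is all PasswordStore needs.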
This is not a bug in Solidity. It is how blockchains work. Every transaction, every storage write, is public by design. Marking a variable private is a visibility modifier for the compiler, not a confidentiality guarantee for your users.
The lesson: you can protect a function and still leak the data. Access control and data confidentiality are two different problems that need two different solutions.
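One common mitigation for the confidentiality side (not something the repo implements, just a sketch) is to never put the plaintext on-chain at all: store only a hash and compare hashes. The Python below uses SHA-256 for illustration; on-chain you would use keccak256, and a low-entropy secret would still be brute-forceable from the public hash, so this is a sketch of the pattern, not production advice.

```python
import hashlib

def commit(password: str) -> bytes:
    # Only this hash would be stored on-chain;
    # the plaintext never touches storage.
    return hashlib.sha256(password.encode()).digest()

def verify(stored_hash: bytes, attempt: str) -> bool:
    # Anyone can read the stored hash, but recovering the
    # password means brute-forcing the preimage.
    return hashlib.sha256(attempt.encode()).digest() == stored_hash

h = commit("supersecret")
print(verify(h, "supersecret"))  # True
print(verify(h, "wrong"))        # False
```

Even this only narrows the leak; if real confidentiality is required, the secret management belongs off-chain entirely.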
Step 3: Simple audit flow
Small contracts are not easier to audit; they are easier to audit shallowly. For any contract under 100 nSLOC, I now run four checks before calling it done:
1. Read the code and map the intended trust boundary.
What is the contract supposed to do, and who is supposed to be able to do it? Write it down in plain English before touching the code.
2. Check who can actually call each function.
Does every sensitive function have a guard? Does that guard cover all entry points, including any inherited or fallback functions?
3. Check whether storage or calldata exposes the value anyway.
A private variable is not private on-chain. Ask: if someone ran cast storage against every slot, what would they find? Think about variables that leak information they are not supposed to.
4. Write a test for the failure mode.
For the storage issue, a simple Foundry test proves it:
function test_anyone_can_read_password() public {
    vm.startPrank(owner);
    passwordStore.setPassword("supersecret");
    vm.stopPrank();
    // Read slot 1 directly, bypassing the getter entirely
    bytes32 slot = vm.load(address(passwordStore), bytes32(uint256(1)));
    // Solidity packs strings of 31 bytes or fewer into the slot
    // itself, with length * 2 in the lowest-order byte, so trim
    // the padding before comparing
    uint256 length = uint256(slot) & 0xff;
    bytes memory data = new bytes(length / 2);
    for (uint256 i = 0; i < data.length; i++) {
        data[i] = slot[i];
    }
    assertEq(string(data), "supersecret");
}
If the test passes, the contract has a confidentiality problem regardless of what the getter says.
Takeaway
Contracts with few functions should not trick you into shallow reviews.
The real question is not, “Does the function have a guard?”
It is, “Does the contract actually meet the expected business logic and keep the data safe?”
How would you write this contract differently?