Getting Your Code On
Now that you’ve pinned down the ideal functionality of the enclave, and assuming you are comfortable coding up a server to handle requests from each party, we can talk about some of the aspects you probably want to keep in mind.
In AWS, you can treat a Nitro Enclave as a self-contained VM running an Enclave Image File (EIF) built from a Docker image. Communication in and out of the enclave happens over virtual sockets (vsock), and only with the parent instance (that is, the instance that created the enclave). The parent instance acts as an intermediary between the enclave and the outside world, with the sole exception of the KMS proxy, which lets the enclave speak directly with AWS Key Management Service.
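If it helps to picture that flow, here is a minimal sketch of a vsock listener inside the enclave, written in Python and assuming a Linux build with `AF_VSOCK` support; the port number and handler are purely illustrative:

```python
import socket

VSOCK_PORT = 5005  # hypothetical port the parent instance connects to


def handle(request: bytes) -> bytes:
    # Placeholder for your actual request-handling logic.
    return b"ok"


def serve():
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as server:
        server.bind((socket.VMADDR_CID_ANY, VSOCK_PORT))
        server.listen()
        while True:
            conn, (remote_cid, remote_port) = server.accept()
            with conn:
                request = conn.recv(4096)       # read a request relayed by the parent
                conn.sendall(handle(request))   # reply over the same socket


if __name__ == "__main__":
    serve()
```

The parent instance typically runs a small forwarder that bridges ordinary TCP traffic from the outside world onto this vsock channel.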
To start, the whole reason for using a trusted execution environment is that the parties don’t trust each other, so we have to assume the users of the enclave are adversarial by nature. This puts an onus on you, the developer, to write a secure application, which is often easier said than done. A good starting point is to create strict input and output validators. If the IO is forced to conform to a predefined JSON Schema or OpenAPI definition, you can at least validate it while checking for malicious characters and the like.
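As a sketch of what such an input validator might look like (the schema and field names below are purely illustrative), you could reject anything that does not match a strict JSON Schema before it ever reaches your business logic:

```python
import json

from jsonschema import validate, ValidationError  # third-party: pip install jsonschema

# Hypothetical schema for a request from one of the parties; additionalProperties
# is disabled so unexpected fields are rejected outright.
REQUEST_SCHEMA = {
    "type": "object",
    "properties": {
        "party_id": {"type": "string", "pattern": "^[a-zA-Z0-9_-]{1,64}$"},
        "payload": {"type": "string", "maxLength": 10_000},
    },
    "required": ["party_id", "payload"],
    "additionalProperties": False,
}


def parse_request(raw: bytes) -> dict:
    try:
        request = json.loads(raw.decode("utf-8"))
        validate(instance=request, schema=REQUEST_SCHEMA)
    except (UnicodeDecodeError, json.JSONDecodeError, ValidationError) as exc:
        raise ValueError(f"rejected malformed request: {exc}") from exc
    return request
```

An equivalent validator should sit on the output path too, so the enclave can never be tricked into emitting more than the agreed result.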
You should be particularly careful when handling raw bytes as input, especially in Python. The YouTuber PwnFunction has a nice introductory tutorial on a known exploit in Python’s pickle module, which you can find here.
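The core of that exploit is that `pickle` will happily execute code embedded in the serialized bytes. A minimal, illustrative demonstration (only ever run something like this in a throwaway environment) looks as follows:

```python
import os
import pickle


# An attacker-crafted class whose __reduce__ method tells pickle to call
# os.system with an arbitrary command when the bytes are loaded.
class Exploit:
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran inside the enclave'",))


malicious_bytes = pickle.dumps(Exploit())

# The moment the enclave unpickles attacker-controlled bytes, the command runs.
pickle.loads(malicious_bytes)
```

The takeaway is to prefer data-only formats such as JSON for anything that crosses the trust boundary.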
There is, of course, an ever-present tension between usability and security. Some would strongly argue that enclave source code should be written in C or Rust, while others are comfortable using languages such as Python in order to take advantage of a particular tool or framework within the enclave itself. Irrespective of your decision, the code can be made more secure through rigorous testing, static analysis tools such as SonarCloud, and internal firewalls that lock down any unexpected communication channels.
Once you’ve hardened the IO of the enclave, you may also want to consider your authentication model within the enclave. This can be as simple as passing the key management services the enclave may speak to as a build argument (discussed in the next section), or using pre-shared keys, for example with TLS-PSK. OAuth-based approaches introduce additional considerations if the enclave cannot directly communicate with a trusted key authority for public-key cryptography. That’s not to say it’s impossible, but you should certainly think through the challenges and potential risks involved when a parent server is effectively a man-in-the-middle.
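As a sketch of the pre-shared-key idea, here is a simplified HMAC check rather than TLS-PSK itself; the environment variable name is hypothetical, and the key would be baked in at build time or delivered to each party over a secure channel:

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared key known to the enclave and to each authorised party.
PRE_SHARED_KEY = os.environ["ENCLAVE_PSK"].encode("utf-8")


def sign(message: bytes) -> str:
    return hmac.new(PRE_SHARED_KEY, message, hashlib.sha256).hexdigest()


def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(sign(message), tag)
```

Each incoming request then carries a tag that the enclave verifies before doing any work on the payload.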
Encoding Guarantees: Build Arguments and Environment Variables
Previously we established that the enclave image (converted from the Docker image) running inside the Nitro Enclave is what is attested when requesting key access from one or more of the parties. Well, we can use this fact to develop reusable enclave images.
For example, assume that we have two parties, Alice and Bob. They would like to be authorised based on some hash of their respective KMS ARNs and would also like to limit the number of function calls made by Alice to 3 and Bob to 2. Now imagine another scenario where Alice and Charlie would like to run the same interaction, but this time Alice can only make 2 function calls and Charlie 4.
In such a scenario you do not want to hard-code this information each time. Instead, you can leverage Docker build arguments and set them as environment variables within the enclave. This changes the Docker image, and therefore the attestation hashes used to verify the enclave against the key management services. It can also be highly efficient, depending on how you create your Docker image: with good layer caching, rebuilding the image with new arguments can be a painless process.
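As a sketch, suppose the Dockerfile declares build arguments such as `ARG MAX_CALLS_ALICE` and promotes them to environment variables with `ENV`; the names here are hypothetical. Inside the enclave, the application simply reads them at start-up, and changing their values changes the image and hence its attestation measurements:

```python
import os

# Hypothetical configuration baked in at image build time, e.g.
#   docker build --build-arg MAX_CALLS_ALICE=3 --build-arg MAX_CALLS_BOB=2 ...
# Changing these values changes the image, and therefore the attestation hashes.
MAX_CALLS = {
    "alice": int(os.environ.get("MAX_CALLS_ALICE", "0")),
    "bob": int(os.environ.get("MAX_CALLS_BOB", "0")),
}
ALLOWED_KMS_ARNS = {
    arn.strip()
    for arn in os.environ.get("ALLOWED_KMS_ARNS", "").split(",")
    if arn.strip()
}

calls_made = {party: 0 for party in MAX_CALLS}


def authorise_call(party: str) -> None:
    # Reject the call once a party has used up their agreed quota.
    if calls_made.get(party, 0) >= MAX_CALLS.get(party, 0):
        raise PermissionError(f"{party} has exhausted their allotted calls")
    calls_made[party] += 1
```

The Alice-and-Bob image and the Alice-and-Charlie image then share the same source, differing only in the build arguments each set of parties signs off on.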
Persistent Storage, PCI Devices & Resource Requirements
Nitro Enclaves endeavor to ensure security by locking down a virtual machine to a very limited set of functionalities. An enclave operates purely in RAM with dedicated CPUs, so many features you might expect are simply not available.
For example, one would typically expect to have persistent storage on a volume attached to an EC2 instance. That storage, however, lives outside the enclave, so you have to encrypt data with a KMS key before releasing it to the parent instance. In the context of multiparty computation this poses an interesting question: whose keys should you use? To answer it, consider who manages the parent instance and how many of the parties would need to collaborate to decrypt the data if the encrypted payload were ever leaked. One approach is to encrypt the payload with each party's KMS key in turn and reverse the process when the payload is returned to the enclave at a later point. There is nothing obviously wrong with this approach, but as we know, every encryption and decryption adds to the overall latency of the enclave.
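A minimal sketch of that layered approach is below. The key ARNs are placeholders, and inside a Nitro Enclave the calls would be routed through the vsock KMS proxy; note also that a direct `kms.encrypt` call is limited to roughly 4 KB of plaintext, so larger payloads would normally use envelope encryption with generated data keys instead:

```python
import boto3

kms = boto3.client("kms", region_name="eu-west-1")  # region is illustrative

# Placeholder ARNs for each party's key.
PARTY_KEY_ARNS = [
    "arn:aws:kms:eu-west-1:111111111111:key/alice-key-id",
    "arn:aws:kms:eu-west-1:222222222222:key/bob-key-id",
]


def layered_encrypt(plaintext: bytes) -> bytes:
    # Encrypt with each party's key in turn before releasing the blob
    # to the parent instance.
    blob = plaintext
    for arn in PARTY_KEY_ARNS:
        blob = kms.encrypt(KeyId=arn, Plaintext=blob)["CiphertextBlob"]
    return blob


def layered_decrypt(blob: bytes) -> bytes:
    # Reverse the order when the payload comes back into the enclave.
    for arn in reversed(PARTY_KEY_ARNS):
        blob = kms.decrypt(KeyId=arn, CiphertextBlob=blob)["Plaintext"]
    return blob
```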
When it comes to data, we’ve found it’s best to treat the enclave as transactional and to save and reload persistent state explicitly. That is not to say you shouldn’t store anything locally, but it’s best to consider local storage a cache. That way, if the enclave were to halt and be restored, you have the safety net of being able to reload its state.
A second challenge is that no PCI devices are available to the enclave, so if you were hoping to crunch some data on a GPU or equivalent, you may want to think again. Your only obvious option in such a scenario is to pay for an instance with more powerful CPUs and/or to allocate more of them to the enclave to facilitate threading and multiprocessing.
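If you do go down the multi-CPU route, the standard library is often enough; a trivial sketch, where the per-record function is a stand-in for your own CPU-bound work:

```python
from multiprocessing import Pool


def score(record: dict) -> float:
    # Placeholder for your CPU-bound per-record computation.
    return sum(record.values())


def score_all(records: list, workers: int = 4) -> list:
    # 'workers' should not exceed the vCPUs allocated to the enclave at launch.
    with Pool(processes=workers) as pool:
        return pool.map(score, records)
```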
Finally, remember that the entire enclave lives within the resources allocated at launch time. The enclave must have enough RAM for the enclave image, everything it needs at runtime, and the in-memory file system and so forth. This is worth keeping in mind as you develop your enclave; a resource-efficient design can save you significantly on your monthly AWS bills.
Timing Attacks
Importantly, enclaves guarantee that the code you agreed upon is what gets run, not that it is safe to run. That is a big difference, and the onus is on the developer to make sure that side channels, such as execution time, do not leak sensitive information that was unforeseen when signing off on the ideal functionality.
Let’s look at how execution timing may inadvertently change the probability of guessing a party’s secret input. Suppose Alice inputs a decision tree that Bob wishes to use to classify some data. If the decision tree is not balanced, i.e. if a different number of comparisons is required depending on the path taken through the branches, then simply timing the execution may reveal what the output of the classification was.
While not always trivial to achieve, you should endeavor to write fixed-time programs for enclaves if timing is likely to reveal superfluous information to one or more of the parties involved.
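Sticking with the decision-tree example, one way to even out the work is to always perform the same number of iterations regardless of the path taken. The tree representation below is hypothetical, and note that this only equalises the number of comparisons; a genuinely constant-time guarantee is harder and depends on the language and hardware:

```python
# Hypothetical representation: each internal node is a tuple
# (feature_index, threshold, left, right) and each leaf is a label.

MAX_DEPTH = 8  # agreed upper bound on the depth of any submitted tree


def classify_fixed_depth(tree, features):
    node = tree
    for _ in range(MAX_DEPTH):
        if isinstance(node, tuple):  # still at an internal node
            feature_index, threshold, left, right = node
            node = left if features[feature_index] <= threshold else right
        # if node is already a leaf we simply keep looping, padding the work
    return node
```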
Summary
While we’ve stopped short of production code, we’ve tried to outline some pointers toward building your first few enclaves. Even the pros can make a mistake with a poorly defined ideal functionality or an insecure implementation. Take your time, give it a shot, and be comfortable making some mistakes at first. Enclave technology is still very new, and there is great reward in being an early pioneer in developing and leveraging enclave applications.
Why Oblivious?
At Oblivious, we’ve built the first full-service enclave management system for multiparty applications. It’s called Ignite and it allows data scientists and machine learning practitioners to take advantage of prebuilt enclaves for data exploration, analysis, training, and inference. If you are interested in the technology, reach out to us to get started today!