
Ricardo Iván Vieitez Parra for Exact Realty Limited

Originally published at exact.realty

Types of Execution Environments, Attestation and SGX

Execution Environments

OMTP refers to an execution environment as the combination of hardware and software components that can be used to execute and support applications. A typical execution environment consists of a processing unit, memory, input and output (I/O) ports and an operating system. Because application execution requires an execution environment, applications are ultimately limited by any constraints placed on them by their execution environment. This means, for example, that applications can use only as much memory as their execution environment allows, and that they can only perform tasks supported and allowed by their execution environment.

Rich Execution Environments (REEs)

Traditionally, computing takes place in execution environments that not only permit the loading and execution of arbitrary programs but may also themselves be manipulated in arbitrary ways, such as by attaching a hardware debugger to observe and change the internal state. Because it is impossible for such environments to make any verifiable assertions about their state, they are inherently untrustworthy. The term Rich Execution Environment (REE) is sometimes used, for example by GlobalPlatform, to refer to this type of execution environment.

To understand the difficulty of establishing trust in REEs, consider the example of a messaging application that requires uploading a person's contacts to its developers' systems for the purpose of allowing that person to discover contacts already using said application. While this can be a valuable feature, the main disadvantage of such a system is that the users do not have a means of auditing or controlling how the information about their contacts will be used. While honest developers might use this information only for the stated purpose of contact discovery, unscrupulous developers might instead collect this information, aggregate it and then surreptitiously sell social graphs on the open market. Because at this point a sceptical user has only the word of the application developers that contact information is used for the stated purpose, and only for the stated purpose, convincing prospective sceptical customers to upload their contacts requires demonstrating that the developers' execution environment is such that the uploaded contacts cannot possibly be misused. However, providing such proof is impossible with an REE because a malicious developer could alter the execution environment in any number of ways such that contact information is exfiltrated.

For example, suppose that the application developers hire an independent, objective and trusted auditor to inspect their systems for compliance with their stated policy, and that not only does the auditor find the systems compliant, but they also inspect the systems on a regular basis to ensure ongoing compliance. If the developers had malicious intent, they could attempt to fool the auditor (and their users) in various ways. A crude approach could be changing their server configuration files after each audit, but more sophisticated techniques are possible, such as modifying the hardware in their servers to exfiltrate the desired information.

Furthermore, even if the developers were honest and trustworthy, the nature of REEs means that the contact-use policy remains difficult to enforce by technical means, because other actors, such as the developers' employees and suppliers, might still interfere in ways that result in the misuse of contact information. Some of these breaches could include an employee reading the server's memory directly, a malicious operating system transmitting data on user activity, or even an unauthorised third party accessing the server and modifying the software being executed.

Trusted Execution Environments (TEEs)

TEEs address the inherent untrustworthiness of REEs by providing an isolated environment for user applications, the properties of which are well-known. Continuing the previous example of contact discovery, consider messaging application developers willing to go to great lengths to substantiate their contact-use policy. They may accomplish this by limiting the execution environment to prevent arbitrary state manipulation and by providing a means of establishing a verifiable chain of trust. One way of doing so is to configure the contact discovery service and then hand it over to a trusted third party (in this context, an entity that the application users trust to be honest). The trusted third party then proceeds to thoroughly audit the entire execution environment (including both hardware and software components) and publish the relevant findings. Then, the entire machine is secured¹ so that the execution environment cannot be modified further. When a user uploads their contact information to the contact discovery service, they also consult the trusted third party as to whether the service they are connecting to corresponds to the audited system.

Intuitively, this scheme provides a high degree of confidence that the system a user is connecting to behaves as per the published audit findings, provided the user trusts the auditor. However, the example described is not practical for several reasons. Firstly, the increasing complexity of computer software and hardware, and of the interactions between them, makes it extremely difficult for an auditor to be decisively confident that a system behaves in a certain way, as doing so would require evaluating each of the components that comprise the system. Secondly, the example relies on an idealised notion of securing a machine that is not practical: physical access to the machine may be needed to effect repairs and other maintenance tasks, and logical access may be needed to perform software updates. Thirdly, because of the restrictions placed on the entire system, any modification would require repeating the impractical processes of a full audit and of securing the machine once again.

A more practical approach is to isolate certain critical system components (such as the software for contact discovery) from the rest of the system, and then audit only these critical components. Hardware manufacturers are in a unique position to provide isolated execution environments that restrict the interactions between sensitive components and the rest of the system, and that facilitate auditing by providing just enough functionality to implement these critical operations. In GlobalPlatform terminology, these environments are known as Trusted Execution Environments (TEEs), and they are defined as follows by M. Sabt et al.:

[A] Trusted Execution Environment (TEE) is a tamper-resistant processing environment that runs on a separation kernel. It guarantees the authenticity of the executed code, the integrity of the runtime states (e.g. CPU registers, memory and sensitive I/O), and the confidentiality of its code, data and runtime states stored on a persistent memory. In addition, it shall be able to provide a remote attestation that proves its trustworthiness for third-parties. The content of TEE is not static; it can be securely updated. The TEE resists against all software attacks as well as the physical attacks performed on the main memory of the system. Attacks performed by exploiting backdoor security flaws are not possible.

TEEs have numerous privacy-enhancing applications that may benefit users. One of them is, as discussed earlier, private contact discovery; the Signal application uses a contact discovery service built on Intel SGX, a TEE technology, to protect its users' privacy. A similar application of TEEs is performing malware analysis in a remote cloud service in such a way that the service cannot identify users by the contents of their devices, such as the applications they have installed. This is especially important given that 98.93% of users may be uniquely identified by the list of applications they have installed.

Components of a trusted platform

Sealing and Unsealing

In the context of TEEs, sealing and unsealing refer to cryptographic primitives providing authenticated encryption and decryption, respectively, with a secret that exists only within the TEE, so that some of the TEE state can be securely stored in untrusted storage. Some TEE platforms, such as Intel SGX, provide a mechanism for deriving sealing secrets deterministically based on the TEE identity, which allows information to be unsealed in different instances of the same TEE. Sealing can be useful to reduce the state information that must be kept within a TEE, such as when the TEE is memory-constrained, or for persistence when the TEE state may be lost (for example, due to a power failure). A high-level overview of the sealing primitive is shown below.

Sealing primitive
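
To make the primitive concrete, here is a minimal sketch of sealing and unsealing as authenticated encryption with a deterministically derived key. The names DEVICE_SECRET and TEE_IDENTITY, and the choice of HKDF and AES-GCM, are illustrative assumptions rather than any platform's actual mechanism.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

DEVICE_SECRET = os.urandom(32)         # stand-in for a processor-unique secret
TEE_IDENTITY = b"enclave-measurement"  # stand-in for a TEE identity

def sealing_key() -> bytes:
    # Deterministic derivation: the same identity on the same device yields
    # the same key, so another instance of the same TEE can unseal the data.
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=TEE_IDENTITY
    ).derive(DEVICE_SECRET)

def seal(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(sealing_key()).encrypt(nonce, plaintext, None)

def unseal(sealed: bytes) -> bytes:
    nonce, ciphertext = sealed[:12], sealed[12:]
    # Decryption also authenticates: tampering raises an InvalidTag error.
    return AESGCM(sealing_key()).decrypt(nonce, ciphertext, None)

blob = seal(b"TEE state persisted to untrusted storage")
assert unseal(blob) == b"TEE state persisted to untrusted storage"
```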

Attestation

A crucial component of TEEs is that they provide a protocol for producing attestations: verifiable assertions about the state or properties of the TEE. In particular, attestations can prove the presence of a particular TEE. Attestations are indispensable for TEEs because, without them, it is impossible for an observer, including a local observer with full access to the system, to differentiate between a TEE and any other execution environment, whether another TEE or an REE. Hence, although it is indeed possible to use TEEs without attestations (for example, purely for their isolation properties), doing so provides weaker security guarantees.

The trusted part of a TEE stems from the concept of a root of trust, that is, the notion that TEEs are trusted because some entity makes assertions about the properties of the TEE, and that these assertions are believed because the entities verifying the assertion trust the one making them. Trust in attestations entails both trusting the processing environment used in the TEE to meet certain specifications (a static component) and the specific state or application in the TEE (a semi-dynamic component).

Attestation Example

Although various attestation schemes are possible, it may be helpful to consider a simple attestation model to understand the different parts that make up an attestation. A TEE manufacturer produces TEE hardware in which each device δ is assigned a unique per-device asymmetric key pair (δₚ, δₛ) stored in the device. The public part of this key, δₚ, is signed with the device manufacturer's key pair (Mₚ, Mₛ) to produce a signature σ. When a TEE is initialised, the secret key δₛ is used to sign the TEE state S² to produce a signature Σ. An attestation could then consist of the tuple (Mₚ, σ, δₚ, Σ, S). This would allow anyone to verify that the state S was present in the TEE δ. In particular, σ establishes trust in the device δ because the manufacturer is trusted; then, trust in the presence of state S is possible because the device δ is trusted; lastly, if the state S corresponds to a well-known and trusted application, the entire TEE can be trusted to execute this application according to the device specifications.
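
As a sketch of this toy scheme (and not of any real TEE's attestation format), the chain of signatures can be implemented and checked with off-the-shelf Ed25519 keys; the state string and variable names below are invented for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Manufacturer key pair (M_p, M_s)
m_s = ed25519.Ed25519PrivateKey.generate()
m_p = m_s.public_key()

# Per-device key pair (d_p, d_s); the manufacturer signs d_p, yielding sigma
d_s = ed25519.Ed25519PrivateKey.generate()
d_p = d_s.public_key().public_bytes(*RAW)
sigma = m_s.sign(d_p)

# On initialisation, the device signs its state S (with a freshness nonce)
S = b"nonce:1337|state:contact-discovery-v1"
Sigma = d_s.sign(S)

attestation = (m_p, sigma, d_p, Sigma, S)

def verify(m_p, sigma, d_p, Sigma, S) -> bool:
    try:
        m_p.verify(sigma, d_p)  # the manufacturer vouches for the device...
        ed25519.Ed25519PublicKey.from_public_bytes(d_p).verify(Sigma, S)
        return True             # ...and the device vouches for state S
    except InvalidSignature:
        return False

assert verify(*attestation)
```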

Intel Software Guard Extensions

Intel Software Guard Extensions (SGX) is an extension to the Intel architecture, available since the Skylake microarchitecture, that provides a TEE for user software in the form of isolated software containers known as enclaves. SGX has the stated goal of reducing application attack surface and protecting applications from potentially malicious privileged code, such as the operating system or a hypervisor. An SGX enclave is a protected region within the memory space of an application, which after initialisation can only be accessed by software resident in the enclave. Except where noted, the background information on the operation of SGX enclaves in this section is based on the Intel® 64 and IA-32 Architectures Software Developer's Manual.

Enclave Properties

Enclave Signing

Each SGX enclave must be signed with a 3072-bit RSA key; the signed structure also includes identifiers for the enclave program and its version. The public RSA key identifies the enclave developer, and a 256-bit identifier, called MRSIGNER, is derived by measuring this public key. Signing several enclaves with the same key enables some convenience features, such as the ability to derive common sealing keys for exchanging information between enclaves made by the same developer.

Debug and Production Modes

SGX enclaves exist in two flavours, which are determined at build time: debug and production enclaves. These two behave identically, except that external processes may use the EDBGRD and EDBGWR instructions for reading from and writing to debug enclave memory, and may similarly set breakpoints and traps within debug enclaves. As a result, the state of debug enclaves can be arbitrarily compromised and they are not TEEs. While any SGX-capable processor may instantiate debug enclaves, only enclaves signed with a key whitelisted by Intel may be executed in production mode.

Enclave Identity

The initial enclave state is measured upon initialisation by applying a hash function to every memory page assigned to the enclave, which includes its code and data. This produces a 256-bit measurement called MRENCLAVE. This measurement, together with MRSIGNER, represents the enclave identity. Both measurements may be used to derive enclave- or signer-specific keys (for example, using the EGETKEY instruction) that depend on a processor-unique secret.
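
The intuition behind the measurement can be sketched as a hash over the enclave's initial pages. Note that this is a simplification: real SGX computes MRENCLAVE incrementally as pages are added and extended (via the EADD and EEXTEND operations) and includes layout metadata, which the sketch below only gestures at by mixing in page positions.

```python
import hashlib

def measure_enclave(pages: list[bytes]) -> bytes:
    """Hash the enclave's initial pages into a single 256-bit identity."""
    h = hashlib.sha256()
    for position, page in enumerate(pages):
        h.update(position.to_bytes(8, "little"))  # where a page sits matters,
        h.update(page)                            # not just its contents
    return h.digest()

# Two pages of code and data, padded to the 4 KiB page size.
pages = [b"\x90" * 4096, b"initial enclave data".ljust(4096, b"\x00")]
identity = measure_enclave(pages)  # analogous in spirit to MRENCLAVE
print(identity.hex())
```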

Execution Model

An enclave is entered using the EENTER instruction, which must point to a well-known entry point in the enclave; a call through such an entry point is called an ecall. Once inside an enclave, the enclave may transfer control back to the calling process by using the EEXIT instruction, or its execution may be interrupted by an Asynchronous Exit Event (AEX), caused by external events (such as interrupts) that occur while the enclave is executing. If enclave execution is interrupted as a result of an AEX, it may be resumed using the ERESUME instruction. An enclave may call EEXIT to return once it has finished its current task, or to perform an ocall, that is, to execute code outside the enclave, such as a system call. In the latter case, the function performing the ocall may not have finished executing and may expect the called code to return by following some convention to call back into the enclave.
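
The following toy simulation mirrors that calling convention with ordinary Python calls standing in for EENTER and EEXIT; there is no real isolation here, and the interface shown (a file-hashing ecall that performs a read_file ocall) is entirely hypothetical.

```python
import hashlib

def untrusted_read_file(path: str) -> bytes:
    # Runs outside the enclave; system calls are only possible here.
    with open(path, "rb") as f:
        return f.read()

class Enclave:
    def __init__(self, ocall_table: dict):
        self._ocalls = ocall_table  # the ocalls the host agreed to service

    def ecall_hash_file(self, path: str) -> bytes:
        # EENTER-like: a well-known entry point into trusted code.
        # The enclave cannot perform I/O itself, so it performs an ocall
        # (conceptually, EEXIT followed by re-entry with the result).
        data = self._ocalls["read_file"](path)
        return hashlib.sha256(data).digest()  # back inside: trusted work

enclave = Enclave({"read_file": untrusted_read_file})
# enclave.ecall_hash_file("some/file")  # host invoking the ecall
```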

Memory Model

The processor manages access to any memory pages assigned to SGX enclaves (referred to as trusted memory), triggering page faults for unauthorised accesses; any memory pages swapped to disk are sealed to provide confidentiality and integrity, even against privileged processes. By contrast, memory pages not assigned to enclaves (referred to as untrusted memory) do not enjoy these protections and may be arbitrarily manipulated by the operating system or other processes.

Attestation Model

SGX provides local attestations, which depend upon a processor-unique secret and can only be verified by enclaves executing on the same processor, and remote attestations, which can be verified by anyone, and do not require an enclave for verification.

Local Attestations

Any SGX enclave may produce a report that can be verified by other enclaves executed on the same processor, referred to as local attestations. These attestations enable secure information exchange between enclaves, and may be used both for synchronous communication (e.g., for establishing a secure channel between enclaves) and asynchronous communication (e.g., for storing information that may be later used by another enclave). The local attestation process is illustrated in the figure below.

Local attestation

Generation

Local attestations are generated using the EREPORT instruction, which takes two inputs, TARGETINFO and REPORTDATA, and produces a report as an output. TARGETINFO identifies the enclave intended to verify the generated report and includes the target enclave MRENCLAVE as well as other attributes, such as whether it is a debug enclave. REPORTDATA is a 512-bit data structure that may contain arbitrary information intended to be verified by the target enclave. The generated report is authenticated with a symmetric key derived from the provided target MRENCLAVE and contains REPORTDATA and information about the attesting enclave, including MRENCLAVE, MRSIGNER and other attributes, such as whether it was a debug enclave.

Verification

Local attestations can only be verified by the target enclave, which must be executed on the same processor that was used to generate the report. The verification process consists of deriving a report key using the EGETKEY instruction and verifying that the MAC attached to the report is valid.
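
A rough sketch of this flow, with HMAC-SHA256 standing in for the EREPORT and EGETKEY mechanics and an invented report layout, might look as follows; the real report structure and key derivation are considerably more detailed.

```python
import hashlib
import hmac
import json
import os

PROCESSOR_SECRET = os.urandom(32)  # per-processor secret, never exported

def report_key(target_mrenclave: bytes) -> bytes:
    # EGETKEY-like: bind the key to the target enclave's identity.
    return hmac.new(PROCESSOR_SECRET, target_mrenclave, hashlib.sha256).digest()

def ereport(target_mrenclave, own_mrenclave, own_mrsigner, report_data):
    # EREPORT-like: report about the attesting enclave, MACed for the target.
    body = json.dumps({
        "mrenclave": own_mrenclave.hex(),
        "mrsigner": own_mrsigner.hex(),
        "report_data": report_data.hex(),  # up to 512 bits in real SGX
    }).encode()
    mac = hmac.new(report_key(target_mrenclave), body, hashlib.sha256).digest()
    return body, mac

def verify_report(own_mrenclave, body, mac) -> bool:
    # Only the target enclave, on the same processor, can re-derive the key.
    expected = hmac.new(report_key(own_mrenclave), body, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

target = os.urandom(32)
body, mac = ereport(target, os.urandom(32), os.urandom(32), b"channel-binding")
assert verify_report(target, body, mac)
```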

Remote Attestations

In addition to local attestations, SGX supports the generation of remote attestations. Unlike local attestations, verifying remote attestations does not require the use of an enclave nor is it limited to the same processor.

Generation

The process for generating a remote attestation is similar to that for local attestations but involves some additional steps. A remote attestation begins with a report made for an Intel-provisioned quoting enclave (QE). The quoting enclave then receives and verifies the report, and converts the report into a quote, which is signed with the QE private key.

The quoting enclave (QE) is provisioned by Intel as part of the Intel SGX Platform Software (PSW). The QE is tasked with verifying a local attestation report (described earlier) and signing it with its private key to produce a quote that can be verified later. The signature uses a group signature scheme called Intel Enhanced Privacy ID (EPID) to ensure that a signature cannot be linked to a particular signer, only to a member of the group.

Verification

Remote attestation quotes may be verified by validating whether the signature they contain is valid, which can be done by using either an EPID public key certificate or an attestation verification service, such as the Intel Attestation Service (IAS).
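
As a sketch of the overall quote lifecycle, the following substitutes an ordinary Ed25519 signature for Intel EPID; note that, unlike EPID group signatures, this toy quote is linkable to a single signer, and verification against a service such as IAS is elided.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

qe_s = ed25519.Ed25519PrivateKey.generate()  # provisioned QE signing key
qe_p = qe_s.public_key()                     # published for verification

def make_quote(report_body: bytes, report_mac_valid: bool) -> bytes:
    # The QE first checks the local attestation report it received...
    if not report_mac_valid:
        raise ValueError("local report failed verification")
    # ...then converts it into a quote by signing it.
    return qe_s.sign(report_body)

def verify_quote(report_body: bytes, quote: bytes) -> bool:
    # Anyone may verify: no enclave is needed, on any machine.
    try:
        qe_p.verify(quote, report_body)
        return True
    except InvalidSignature:
        return False

quote = make_quote(b"report:mrenclave|mrsigner|data", report_mac_valid=True)
assert verify_quote(b"report:mrenclave|mrsigner|data", quote)
```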

Plausible Deniability

The ISO non-repudiation framework states that the goal of non-repudiation is 'to collect, maintain, make available and validate irrefutable evidence concerning a claimed event or action in order to resolve disputes about the occurrence or non-occurrence of the event or action'.

Plausible deniability is a concept closely related to non-repudiation in that it has the opposite stated goal: it is desired that any evidence regarding a disputed event be plausibly refutable. In particular, a protocol executed between two entities, Alice and Bob, and observed by an external observer Eve is plausibly deniable if its execution does not generate any evidence that can be used to irrefutably demonstrate Alice's or Bob's involvement. Note that this definition permits mutual authentication of Alice and Bob insofar as this mutual authentication is non-transferrable (i.e., it has no meaning for a third party).

Following Y. Dodis et al. and M. di Raimondo et al., we can model deniability in the simulation paradigm. For this, we assume an arbitrary protocol π in which there is a sender A, a receiver B, an informant E who observes the protocol execution, a misinformant M who does not observe the protocol run but executes a simulated protocol π*, and a judge J who rules at the end of the protocol whether communication took place between A and B. The protocol π is deniable if J is not able to distinguish between E and M.
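
For concreteness, this simulation-based notion can be written out as follows; the formalisation is a paraphrase in standard notation, not a verbatim statement from the cited works.

```latex
% Deniability in the simulation paradigm (paraphrased sketch).
% view_E: what the informant E records from a real run of \pi;
% view_M: what the misinformant M produces by running \pi^* alone.
\[
  \pi \text{ is deniable} \iff
  \Bigl|
    \Pr\bigl[\, J(\mathrm{view}_E) = 1 \,\bigr]
    - \Pr\bigl[\, J(\mathrm{view}_M) = 1 \,\bigr]
  \Bigr| \leq \varepsilon(\lambda)
\]
% for every efficient judge J, where \varepsilon is negligible in the
% security parameter \lambda: no judge can tell real evidence apart from
% simulated evidence except with negligible advantage.
```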

Authentication

Authentication was presented earlier in the context of authenticated encryption as a cryptographic process conferring integrity. In the context of communications, authentication has a broader meaning that encompasses the entire communication, including provenance. Generally speaking, for a message ℳ, the authenticator 𝒯 allows a verifier V to assess not only whether the message ℳ was tampered with but also whether it originated from the expected party, that is, that ℳ was sent by some party A and not by an unauthorised party C. Although authentication is useful to verify that messages are genuine, the way that ℳ and 𝒯 are tied together, possibly to a sender A, may reveal information to parties external to the protocol as to A's involvement.

Working in the deniability framework set forth above, authentication is deniable if it does not give the informant E any advantage over the misinformant M in convincing the judge. We will examine three different authentication schemes and see how deniability is affected in each.

Scenario 1: No authentication

In this case, the protocol π provides no authentication. Note that it is impossible for the judge to tell which messages are genuine (meaning that the misinformant is able to forge messages that are indistinguishable from any real messages sent by the sender). This protocol is therefore deniable.

Scenario 2: Digital signatures

In this case, the protocol π uses digital signatures for authentication. The sender has a key pair (𝑠, 𝑝), where 𝑝 is public and used for verification, and 𝑠 is secret, known only to the sender, and used for generating authenticators. Assuming that generating a valid authenticator without knowledge of 𝑠 is infeasible, the judge can now distinguish the informant from the misinformant because the latter is unable to furnish messages that validate correctly. This protocol is therefore not deniable.
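
The distinguishing step can be seen in a short sketch: with an Ed25519 key pair standing in for (𝑠, 𝑝), the judge validates genuine authenticators and rejects the misinformant's forgeries.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

sender_key = ed25519.Ed25519PrivateKey.generate()  # secret s
verify_key = sender_key.public_key()               # public p: the judge has this

genuine = (b"hello Bob", sender_key.sign(b"hello Bob"))
forged = (b"hello Bob", b"\x00" * 64)              # misinformant's best effort

def judge(message: bytes, tag: bytes) -> bool:
    try:
        verify_key.verify(tag, message)
        return True                # irrefutable evidence of A's involvement
    except InvalidSignature:
        return False

assert judge(*genuine) and not judge(*forged)      # the informant wins
```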

Scenario 3: Symmetric authentication

In this case, the protocol π uses symmetric authentication, such as that provided by AES-GCM. The sender and the receiver agree on a key K, and the sender transmits messages encrypted and authenticated under this key, which are verified and decrypted by the receiver using the same key. Note that because K is a shared secret, the receiver (or anyone else who knows K) can forge messages that are indistinguishable from genuine messages from the sender. Because such forgeries are possible, the judge cannot distinguish the informant from the misinformant, and the protocol remains deniable.
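
A short sketch makes the symmetry explicit: with AES-GCM under a shared key K, a message protected by the receiver is the same kind of object, bit for bit, as one protected by the sender.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K = AESGCM.generate_key(bit_length=128)  # shared by sender and receiver

def protect(key: bytes, message: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, message, None)

sent_by_alice = protect(K, b"see you at noon")
forged_by_bob = protect(K, b"see you at noon")  # the receiver knows K too

# Both decrypt and authenticate identically; nothing distinguishes them.
for blob in (sent_by_alice, forged_by_bob):
    nonce, ciphertext = blob[:12], blob[12:]
    assert AESGCM(K).decrypt(nonce, ciphertext, None) == b"see you at noon"
```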


  1. Securing the machine refers in this case to applying a combination of physical and technical tamper-evident seals such that any modifications to the execution environment be noticed if not completely prevented. As an example, administrator-level software access could be disabled, the entire software stack could be placed in read-only memory and the entire machine could be encased in concrete and then shipped to a secure location. ↩

  2. For the purposes of this example, the state S may be assumed to contain a nonce, a timestamp or some other similar measure that assures freshness. ↩
