Hang Yin

Posted on • Originally published at Medium

Starting with Barbican, Sandbox and Trusted Computing

Barbican

In 1268 A.D., Kublai Khan, emperor of China's Yuan Dynasty, sent two of his generals -- Liu Zheng and A Shu -- to lead their armies against Xiangyang, which marked the launch of the war against the Song Dynasty. Their armies were famed as "God's Whip", yet they failed, struggling against Xiangyang for six years. In 1269, he sent his prime minister Shi Tianze to reinforce them, and began building walls to blockade Xiangyang and the Han River channel. They then successfully repelled Song's reinforcements several times. Mongolia decided to seize Xiangyang at all costs, increasing its forces to over 100,000 men in just one year.

However, the defensive works established by Xiangyang's castellan, Lv Wende, should not be underestimated. Built by the Han River, protected by steep terrain, and rich in food and reserves, Xiangyang remained impregnable until 1274 despite Mongolia's continuous blockade. In 1279, the Southern Song Dynasty came to its end when the last emperor leapt into the sea with Lu Xiufu, his last loyal minister.

Aside from Möngke Khan's death, a united people, an unyielding spirit, and the Song court's support, the key reason why the small city of Xiangyang could hold out for six years is interesting: Xiangyang was built with the Han River as its natural moat and with a brilliant structure outside each of its gates: a barbican.

Some of you may be puzzled by the design. Our ancestors may have lacked modern technology and scientific knowledge, but they were as smart as we are (or even smarter). They built it based on hundreds of years of battlefield experience, combined with a comprehensive defensive system of architecture and strategy.

Once you broke through the first gate, you would find yourself caged, with nowhere to go, a human target for the archers standing on the surrounding walls.

The brilliance of the Chinese barbican can be summarized as:

To set up a dual protection mechanism at weak and critical sections;
To restrict the actions of untrusted objects;
To build a structure that facilitates recognizing and eliminating untrusted objects.

Such a design was later applied to computer defense systems as the "Sandbox Model".

Sandbox

In the field of computer security, a sandbox is a security mechanism that provides an isolated environment for suspicious programs. It first lets the suspicious program run freely while recording its every move. If the program behaves like a virus, the sandbox "rolls back": it erases the traces and actions of the virus and restores the system to a normal state.

Similar to the Chinese barbican, its design theoretically guarantees the following:

Dual protection: system-level and container-level sandboxes can be enabled through virtualized environments;
Restriction: the sandbox provides disk and memory space that is recycled after use; network access, access to the real system, and reading of input devices are prohibited or strictly restricted;
Restoration: once a virus is found in the sandbox, files, registry entries, etc. are restored elsewhere to protect them from unauthorized access or anonymous operation (a minimal sketch of this record-and-rollback idea follows below).
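
To make the record-and-rollback idea concrete, here is a minimal Rust sketch. Everything in it (the Sandbox struct and its write_file and rollback methods) is hypothetical and only stands in for real snapshot-and-restore machinery such as copy-on-write filesystems or VM snapshots.

```rust
use std::collections::HashMap;

/// A toy sandbox: the "real" filesystem stays untouched while the suspicious
/// program runs; every write is recorded in an overlay instead.
struct Sandbox {
    overlay: HashMap<String, Vec<u8>>, // path -> contents written inside the sandbox
    log: Vec<String>,                  // every move the program makes is recorded
}

impl Sandbox {
    fn new() -> Self {
        Sandbox { overlay: HashMap::new(), log: Vec::new() }
    }

    /// The suspicious program "writes a file": only the overlay changes.
    fn write_file(&mut self, path: &str, data: &[u8]) {
        self.log.push(format!("write {path} ({} bytes)", data.len()));
        self.overlay.insert(path.to_string(), data.to_vec());
    }

    /// Restoration: if the program behaved like a virus, discard everything.
    fn rollback(self) {
        println!("rolling back {} recorded action(s); real system untouched", self.log.len());
        // Dropping `self` throws away the overlay, i.e. erases the virus's traces.
    }
}

fn main() {
    let mut sandbox = Sandbox::new();
    sandbox.write_file("/etc/passwd", b"malicious payload"); // behaves like a virus
    sandbox.rollback(); // the real /etc/passwd was never modified
}
```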

The problem is that this relies on detection capabilities. If a virus disguises itself or evades security scanning in some way, it can infect users' devices and eventually invade company networks and business-critical systems.

As attackers' methods grow more varied, defenders have to write more code to build higher firewalls and run more complicated tests, which produces more records of malicious data and triggers more false alarms. Maintenance costs go up and system efficiency goes down, leaving little effort and time to counter new attacks. In a security design built on a detect-and-respond mechanism, malicious programs can keep probing until they fool the guards.

Thus a new security model came out: the TEE, or Trusted Execution Environment. Contrary to the sandbox model, trusted computing assumes that "trust" is the scarce resource. As a comprehensive innovation spanning logic verification, computing architecture, and computing modes, the TEE makes defense proactive: by fixing logical defects, it ensures that the logical combination of computing tasks stays safe from tampering and attack.

Trusted Computing

Development of Trusted Computing
In 1983, the United States Department of Defense published the Trusted Computer System Evaluation Criteria (TCSEC), the world's first such standard, which introduced the concepts of Trusted Computing and the Trusted Computing Base (TCB) and made the TCB the basis of system security.

In 1990, the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) defined trust in terms of behavioral expectations in their directory-service series of standards: if a second object acts exactly as the first object expects, the second one is considered "trusted".

In 1999, well-known IT companies such as IBM, HP, Intel, and Microsoft initiated the establishment of the Trusted Computing Platform Alliance (TCPA), which marked the entry of trusted computing into industry.

In 2003, in Advanced Trusted Environment v11.pdf, the TEE was defined as "a set of software and hardware components that provide the necessary facilities for the application", and one of two defined security levels needs to be met: the first defends against software attacks, and the second defends against both software and hardware attacks.

Now let's learn about some important concepts in trusted computing.

Roots of Trust and Chain of Trust
Trusted computing technology first sets a root of trust in a computer system, then extends trust layer by layer through testing: from the root to the hardware platform, the operating system, and applications.

Roots of Trust

A Root of Trust (RoT) is a set of functions that is always trusted by the computer's operating system (OS). The RoT serves as a separate compute engine controlling the trusted computing platform's cryptographic processor on the PC or mobile device it is embedded in. According to the classification of the security chip and the trusted software stack running on it, there are currently three main trusted computing standards: the Trusted Platform Module (TPM), the Trusted Cryptography Module (TCM), and the Trusted Platform Control Module (TPCM).

Chain of Trust

In computer security, a chain of trust is established by validating each component of hardware and software from the end entity up to the root certificate. It is intended to ensure that only trusted software and hardware can be used while still retaining flexibility.

Whether a component is "trusted" is established through two steps, detection and verification:

Detection: collecting data and status from a component;
Verification: comparing the detection results with a reference value to see whether they are consistent.

If a component passes both steps, it is marked as trusted.

There are three rules for establishing a chain of trust:

All modules or components, except the CRTM (the Core Root of Trust for Measurement: the starting point of the chain of trust and the first piece of code used for trust measurement), are considered untrusted until they have passed verification and been brought inside the trusted boundary.
Modules or components inside the trusted boundary can be used as verification agents to perform integrity verification on modules or components that have not yet completed verification.
Only modules or components within the trusted boundary have access to the relevant TPM control rights (a minimal sketch of this layer-by-layer extension follows below).
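
Here is a minimal sketch of that layer-by-layer extension, under the simplifying assumption that a "measurement" is just a hash of a component's image compared against a known-good reference value. The standard library's DefaultHasher stands in for a cryptographic hash such as SHA-256, and in a real platform the reference values would come from signed manifests or TPM Platform Configuration Registers; nothing here is a real TPM API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Detection: collect the component's image and reduce it to a measurement.
/// (A real platform would use SHA-256; DefaultHasher keeps this example std-only.)
fn measure(image: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    image.hash(&mut hasher);
    hasher.finish()
}

/// Starting from the CRTM, each already-trusted stage verifies the next one.
/// A stage enters the trusted boundary only if its measurement matches the
/// reference value; if any stage fails, the chain stops extending.
fn extend_trust(stages: &[(&str, &[u8], u64)]) -> Vec<String> {
    let mut boundary = vec!["CRTM".to_string()]; // trusted by definition
    for &(name, image, reference) in stages {
        if measure(image) != reference {
            println!("{name}: measurement mismatch, chain of trust stops here");
            break;
        }
        boundary.push(name.to_string()); // verification passed: extend the boundary
    }
    boundary
}

fn main() {
    let bootloader: &[u8] = b"bootloader v1.2";
    let kernel: &[u8] = b"kernel 5.10";
    // Reference values are recorded when the platform is provisioned.
    let stages = [
        ("bootloader", bootloader, measure(bootloader)),
        ("kernel", kernel, measure(kernel)),
    ];
    println!("trusted boundary: {:?}", extend_trust(&stages));
}
```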

TPM (Trusted Platform Module)
Generally, a trusted computing platform uses the TPM chip, the keys it holds, and the related software as the root of trust.

Trusted Platform Module (TPM, also known as ISO/IEC 11889) is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys.

The use of a TPM reduces, to some extent, the risk of important data being stolen by malware such as Trojans, but it cannot prevent runtime attacks. If a hacker breaks in while a legitimate application is running, they can read the decrypted data directly from memory.

To solve this kind of problem, we also need a way to prevent hackers from reading the memory space used by legitimate applications, which is why the Trusted Execution Environment (TEE) was proposed.

TEE (Trusted Execution Environment) & REE (Rich Execution Environment)
In 2010, Global Platform announced its first set of TEE system standards. Many current commercial and open-source products refer to Global Platform's series of TEE specifications and implement the various functional interfaces they define. Global Platform defines part of the TEE at the interface and protocol level, along with specifications for typical applications. A TEE has the following characteristics:

Cooperative software-hardware security mechanism: isolation, its essential attribute, can be implemented through software or hardware.

Shared computing: the TEE shares the device's computing power and hardware with the REE for better performance.

Openness: a TEE is needed only where an REE exists, i.e., on open platforms.

Now you might understand why TEE technology is different from the sandbox model: the former is guided by the presumption of guilt, while the latter is guided by the presumption of innocence.

Under the presumption of guilt, the TEE works alongside the REE. Generally, the TEE and the REE can be thought of as the Secure World and the Normal World. For example, a system usually runs in the Normal World, but when it needs to handle confidential requests such as face ID verification or private-key signing for payments, it has to run them in the Secure World.

TrustZone

In 2006, ARM introduced TrustZone technology in the ARMv6 architecture, dividing the CPU's working status into Normal World Status (NWS) and Secure World Status (SWS).

TrustZone-enabled chips provide hardware-level protection and security isolation of peripheral hardware resources. When the CPU is in the normal state, no application can access the secure hardware devices, nor can it access the memory, cache, and other peripherals that belong to the Secure World.

In specific application environments, users' sensitive data can be stored in the TEE, and a Trusted Application (TA) carries out the data processing with the required algorithms and logic. When users' sensitive data is needed for identity verification, the REE side defines a specific request number and obtains only the verification result from the TEE side.
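
The sketch below models that request path. It is a conceptual model only: the SecureWorld type and the CMD_VERIFY_FACE_ID command number are hypothetical, and a real implementation would go through the GlobalPlatform TEE Client API and a world switch into the trusted OS rather than a plain function call.

```rust
// Conceptual model only: a plain struct plays the role of the Secure World here.
const CMD_VERIFY_FACE_ID: u32 = 0x0001;

/// Stands in for the trusted application running in the Secure World.
/// The enrolled biometric template never leaves this struct.
struct SecureWorld {
    enrolled_template: Vec<u8>,
}

impl SecureWorld {
    /// The REE side passes only a command number and an input buffer,
    /// and gets back a verdict, never the sensitive data itself.
    fn invoke(&self, command_id: u32, input: &[u8]) -> Result<bool, &'static str> {
        match command_id {
            CMD_VERIFY_FACE_ID => Ok(input == self.enrolled_template.as_slice()),
            _ => Err("unknown command"),
        }
    }
}

fn main() {
    let tee = SecureWorld { enrolled_template: b"face-features".to_vec() };
    // Normal World side: identity verification is delegated by command number.
    let verdict = tee.invoke(CMD_VERIFY_FACE_ID, b"face-features").unwrap();
    println!("verified in the Secure World: {verdict}");
}
```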

Intel SGX

In 2013, Intel introduced a TEE-style instruction-set extension, SGX (Intel Software Guard Extensions), which aims to provide mandatory security guarantees through hardware, does not depend on the security status of firmware or software, and provides a trusted execution environment in user space.

SGX achieves isolated execution between different programs through a new set of instructions and access-control mechanisms, protecting the confidentiality and integrity of critical user code and data from malicious software.

Unlike ARM TrustZone, SGX does not try to identify and isolate all malware on the platform; instead, it encapsulates the security-sensitive operations of legitimate software in an enclave, protecting them from malware attacks. Neither privileged nor unprivileged software can access the enclave.

Once software and data are inside an enclave, even the operating system cannot affect them. The enclave's security boundary contains only the CPU and the enclave itself. This design avoids the security vulnerabilities and threats that software-based TEEs suffer from, which greatly improves system security.

Currently, in terms of commercial use, Intel SGX is rather mature, as it allows third parties to use the "remote attestation" mechanism to establish trust in an enclave for themselves.
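
As a rough illustration of what that remote party checks, here is a hedged sketch of an attestation verifier. The names (AttestationReport, trust_enclave) are hypothetical; a real verifier parses an SGX quote, validates its signature chain back to Intel's attestation service, and only then compares the enclave measurement (MRENCLAVE) with the value it expects.

```rust
/// A simplified stand-in for what a remote verifier extracts from an SGX quote.
struct AttestationReport {
    mr_enclave: [u8; 32],  // measurement of the enclave's code and initial data
    signature_valid: bool, // placeholder for the quote-signature check (omitted here)
}

/// Trust the enclave only if the quote is genuine and the enclave runs
/// exactly the code whose measurement we expect.
fn trust_enclave(report: &AttestationReport, expected_mr_enclave: &[u8; 32]) -> bool {
    report.signature_valid && &report.mr_enclave == expected_mr_enclave
}

fn main() {
    // Published measurement of the audited enclave build (illustrative value).
    let expected = [0x42u8; 32];
    let report = AttestationReport { mr_enclave: expected, signature_valid: true };
    println!("enclave trusted: {}", trust_enclave(&report, &expected));
}
```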

Shortcomings of SGX

Still, Intel SGX has its shortcomings:

It relies entirely on the Intel platform, i.e., it is "centralized", which is also the main criticism of the TEE among secure-computation approaches.
Enclave memory is small, which limits high-performance workloads and running many programs at once.
It cannot defend against side-channel attacks.
It has to interact with untrusted areas, because the enclave itself runs in user mode and cannot execute system calls on its own.

Yet indeed, computer components are designed for different purposes, just as we can't send people to live in a Chinese barbican. This is also why we found that TEE and blockchain can complement each other in the right application scenarios.

What Can a Blockchain-TEE Structure Do

Through its decentralized network and consensus mechanism, a blockchain can greatly reduce the security risks and the possibility of malicious behavior by manufacturers such as Intel, and ensure the availability of trusted chains (these chains are not blockchains). Phala Network will be compatible with multiple TEE protocols in the future. Even if Intel were under a massive attack, the blockchain system could recover data by punishing malicious nodes and switching to other TPM nodes.

The problem of the enclave interacting with untrusted areas can also be solved, because blockchain is a perfect technology for building a trusted network. Therefore, letting the TEE interact with smart contracts through the blockchain is a brilliant, complementary combination. This is also why Phala Network values cross-chain capabilities.

What TEE adds to blockchain applications: technological support for privacy protection

When it comes to privacy protection, Facebook's data-leak scandals might come to mind. Confidentiality actually covers a wider field, including business confidentiality.

There are generally three generations of privacy-protection technologies:

The 1st generation: Zcash and Monero. They achieve transaction privacy mainly through zero-knowledge proofs, ring signatures, and confidential transfers, which lets them encrypt transactions of their native tokens.
The 2nd generation: Aztec, a privacy protocol on Ethereum that uses technology similar to Zcash's to achieve transaction privacy for ERC20 tokens. As an extension of the first generation, it protects transaction privacy as well, but falls short when Turing-complete smart contracts are involved.
The 3rd generation: networks like ours. We hope to expand the scope from "privacy" to "confidentiality": not only users' privacy, but also confidential data in any smart contract. We want the "Confidential Smart Contract" to compute like any other Turing-complete smart contract without exposing its data.

Currently, confidential smart contracts can be implemented through Multi-Party Computation (MPC) or a Trusted Execution Environment (TEE).

The former is based on pure cryptographic technologies, such as homomorphic encryption and zero-knowledge proofs, without relying on any hardware. It can be applied efficiently in specific fields, such as verifiable random numbers and distributed key generation, but for general computation it causes an average performance loss of around a million times.

The latter has to rely on trusted computing hardware (mainly Intel CPUs) but can achieve very efficient general-purpose computation. Thus we chose the latter and are committed to realizing a universal, confidential, Turing-complete smart contract.
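
To show the shape of what such a contract changes, here is a hedged, self-contained sketch: the contract state lives only inside the TEE worker, and a query returns a caller's own balance and nothing else. The names (ConfidentialToken, transfer, balance_of) are hypothetical and are not Phala's actual contract API.

```rust
use std::collections::HashMap;

/// A toy confidential token contract. In a TEE deployment this state would be
/// decrypted and held only inside the enclave; the chain stores only ciphertext.
struct ConfidentialToken {
    balances: HashMap<String, u64>,
}

impl ConfidentialToken {
    fn new() -> Self {
        ConfidentialToken { balances: HashMap::new() }
    }

    /// Ordinary Turing-complete logic, executed on plaintext inside the enclave.
    fn transfer(&mut self, from: &str, to: &str, amount: u64) -> Result<(), &'static str> {
        let from_balance = self.balances.get(from).copied().unwrap_or(0);
        if from_balance < amount {
            return Err("insufficient balance");
        }
        self.balances.insert(from.to_string(), from_balance - amount);
        *self.balances.entry(to.to_string()).or_insert(0) += amount;
        Ok(())
    }

    /// Access control enforced inside the enclave: a caller can only see
    /// their own balance, so the full ledger is never exposed.
    fn balance_of(&self, caller: &str, account: &str) -> Option<u64> {
        if caller == account {
            self.balances.get(account).copied()
        } else {
            None // other accounts stay confidential
        }
    }
}

fn main() {
    let mut contract = ConfidentialToken::new();
    contract.balances.insert("alice".to_string(), 100);
    contract.transfer("alice", "bob", 30).unwrap();
    println!("alice sees her balance: {:?}", contract.balance_of("alice", "alice"));
    println!("alice queries bob's balance: {:?}", contract.balance_of("alice", "bob"));
}
```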

To conclude, if you have read this far: we are trying to make the world a little bit better by fixing privacy and confidentiality issues in a blockchain way. Our testnet is on the way, so we wrote this article to explain our vision and technical ideas. Hope it's interesting :)
