<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oblivious</title>
    <description>The latest articles on DEV Community by Oblivious (@oblivious).</description>
    <link>https://dev.to/oblivious</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4418%2F732743e4-71b9-4121-9381-6c70cb23bfad.png</url>
      <title>DEV Community: Oblivious</title>
      <link>https://dev.to/oblivious</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oblivious"/>
    <language>en</language>
    <item>
      <title>Attacks on privacy. Why do we need PETs?</title>
      <dc:creator>Jack F.</dc:creator>
      <pubDate>Tue, 10 Aug 2021 15:26:13 +0000</pubDate>
      <link>https://dev.to/oblivious/attacks-on-privacy-why-do-we-need-pets-18an</link>
      <guid>https://dev.to/oblivious/attacks-on-privacy-why-do-we-need-pets-18an</guid>
      <description>&lt;p&gt;&lt;em&gt;In this post we are going to look at some examples of reconstruction attacks i.e. how from seemingly anonymous data, one can reveal most sensitive information about individuals.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s say you are analysing data. Maybe you are running some ML prediction algorithms, training your models, calculating different statistics, and sharing your outputs. It may seem that simply removing all the personally identifiable information such as names, addresses or telephone numbers should suffice to make sure that no private information is revealed after the analysis. That might even be sufficient to be considered anonymous data according to some privacy laws. If so, then surely you don’t need to be too worried, right?&lt;/p&gt;

&lt;p&gt;Perhaps instead you are aggregating data over many individuals, so you don’t even think about privacy issues. A trivial example of how things can go wrong with aggregate statistics is revealing the average salary of, say, 100 employees and then publishing the average over 101 after a new employee has joined. Anyone with access to both aggregates can easily figure out the new employee's salary. Even though that might seem like an obvious pitfall that is easily avoided, it becomes much trickier when a range of statistics and aggregates is revealed in different contexts. Things get even more challenging when such information is combined with other data sources about the same individuals.&lt;/p&gt;
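&lt;p&gt;This differencing attack fits in a few lines of Python (a toy sketch; the salary figures below are invented):&lt;/p&gt;

```python
# Differencing attack: two published averages reveal a newcomer's salary.
salaries = [52_000.0] * 100                 # 100 employees
avg_before = sum(salaries) / len(salaries)  # published aggregate no. 1

salaries.append(80_000.0)                   # a new employee joins
avg_after = sum(salaries) / len(salaries)   # published aggregate no. 2

# Anyone who saw both aggregates can recover the newcomer's salary:
recovered = round(avg_after * 101 - avg_before * 100, 2)
print(recovered)  # 80000.0
```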

&lt;p&gt;If you don't follow a structured approach to data sharing, you've got a good chance of compromising the privacy of the data source. Many large companies and governments have made these mistakes, so let's talk about how you can avoid the same peril! &lt;/p&gt;

&lt;h2&gt;
  
  
  Few data points suffice to identify individuals
&lt;/h2&gt;

&lt;p&gt;Even if we think of ourselves as needles in a haystack of 7.7bn people, a range of studies has shown that very few data points suffice to identify an individual uniquely or with high probability. As an example, 4 spatiotemporal points taken from credit card metadata are sufficient to uniquely reidentify 90% of individuals [1].&lt;/p&gt;

&lt;p&gt;Similarly, in another study that considered mobility data taken from mobile phone devices with a time resolution of 1h and the spatial resolution determined by the distance between antennas, only 4 randomly drawn points sufficed to identify 95% of individuals (and two randomly drawn points identified over 50%) [2]. The task is even easier for an attacker who cleverly uses non-uniform sampling e.g. by exploiting the fact that calls from an office at 2 am might provide more information about an individual than calls at 3 pm, when the office is crowded. Similar attacks can be performed by using other mobility data from geotagging used by social media platforms, smartphone apps, and others.&lt;/p&gt;

&lt;p&gt;This means that even when you completely remove addresses, account numbers, and other PII, it is very easy to reidentify people from such a dataset. Almost all re-identification attacks make use of this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l4qb0sqB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kgv479147qdml6p87c8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l4qb0sqB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kgv479147qdml6p87c8j.png" alt="Taking a closer look."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, sensitive information can be compromised even if the identifiers are not unique. It is well known that 87% of Americans can be uniquely identified just from their gender, birthday, and ZIP code [3]. To prevent such attacks, a commonly used method is to group and coarsen the identifiers into so-called quasi-identifiers, reporting only age brackets, only the first three digits of ZIP codes, and so on. The coarsening is done so as to guarantee k-anonymity: for any record, there are at least k-1 other records with the same quasi-identifier values. It is a very common and natural way of trying to ensure privacy. Unfortunately, it too can often fail to protect sensitive information. A straightforward example of this is the so-called homogeneity attack.&lt;/p&gt;

&lt;p&gt;Given a dataset of medical conditions (clearly very sensitive information) whose age, ZIP code, and other identifiers have been coarsened so as to ensure k-anonymity, it may still be possible to recover the sensitive information [4]. All k individuals sharing a given set of quasi-identifiers may simply have the same medical condition. Hence, if a neighbour knows your age, ZIP code, and gender, it may well be that you fall into a group where all the other k-1 individuals have the same condition as you. The situation arises whenever the sensitive attribute is not very diverse within a group. The effect becomes even more dominant for high-dimensional data with a large number of quasi-identifiers, where even ensuring k-anonymity becomes harder [5].&lt;/p&gt;
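&lt;p&gt;The homogeneity attack is easy to demonstrate. In the sketch below (with invented records), one group is 3-anonymous yet every member shares the same condition, so the quasi-identifiers alone give the diagnosis away:&lt;/p&gt;

```python
from collections import defaultdict

# Coarsened records: (age bracket, first 3 ZIP digits, condition)
records = [
    ("30-39", "021", "heart disease"),
    ("30-39", "021", "heart disease"),
    ("30-39", "021", "heart disease"),
    ("40-49", "022", "flu"),
    ("40-49", "022", "cancer"),
    ("40-49", "022", "flu"),
]

groups = defaultdict(list)
for age, zip3, condition in records:
    groups[(age, zip3)].append(condition)

leaky = {}
for qi, conditions in groups.items():
    # k-anonymity holds (k = 3 here), but a group with only one distinct
    # condition still reveals it for everyone matching those QIs.
    if len(set(conditions)) == 1:
        leaky[qi] = conditions[0]

print(leaky)
```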

&lt;p&gt;&lt;em&gt;The lesson from this is that inference attacks are often successful even when very few and coarse-grained data points are revealed.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Linkage attacks - connecting information from different sources
&lt;/h2&gt;

&lt;p&gt;Information disclosed by one dataset might not be all the information publicly available about an individual. This may seem obvious, but it enables very non-trivial attacks: joining information from such a dataset with another dataset, or with some background information, can make inference attacks very successful. The background information need not even be sensitive. Knowing that a particular medical condition is much more prevalent in a given age group or sex can increase the probability of identifying medical conditions for individuals in our previous example. Exploiting side information about individuals can lead to spectacular attacks. Arguably, one of the most famous was performed by Latanya Sweeney in 1997. A couple of years earlier, the Massachusetts Group Insurance Commission (GIC) had shared with researchers, and sold to industry, medical data that included performed medical procedures, prescribed medications, and ethnicity, but also people's gender, date of birth, and ZIP code. Governor Bill Weld assured the public that the data had been fully anonymised. Sweeney paid $20 for the Cambridge, Massachusetts voter registration list, which also contained these three characteristics. By cross-referencing the two databases, she identified Weld's entry in the GIC data and, with it, his medical records.&lt;/p&gt;
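&lt;p&gt;At its core, Sweeney's attack was a join on the shared quasi-identifiers. A minimal sketch (all names, dates, and diagnoses below are invented):&lt;/p&gt;

```python
# Two "anonymised" releases that share gender, date of birth and ZIP code.
medical = [
    {"gender": "M", "dob": "1950-01-02", "zip": "02138", "diagnosis": "hypertension"},
    {"gender": "F", "dob": "1962-03-04", "zip": "02139", "diagnosis": "asthma"},
]
voters = [
    {"name": "J. Doe", "gender": "M", "dob": "1950-01-02", "zip": "02138"},
]

# Index one dataset by the quasi-identifiers, then probe it with the other.
index = {(v["gender"], v["dob"], v["zip"]): v["name"] for v in voters}
matches = []
for rec in medical:
    key = (rec["gender"], rec["dob"], rec["zip"])
    if key in index:
        matches.append((index[key], rec["diagnosis"]))

print(matches)
```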

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mqT65xbu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u829dyhlf06cww4bei99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mqT65xbu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u829dyhlf06cww4bei99.png" alt="Linkage attacks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another example comes from journalist Svea Eckert and data scientist Andreas Dewes. They set up a fake AI start-up, pretended to need data for training their ML models, and obtained, free of charge from a data broker, a database of browsing history for 3m German users: a long list of 9bn URLs with associated timestamps. Even though no other identifiers were available, they still managed to re-identify the browsing histories of politicians, judges, and even their own work colleagues. One way they achieved this was by noticing that a Twitter user who visits Twitter's analytics page leaves a trace of their username in the corresponding URL. Hence, by visiting the corresponding Twitter profiles, Eckert and Dewes could identify such individuals. Interestingly, they also discovered a police force’s undercover operation: the information was in Google Translate URLs, which contain the whole text one inputs to the translator.&lt;/p&gt;

&lt;p&gt;Even what might seem like fairly insensitive data can tell a lot about us. Netflix learned this the hard way when it shared a database of its users' movie ratings for the Netflix Prize competition. It stripped all the PII from the data, but as you probably know by now, it was still possible to identify some of the users. Researchers from the University of Texas did so by linking Netflix’s dataset to IMDb [6]. In this way, information about people’s political preferences and even their sexual orientation was compromised.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The main takeaway from this part is that linking information from different data sources can lead to severe privacy leakage.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Attacks on ML models
&lt;/h2&gt;

&lt;p&gt;All the examples so far concerned attacks based on some publicly released data. However, one does not need direct access to such data to learn sensitive information about individuals. Another class of examples comes from attacks on machine learning models. It has been shown that one can learn statistical properties of the training dataset simply from the parameters of a trained model. Not only that, it is also possible to perform attacks given only black-box access to a model, by using it to run predictions on input data. Researchers from Cornell Tech have shown that even models trained on the MLaaS offerings of Google and Amazon can be open to membership inference attacks [7]. In this scenario, an attacker can tell whether a given record was part of the model's training dataset.&lt;/p&gt;
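&lt;p&gt;A heavily simplified caricature of membership inference: real attacks train shadow models to calibrate the decision rule, but the core signal is often that an overfit model is markedly more confident on records it was trained on. The stand-in model, records, and threshold below are all invented for illustration:&lt;/p&gt;

```python
# Toy black-box "model" that overfits: higher confidence on training records.
training_set = {("alice", 34), ("bob", 29)}

def predict_confidence(record):
    return 0.99 if record in training_set else 0.60

def looks_like_member(record, threshold=0.90):
    conf = predict_confidence(record)
    # i.e. conf is at or above the threshold
    return conf == max(conf, threshold)

print(looks_like_member(("alice", 34)))  # True: record was in training
print(looks_like_member(("carol", 50)))  # False: never seen in training
```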

&lt;h2&gt;
  
  
  How to handle this?
&lt;/h2&gt;

&lt;p&gt;In the current data economy, a vast amount of information is shared between companies, organisations, and individuals. Banning this is probably unfeasible and counterproductive in the long term. We believe that privacy-enhancing technologies need to be employed to tackle the privacy challenges. Multi-party computation can allow data to remain encrypted even during computation. &lt;/p&gt;

&lt;p&gt;Secure enclaves can ensure that data is processed only according to a pre-agreed specification. Differential privacy can be employed in training ML models, building synthetic data, and sharing aggregates with privacy guarantees. We will be writing more about all these different PETs.&lt;/p&gt;

&lt;p&gt;However, if you have encountered any such privacy challenges and you wish to run PETs in your environment, give us a shout!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[1] De Montjoye, Yves-Alexandre, Laura Radaelli, and Vivek Kumar Singh. "Unique in the shopping mall: On the reidentifiability of credit card metadata." Science 347.6221 (2015): 536-539.&lt;/p&gt;

&lt;p&gt;[2] De Montjoye, Yves-Alexandre, et al. "Unique in the crowd: The privacy bounds of human mobility." Scientific reports 3.1 (2013): 1-5.&lt;/p&gt;

&lt;p&gt;[3] Sweeney, Latanya. "k-anonymity: A model for protecting privacy." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10.05 (2002): 557-570.&lt;/p&gt;

&lt;p&gt;[4] Machanavajjhala, Ashwin, et al. "l-diversity: Privacy beyond k-anonymity." ACM Transactions on Knowledge Discovery from Data (TKDD) 1.1 (2007): 3-es.&lt;/p&gt;

&lt;p&gt;[5] Aggarwal, Charu C. "On k-anonymity and the curse of dimensionality." VLDB. Vol. 5. 2005.&lt;/p&gt;

&lt;p&gt;[6] Narayanan, Arvind, and Vitaly Shmatikov. "Robust de-anonymization of large sparse datasets." 2008 IEEE Symposium on Security and Privacy (sp 2008). IEEE, 2008.&lt;/p&gt;

&lt;p&gt;[7] Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>machinelearning</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Considerations in Building Enclaves for Multiparty Computation (Part 2)</title>
      <dc:creator>Jack F.</dc:creator>
      <pubDate>Mon, 09 Aug 2021 09:21:31 +0000</pubDate>
      <link>https://dev.to/oblivious/considerations-in-building-enclaves-for-multiparty-computation-part-2-2pcn</link>
      <guid>https://dev.to/oblivious/considerations-in-building-enclaves-for-multiparty-computation-part-2-2pcn</guid>
      <description>&lt;h2&gt;
  
  
  Getting Your Code On
&lt;/h2&gt;

&lt;p&gt;Now that you’ve pinned down the ideal functionality of the enclave, and assuming you are comfortable coding up a server to handle requests from each party, we can talk about some of the aspects you probably want to keep in mind.&lt;/p&gt;

&lt;p&gt;In AWS, you can treat a Nitro Enclave as a self-contained VM running an Enclave Image File built from a Docker image. Communication in and out of the enclave is via virtual sockets (vsock) and only to the parent instance (that is, the instance that created the enclave). The parent instance acts as an intermediary between the enclave and the outside world, with the sole exception of the KMS proxy, which speaks directly with AWS Key Management Service.&lt;/p&gt;

&lt;p&gt;To start, the reason for using a trusted execution environment is that the parties don’t trust each other, so we have to assume the users of the enclave are adversarial by nature. This places an onus on you, the developer, to build a secure application, which is often easier said than done. A good starting point is to create strict input and output validators. If the IO is forced to conform to a predefined JSON Schema or OpenAPI definition, you can at least validate it while checking for malicious characters and so on.&lt;/p&gt;
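&lt;p&gt;A strict validator might look something like the sketch below. It is hand-rolled with the standard library to stay self-contained; in practice you might prefer a proper JSON Schema library. The field names are hypothetical:&lt;/p&gt;

```python
import json

# Reject anything that does not conform exactly to the agreed contract.
ALLOWED_FIELDS = {"party_id": str, "values": list}

def validate_request(raw_bytes):
    payload = json.loads(raw_bytes)
    if set(payload) != set(ALLOWED_FIELDS):
        raise ValueError("unexpected or missing fields")
    for field, expected_type in ALLOWED_FIELDS.items():
        if not isinstance(payload[field], expected_type):
            raise ValueError(f"bad type for {field}")
    if not all(isinstance(v, (int, float)) for v in payload["values"]):
        raise ValueError("values must be numeric")
    return payload

ok = validate_request(b'{"party_id": "alice", "values": [1, 2.5]}')
print(ok["party_id"])
```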

&lt;p&gt;You should be particularly careful when handling bytes as input, especially in Python. The YouTuber PwnFunction has a nice introductory tutorial on a known exploit in Python’s pickle library.&lt;br&gt;
There is, of course, an ever-present tension between usability and security. Some would strongly argue that enclave source code should be written in C or Rust, while others feel comfortable using languages such as Python in order to take advantage of a particular tool or framework within the enclave itself. Irrespective of your decision, the code can be made more secure by rigorous testing, static analysis with tools such as SonarCloud, and internal firewalls that lock down any unexpected communication channels.&lt;/p&gt;

&lt;p&gt;Once you’ve hardened the IO of the enclave, you may also want to consider your authentication model within the enclave. This can be as simple as passing the key management services the enclave may speak to as a build argument (discussed in the next section), or using pre-shared keys with TLS-PSK, for example. OAuth-based approaches raise additional considerations if the enclave cannot directly communicate with a trusted key authority for public-key cryptography. That’s not to say it’s impossible, but one should certainly think through the challenges and potential risks involved when a parent server can act as a man-in-the-middle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encoding Guarantees: Build Arguments and Environment Variables
&lt;/h2&gt;

&lt;p&gt;Previously we established that the enclave image (converted from the Docker image) running inside the Nitro Enclave is what is attested when requesting key access from one or more of the parties. Well, we can use this fact to develop reusable enclave images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2XRlt59R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9kkfmfeymrxaynwn911.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2XRlt59R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9kkfmfeymrxaynwn911.png" alt="Enclave Build Pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, assume that we have two parties, Alice and Bob. They would like to be authorised based on some hash of their respective KMS ARNs and would also like to limit the number of function calls made by Alice to 3 and Bob to 2. Now imagine another scenario where Alice and Charlie would like to run the same interaction, but this time Alice can only make 2 function calls and Charlie 4.&lt;/p&gt;

&lt;p&gt;In such a scenario you do not want to hard-code this information each time. Instead, you can leverage Docker build arguments and set them as environment variables within the enclave. This changes the Docker image and hence the attestation hashes used to verify the enclave to the key management services. It can also be highly efficient: depending on how you structure your Docker image, build caching can make rebuilding with new arguments a painless process.&lt;/p&gt;
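&lt;p&gt;Inside the enclave, this can be as simple as reading the baked-in environment variables at start-up. The variable names below are hypothetical, assumed to be set in the Dockerfile via ARG and ENV:&lt;/p&gt;

```python
import os

# Per-deployment call budgets, e.g. Alice: 3, Bob: 2, supplied at build
# time as Docker build arguments and surfaced as environment variables.
MAX_CALLS = {
    "alice": int(os.environ.get("MAX_CALLS_ALICE", "3")),
    "bob": int(os.environ.get("MAX_CALLS_BOB", "2")),
}

calls_made = {party: 0 for party in MAX_CALLS}

def authorise_call(party):
    if calls_made[party] == MAX_CALLS[party]:
        raise PermissionError(f"{party} has exhausted the agreed call budget")
    calls_made[party] += 1
    return MAX_CALLS[party] - calls_made[party]  # calls remaining
```

Because the limits enter via build arguments, a deployment for Alice and Charlie with different budgets produces a different image hash, and hence a different attestation, without touching the application code.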

&lt;h2&gt;
  
  
  Persistent Storage, PCI Devices &amp;amp; Resource Requirements
&lt;/h2&gt;

&lt;p&gt;Nitro Enclaves endeavor to ensure security by locking down a virtual machine to a very limited set of functionalities. An enclave operates purely in RAM with dedicated CPUs, so many capabilities you might expect may not be available.&lt;/p&gt;

&lt;p&gt;For example, one would typically expect persistent storage on a volume attached to an EC2 instance. However, this is of course outside of the enclave, so you have to encrypt data using a KMS before releasing it to the parent instance. In the context of multiparty computation, this poses an interesting question: whose keys should you use? To answer this, consider who manages the parent instance and how many of the parties would need to collaborate to decrypt the data if the encrypted payload were ever leaked. One approach is to encrypt the payload with each party's KMS in turn and reverse the process if it is returned to the enclave at a later point. There is nothing obviously wrong with this approach, but as we know, every encryption and decryption adds to the overall latency of the enclave.&lt;/p&gt;
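&lt;p&gt;The layered approach can be sketched as follows. The toy XOR keystream cipher below merely stands in for calls to each party's KMS and is not secure; it only illustrates the wrapping and unwrapping order:&lt;/p&gt;

```python
import hashlib

def keystream(key, n):
    # Derive n pseudo-random bytes from the key (32 bytes per SHA-256 block).
    n_blocks = -(-n // 32)  # ceiling division
    return b"".join(
        hashlib.sha256(key + bytes([i])).digest() for i in range(n_blocks)
    )[:n]

def toy_encrypt(key, data):  # XOR cipher: encrypt and decrypt are identical
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

payload = b"intermediate enclave state"
alice_key, bob_key = b"alice-kms-key", b"bob-kms-key"

# Wrap with each party's key in turn before releasing to the parent...
wrapped = toy_encrypt(bob_key, toy_encrypt(alice_key, payload))

# ...and reverse the process when the payload returns to the enclave.
restored = toy_encrypt(alice_key, toy_encrypt(bob_key, wrapped))
print(restored == payload)
```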

&lt;p&gt;When it comes to data, we’ve found it best to treat the enclave as transactional and to save and reload persistent state externally. That is not to say you shouldn’t store anything locally, but it is best to treat local storage as a cache. That way, if the enclave were to halt and be restarted, you have the safety net of being able to reload its state.&lt;br&gt;
A second challenge is that no PCI devices are available to the enclave, so if you are hoping to crunch some data on a GPU or equivalent, you may want to think again. Your only obvious option in such a scenario is to pay for an instance with more powerful CPUs and/or to allocate more of them to the enclave to facilitate threading and multiprocessing.&lt;/p&gt;

&lt;p&gt;Finally, one must remember that the entire enclave lives within the resources permitted at launch time. The enclave must have enough RAM for the enclave image, all memory required at runtime, and the in-memory file system, and so forth. This is worth keeping in mind as you develop your enclave: a resource-efficient design can save you significantly on your monthly AWS bills.&lt;/p&gt;

&lt;h2&gt;
  
  
  Timing Attacks
&lt;/h2&gt;

&lt;p&gt;Importantly, enclaves guarantee that the code you agreed on is what gets run, not that the code is safe to run. This is a big difference, and the onus is on the developer to make sure that side channels, such as execution time, do not leak sensitive information in ways unforeseen when signing off on the ideal functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gk3doYdD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5o3ayolyqhzhvwlikxm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gk3doYdD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5o3ayolyqhzhvwlikxm4.png" alt="Timing Attacs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s look at how execution timing may inadvertently change the probability of guessing a party's secret input. Suppose Alice inputs a decision tree that Bob wishes to use to classify some data. If the decision tree is not balanced, i.e. if a different number of comparisons is required depending on the path taken through its branches, then simply timing the execution may reveal what the output of the classification was.&lt;/p&gt;
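&lt;p&gt;A deterministic sketch of the leak: instead of wall-clock timing we count comparisons, which is what the timing ultimately exposes. The tiny tree below is invented:&lt;/p&gt;

```python
# An unbalanced "decision tree": one path answers after a single
# comparison, another only after three, so cost correlates with the label.
def classify(x, counter):
    counter["comparisons"] += 1
    if x == 0:
        return "class A"          # shallow branch: cheap, hence fast
    counter["comparisons"] += 1
    if x == 1:
        return "class B"
    counter["comparisons"] += 1
    return "class C"              # deep branch: three comparisons

for x in (0, 2):
    counter = {"comparisons": 0}
    label = classify(x, counter)
    print(label, "after", counter["comparisons"], "comparisons")
```

A fixed-time variant would evaluate every branch unconditionally and select the result at the end, so that every input costs the same.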

&lt;p&gt;While not always trivial to achieve, you should endeavor to create fixed-time programs for enclaves if the timing is likely to reveal superfluous information to one or more parties involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;While not delving into actual code, we’ve tried to outline some pointers towards building your first few enclaves. Even the pros can make a mistake with a poorly defined ideal functionality or an insecure implementation. Take your time, give it a shot, and be comfortable making some mistakes initially. Enclave technology is still very new, and there is great reward in being an early pioneer in developing and leveraging enclave applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Oblivious?‌
&lt;/h2&gt;

&lt;p&gt;At Oblivious, we’ve built the first full-service enclave management system for multiparty applications. It’s called Ignite and it allows data scientists and machine learning practitioners to take advantage of prebuilt enclaves for data exploration, analysis, training, and inference. If you are interested in the technology, reach out to us to get started today!&lt;/p&gt;

</description>
      <category>security</category>
      <category>datascience</category>
      <category>privacy</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Considerations in Building Enclaves for Multiparty Computation (Part 1)</title>
      <dc:creator>Jack F.</dc:creator>
      <pubDate>Wed, 21 Jul 2021 11:16:59 +0000</pubDate>
      <link>https://dev.to/oblivious/considerations-in-building-enclaves-for-multiparty-computation-part-1-nc0</link>
      <guid>https://dev.to/oblivious/considerations-in-building-enclaves-for-multiparty-computation-part-1-nc0</guid>
      <description>&lt;p&gt;&lt;strong&gt;This is a short overview of some of the qualitative considerations you may want to take into account when designing and building multiparty computation protocols on secure enclaves.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Input privacy, or multiparty computation, is a general term for software that combines sensitive inputs from multiple sources to produce an output without revealing any party's sensitive data to the others. As an example, imagine two companies, Bank A and Insurance Company B, who wish to collaborate on a new product for their customers. An obvious first question is: who exactly are our shared customers, in terms of their interests and demographics? Needless to say, neither the bank nor the insurance company would be willing to hand over its customer list to the other, so the two options are to find a trusted third party who can broker the transaction, or to perform some form of multiparty computation.&lt;/p&gt;

&lt;p&gt;Traditionally the term multiparty computation referred specifically to purely cryptographic protocols. In particular, homomorphic encryption, which has received a lot of attention from cryptographers since Craig Gentry’s 2009 thesis on fully homomorphic encryption, is often used as a primitive in many multiparty computation protocols. Nevertheless, purely cryptographic protocols have their pros and cons. They are ultimately the most secure approach if designed and implemented correctly. However, there are currently no normative standards available (only informative ISOs and guidelines by community groups), and to the best of our knowledge at the time of writing, only one company has managed to obtain FIPS certification, a slow and expensive process. We have no doubt the users of this technology will ultimately have to foot the bill for this process, which may be prohibitive unless the technology becomes ubiquitous.&lt;/p&gt;

&lt;p&gt;Another approach to multiparty computation is to use a trusted execution environment, or more specifically a secure enclave. There are broadly two types of secure enclaves: fully hardware-based enclaves, such as Intel's SGX chips, and software-defined enclaves, such as AWS Nitro Enclaves. The former bases its security model on physical protections, while the latter leverages the same technology used for multi-tenancy cloud environments (for example, virtual machines running on EC2).&lt;/p&gt;

&lt;p&gt;Nevertheless, from the developer's perspective these two technologies provide broadly the same functionality. They run your code in an isolated environment with very limited IO. Importantly, they attest to the code running inside the enclave via a cryptographic hash of the program. This attestation, in turn, is used to “prove” to a key management service that the enclave is safe to share decryption keys with. This means two or more parties can encrypt their data separately and send it into an enclave. Only inside the enclave is the data decrypted, where a pre-agreed program can perform some computation on it. Equally, data can be encrypted before being sent back out of the enclave.&lt;/p&gt;

&lt;h2&gt;
  
  
  Planning the Enclave's Functionality: The Ideal Functionality
&lt;/h2&gt;

&lt;p&gt;Many of us are familiar with writing code. Whether your language of choice is C, Python, Rust, or PHP (to name a few) the paradigm itself is pretty much the same at a high level. You declare variables, functions, and maybe classes and describe how they interact.&lt;/p&gt;

&lt;p&gt;Unsurprisingly, building an enclave application is simply writing code. However, it differs in how the code is perceived from the different parties' viewpoints. Cryptographers use the term ideal functionality to describe this: it boils down to how, in an ideal world, the system would work.&lt;/p&gt;

&lt;p&gt;We can break this down further and ask ourselves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Who are the parties involved in the computation?&lt;/li&gt;
&lt;li&gt;What inputs do each of them provide and what outputs do they receive?&lt;/li&gt;
&lt;li&gt;What order do inputs and outputs get sent to the trusted execution environment?&lt;/li&gt;
&lt;li&gt;What calculation does the trusted execution environment perform?&lt;/li&gt;
&lt;li&gt;What code, parameters, or arguments does the trusted execution environment promise to all parties?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first important step in building any enclave application is to make sure you thoroughly understand the above questions and their answers, and ultimately that the ideal functionality is fit for purpose. When ideal functionalities are implemented in practice, one needs to be extremely pedantic about exactly what each party learns about the others' secret inputs and outputs. This matters even more when sensitive information is used more than once, compounding the privacy loss. Theoretical frameworks such as UC and AC Composability must be leveraged to keep track of any leakage of sensitive information.&lt;/p&gt;

&lt;p&gt;Let’s take a very simple example. Assume we have three parties, namely Alice, Bob, and Charlie, with secret inputs A, B, and C respectively. Further, let us assume these are all integer values between 0 and 10, and that a priori the best guess each could make about the others' values is purely random (a uniform distribution over 0-10). Let’s assume our enclave returns A+B+C to Alice, while Bob and Charlie receive no output.&lt;/p&gt;

&lt;p&gt;What did Alice learn? Well, she learned only a single number, the sum of all three sensitive inputs. So she doesn’t know exactly what Bob or Charlie’s values are. But that doesn’t mean she learned nothing about their inputs. If Alice’s value was 5 and the total she received was 12, then it has actually changed her guessing probability over Bob and Charlie’s values (neither can be over 7 in this example).&lt;/p&gt;
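&lt;p&gt;We can make Alice's gain explicit by enumerating the input pairs consistent with her view:&lt;/p&gt;

```python
# Alice knows her own input A = 5 and the returned total A + B + C = 12,
# so B + C must equal 7. Enumerate the still-possible (B, C) pairs.
A, total = 5, 12
consistent = [
    (b, c) for b in range(11) for c in range(11) if b + c == total - A
]

print(len(consistent), "of", 11 * 11, "pairs remain")
print("largest value still possible for B:", max(b for b, _ in consistent))
```

Before the computation, all 121 pairs were equally likely to Alice; afterwards only 8 remain, and values of B or C above 7 are ruled out entirely.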

&lt;p&gt;While in this trivial example Alice’s guessing probability over Bob and Charlie’s inputs was simply truncated to a smaller set of possible values, in real-world applications the output typically reshapes her entire distribution of uncertainty. These information leaks can compound in unintuitive ways, and Alice’s best guess at Bob and Charlie’s inputs can quickly become very accurate.&lt;/p&gt;

&lt;p&gt;Ultimately all parties in a multiparty computation have to accept the security definition involved and this will very much depend on their circumstances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coming Up in Part 2:
&lt;/h2&gt;

&lt;p&gt;In the next post, we will discuss security considerations in implementations on AWS Nitro.&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>security</category>
      <category>docker</category>
      <category>datascience</category>
    </item>
    <item>
      <title>So what *is* privacy? (In the context of privacy tech)</title>
      <dc:creator>Jack F.</dc:creator>
      <pubDate>Tue, 13 Jul 2021 13:41:11 +0000</pubDate>
      <link>https://dev.to/oblivious/so-what-is-privacy-in-the-context-of-privacy-tech-e4g</link>
      <guid>https://dev.to/oblivious/so-what-is-privacy-in-the-context-of-privacy-tech-e4g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Privacy, as a concept, often lacks a clear definition. However, privacy-enhancing technologies can be generally categorised into two broad desiderata. The purpose of this post is to lay these goals out clearly in accessible terms.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When someone talks about privacy, what on earth do they mean? If you consult a dictionary, you'll likely get a definition based on not being observed by others and being free from public or third-party attention. This could lead to many definitions in the context of technology. The term can range from preventing browser tracking through cookies, to limiting your digital footprint from being shared by data holders. At Oblivious, we focus on the latter: allowing organisations that collect information to use and manipulate it in a safe and secure manner, such that you (a data subject) can rest assured that no superfluous data sharing is performed.&lt;/p&gt;

&lt;p&gt;Indeed, the definition of privacy can cause a lot of confusion. Lawyers, politicians, security experts, and technologists all talk about privacy but often mean quite different things. If you are reading this, you are probably aware of privacy-enhancing technologies (PETs): technological ways of dealing with privacy problems that, frankly speaking, often arise from the exploitation of other data technologies and that no legal framework can sufficiently deal with. Federated learning, homomorphic encryption, differential privacy, and secure enclaves are all examples of PETs, which come in handy when you want to ensure privacy.&lt;/p&gt;

&lt;p&gt;In short, the two major groups of privacy technology are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Input privacy allows multiple parties to combine their sensitive inputs for some pre-agreed purpose without revealing those inputs to one another.&lt;/li&gt;
&lt;li&gt;Output privacy prevents the inputs of a function from being reverse-engineered from its outputs.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Input Privacy&lt;/h2&gt;

&lt;p&gt;Let’s start with the first scenario, wherein two millionaires want to determine who is richer without revealing their wealth to each other. This is a classic 40-year-old problem in computer science called Yao’s millionaires’ problem. It forms the basis of so-called multi-party computation, or input privacy: situations in which two or more parties wish to evaluate a joint function that depends on everyone's sensitive inputs, but do not want to reveal those inputs to each other. &lt;/p&gt;

&lt;p&gt;Here, a range of solutions may be employed. All the parties can simply give their inputs to a trusted friend, lawyer or consulting company, as often happens in real life. If they want to employ cryptography, they have even more options: they can use secure multi-party computation (SMPC) protocols, which evaluate a function directly on encrypted data. SMPC is based on a range of cryptographic primitives that are still heavily researched, from garbled circuits to homomorphic encryption. A caveat here is that these approaches severely slow down the computation, and, as always with new cryptographic protocols, one has to be very careful about the threat models employed and how different subprotocols are combined. A mistake at this step can be as bad as not encrypting the data in the first place!&lt;/p&gt;
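&lt;p&gt;To get a feel for the flavour of SMPC, here is a toy sketch of additive secret sharing, one of the primitives such protocols are built from. Everything here (the modulus, the party count, the helper names) is illustrative, and a true comparison as in the millionaires' problem needs far more machinery, such as garbled circuits; a joint sum already shows how inputs can stay hidden, though:&lt;/p&gt;

```python
import secrets

Q = 2 ** 61 - 1  # large prime modulus; an illustrative choice

def share(value, n_parties=3):
    """Split `value` into additive shares that sum to it modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

alice, bob = 4_000_000, 7_500_000  # the millionaires' (toy) inputs

a_shares, b_shares = share(alice), share(bob)

# Each party locally adds the one share of Alice's and one share of Bob's
# input that it holds; any single share is uniformly random and reveals
# nothing about the underlying value.
summed = [(a + b) % Q for a, b in zip(a_shares, b_shares)]

print(reconstruct(summed))  # 11500000 -- only the joint sum is revealed
```

Note that the intermediate values each party sees are indistinguishable from random noise; only recombining all the summed shares reveals the pre-agreed output.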

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9pr3eosihh5hyokvzpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9pr3eosihh5hyokvzpm.png" alt="Multiple inputs coming in and out of a function from different parties."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another option is hardware-based approaches called trusted execution environments, or secure enclaves, whereby the parties send their encrypted data to a chip or a virtual machine (VM) with additional security layers. The chip or the cloud provider hosting the VM cryptographically hashes and signs the software that combines the data, attesting to the data providers that it is safe for their data to be decrypted. Major chip and cloud providers have moved in this direction in recent years under the umbrella of confidential computing. As an example, AWS has recently launched its Nitro Enclaves, and you can read more about this in our previous blog post.&lt;/p&gt;

&lt;p&gt;All three options have their pros and cons: you are trusting either humans, cryptography, or the chip/cloud provider's cryptographic attestation, and consequently they are bottlenecked by human (bureaucracy and processing time), cryptographic (often large packet sizes with many rounds of communication) or hardware-based (RAM) limitations. Nevertheless, all three tackle the challenge of data collaboration.&lt;/p&gt;

&lt;h2&gt;Output Privacy&lt;/h2&gt;

&lt;p&gt;Let’s say we have chosen our favourite approach to input privacy and each party is happy that nobody else can see their sensitive inputs. They might even use it multiple times with slightly different inputs and parties. Can they safely announce the end of their “privacy problem”? Well, not really, as we have not yet looked carefully at the output of the function!&lt;/p&gt;

&lt;p&gt;If one or more parties receive the output, it surely contains some information about the other parties' inputs. It may well be that by running the computation multiple times with different inputs, the others can work out our inputs from the outputs. To prevent this, output privacy techniques can be employed.&lt;br&gt;
Output privacy challenges are very well known to statistics bureaus. Wherever you live, it is quite likely that within the last 10 years you have taken part in a census. When the census data is aggregated and shared, statistics bureaus employ statistical disclosure control techniques to ensure that no individual or household can be identified from the published data.&lt;br&gt;
How do you do that? One technique that helps is so-called k-anonymity. It is very intuitive, and you have probably used or thought about it already without being aware of it. When releasing data, you group people together and publish data for each group. For example, you group people by age bracket, district, etc., and ensure that the smallest identifiable group contains at least k people.&lt;/p&gt;
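&lt;p&gt;As a concrete (and purely illustrative) sketch, checking k-anonymity for a release reduces to counting how many records fall into each combination of quasi-identifier values. The records and helper below are ours, not from any bureau's toolkit:&lt;/p&gt;

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values
    covers at least k records in the release."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values()) >= k

records = [
    {"age_bracket": "30-39", "district": "North", "salary": 52_000},
    {"age_bracket": "30-39", "district": "North", "salary": 48_000},
    {"age_bracket": "30-39", "district": "North", "salary": 61_000},
    {"age_bracket": "40-49", "district": "South", "salary": 75_000},
]

# The lone "40-49 / South" record makes its owner identifiable:
print(is_k_anonymous(records, ["age_bracket", "district"], k=3))  # False
```

To repair a release like this, bureaus typically coarsen the quasi-identifiers (wider age brackets, larger regions) or suppress the offending rows until the check passes.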

&lt;p&gt;Another option, very often used in data science, is synthetic data. Large corporates that work with external parties such as data science consultants do not usually give away their proprietary data during the pilot phases of joint projects or for testing purposes; instead, they often provide fake data that resembles and shares the statistical properties of the underlying data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxsatqbzo73y4dhj9a86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxsatqbzo73y4dhj9a86.png" alt="Outputs of a function being perturbed by noise to prevent reverse-engineering."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The US Census Bureau has decided to use another technique, called differential privacy, for its 2020 census. Differential privacy is gaining usage and popularity due to its mathematical guarantees and widespread applicability. It works by adding appropriately calibrated noise to the output of a function. The challenge is to add just enough noise that the output still provides useful information but prevents anyone who has access to it from reverse-engineering the original data, in particular information about individuals. The rule of thumb is that the larger the dataset, the less noise needs to be added to an aggregated output to ensure privacy. Hence, the large-scale figures published by the census bureau are essentially equal to the true underlying values they would normally publish. At the township or district level, however, the noise kicks in and the published values differ slightly.&lt;/p&gt;
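&lt;p&gt;The core of the Laplace mechanism, the textbook differentially private primitive, fits in a few lines. The sketch below uses only the standard library (a Laplace sample is the difference of two exponential samples), and the parameter names are the conventional ones rather than any particular bureau's implementation:&lt;/p&gt;

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return `true_value` plus Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy and more noise. The noise scale
    does not grow with the dataset, which is why large aggregates stay
    accurate while small ones get noticeably perturbed.
    """
    scale = sensitivity / epsilon
    # The difference of two Exp(1/scale) samples is Laplace(0, scale).
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_value + noise

national_count = 1_000_000  # noise of scale 10 is negligible here
township_count = 50         # the same noise visibly perturbs this value
print(laplace_mechanism(national_count, sensitivity=1, epsilon=0.1))
print(laplace_mechanism(township_count, sensitivity=1, epsilon=0.1))
```

A counting query changes by at most 1 when one person is added or removed, so its sensitivity is 1; both calls above add noise of scale 10, which is a rounding error at the national level but swamps a township of 50.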

&lt;h2&gt;Comparing and Contrasting&lt;/h2&gt;

&lt;p&gt;Reading the above, you might be thinking that these sound very different and, well, you are kind of right! Input privacy really takes into account each known party and their interactions in a calculation, preventing each party from learning anything they shouldn't about the other parties' inputs. Output privacy does the same for the party who receives the output of a calculation, typically limiting their ability to learn about individuals rather than aggregates.&lt;/p&gt;

&lt;h2&gt;Putting everything together&lt;/h2&gt;

&lt;p&gt;Now that we are pros when speaking about input privacy and output privacy techniques, it becomes natural to combine them to ensure privacy in a larger set of use cases. We can evaluate join queries over data coming from multiple sources, both without seeing the data in plaintext and with guarantees about the output privacy. Such end-to-end privacy systems are something that we at Oblivious are very much focused on. If you want to play with how secure enclaves can be used in conjunction with differentially private output guarantees, give us a shout!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frj8f9xyeadt9xyxwmgzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frj8f9xyeadt9xyxwmgzb.png" alt="Placing systems with differential privacy inside trusted execution environments."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Bonus: Buzzword Mapper&lt;/h2&gt;

&lt;p&gt;As a bonus for making it to the end of the article, we thought we'd map some privacy-tech buzzwords to the type of privacy they enforce. Hopefully, at the next (socially distanced) cocktail party you go to when the cryptographer starts spouting on about one of these you'll at least have a bearing on what they are trying to achieve:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb48im735fb2fry71oih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb48im735fb2fry71oih.png" alt="Buzzwords, what they endeavour to do and how."&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>security</category>
      <category>datascience</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>So what exactly are AWS Nitro Enclaves?</title>
      <dc:creator>Jack F.</dc:creator>
      <pubDate>Sat, 10 Jul 2021 04:08:31 +0000</pubDate>
      <link>https://dev.to/oblivious/so-what-exactly-are-aws-nitro-enclaves-11jf</link>
      <guid>https://dev.to/oblivious/so-what-exactly-are-aws-nitro-enclaves-11jf</guid>
      <description>&lt;p&gt;&lt;em&gt;Secure enclaves and trusted execution environments are becoming ever more popular. AWS recently released their AWS Nitro Enclaves. But what are they and do I really need them?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Nitro Enclaves are Amazon's approach to creating trusted execution environments (TEEs), which are intended to support running functions on sensitive data. Amazon by no means invented the concept, and TEEs, or secure enclaves, have been growing in popularity over the past few years. Hardware players like Intel and AMD have created physical chips which support TEEs, while cloud platforms like Google and AWS have developed theirs based on virtual machines. The latter is our focus today.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Assumptions: we're going to assume the reader is familiar with AWS EC2s and Docker.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Nitro Enclaves are Trusted Execution Environments&lt;/h2&gt;

&lt;p&gt;OK, but how exactly do Nitro Enclaves give trust to their users? Let's start by explaining what exactly they do and then we can dig into how they are useful.&lt;/p&gt;

&lt;p&gt;If you are familiar with EC2, you are likely familiar with the idea of running a virtual machine. When you spin up an EC2 instance, from your perspective Amazon gives you a server that you can access, configure and run programs on via ssh. &lt;/p&gt;

&lt;p&gt;However, they don't actually give you a full physical computer (unless you are using a "bare metal" instance); they give you a virtual machine (VM) running on a server alongside other VMs. In essence, they carve out a number of CPUs and some RAM for your virtual machine to use. Their hypervisor runs under the hood and manages the VMs running on each server, providing security and access to the network and to PCI devices like GPUs, storage volumes, etc.&lt;/p&gt;

&lt;p&gt;Say you are running an EC2 instance with 4 cores (CPUs) and 8 GB of RAM; Nitro Enclaves allow you to give 2 cores and 4 GB of RAM, for example, back to Amazon. More specifically, you tell AWS to take these resources and run a docker container with them. The docker container can run anything you like, but when you hand it over to Nitro you lose all access to it other than a single socket connection using virtual sockets (vsock). You can't see any internal console messages, logs, anything; only the input and output of the socket. Further, only the parent instance (i.e. the EC2 instance that created the enclave) can communicate with the docker container running inside the enclave.&lt;/p&gt;
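&lt;p&gt;Because that single socket connection is the only channel in or out, applications typically define their own message framing on top of it. A minimal length-prefixed JSON framing might look like the following sketch; the helper names are ours, not part of any AWS SDK, and the commented-out lines show how a parent instance would dial the enclave (the CID is assigned when the enclave launches):&lt;/p&gt;

```python
import json
import socket
import struct

def send_msg(sock, obj):
    """Length-prefix a JSON payload so the receiver knows where it ends."""
    payload = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    """Read one length-prefixed JSON payload from the socket."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length))

def _recv_exact(sock, n):
    # recv() may return fewer bytes than asked for, so loop until done.
    buf = b""
    while n > len(buf):
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

# On the parent instance, one would connect along these lines:
#   sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
#   sock.connect((enclave_cid, port))
```

The framing itself is transport-agnostic, which makes it easy to test over an ordinary local socket pair before pointing it at a vsock connection.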

&lt;p&gt;So basically this is just a worse EC2 instance inside an instance? Not quite. The enclave itself has two superpowers that make it exceptionally useful:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The enclave can speak directly to &lt;a href="https://aws.amazon.com/kms/"&gt;Amazon's KMS&lt;/a&gt; (key management service) over TLS. So if you encrypt data of any kind and the code running inside the enclave needs to decrypt, then it can do so without talking explicitly via the parent instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The enclave creates a &lt;a href="https://en.wikipedia.org/wiki/Hash_function"&gt;hash&lt;/a&gt; of the docker container (called an attestation) inside the enclave when it communicates to the KMS. This allows you to create access rules within the KMS so only enclaves with a particular hash (ie a specific pre-agreed docker container, running pre-agreed code) get to decrypt data. This can actually be used to talk to custom KMS or equivalent too, proving what's running in the enclave.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
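&lt;p&gt;Superpower number two is what you act on in the KMS key policy. As a sketch, a policy statement restricting decryption to one attested image could be assembled like this; the account ID, role name and hash are placeholders, and &lt;code&gt;kms:RecipientAttestation:ImageSha384&lt;/code&gt; is the condition key AWS documents for Nitro Enclaves attestation:&lt;/p&gt;

```python
import json

# Placeholders -- substitute your own account, role, and the PCR0 hash
# printed when you build the enclave image file (EIF).
IMAGE_SHA384 = "replace-with-pcr0-hash-of-approved-image"

decrypt_statement = {
    "Sid": "DecryptOnlyFromAttestedEnclave",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/parent-instance-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            "kms:RecipientAttestation:ImageSha384": IMAGE_SHA384
        }
    },
}

print(json.dumps(decrypt_statement, indent=2))
```

With a statement like this attached to the key, a decrypt request whose attestation document does not carry the pre-agreed hash is simply refused, which is exactly the access rule described above.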

&lt;p&gt;Admittedly neither of these superpowers sounds that impressive at first glance. But give it a minute or two and you begin to see the power it creates.&lt;/p&gt;

&lt;p&gt;Traditionally, when we use a KMS we create rules about who can encrypt and decrypt data based on IAM roles in Amazon. Basically, you can say Alice is allowed to encrypt and decrypt data with some key, or the EC2 instance with ID XYZ can use the key, and so forth. The problem with this is that you are trusting Alice or the specified EC2 instance to only use the data for a particular purpose. You don't have any guarantees they will actually do that, though. Alice may be malicious, and the EC2 instance you chose may have been corrupted.&lt;/p&gt;

&lt;p&gt;You also don't really have any reasonable verifiable log of what the data was used for. Perhaps the EC2 is running some code from GitHub, but which version was it running on June 7th, 2020? Not always an easy question to answer. You may have tried to solve this with logs throughout your CI/CD pipeline but you don't actually have any guarantees.&lt;/p&gt;

&lt;p&gt;With Nitro Enclaves you now do, because only the exact version of the code (and every other minute detail of how it runs in its docker container) will produce the attestation (hash) required to decrypt the data using the KMS.&lt;/p&gt;

&lt;h2&gt;So what does this empower me to do?&lt;/h2&gt;

&lt;p&gt;There are two main categories of benefits from Nitro Enclaves: verified assurance of how data is used, and multiparty computation. Let's discuss each in a little more detail:&lt;/p&gt;

&lt;h3&gt;Verified Usage of Data&lt;/h3&gt;

&lt;p&gt;We've kind of alluded to this already, but suppose you have some sensitive data that you need to keep safe and you want to ensure that it is only used for pre-agreed purposes: then Nitro is your friend. You can keep your data encrypted at all times and only allow it to be decrypted within enclaves that are running a particular pre-approved program. This really helps organisations meet internal standards aligned with GDPR's data minimization principle.&lt;/p&gt;

&lt;h3&gt;Multiparty Computation&lt;/h3&gt;

&lt;p&gt;This is one of our big focuses at Oblivious. Multiparty computation is any computation that requires input from multiple parties who are unwilling to share those inputs in plain text (i.e. they won't let the other parties see their data or software). There are typically three approaches to performing multiparty computation: find a trusted third party to facilitate the joint computation, apply a specific cryptographic handshake (typically the most robust but often very resource-intensive), or use a secure enclave. In the context of Nitro Enclaves, multiparty computation can be performed provided all parties have access to the Amazon KMS. Each party encrypts their data and sends it to the party who is hosting the enclave. A pre-agreed docker image runs in that party's EC2 instance, and the enclave attests to this when requesting the keys to decrypt the data within the enclave. This is a game-changer for secure SaaS, whereby you trust the security of AWS, but not necessarily the counterparty you are working with.&lt;/p&gt;

&lt;h2&gt;Are there any drawbacks I should know at this point?&lt;/h2&gt;

&lt;p&gt;Yes, more than a couple! Amazon Nitro is a great step forward for AWS users but there are a few things you should probably also know:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The security model is very much tied to that of Amazon's EC2 instances. This is pretty robust and has more certifications than you can count; nevertheless, Nitro Enclaves are still a new technology, and you should be aware that new technologies always bring some potential unforeseen risks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enclaves in general, from Intel's chips through to Nitro Enclaves, don't protect against many obvious side-channel attacks. For example, imagine we used an enclave to predict whether an image contained a picture of a cat or a dog. If, hypothetically, it took 1.2s to run whenever there was a dog and 1s whenever there was a cat, then the parent instance could simply log the run times and know exactly what was contained in the encrypted images being sent in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory is in RAM. To keep the enclave extra safe, all of the resources required by the docker container need to be stored in RAM, and RAM is expensive. You can of course pass larger chunks of data in and out of the enclave as required, but encryption and decryption must be performed in each direction to keep that data safe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No PCI device access. If you were hoping to start crunching lots of data on an NVIDIA GPU, you'll be a little frustrated, as you can't use any compute other than the CPUs delegated to the enclave at build time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Debugging can be a real pain too. As we've been working a lot with Nitro we've developed some tools that make our life a lot easier, but at the very beginning of our journey with Nitro, the debugging was a right pain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If security is your number one concern, note that there are a lot of ways to break the security of an enclave if the code contained in the docker image is insecure in the first place. All the enclave guarantees is that the specified container is what's running, not that it is in any way safe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The enclaves only have that direct connection to Amazon's KMS, not Google's or Azure's for example. So if you are hoping to do multiparty computation you need all parties to be using the Amazon KMS to encrypt their data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;So why has Oblivious been tinkering with AWS Nitro Enclaves?&lt;/h2&gt;

&lt;p&gt;Multiparty computation (MPC) and privacy technologies are what we build at Oblivious. We started the company focussing on building very specific MPC cryptography protocols but as we began to engage with a larger number of customers we realized there was an ever more pressing challenge to balance security, privacy, and flexibility. Enclave technologies offer a different set of trade-offs to purely cryptographic protocols. Not better or worse, just different.&lt;/p&gt;

&lt;p&gt;We found that while large organizations can afford to pay for bespoke crypto, 99% of businesses cannot. Enclaves offer a very flexible alternative approach for mainstream companies, especially those who already have the software they wish to secure as prototypes or which leverage other larger frameworks and libraries. Unfortunately, it is still not trivial to implement software on enclaves, manage access to the enclaves, and assign roles and users to code running within enclaves in the context of multiparty computation.&lt;/p&gt;

&lt;p&gt;That's why we built &lt;em&gt;Ignite&lt;/em&gt;, an enclave management system (EMS) for AWS Nitro. This allows every AWS user to take advantage of Nitro enclaves for data analysis, machine learning, and differentially private data access control. Today, Ignite is available to early access users only, but if you are interested in becoming an early access member or getting a notification when it goes live for public use, drop us a line at &lt;em&gt;ignite(at)oblivious.ai&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>privacy</category>
      <category>security</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
