<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hideki Okamoto</title>
    <description>The latest articles on DEV Community by Hideki Okamoto (@hokamoto).</description>
    <link>https://dev.to/hokamoto</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F930943%2Fcd328a88-031e-4982-b1a4-0987a2a45600.jpeg</url>
      <title>DEV Community: Hideki Okamoto</title>
      <link>https://dev.to/hokamoto</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hokamoto"/>
    <language>en</language>
    <item>
      <title>Cloud GPU instance with PyTorch and TensorFlow easy setup in 10 minutes</title>
      <dc:creator>Hideki Okamoto</dc:creator>
      <pubDate>Thu, 22 Dec 2022 09:42:25 +0000</pubDate>
      <link>https://dev.to/hokamoto/cloud-gpu-instance-with-pytorch-and-tensorflow-easy-setup-in-10-minutes-2fo5</link>
      <guid>https://dev.to/hokamoto/cloud-gpu-instance-with-pytorch-and-tensorflow-easy-setup-in-10-minutes-2fo5</guid>
      <description>&lt;p&gt;&lt;strong&gt;(10/29/2024 Update) This article has been updated to reflect the release of the RTX 4000 Ada GPU instance. Some screenshots and descriptions still reference the older RTX 6000 GPU, so interpret them accordingly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PyTorch from Meta (formerly Facebook) and TensorFlow from Google are two of the most popular deep learning frameworks. While a GPU is essential for developing and training deep learning models, it is time-consuming to build an environment that makes the GPU available to these frameworks, let alone one in which both PyTorch and TensorFlow are usable. This article shows how to set up an environment with GPU-enabled PyTorch and TensorFlow on Akamai Connected Cloud (formerly Linode) in 10 minutes. The procedure makes it easy to set up a dedicated deep learning environment in the cloud, even for those unfamiliar with configuring a Linux server.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Akamai Connected Cloud (formerly Linode)?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.linode.com/" rel="noopener noreferrer"&gt;Akamai Connected Cloud (ACC, formerly Linode)&lt;/a&gt; is an IaaS provider acquired by Akamai in February 2022. ACC offers simple, predictable pricing that bundles SSD storage and a generous network transfer allowance, both of which are often expensive at other cloud providers. There are no price differences between regions, which makes cloud spending more predictable. For example, a virtual machine with 16GB of memory and an NVIDIA RTX 4000 Ada GPU costs $350 per month (as of October 2024), including 500GB of SSD storage and 1TB of network transfer. If you only need a machine for part of the month, hourly billing means you are charged only for the time the machine exists on your account.&lt;/p&gt;
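
&lt;p&gt;As a quick sanity check on the flat-rate pricing, you can convert the monthly price into an effective hourly rate. The snippet below is a back-of-the-envelope sketch that assumes hourly charges are capped at the monthly price over a 730-hour month; consult the current ACC price list for the exact hourly rate.&lt;/p&gt;

```shell
# Rough effective hourly rate for the $350/month RTX 4000 Ada plan,
# assuming hourly charges are capped at the monthly price over ~730 hours.
MONTHLY=350
awk -v m="$MONTHLY" 'BEGIN { printf "~$%.2f/hour\n", m / 730 }'
```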

&lt;p&gt;You can use the &lt;a href="https://www.linode.com/ja/estimator/" rel="noopener noreferrer"&gt;Cloud Estimator&lt;/a&gt; tool provided by ACC to compare prices with other cloud providers.&lt;/p&gt;

&lt;h1&gt;
  
  
  The goal of this article
&lt;/h1&gt;

&lt;p&gt;In this article, we will set up the Docker Engine Utility for NVIDIA GPUs (nvidia-docker, now part of the NVIDIA Container Toolkit), a container runtime extension that exposes NVIDIA GPUs to containers, on a GPU instance on ACC, where we will deploy &lt;a href="https://catalog.ngc.nvidia.com/containers" rel="noopener noreferrer"&gt;NGC Containers&lt;/a&gt;, deep learning containers officially provided by NVIDIA. You can set up the environment in about 10 minutes with almost no prior knowledge of ACC, Docker, or NGC Containers by using StackScripts, ACC's deployment automation feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linode.com/docs/products/tools/stackscripts/" rel="noopener noreferrer"&gt;Tools - StackScripts&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwbk1myxo1q9hbet4e5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwbk1myxo1q9hbet4e5v.png" alt="NVIDIA Container Toolkit" width="522" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The environment built with this procedure includes a sample Jupyter Notebook for &lt;a href="https://openai.com/blog/whisper/" rel="noopener noreferrer"&gt;OpenAI Whisper&lt;/a&gt;, a speech recognition model widely praised for its extremely high recognition accuracy, so that even those who do not develop deep learning models themselves can experience the benefits of a GPU instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F324k3fmqijqua1sz9uds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F324k3fmqijqua1sz9uds.png" alt="Voice Recognition with OpenAI Whisper" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you provide Object Storage credentials to the StackScript, the PyTorch and TensorFlow containers will automatically mount the external Object Storage, which you can use to retrieve training data or to store your deep learning models. Using Object Storage is optional; you can skip this step.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setup a GPU instance with PyTorch and TensorFlow
&lt;/h1&gt;

&lt;p&gt;First, open the StackScript I have prepared for you from the following link. This StackScript will automatically install nvidia-docker, PyTorch, and TensorFlow. (You must be logged into your ACC account to access the link.) If you can't open this StackScript for some reason, I have uploaded &lt;a href="https://github.com/hokamoto/stackscripts/blob/main/deeplearning.sh" rel="noopener noreferrer"&gt;the contents of this StackScript to GitHub&lt;/a&gt; for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;deeplearning-gpu&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://cloud.linode.com/stackscripts/1102035" rel="noopener noreferrer"&gt;https://cloud.linode.com/stackscripts/1102035&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Deploy New Linode"&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35qishha2f9dzoxhvbp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35qishha2f9dzoxhvbp4.png" alt="Deploy New Linode" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;StackScripts have a feature called UDFs (User Defined Fields) that automatically creates an input form with the parameters required for deployment. This StackScript asks for the login credentials of a non-root user who can SSH into the virtual machine and, optionally, an Access Key for mounting Object Storage as external storage. If you want to mount Object Storage, create a bucket and obtain an Access Key in advance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linode.com/docs/products/storage/object-storage/guides/manage-buckets/" rel="noopener noreferrer"&gt;Guides - Create and Manage Buckets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linode.com/docs/products/storage/object-storage/guides/access-keys/" rel="noopener noreferrer"&gt;Guides - Manage Access Keys&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The regions where both Object Storage and RTX 4000 Ada GPU instances are available are as follows as of October 2024.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Seattle, WA, US&lt;/li&gt;
&lt;li&gt;Chicago, IL, US&lt;/li&gt;
&lt;li&gt;Paris, FR&lt;/li&gt;
&lt;li&gt;Osaka, JP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddfj4fd6bj6koxrnmxlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddfj4fd6bj6koxrnmxlu.png" alt="Linode configuration" width="800" height="1147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since GPU instances are available only in limited regions, select the virtual machine type first, then the region. Here, as an example, I have selected Dedicated 32 GB + RTX6000 GPU x1 in the Singapore region.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l2fzzhtkopgksk0dxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l2fzzhtkopgksk0dxp.png" alt="Singapore region" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
Name the virtual machine, enter the root password, and click "Create Linode".&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4v4jcq0k6zibnil0ywp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4v4jcq0k6zibnil0ywp.png" alt="Name the virtual machine" width="800" height="806"&gt;&lt;/a&gt;&lt;br&gt;
The screen will transition to the virtual machine management dashboard. Wait a few minutes until the virtual machine status changes from PROVISIONING to RUNNING. The IP address of the virtual machine you just created is displayed on the same screen, so take note of it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuy06y8jnlfbgm1xczib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuy06y8jnlfbgm1xczib.png" alt="IP address" width="800" height="184"&gt;&lt;/a&gt;&lt;br&gt;
The virtual machine is now booted. The installation process of nvidia-docker and NGC Containers will proceed automatically in the background. Wait 10 minutes for the installation to complete before proceeding to the next step.&lt;/p&gt;
&lt;h1&gt;
  
  
  Starting a container
&lt;/h1&gt;

&lt;p&gt;Now let's log in to the virtual machine via SSH. If the setup process performed by the StackScript is complete, the following message will appear when you log in. If you do not see it, log out, wait a few minutes, and log in again. If you have inadvertently started a virtual machine without a GPU, you will instead see the message "GPU is not available. This StackScript should be used for GPU instances."; in that case, start a GPU instance and redo the procedure from the beginning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;% ssh root@45.118.XX.XX
root@45.118.XX.XX's password:

(snip)

&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="c"&gt;#############################################################################&lt;/span&gt;
&lt;span class="go"&gt;You can launch a Docker container with each of the following commands:

pytorch: Log into an interactive shell of a container with Python and PyTorch.
tensorflow: Log into an interactive shell of a container with Python and TensorFlow.
pytorch-notebook: Start Jupyter Notebook with PyTorch as a daemon. You can access it at http://[Instance IP address]/
tensorflow-notebook: Start Jupyter Notebook with TensorFlow as a daemon. You can access it at http://[Instance IP address]/

Other commands:
stop-all-containers: Stop all running containers.
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="c"&gt;#############################################################################&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following five commands are available on the machine created by this StackScript.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;pytorch&lt;/td&gt;
&lt;td&gt;Start a container with PyTorch installed and enter its interactive shell&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tensorflow&lt;/td&gt;
&lt;td&gt;Start a container with TensorFlow installed and enter its interactive shell&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;pytorch-notebook&lt;/td&gt;
&lt;td&gt;Start Jupyter Notebook with PyTorch installed as a daemon&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tensorflow-notebook&lt;/td&gt;
&lt;td&gt;Start Jupyter Notebook with TensorFlow installed as a daemon&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;stop-all-containers&lt;/td&gt;
&lt;td&gt;Stop all running containers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each container has the directories &lt;code&gt;/workspace/HOST-VOLUME/&lt;/code&gt; and &lt;code&gt;/workspace/OBJECT-STORAGE/&lt;/code&gt;, which mount a host machine directory and the external Object Storage, respectively. The containers started by the above commands are configured to be removed when stopped (the &lt;code&gt;--rm&lt;/code&gt; option of &lt;code&gt;docker run&lt;/code&gt; is set), so place any files you want to keep in &lt;code&gt;/workspace/HOST-VOLUME/&lt;/code&gt; or &lt;code&gt;/workspace/OBJECT-STORAGE/&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff9gt5j3wthz4laj6x15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff9gt5j3wthz4laj6x15.png" alt="Directory structure" width="800" height="224"&gt;&lt;/a&gt;&lt;br&gt;
Let's spin up Jupyter Notebook with PyTorch as a daemon and run a speech recognition model &lt;a href="https://openai.com/blog/whisper/" rel="noopener noreferrer"&gt;OpenAI Whisper&lt;/a&gt;. Run the &lt;code&gt;pytorch-notebook&lt;/code&gt; command from the console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;root@45-118-XX-XXX:~#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;pytorch-notebook
&lt;span class="go"&gt;[I 04:36:22.823 NotebookApp] http://hostname:8888/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b
        http://hostname:8888/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Jupyter Notebook should now be started. If you get the error "Bind for 0.0.0.0:80 failed: port is already allocated.", stop the existing container first with the &lt;code&gt;stop-all-containers&lt;/code&gt; command. If the command succeeds, replace &lt;em&gt;hostname&lt;/em&gt; in the URL with the IP address of the virtual machine that you noted earlier, delete &lt;code&gt;:8888&lt;/code&gt;, and open the URL in a web browser. The token changes each time the container is started. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;URL displayed in console&lt;/th&gt;
&lt;th&gt;URL to be entered into a web browser&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://hostname:8888/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b" rel="noopener noreferrer"&gt;http://hostname:8888/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="http://45.118.XX.XX/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b" rel="noopener noreferrer"&gt;http://45.118.XX.XX/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
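
&lt;p&gt;If you prefer to do the rewrite from the terminal, the same transformation can be sketched with &lt;code&gt;sed&lt;/code&gt;. The IP address below is the placeholder used in this article; substitute your own, and remember that the token changes on every container start.&lt;/p&gt;

```shell
# Turn the console URL into the browser URL: swap "hostname" for the
# instance IP and drop ":8888", since the container maps Jupyter's
# port 8888 to port 80 on the host.
CONSOLE_URL='http://hostname:8888/?token=0ee3290287b3bd90f2e8e3ab447965d3e074267f0d60420b'
echo "$CONSOLE_URL" | sed 's|hostname:8888|45.118.XX.XX|'
```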

&lt;p&gt;Click on &lt;code&gt;Voice Recognition with OpenAI Whisper.ipynb&lt;/code&gt; in &lt;code&gt;HOST-VOLUME&lt;/code&gt; to open it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhi7zc4k1c48ovqzybbrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhi7zc4k1c48ovqzybbrg.png" alt="Jupyter Notebook" width="800" height="306"&gt;&lt;/a&gt;&lt;br&gt;
Click &lt;code&gt;Cell&lt;/code&gt;-&amp;gt;&lt;code&gt;Run All&lt;/code&gt; in the menu to run OpenAI Whisper. The first time you run it, it will take a few minutes to download dependent software and deep learning models.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hp335d1sza80jvgcygt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hp335d1sza80jvgcygt.png" alt="OpenAI Whisper" width="800" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the execution completes without problems, the last cell will show the result of the speech recognition: "I'm getting them for $12 a night."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations! You now have GPU-enabled PyTorch and TensorFlow.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Deleting the instance
&lt;/h1&gt;

&lt;p&gt;When you have finished with the virtual machine, you can delete it by clicking "Delete" in the ACC Management Console. The contents of &lt;code&gt;/workspace/HOST-VOLUME/&lt;/code&gt; (&lt;code&gt;/root/shared/&lt;/code&gt; on the host OS) will be deleted along with it, so move any files you want to keep to another location first.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rfiifc0c2sr3dxfytz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rfiifc0c2sr3dxfytz4.png" alt="Delete instance" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are charged even for powered-off virtual machines. Delete any virtual machine you no longer want to be charged for.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linode.com/docs/products/platform/billing/#if-my-linode-is-powered-off-will-i-be-billed" rel="noopener noreferrer"&gt;Platform - Billing&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Access control for the instance
&lt;/h1&gt;

&lt;p&gt;SSH access to the virtual machines created in the above procedure requires password or public key authentication, and access to Jupyter Notebook requires token authentication. If you want to add access control based on the client's IP address, refer to the following guide to apply firewall rules to port 22 (SSH) and port 80 (HTTP).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linode.com/docs/products/networking/cloud-firewall/get-started/" rel="noopener noreferrer"&gt;Cloud Firewall - Get Started&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more advanced access control, Akamai's zero-trust solution, Enterprise Application Access, can provide integration with external Identity Providers and SSO support.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.akamai.com/resources/product-brief/enterprise-application-access-product-brief" rel="noopener noreferrer"&gt;Enterprise Application Access&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/pulse/deploying-zero-trust-architecture-linode-cloud-minutes-luca-moglia/" rel="noopener noreferrer"&gt;Deploying a Zero Trust Architecture on Linode Cloud in Minutes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Enabling HTTPS
&lt;/h1&gt;

&lt;p&gt;Follow the steps below to enable HTTPS in Jupyter Notebook for production use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jupyter-notebook.readthedocs.io/en/stable/public_server.html#notebook-public-server" rel="noopener noreferrer"&gt;Running a public notebook server&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The five commands listed above are defined as aliases for &lt;code&gt;docker&lt;/code&gt; commands in &lt;code&gt;/root/.bash_profile&lt;/code&gt;. When HTTPS is enabled, also change the host port in the &lt;code&gt;-p&lt;/code&gt; option of the &lt;code&gt;docker&lt;/code&gt; command used by the &lt;code&gt;pytorch-notebook&lt;/code&gt; and &lt;code&gt;tensorflow-notebook&lt;/code&gt; commands to an appropriate port such as &lt;code&gt;443&lt;/code&gt;. Finally, run &lt;code&gt;ufw allow 443/tcp&lt;/code&gt; so that the firewall allows port 443.&lt;/p&gt;
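
&lt;p&gt;As an illustration, the change might look as follows in &lt;code&gt;/root/.bash_profile&lt;/code&gt;. This is a hypothetical sketch, not the exact alias from the StackScript (the real image name and options will differ); the point is that only the host side of the &lt;code&gt;-p&lt;/code&gt; mapping and the firewall rule need to change.&lt;/p&gt;

```shell
# Hypothetical sketch; check /root/.bash_profile for the real definitions.

# Before (HTTP): host port 80 is mapped to Jupyter's port 8888 in the container
alias pytorch-notebook='docker run --rm -d --gpus all -p 80:8888 some-pytorch-image'

# After (HTTPS): map host port 443 instead; the container port stays 8888
alias pytorch-notebook='docker run --rm -d --gpus all -p 443:8888 some-pytorch-image'

# Allow HTTPS through the firewall
ufw allow 443/tcp
```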

</description>
      <category>linode</category>
      <category>gpu</category>
      <category>pytorch</category>
      <category>tensorflow</category>
    </item>
    <item>
      <title>Visualizing Akamai DataStream 2 logs with Elasticsearch and Kibana on Linode</title>
      <dc:creator>Hideki Okamoto</dc:creator>
      <pubDate>Fri, 23 Sep 2022 02:10:51 +0000</pubDate>
      <link>https://dev.to/hokamoto/visualizing-akamai-datastream-2-logs-with-elasticsearch-and-kibana-2c94</link>
      <guid>https://dev.to/hokamoto/visualizing-akamai-datastream-2-logs-with-elasticsearch-and-kibana-2c94</guid>
      <description>&lt;p&gt;Setting up Elasticsearch and Kibana on Linode for visualizing Akamai DataStream 2 logs: All steps can be done using only a web browser and do not require login to the Linux console. The installation procedure takes only 10 minutes, excluding DataStream 2 activation time.&lt;/p&gt;

&lt;h1&gt;
  
  
  Akamai DataStream 2
&lt;/h1&gt;

&lt;p&gt;Akamai DataStream 2 is a free feature that streams access logs from the Akamai Intelligent Edge Platform to designated destinations in near real-time. As of November 2023, DataStream 2 can deliver access logs to the following destinations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon S3&lt;/li&gt;
&lt;li&gt;Azure Storage&lt;/li&gt;
&lt;li&gt;Custom HTTPS endpoint&lt;/li&gt;
&lt;li&gt;Datadog&lt;/li&gt;
&lt;li&gt;Elasticsearch&lt;/li&gt;
&lt;li&gt;Google Cloud Storage&lt;/li&gt;
&lt;li&gt;Loggly&lt;/li&gt;
&lt;li&gt;New Relic&lt;/li&gt;
&lt;li&gt;Oracle Cloud&lt;/li&gt;
&lt;li&gt;S3-compatible destinations (incl. Linode Object Storage)&lt;/li&gt;
&lt;li&gt;Splunk&lt;/li&gt;
&lt;li&gt;Sumo Logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;See also: &lt;a href="https://techdocs.akamai.com/datastream2/docs/stream-logs" rel="noopener noreferrer"&gt;DataStream 2 - Stream logs to a destination&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Elasticsearch / Kibana
&lt;/h1&gt;

&lt;p&gt;Elasticsearch is a full-text search engine developed by Elastic. Its source code is publicly available under a dual license, the Server Side Public License and the Elastic License. Kibana is data visualization software for Elasticsearch and is offered under the same terms. The combination of these two with the data collection pipeline Logstash is known as the ELK Stack, which has evolved beyond its origin as a full-text search engine and is now popular as a log analysis and data visualization platform.&lt;/p&gt;

&lt;h1&gt;
  
  
  The goal of this article
&lt;/h1&gt;

&lt;p&gt;I will explain how to use Akamai DataStream 2 to deliver access logs to Elasticsearch running on Linode in near real-time and how to visualize the logs with Kibana. By following these steps you can create a dashboard like the screenshots below, which graph representative fields out of the &lt;a href="https://techdocs.akamai.com/datastream2/docs/data-set-parameters" rel="noopener noreferrer"&gt;45 fields included in DataStream 2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah72p3s7u36opijswyee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah72p3s7u36opijswyee.png" alt="Kibana Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01ozf1lae37yroj9cpzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01ozf1lae37yroj9cpzz.png" alt="Kibana Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will deploy the stack to Linode, an IaaS (Infrastructure as a Service) provider acquired by Akamai in February 2022. &lt;a href="https://www.linode.com/" rel="noopener noreferrer"&gt;Linode&lt;/a&gt; has a deployment automation feature called StackScripts, which lets you have Elasticsearch and Kibana ready to receive access logs in about 10 minutes (activating DataStream 2 itself takes about 1.5 hours, separate from configuring Elasticsearch and Kibana).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See also: &lt;a href="https://www.linode.com/products/stackscripts/" rel="noopener noreferrer"&gt;Linode StackScripts&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This article explains how to build an Elasticsearch and Kibana environment from scratch, but if you are already running these environments and only need Elasticsearch Index Mapping, Kibana Data View, Visualization, and Dashboard definition files for Akamai DataStream 2, these are also available on GitHub for download.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/hokamoto/stackscripts/tree/main/elasticsearch-kibana" rel="noopener noreferrer"&gt;Elasticsearch, Kibana definition files&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Install Elasticsearch and Kibana on Linode
&lt;/h1&gt;

&lt;p&gt;First, open the StackScript I have prepared for you from the following link. This StackScript will automatically install Elasticsearch and Kibana. (You must be logged into your Linode account to access the link.) If you can't open this StackScript for some reason, I have uploaded &lt;a href="https://github.com/hokamoto/stackscripts/blob/main/elasticsearch-kibana.sh" rel="noopener noreferrer"&gt;the contents of this StackScript to GitHub&lt;/a&gt; for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;elasticsearch-kibana-for-akamai-datastream2&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://cloud.linode.com/stackscripts/1059555" rel="noopener noreferrer"&gt;https://cloud.linode.com/stackscripts/1059555&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Deploy New Linode"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5b9thx07yavhi07d7rwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5b9thx07yavhi07d7rwy.png" alt="Deploy New Linode"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;StackScripts have a feature called UDFs (User Defined Fields) that automatically creates an input form with the parameters required for deployment. This StackScript asks for the login credentials of a non-root user who can SSH into the virtual machine, passwords for the Elasticsearch and Kibana administrative users, and the authentication information DataStream 2 will use to feed logs to Elasticsearch. Enter these parameters in the form when deploying the machine. The values entered here will be used later, so keep a note of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dfrjqmxds6ytvxb5dyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dfrjqmxds6ytvxb5dyj.png" alt="Deployment parameters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the region where the virtual machine will be created and the type of virtual machine. Choose one with a minimum of 8 GB of memory, as Elasticsearch and Kibana will fail to launch with less. Here I select a Dedicated 8 GB Linode in the Tokyo region of Japan. If you intend to use this setup to visualize the logs of a high-traffic website, you may need to choose an even higher-performance instance type. See the "Considerations for production use" section at the bottom of this article for more information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpk7189ouvbzpoht26ta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpk7189ouvbzpoht26ta.png" alt="Select the region"&gt;&lt;/a&gt;&lt;/p&gt;
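
&lt;p&gt;The 8 GB floor is consistent with Elasticsearch's usual JVM heap guidance: the heap is commonly sized at no more than half of the machine's RAM, leaving the remainder for Kibana and the operating system's filesystem cache. A quick calculation, assuming that 50% rule of thumb:&lt;/p&gt;

```shell
# Rule-of-thumb heap sizing: give the Elasticsearch JVM heap at most half
# of RAM, leaving the rest for Kibana and the filesystem cache.
RAM_GB=8
awk -v r="$RAM_GB" 'BEGIN { printf "heap: %dg, left for Kibana and OS cache: %dg\n", r/2, r - r/2 }'
```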

&lt;p&gt;Name the virtual machine, enter the root password, and click "Create Linode".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujek33u8obun1bozyerx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujek33u8obun1bozyerx.png" alt="Virtual machine name and the root password"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screen will transition to the virtual machine management dashboard. Wait a few minutes until the virtual machine status changes from PROVISIONING to RUNNING. The IP address of the virtual machine you just created is displayed on the same screen, so take note of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lpbx9djjznknxhamsoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lpbx9djjznknxhamsoi.png" alt="Provisioning status and the IP address"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check and note the Reverse DNS value for the virtual machine from the Network tab of the virtual machine, as it will be needed in the DataStream 2 configuration procedure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgmtfmirdyfecym53kj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgmtfmirdyfecym53kj7.png" alt="Reverse DNS value"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The virtual machine is now booted, and the installation of Elasticsearch and Kibana proceeds automatically in the background. Wait about 10 minutes for the installation to complete before proceeding to the next step.&lt;/p&gt;
&lt;h2&gt;Log in to Kibana&lt;/h2&gt;

&lt;p&gt;Let's make sure you can log in to Kibana by accessing &lt;code&gt;http://[IP address of the virtual machine]:5601/&lt;/code&gt; from your web browser. Enter &lt;code&gt;elastic&lt;/code&gt; as the login user and the password you specified when deploying the virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs6fyco0vbom5h23rhw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs6fyco0vbom5h23rhw4.png" alt="Kibana login screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the hamburger button in the upper left corner to display the menu and click Analytics -&amp;gt; Dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pq0q3wulk01ljp4gj22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pq0q3wulk01ljp4gj22.png" alt="Open Kibana dashboards"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A dashboard named "Akamai" was automatically created by the StackScript, so open it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuwy9kf042u1c1gdylb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuwy9kf042u1c1gdylb7.png" alt="Open Akamai dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpel4iqsci81yag81n7b6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpel4iqsci81yag81n7b6.png" alt="Akamai dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the dashboard appears as shown above, you have completed the steps correctly so far. At this point, it contains no data because DataStream 2 has not yet been set up.&lt;/p&gt;
&lt;h1&gt;Configure DataStream 2&lt;/h1&gt;

&lt;p&gt;You need to set up DataStream 2 from the Akamai Control Center as well. Click the hamburger button in the upper left corner to display the menu, then click COMMON SERVICES -&amp;gt; DataStream, and follow the steps below to create a stream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rxl3s8agh8ig2e3syyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rxl3s8agh8ig2e3syyg.png" alt="Open DataStream configurations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktu2hkrpwce9mvwgbj9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktu2hkrpwce9mvwgbj9f.png" alt="Create a new stream"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the stream and mark the checkboxes for the delivery properties for which you want to enable log delivery via DataStream 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1mtacjyjnu9haa7msw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1mtacjyjnu9haa7msw9.png" alt="Configure the new DataStream"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A screen will appear for selecting the access log fields to be sent; as an example, check "Include all" for all categories. As of September 2022, this selects a total of 45 fields. Also, at the bottom of the configuration screen, select JSON as the log format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Please select the fields of the log to be collected taking into account the laws and regulations regarding the protection of PII.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flno94unsmhrmcgl1usiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flno94unsmhrmcgl1usiu.png" alt="Select the fields in the access log"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75rfyvrj4gu48hgkvncg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75rfyvrj4gu48hgkvncg.png" alt="Select the log format"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, set the destination for DataStream 2: &lt;code&gt;Elasticsearch&lt;/code&gt; for Destination; any name for Display name; &lt;code&gt;http://[Reverse DNS hostname]:9200/_bulk&lt;/code&gt; for Endpoint, using the Reverse DNS hostname that you noted when creating the virtual machine; &lt;code&gt;datastream2&lt;/code&gt; for Index name; and the Username and Password that you entered when deploying the virtual machine. Also, mark the "Send compressed data" checkbox and click the "Validate &amp;amp; Save" button in the lower right corner of the screen. If all the values are correct, the message "Destination details are valid" appears in the lower right corner and the screen changes.&lt;/p&gt;
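&lt;p&gt;For reference, the &lt;code&gt;_bulk&lt;/code&gt; endpoint configured above expects NDJSON: an action line followed by a document line per record, terminated by a trailing newline. The Python sketch below builds such a payload; the field names are illustrative, not the actual DataStream 2 log schema.&lt;/p&gt;

```python
import json

def build_bulk_payload(index_name, records):
    """Build an NDJSON body for the Elasticsearch _bulk API:
    one action line plus one document line per record,
    terminated by a trailing newline."""
    lines = []
    for record in records:
        lines.append(json.dumps({"index": {"_index": index_name}}))
        lines.append(json.dumps(record))
    return "\n".join(lines) + "\n"

# Hypothetical log records, just to show the payload shape.
payload = build_bulk_payload("datastream2", [
    {"statusCode": 200, "reqPath": "/index.html"},
    {"statusCode": 404, "reqPath": "/missing"},
])
print(payload)
```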

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcu0lsld8j7njyggtxd5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcu0lsld8j7njyggtxd5f.png" alt="Configure destination parameters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0yix7hyz3ouit3kh1kv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0yix7hyz3ouit3kh1kv.png" alt="Complete the DataStream configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, a summary of the settings is displayed so you can confirm they are correct. Check the "Activate stream upon saving" checkbox. It takes about 1.5 hours for DataStream 2 to begin log streaming. If you would like an email notification when log delivery starts, check "Receive an email once activation is complete." and enter your email address. When the activation process of DataStream 2 starts, a message is displayed indicating that DataStream 2 settings are also required in Property Manager. Note the name of the stream you have just created, then click "Proceed to Property Manager".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhquddzlyxuwxv9zd8v26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhquddzlyxuwxv9zd8v26.png" alt="Activating DataStream"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Enable DataStream 2 in Akamai delivery properties&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;This section assumes that you understand the basic operations of the Akamai Property Manager.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Delivering logs through DataStream 2 requires adding log-delivery behaviors to the property. After completing the DataStream 2 setup steps described above, create a new version of the property, add the two behaviors "DataStream" and "Log Request Details" to the default rule, and configure them referring to the example configuration below. This can be done in parallel while waiting for DataStream 2 to be activated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyauugv7z7poplcjpjnkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyauugv7z7poplcjpjnkb.png" alt="DataStream 2 behaviors"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
    &lt;th&gt;Stream version&lt;/th&gt;
&lt;td&gt;DataStream 2&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Stream names&lt;/th&gt;
&lt;td&gt;Name specified during the DataStream 2 setup steps&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Sampling rate&lt;/th&gt;
&lt;td&gt;Percentage of logs to be sent (100 means all logs)&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Log *** Header&lt;/th&gt;
&lt;td&gt;Whether to log the corresponding header in the request&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Cookie Mode&lt;/th&gt;
&lt;td&gt;Whether to log cookies in the request&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Include Custom Log Field&lt;/th&gt;
&lt;td&gt;Whether to log the custom log field&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Custom Log Field&lt;/th&gt;
&lt;td&gt;Value to populate the custom log field (as an example, I include the TLS cipher suite used; it can be left blank)&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;th&gt;Log Akamai Edge Server IP Address&lt;/th&gt;
&lt;td&gt;Whether to log the IP address of the edge server that processed the request &lt;strong&gt;(This option must be On)&lt;/strong&gt;
&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
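&lt;p&gt;As the table notes, the Sampling rate option controls what percentage of requests produce a log line, which is a simple lever for trading completeness against volume. A rough back-of-the-envelope estimate:&lt;/p&gt;

```python
def expected_log_lines(requests_per_day, sampling_rate_percent):
    """Approximate daily log lines delivered by DataStream 2 at a
    given sampling rate (100 means every request is logged)."""
    rate = max(0, min(100, sampling_rate_percent))  # clamp to the valid 0-100 range
    return int(requests_per_day * rate / 100)

# 5 million requests/day sampled at 10% yields roughly 500,000 log lines.
print(expected_log_lines(5_000_000, 10))  # 500000
```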

&lt;p&gt;&lt;strong&gt;Please select the fields of the log to be collected taking into account the laws and regulations regarding the protection of PII.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the configuration is finished, save and activate the property. Access logs will begin to appear in Kibana after both DataStream 2 and the property are activated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah72p3s7u36opijswyee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah72p3s7u36opijswyee.png" alt="Kibana dashboard with logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations! Now you can see access logs from Akamai in near real-time!&lt;/strong&gt;&lt;/p&gt;
&lt;h1&gt;Advanced Usage&lt;/h1&gt;
&lt;h2&gt;Conditional access log delivery&lt;/h2&gt;

&lt;p&gt;The typical usage is to stream all access logs to the log analysis infrastructure through DataStream 2 and filter them there as needed. It is also possible to have DataStream 2 deliver only logs that meet certain conditions. This is especially useful when the log volume is huge and you want to reduce the load on the log analysis infrastructure, or when you are not interested in successful requests and only want to see error logs. As an implementation example, if you remove the "DataStream" behavior from the property's default rule and enable it under the following conditions, logs are sent only when the request path is under &lt;code&gt;/foo/bar/&lt;/code&gt; and the response code is not &lt;code&gt;200&lt;/code&gt;, &lt;code&gt;206&lt;/code&gt;, or &lt;code&gt;304&lt;/code&gt;.&lt;/p&gt;
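&lt;p&gt;The rule's matching logic is equivalent to the small predicate below (a sketch for illustration only; the actual evaluation happens inside Property Manager, not in your own code):&lt;/p&gt;

```python
def should_stream_log(req_path, status_code):
    """Mirror of the property-rule condition described above:
    stream the log only for requests under /foo/bar/ whose
    response code is not 200, 206, or 304."""
    return req_path.startswith("/foo/bar/") and status_code not in (200, 206, 304)

print(should_stream_log("/foo/bar/app.js", 404))  # True: error under the path
print(should_stream_log("/foo/bar/app.js", 200))  # False: successful response
print(should_stream_log("/other/page", 500))      # False: outside the path
```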

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnw4vj3ts1vea5645fzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnw4vj3ts1vea5645fzm.png" alt="Conditional DataStream 2 rule"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Using the custom field&lt;/h2&gt;

&lt;p&gt;The access log sent from DataStream 2 includes a field called the custom field. You can set any string of up to 1000 bytes here, so it can carry the edge server's built-in variables or the property's user-defined variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See also: &lt;a href="https://techdocs.akamai.com/property-mgr/docs/built-vars" rel="noopener noreferrer"&gt;Built-in variables&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the example below, the custom field is set to the transfer time taken by the edge server to return the response to the client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrtax74cn3ri8hnxxco8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrtax74cn3ri8hnxxco8.png" alt="Log Request Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This value can be parsed with a runtime field in Kibana's Data View to expose it as a new field.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73ci3yp425xty9d4z21n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73ci3yp425xty9d4z21n.png" alt="Configure a Runtime field for Kibana"&gt;&lt;/a&gt; &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73ci3yp425xty9d4z21n.png" rel="noopener noreferrer"&gt;Enlarge the above image&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Extract the string following "ctt:" from the customField value.
String ctt = dissect('ctt:%{ctt_val}').extract(doc["customField"].value)?.ctt_val;
// Emit the parsed integer only when the pattern matched.
if (ctt != null) emit(Integer.parseInt(ctt));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
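&lt;p&gt;If it helps to sanity-check the Painless script above, the same extraction can be sketched in Python. Restricting the match to digits is an assumption that the &lt;code&gt;ctt&lt;/code&gt; value is an integer, as in the transfer-time example.&lt;/p&gt;

```python
import re

def extract_ctt(custom_field):
    """Rough Python equivalent of the Painless runtime field above:
    pull the integer after 'ctt:' out of the customField string,
    or return None if the pattern is absent."""
    match = re.search(r"ctt:(\d+)", custom_field or "")
    return int(match.group(1)) if match else None

print(extract_ctt("ctt:137"))                # 137
print(extract_ctt("no transfer time here"))  # None
```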



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0pf4lg0ajob7ksj53v1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0pf4lg0ajob7ksj53v1.png" alt="Parsed custom field"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Akamai's edge computing platform, &lt;a href="https://techdocs.akamai.com/edgeworkers/docs" rel="noopener noreferrer"&gt;EdgeWorkers&lt;/a&gt;, allows you to set user variables from its JavaScript code. Custom fields can contain debugging information for EdgeWorkers applications or values of interest to the business logic implemented in EdgeWorkers, which can then be aggregated in the log analysis infrastructure. Note that as of June 2023, the custom field is limited to 1000 bytes in length.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See also: &lt;a href="https://techdocs.akamai.com/edgeworkers/docs/request-object#setvariable" rel="noopener noreferrer"&gt;Request Object setVariable()&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
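&lt;p&gt;Because the custom field is capped at 1000 bytes, values assembled in your own tooling should be trimmed to fit. EdgeWorkers code is JavaScript, but the idea can be sketched in Python (an illustrative helper, not Akamai API code); note that truncating by bytes must not split a multi-byte UTF-8 character:&lt;/p&gt;

```python
def truncate_to_bytes(value, max_bytes=1000):
    """Trim a string so its UTF-8 encoding fits within max_bytes
    without splitting a multi-byte character, mirroring the
    1000-byte custom field limit noted above."""
    encoded = value.encode("utf-8")
    if len(encoded) > max_bytes:
        # errors="ignore" drops any partial trailing character.
        return encoded[:max_bytes].decode("utf-8", errors="ignore")
    return value

print(len(truncate_to_bytes("a" * 1500).encode("utf-8")))  # 1000
```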

&lt;h2&gt;Debug EdgeWorkers&lt;/h2&gt;

&lt;p&gt;Although EdgeWorkers terminates a script when &lt;a href="https://techdocs.akamai.com/edgeworkers/docs/resource-tier-limitations" rel="noopener noreferrer"&gt;various limitations&lt;/a&gt; (CPU time, memory usage, execution time, etc.) are exceeded, you may encounter situations where a script works correctly only under certain conditions that depend on the content of the request. Error statistics can be viewed from the Akamai Control Center, but the details of each request at the time of the error are not available there. With DataStream 2, not only the request information at the time of an error, but also the detailed operation status of the EdgeWorkers runtime is available in the &lt;code&gt;ewExecutionInfo&lt;/code&gt; and &lt;code&gt;ewUsageInfo&lt;/code&gt; fields, which is useful for troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See also: &lt;a href="https://techdocs.akamai.com/edgeworkers/docs/datastream2-reports" rel="noopener noreferrer"&gt;DataStream 2 log details&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5jd6tulugceakgpdclm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5jd6tulugceakgpdclm.png" alt="EdgeWorker runtime status"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Monitoring of Web Application Firewall activities&lt;/h2&gt;

&lt;p&gt;Since Akamai's Web Application Firewall (WAF) comes with an advanced log analysis feature called Web Security Analytics (WSA), most attack analysis can be completed within WSA. That said, DataStream 2 logs also include a summary of the WAF's detection results, so you can use fields not available in WSA for supplementary analysis, or take advantage of advanced features of your analysis tool, such as machine learning. The following screenshot shows DataStream 2 data revealing that the WAF detected a directory traversal attack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg65v1p8f3uk8mzml683.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg65v1p8f3uk8mzml683.png" alt="WAF detection results in DataStream 2 logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Visualize Common Media Client Data (CMCD)&lt;/h2&gt;

&lt;p&gt;Common Media Client Data (CMCD) is a standardized data format for sending various metrics collected by video players to servers, such as CDNs. The CTA WAVE project published the CMCD specification in September 2020.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdn.cta.tech/cta/media/media/resources/standards/pdfs/cta-5004-final.pdf" rel="noopener noreferrer"&gt;Web Application Video Ecosystem - Common Media Client Data&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video players that support CMCD send various information to the server as HTTP request headers or query parameters. You can add CMCD to the DataStream 2 logs of Akamai Adaptive Media Delivery (AMD), so you can visualize video playback quality and related information with the Elasticsearch + Kibana setup built in this article. This allows you to correlate CDN logs with quality metrics in addition to your existing video QoS &amp;amp; QoE measurement tools. For more information on the benefits of using CMCD, please refer to the following article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.akamai.com/blog/cloud/get-your-player-analytics-with-cmcd" rel="noopener noreferrer"&gt;Get More from Your Player Analytics and CDN Logs with CMCD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most video players send CMCD as query parameters, but if you want to send CMCD data via request headers, you need to add CORS header settings in AMD. Please refer to the following documentation to change the CORS headers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techdocs.akamai.com/adaptive-media-delivery/docs/common-media-client-data-amd" rel="noopener noreferrer"&gt;Common Media Client Data&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On April 19, 2023, I updated the StackScript referenced in this article. Elasticsearch + Kibana installations using the StackScript after this date create index templates, data views, and dashboards that support CMCD.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0g0xc9jxb678vsiwuuc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0g0xc9jxb678vsiwuuc.png" alt="CMCD Dashboard"&gt;&lt;/a&gt; &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfddll8fz9setp2pc2cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfddll8fz9setp2pc2cm.png" alt="CMCD Fields"&gt;&lt;/a&gt; &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfddll8fz9setp2pc2cm.png" rel="noopener noreferrer"&gt;Enlarge the above image&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Considerations for production use&lt;/h1&gt;

&lt;p&gt;For the sake of simplicity, this article omits several considerations that matter for production use. At least the following points should be considered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://techdocs.akamai.com/datastream2/docs/set-up-alerts" rel="noopener noreferrer"&gt;Configure &lt;strong&gt;Datastream - Upload Failures&lt;/strong&gt; alert&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Use a higher-performance instance type to accommodate the volume of logs&lt;/li&gt;
&lt;li&gt;Enable HTTPS on Elasticsearch API endpoints&lt;/li&gt;
&lt;li&gt;Make Elasticsearch nodes redundant&lt;/li&gt;
&lt;li&gt;Allocate additional storage for virtual machines using Linode Block Storage&lt;/li&gt;
&lt;li&gt;Design the lifecycle management of access logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The instance type you need to use depends on the amount of logs from DataStream 2. Please refer to &lt;a href="https://www.elastic.co/jp/blog/benchmarking-and-sizing-your-elasticsearch-cluster-for-logs-and-metrics" rel="noopener noreferrer"&gt;Benchmarking and sizing your Elasticsearch cluster for logs and metrics&lt;/a&gt; as a good starting point to select a proper instance type.&lt;/p&gt;

&lt;p&gt;This article disables HTTPS for Elasticsearch to skip the SSL certificate issuance procedure. For production use, it is recommended to get an SSL server certificate issued by a certificate authority and enable HTTPS for communication between DataStream 2 and Elasticsearch. DataStream 2 does not accept self-signed certificates. You can find related information by searching with keywords such as "&lt;a href="https://www.google.com/search?q=Elasticsearch+Let%27s+Encrypt" rel="noopener noreferrer"&gt;Elasticsearch Let's Encrypt&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;Since the StackScript rewrites the Elasticsearch configuration file to disable HTTPS, to enable it again you need to set &lt;code&gt;xpack.security.http.ssl.enabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; in &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt; after deploying the SSL server certificate. Then change the destination endpoint of DataStream 2 from &lt;code&gt;http://&lt;/code&gt; to &lt;code&gt;https://&lt;/code&gt;.&lt;/p&gt;
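&lt;p&gt;For example, the relevant part of &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt; might look like the excerpt below after re-enabling HTTPS; the keystore path is illustrative and depends on where you deploy the certificate.&lt;/p&gt;

```yaml
# /etc/elasticsearch/elasticsearch.yml (excerpt)
# Re-enable TLS on the HTTP (REST) interface after deploying
# a CA-issued certificate. The keystore path is an assumption.
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
```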

&lt;h1&gt;Appendix&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://techdocs.akamai.com/edge-diagnostics/docs/error-codes" rel="noopener noreferrer"&gt;&lt;code&gt;errorCode&lt;/code&gt; set by DataStream 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techdocs.akamai.com/property-mgr/docs/built-vars" rel="noopener noreferrer"&gt;Built-in variables of Property Manager&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>akamai</category>
      <category>elasticsearch</category>
      <category>linode</category>
      <category>kibana</category>
    </item>
  </channel>
</rss>
