<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jay Clifford</title>
    <description>The latest articles on DEV Community by Jay Clifford (@jayclifford345).</description>
    <link>https://dev.to/jayclifford345</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1020250%2F73ed81a7-a14a-4ff9-b4fd-c73ffe0a3371.jpg</url>
      <title>DEV Community: Jay Clifford</title>
      <link>https://dev.to/jayclifford345</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jayclifford345"/>
    <language>en</language>
    <item>
      <title>Tutorial: Modifying Grafana's Source Code</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Mon, 07 Aug 2023 10:25:41 +0000</pubDate>
      <link>https://dev.to/jayclifford345/tutorial-modifying-grafanas-source-code-3p0f</link>
      <guid>https://dev.to/jayclifford345/tutorial-modifying-grafanas-source-code-3p0f</guid>
      <description>&lt;h2&gt;
  
  
  A story of exploration and guesswork.
&lt;/h2&gt;

&lt;p&gt;So this blog is a little different from my usual tutorials…&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/xZ9gwBO3TMM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;A little background: I have been working with Jacob Marble to test and “demo-fy” his work with InfluxDB 3.0 and the OpenTelemetry ecosystem (If you would like to learn more I highly recommend checking out this &lt;a href="https://www.influxdata.com/blog/opentelemetry-tutorial-collect-traces-logs-metrics-influxdb-3-0-jaeger-grafana/"&gt;blog&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;During the project, we identified a need to enable specific Grafana features for InfluxDB data sources, particularly the &lt;em&gt;trace to logs&lt;/em&gt; functionality. Grafana is an open-source platform, and one of its major advantages is the ability to modify its source code to suit our unique requirements. However, diving into the codebase of such a robust tool can be overwhelming, even for the most seasoned developers.&lt;/p&gt;

&lt;p&gt;Despite the complexity, we embraced the challenge and dove headfirst into Grafana's source code. We tumbled, we stumbled, and we learned a great deal along the way. And now, having successfully modified Grafana to meet our specific project needs, I believe it's time to share this acquired knowledge with you all.&lt;/p&gt;

&lt;p&gt;The purpose of this blog is not just to provide you with a step-by-step guide for tweaking Grafana's source code, but also to inspire you to explore and adapt open-source projects to your needs. It's about imparting a method and a mindset, cultivating a culture of curiosity, and encouraging more hands-on learning and problem-solving.&lt;/p&gt;

&lt;p&gt;I hope that this guide will empower you to modify Grafana's source code for your projects, thereby expanding the horizons of what's possible with open-source platforms. It’s time to roll up your sleeves and venture into the depths of Grafana's code. &lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;So our problem lies within the &lt;a href="https://grafana.com/docs/grafana/latest/panels-visualizations/visualizations/traces/"&gt;Trace visualisation&lt;/a&gt; of Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CUzhabnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08ql725ikv9yoq45la68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CUzhabnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08ql725ikv9yoq45la68.png" alt="traces" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the visualisation performs rather well with InfluxDB, except for one button, which appears to be disabled: &lt;strong&gt;Logs for this span&lt;/strong&gt;. This button is automatically disabled when our trace data source (in this case, Jaeger with InfluxDB 3.0 acting as the &lt;a href="https://www.jaegertracing.io/docs/next-release/deployment/#sidecar-model"&gt;gRPC storage engine&lt;/a&gt;) has not been configured with a log data source. A log data source within Grafana is usually represented by default using the &lt;a href="https://grafana.com/docs/grafana/latest/explore/logs-integration/"&gt;log explorer interface&lt;/a&gt;; common log data sources are &lt;a href="https://grafana.com/oss/loki/"&gt;Loki&lt;/a&gt;, &lt;a href="https://opensearch.org/"&gt;OpenSearch&lt;/a&gt; and &lt;a href="https://www.elastic.co/"&gt;Elasticsearch&lt;/a&gt;. So let's head across to the Jaeger data source and configure that…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tgaM4oua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vf82apcwgdhoivo6q1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tgaM4oua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vf82apcwgdhoivo6q1b.png" alt="data sources" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data sources can be navigated to via &lt;strong&gt;Connections -&amp;gt; Data Sources&lt;/strong&gt;. We currently have three configured: FlightSQL, InfluxDB and Jaeger. If we open the Jaeger configuration and navigate to the Trace to Logs section, we essentially want to be able to select either InfluxDB or FlightSQL as our data source. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eYbxR2iX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7fizrk8rvp48m1f02do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eYbxR2iX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7fizrk8rvp48m1f02do.png" alt="traces to logs" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Houston, we have a problem. It appears Grafana doesn’t recognise InfluxDB as a log data source. Fair enough; only recently has InfluxDB become a viable option for logs. So, what are our options?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We lie down, accept the issue, and hope that in the future this feature becomes generic enough to support more data sources.&lt;/li&gt;
&lt;li&gt;Take action and make the change ourselves. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Well, by now you know what option we chose.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;In this section, I will summarize the steps I took to discover what changes needed to be made, how to implement those changes for your own data source and, finally, how to create your own custom build of Grafana OSS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovery
&lt;/h3&gt;

&lt;p&gt;So the first step is to understand where to even begin. Grafana is a huge open-source platform with many components, so I needed to narrow down the search. The first thing I did was search the Grafana repository for signs of life.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--asNNrTN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3y6eeeweio8fg7bm35su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--asNNrTN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3y6eeeweio8fg7bm35su.png" alt="github" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, I made this little discovery by using the keyword &lt;strong&gt;trace&lt;/strong&gt;, which led me to the &lt;strong&gt;TraceToLogs&lt;/strong&gt; directory and, in turn, to this section of code within &lt;strong&gt;&lt;a href="https://github.com/grafana/grafana/blob/main/public/app/core/components/TraceToLogs/TraceToLogsSettings.tsx"&gt;TraceToLogsSettings.tsx&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;TraceToLogsSettings&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onOptionsChange&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;Props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supportedDataSourceTypes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;loki&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elasticsearch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-splunk-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-opensearch-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-falconlogscale-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;googlecloud-logging-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

  &lt;span class="p"&gt;];&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section of code seems to create a static list of the data sources supported by the Trace to Logs feature. We can confirm this by spotting some of the usual suspects in the list (Loki, Elasticsearch, etc.). Based on this, our first alteration to the Grafana source code should be to add our data sources to this list.&lt;/p&gt;
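&lt;p&gt;To make that gating behaviour concrete, here is a minimal stand-alone sketch of how such an allow-list check works (the helper function is illustrative, not Grafana's actual API):&lt;/p&gt;

```typescript
// A static allow-list, mirroring the one in TraceToLogsSettings.tsx:
// the settings UI only offers data sources whose plugin type appears here.
const supportedDataSourceTypes = [
  'loki',
  'elasticsearch',
  'grafana-splunk-datasource', // external
];

// Hypothetical helper: returns whether a plugin type may be picked
// as a Trace to Logs target.
function isSupportedLogsDataSource(pluginType: string): boolean {
  return supportedDataSourceTypes.includes(pluginType);
}

console.log(isSupportedLogsDataSource('loki')); // true
console.log(isSupportedLogsDataSource('influxdb')); // false, until we patch the list
```

&lt;p&gt;Anything not in the list never appears in the Trace to Logs data source picker, which is exactly the behaviour we observed.&lt;/p&gt;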

&lt;p&gt;Now, as the coding pessimist that I am, I knew this probably wouldn’t be the only change we needed to make, but it was a good place to start. So I did the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Forked the Grafana repo&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cloned the repo:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone https://github.com/InfluxCommunity/grafana

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
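&lt;p&gt;With a local checkout, the same discovery can be repeated from the command line. As a sketch (the stand-in file below only mimics the real repository layout; in an actual Grafana checkout you would run the grep from the repo root), a recursive search quickly pinpoints the allow-list:&lt;/p&gt;

```shell
# Recreate a tiny stand-in for the real file so the search is reproducible;
# the directory layout mirrors the Grafana repository.
mkdir -p demo/public/app/core/components/TraceToLogs
printf "const supportedDataSourceTypes = [\n  'loki',\n];\n" \
  > demo/public/app/core/components/TraceToLogs/TraceToLogsSettings.tsx

# Find every file that mentions the allow-list.
grep -rn "supportedDataSourceTypes" demo
```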

&lt;p&gt;Before making those modifications, I wanted to do some more searching to see if there were any other changes I should be making. One line stood out to me in the &lt;strong&gt;TraceToLogsSettings&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;updateTracesToLogs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useCallback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Partial&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nx"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nx"&gt;TraceToLogsOptionsV2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

      &lt;span class="c1"&gt;// Cannot use updateDatasourcePluginJsonDataOption here as we need to update 2 keys, and they would overwrite each&lt;/span&gt;

      &lt;span class="c1"&gt;// other as updateDatasourcePluginJsonDataOption isn't synchronized&lt;/span&gt;

      &lt;span class="nx"&gt;onOptionsChange&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;

        &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

        &lt;span class="na"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

          &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

          &lt;span class="na"&gt;tracesToLogsV2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

            &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;traceToLogs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

            &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

          &lt;span class="p"&gt;},&lt;/span&gt;

          &lt;span class="na"&gt;tracesToLogs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

        &lt;span class="p"&gt;},&lt;/span&gt;

      &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="p"&gt;},&lt;/span&gt;

    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;onOptionsChange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogs&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It was &lt;strong&gt;TraceToLogsOptionsV2&lt;/strong&gt;. When I searched for the places this interface was used, I found the following entry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6zWCgnnE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3a5pb6qftk3ludc6l29q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6zWCgnnE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3a5pb6qftk3ludc6l29q.png" alt="TraceToLogsOptionsV2" width="744" height="1134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it appears we might also have work to do in the &lt;strong&gt;&lt;a href="https://github.com/grafana/grafana/blob/main/public/app/features/explore/TraceView/createSpanLink.tsx#L332"&gt;createSpanLink.tsx&lt;/a&gt;&lt;/strong&gt; file. Within it, I found this section of code, and my question was: what exactly is this code doing?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--367tXqkK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0k9p2riukestxwcv1bnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--367tXqkK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0k9p2riukestxwcv1bnc.png" alt="case statement" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To cut a long story short, the case statement essentially tells the trace visualisation to check which log data source has been defined (if any) and to build a query relevant to that data source. If the specified data source is not found within this case statement, the button is simply disabled, which meant changing the original file wouldn’t be enough, as we suspected.&lt;/p&gt;
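&lt;p&gt;In other words, the case statement acts as a dispatch table keyed on the data source type. A stripped-down sketch of that logic (with illustrative names and query strings, not Grafana's real query builders) looks like this:&lt;/p&gt;

```typescript
// Minimal model of the dispatch in createSpanLink.tsx: pick a query
// shape based on the configured log data source type.
type LogQuery = { datasourceType: string; expr: string };

function buildLogQuery(dsType: string, traceId: string): LogQuery | undefined {
  switch (dsType) {
    case 'loki':
      // Loki uses a label-matcher expression.
      return { datasourceType: dsType, expr: `{traceID="${traceId}"}` };
    case 'influxdb':
      // InfluxDB can be queried with SQL-style text (illustrative only).
      return { datasourceType: dsType, expr: `SELECT * FROM logs WHERE trace_id = '${traceId}'` };
    default:
      // Unknown type: no query is built, so the
      // "Logs for this span" button stays disabled.
      return undefined;
  }
}

console.log(buildLogQuery('loki', 'abc123'));
console.log(buildLogQuery('unknown', 'abc123')); // undefined
```

&lt;p&gt;Our job, then, is to add cases for our own data source types so a query gets built instead of falling through to the default.&lt;/p&gt;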

&lt;p&gt;Okay, we have now completed our investigation. Let's move on to the code changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modification
&lt;/h3&gt;

&lt;p&gt;We have two files to modify:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/grafana/grafana/blob/main/public/app/core/components/TraceToLogs/TraceToLogsSettings.tsx"&gt;TraceToLogsSettings.tsx&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/grafana/grafana/blob/main/public/app/features/explore/TraceView/createSpanLink.tsx#L332"&gt;createSpanLink.tsx&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's start with the simplest to tackle and go from there. &lt;/p&gt;

&lt;h4&gt;
  
  
  TraceToLogsSettings
&lt;/h4&gt;

&lt;p&gt;So this file was relatively simple to change. All we needed to do was modify the static list of supported log data sources like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;TraceToLogsSettings&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onOptionsChange&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;Props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supportedDataSourceTypes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;loki&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elasticsearch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-splunk-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-opensearch-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-falconlogscale-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;googlecloud-logging-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;influxdata-flightsql-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;influxdb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// external&lt;/span&gt;

  &lt;span class="p"&gt;];&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, I have added two entries: &lt;strong&gt;influxdata-flightsql-datasource&lt;/strong&gt; and &lt;strong&gt;influxdb&lt;/strong&gt;. I ran a quick build of the Grafana project to see how this affected our data source configuration (we will discuss how to build at the end).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NAG8HSOy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nmb6kfl0fbzfo55zrum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NAG8HSOy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nmb6kfl0fbzfo55zrum.png" alt="datasource list" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey presto! We have a result. Now, this still didn’t enable the button within our Trace View, but we already knew it would require more work. &lt;/p&gt;

&lt;h4&gt;
  
  
  createSpanLink
&lt;/h4&gt;

&lt;p&gt;Now, onto the meat of our modification. For the record, I am not a TypeScript developer. What I do know is that the file has a whole bunch of examples we can use to attempt a blind copy-and-paste job with a few modifications. I ended up doing this for both plugins, but to keep the blog short, we will focus on the official InfluxDB plugin. &lt;/p&gt;

&lt;p&gt;My hypothesis was to use the Grafana Loki interface as the basis for the InfluxDB interface. The first change involved adding imports for the data source query types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LokiQuery&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../plugins/datasource/loki/types&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;InfluxQuery&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../plugins/datasource/influxdb/types&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are easy to locate when Grafana has an official plugin for your data source, since the plugin is embedded within the official repository. For our community plugin, I had two options: define a static interface within the file or provide more query parameters. I chose the latter.&lt;/p&gt;
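&lt;p&gt;For reference, the first option would have looked something like the following sketch: a minimal query interface declared directly in the file. The field names here are hypothetical and do not match the FlightSQL plugin's real query model:&lt;/p&gt;

```typescript
// Option 1 (not chosen): declare a minimal query interface in-file
// instead of importing it from the plugin. Field names are illustrative.
interface FlightSQLQuery {
  refId: string;
  rawSql: string;
}

const example: FlightSQLQuery = {
  refId: 'A',
  rawSql: "SELECT body FROM logs WHERE trace_id = 'abc123'",
};

console.log(example.rawSql);
```

&lt;p&gt;The downside of a static interface is that it drifts out of sync with the plugin, which is why passing extra query parameters felt like the safer route.&lt;/p&gt;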

&lt;p&gt;The next step was to modify the case statement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
   &lt;span class="c1"&gt;// TODO: This should eventually move into specific data sources and added to the data frame as we no longer use the&lt;/span&gt;

    &lt;span class="c1"&gt;//  deprecated blob format and we can map the link easily in data frame.&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logsDataSourceSettings&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;customQuery&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;

        &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;defaultKeys&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

      &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logsDataSourceSettings&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;loki&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

          &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryForLoki&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-splunk-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

          &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;joinBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

          &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryForSplunk&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;influxdata-flightsql-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

            &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;joinBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; OR &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

            &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryFlightSQL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;influxdb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

            &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;joinBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; OR &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

            &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryForInfluxQL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elasticsearch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-opensearch-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

          &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;labelValueSign&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;joinBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; AND &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

          &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryForElasticsearchOrOpensearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;grafana-falconlogscale-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

          &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;joinBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; OR &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

          &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryForFalconLogScale&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;googlecloud-logging-datasource&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

          &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getFormattedTags&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tagsToUse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;joinBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; AND &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

          &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getQueryForGoogleCloudLogging&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;traceToLogsOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, I added two new cases: &lt;code&gt;influxdata-flightsql-datasource&lt;/code&gt; and &lt;code&gt;influxdb&lt;/code&gt;. I then copied the two function calls from the Loki case: &lt;code&gt;getFormattedTags&lt;/code&gt; and &lt;code&gt;getQueryFor&lt;/code&gt;. It appeared I could leave &lt;code&gt;getFormattedTags&lt;/code&gt; alone, as it seemed to be the same for the majority of the cases. However, I would need to define my own &lt;code&gt;getQueryFor&lt;/code&gt; function. &lt;/p&gt;

&lt;p&gt;Let's take a look at the new &lt;code&gt;getQueryForInfluxQL&lt;/code&gt; function that’s called in the &lt;code&gt;influxdb&lt;/code&gt; case statement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;getQueryForInfluxQL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

  &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TraceSpan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TraceToLogsOptionsV2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;

&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;InfluxQuery&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;filterByTraceID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;filterBySpanID&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

      &lt;span class="na"&gt;refId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

      &lt;span class="na"&gt;rawQuery&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

      &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;customQuery&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

      &lt;span class="na"&gt;resultFormat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;logs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT time, "severity_text", body, attributes FROM logs WHERE time &amp;gt;=${__from}ms AND time &amp;amp;lt;=${__to}ms&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filterByTraceID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;traceID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;filterBySpanID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spanID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

            &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT time, "severity_text", body, attributes FROM logs WHERE "trace_id"=&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;${__span.traceId}&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt; AND "span_id"=&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;${__span.spanId}&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt; AND time &amp;gt;=${__from}ms AND time &amp;amp;lt;=${__to}ms&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filterByTraceID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;traceID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

            &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT time, "severity_text", body, attributes FROM logs WHERE "trace_id"=&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;${__span.traceId}&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt; AND time &amp;gt;=${__from}ms AND time &amp;amp;lt;=${__to}ms&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filterBySpanID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spanID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

            &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT time, "severity_text", body, attributes FROM logs WHERE "span_id"=&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;${__span.spanId}&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt; AND time &amp;gt;=${__from}ms AND time &amp;amp;lt;=${__to}ms&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="na"&gt;refId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="na"&gt;rawQuery&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="na"&gt;resultFormat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;logs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So there is quite a lot here, but let me highlight the important parts. First of all, I started with an exact copy of the Loki function. Then, I made the following changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I changed the return interface from &lt;code&gt;LokiQuery | undefined&lt;/code&gt; to &lt;code&gt;InfluxQuery | undefined&lt;/code&gt;, the data source type we imported earlier.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, I focused on the return payload. After some digging within the InfluxQuery type file, I came up with this:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="na"&gt;refId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="na"&gt;rawQuery&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="na"&gt;resultFormat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;logs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The InfluxDB data source had a handy parameter that allowed me to define the result format (usually metrics), and I now knew the data source would be expecting a raw query rather than an expression.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lastly, I had to define the queries which would run when the user clicked the button. These depended on what filter features the user had toggled within the data source settings (filter by traceID, spanID or both). So I modified the &lt;code&gt;if&lt;/code&gt; statement defined within the Loki function and constructed static InfluxQL queries. From there, I then used the Grafana placeholder variables found within other data sources to make the queries dynamic. Here is an example:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filterByTraceID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;traceID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;filterBySpanID&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spanID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

            &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT time, "severity_text", body, attributes FROM logs WHERE "trace_id"=&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;${__span.traceId}&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt; AND "span_id"=&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;${__span.spanId}&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt; AND time &amp;gt;=${__from}ms AND time &amp;amp;lt;=${__to}ms&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Full disclosure, it took me a good minute to figure out the &lt;code&gt;&amp;gt;=${__from}ms&lt;/code&gt; and &lt;code&gt;&amp;lt;=${__to}ms&lt;/code&gt; placeholders. This ended up being a brute-force, trial-and-error exercise. &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
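To make the placeholder variables concrete: when a user clicks the log link, Grafana substitutes the span and time-range values into the raw query before handing it to the data source. The toy interpolation below is an illustrative sketch, not Grafana's actual template engine; the span IDs and timestamps are made-up example values.

```typescript
// Toy interpolation of the trace-to-logs placeholder variables.
// NOTE: illustrative sketch only - Grafana performs this substitution
// internally; we just mimic it to show the final InfluxQL produced.

interface SpanIds {
  traceId: string;
  spanId: string;
}

function interpolate(query: string, span: SpanIds, fromMs: number, toMs: number): string {
  // Each placeholder appears once, so a plain string replace is enough here.
  return query
    .replace('${__span.traceId}', span.traceId)
    .replace('${__span.spanId}', span.spanId)
    .replace('${__from}', String(fromMs))
    .replace('${__to}', String(toMs));
}

// The raw query from the trace-to-logs settings (single-quoted strings
// keep the ${...} placeholders literal rather than template-interpolated).
const raw =
  'SELECT time, "severity_text", body, attributes FROM logs ' +
  'WHERE "trace_id"=\'${__span.traceId}\' AND "span_id"=\'${__span.spanId}\' ' +
  'AND time >=${__from}ms AND time <=${__to}ms';

const finalQuery = interpolate(raw, { traceId: 'abc123', spanId: 'def456' }, 1691000000000, 1691003600000);
```

For the example span above, `finalQuery` reads: `SELECT time, "severity_text", body, attributes FROM logs WHERE "trace_id"='abc123' AND "span_id"='def456' AND time >=1691000000000ms AND time <=1691003600000ms`.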

&lt;h3&gt;
  
  
  Building
&lt;/h3&gt;

&lt;p&gt;Phew! We’re past the hard bit. Now onto the build process. I have quite a few years of experience with Docker, so this part was stress-free for me, but I imagine it could be daunting for new Docker users. Luckily, Grafana has some easy-to-follow &lt;a href="https://github.com/grafana/grafana/blob/main/contribute/developer-guide.md"&gt;documentation&lt;/a&gt; for the task. To paraphrase, these are the steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the following build command (this can take a while; make sure your Docker VM has enough memory if you're using macOS or Windows):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
make build-docker-full  

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The build process produces a Docker image called &lt;strong&gt;grafana/grafana-oss:dev&lt;/strong&gt;. We could just use this image, but as a formality, I like to retag the image and push it to my Docker registry.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker tag grafana/grafana-oss:dev jaymand13/grafana-oss:dev2

docker push jaymand13/grafana-oss:dev2

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This way I have checkpoints when I am brute-forcing changes.&lt;/p&gt;

&lt;p&gt;There we have it! A fully baked Grafana dev image to try out with our changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The results and conclusion
&lt;/h2&gt;

&lt;p&gt;So after investigating, making the changes, and building our new Grafana container, let's take a look at our results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DPyQolYD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfp4zm9yw4t1wni2o012.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DPyQolYD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfp4zm9yw4t1wni2o012.png" alt="result" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With our changes, the &lt;strong&gt;Logs for this span&lt;/strong&gt; button now activates. We also have this neat little &lt;strong&gt;Log&lt;/strong&gt; button that appears next to each span. A confession: the blue &lt;strong&gt;Logs for this span&lt;/strong&gt; button currently only works within Grafana's Explore tab, but the new &lt;strong&gt;Log&lt;/strong&gt; link works within our dashboard. To quickly explain the difference: Grafana dashboards are custom-built by users and can include one or many data sources with a variety of different visualisations. Explore, on the other hand, provides an interface for drill-down and investigation activities like you see in the above screenshot. Still, it’s not a huge problem compared to how little we needed to change to get here. &lt;/p&gt;

&lt;p&gt;And so, we've reached the end of our dive into the intricacies of modifying Grafana's source code. Over the course of this tutorial, I hope you've not only gained a practical understanding of how to customize Grafana for your specific requirements, but also an appreciation for the flexibility and potential of open-source platforms in general.&lt;/p&gt;

&lt;p&gt;Remember, in the realm of open-source, there's no limit to how much we can tweak, adjust, and reimagine to suit our needs. I hope this guide serves you well as you delve deeper into your own projects, and that it brings you one step closer to mastering the powerful tool that is Grafana. For me, my journey continues as I now plan to add exemplar support to this OSS build. If you would like to try this out yourself you can find the OpenTelemetry example &lt;a href="https://github.com/InfluxCommunity/opentelemetry-demo"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>tutorial</category>
      <category>opensource</category>
      <category>observability</category>
    </item>
    <item>
      <title>Client Library Deep Dive: Python (Video Tutorial)</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Thu, 03 Aug 2023 09:46:03 +0000</pubDate>
      <link>https://dev.to/jayclifford345/client-library-deep-dive-python-video-tutorial-fmi</link>
      <guid>https://dev.to/jayclifford345/client-library-deep-dive-python-video-tutorial-fmi</guid>
      <description>&lt;p&gt;To round the first part of the series off I thought I would include a video tutorial using the new Python client for InfluxDB 3.0. &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/tpdONTm1GC8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Reflection
&lt;/h2&gt;

&lt;p&gt;It’s been a wild ride so far...&lt;/p&gt;

&lt;p&gt;I’ve been learning the ropes of PyArrow, Apache Flight, and Parquet. When I set out on this journey, I didn’t imagine it would mean building and maintaining a client library for InfluxDB 3.0 as a Developer Advocate. I must admit, it was a refreshing feeling to act like a full-time developer again.&lt;/p&gt;

&lt;p&gt;I am excited to continue this journey learning different parts of the Apache ecosystem within the columnar space; as InfluxDB 3.0's use cases expand, so do mine. &lt;/p&gt;

&lt;h2&gt;
  
  
  Contribute
&lt;/h2&gt;

&lt;p&gt;I would love to hear your thoughts on the client library; use it in anger, submit feature requests, write examples or even build features. A community project is only ever as good as the community around it. &lt;/p&gt;

&lt;p&gt;Here is the repo: &lt;a href="https://github.com/InfluxCommunity/influxdb3-python"&gt;client lib repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>video</category>
      <category>influxdb</category>
      <category>apache</category>
    </item>
    <item>
      <title>Client Library Deep Dive: Python (Part 2)</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Thu, 03 Aug 2023 09:30:45 +0000</pubDate>
      <link>https://dev.to/jayclifford345/client-library-deep-dive-python-part-2-4lkc</link>
      <guid>https://dev.to/jayclifford345/client-library-deep-dive-python-part-2-4lkc</guid>
      <description>&lt;h2&gt;
  
  
  Working with the new InfluxDB 3.0 Python CLI and Client Library
&lt;/h2&gt;

&lt;p&gt;By Jay Clifford&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7lblthFz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfdgi5i8irq0nmz2ovgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7lblthFz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfdgi5i8irq0nmz2ovgq.png" alt='"banner"' width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, we are back for Part 2! Last time we discussed the new community Python library for InfluxDB 3.0. Let's talk about a bolt-on application that uses the client library as the core of its development, the &lt;a href="https://github.com/InfluxCommunity/influxdb3-python-cli"&gt;InfluxDB 3.0 Python CLI&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python CLI
&lt;/h2&gt;

&lt;p&gt;Okay, so following the same format as before, what were the reasons for building the CLI? Well, there are two primary reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We wanted to give users a data browsing tool that leveraged the new Flight endpoint. Python gave us the opportunity to prototype fast before we invested work in a more robust CLI offering. It also allowed us to leverage some interesting data manipulation libraries that could extend the scope of the Python CLI. &lt;/li&gt;
&lt;li&gt;We wanted a robust way to test the newly created &lt;a href="https://github.com/InfluxCommunity/influxdb3-python"&gt;InfluxDB 3.0 Python Client library&lt;/a&gt;; as you will see, most of its tooling and functionality is in use here.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Install
&lt;/h3&gt;

&lt;p&gt;Let's talk about the installation process because, I must admit, Python doesn’t provide the most user-friendly packaging and deployment methods unless you use it daily. I recommend installing the CLI in a Python Virtual Environment first for test purposes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv ./.venv 
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate
&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; –upgrade pip

&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;influxdb3-python-cli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This set of commands creates our Virtual Python Environment, activates it, updates our Python package installer, and finally installs the new CLI.&lt;/p&gt;

&lt;p&gt;If you would like to graduate from a Python Virtual Environment and move the CLI into your path, you can do so with a sudo install (be careful here not to cause permission issues with packages):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; pip &lt;span class="nb"&gt;install &lt;/span&gt;influxdb3-python-cli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a CLI config
&lt;/h3&gt;

&lt;p&gt;The first thing you want to do is create a connection config. This feature acts like the current InfluxDB &lt;code&gt;influx&lt;/code&gt; CLI, saving your connection credentials for later use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
influx3 create config &lt;span class="se"&gt;\&lt;/span&gt;

&lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"poke-dex"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;

&lt;span class="nt"&gt;--database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"pokemon-codex"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;

&lt;span class="nt"&gt;--host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"us-east-1-1.aws.cloud2.influxdata.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;

&lt;span class="nt"&gt;--token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&amp;amp;lt;your token&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;

&lt;span class="nt"&gt;--org&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&amp;amp;lt;your org ID&amp;gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;--name
   &lt;/td&gt;
   &lt;td&gt;Name to describe your connection config. This must be unique.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;--token
   &lt;/td&gt;
   &lt;td&gt;This provides authentication for the client to read and write from InfluxDB Cloud &lt;a href="https://www.influxdata.com/products/influxdb-cloud/serverless/"&gt;Serverless&lt;/a&gt; or &lt;a href="https://www.influxdata.com/products/influxdb-cloud/dedicated/"&gt;Dedicated&lt;/a&gt;. Note: you need a token with read &lt;em&gt;and&lt;/em&gt; write authentication if you wish to use both features.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;--host
   &lt;/td&gt;
   &lt;td&gt;InfluxDB host: this should only be the domain, without the protocol (https://)
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;--org
   &lt;/td&gt;
   &lt;td&gt;Cloud Serverless still requires a user’s organization ID for writing data to 3.0. Dedicated users can just use an arbitrary string.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;--database
   &lt;/td&gt;
   &lt;td&gt;The database you wish to query from and write to.
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Config commands
&lt;/h4&gt;

&lt;p&gt;Config commands also exist to activate, update, delete, and list current active configs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;

```bash
&lt;p&gt;
influx3.py config update --name="poke-dex" --host="new-host.com"
&lt;/p&gt;
&lt;p&gt;
```


   &lt;/p&gt;
&lt;/td&gt;
   &lt;td&gt;The update subcommand updates an existing configuration. The --name parameter is required to specify which configuration to update. All other parameters (--host, --token, --database, --org, --active) are optional.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;

```bash
&lt;p&gt;
influx3.py config use --name="poke-dex"
&lt;/p&gt;
&lt;p&gt;
```


   &lt;/p&gt;
&lt;/td&gt;
   &lt;td&gt;The use subcommand sets a specific configuration as the active one. The --name parameter is required to specify which configuration to use.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;

```bash
&lt;p&gt;
influx3.py config delete --name="poke-dex"
&lt;/p&gt;
&lt;p&gt;
```


   &lt;/p&gt;
&lt;/td&gt;
   &lt;td&gt;The delete subcommand deletes a configuration. The --name parameter is required to specify which configuration to delete.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;

```bash
&lt;p&gt;
influx3.py config list
&lt;/p&gt;
&lt;p&gt;
```


   &lt;/p&gt;
&lt;/td&gt;
   &lt;td&gt;The list subcommand lists all the configurations.
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Writing and querying
&lt;/h3&gt;

&lt;p&gt;You can use the CLI to either directly call the application, followed by the commands you wish to run, or run it through an interactive &lt;a href="https://www.digitalocean.com/community/tutorials/what-is-repl"&gt;REPL&lt;/a&gt;. I personally believe the REPL approach provides a better flow, so let’s demo some of the features.&lt;/p&gt;

&lt;p&gt;Once you've created your config, simply enter the following to activate the REPL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
influx3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which leads to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
influx3

InfluxDB 3.0 CLI.

&lt;span class="o"&gt;(&amp;gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Query
&lt;/h4&gt;

&lt;p&gt;Let’s first take a look at the query options. Within the REPL you have three query options: SQL, InfluxQL, and ChatGPT (more on this later). Let’s drop into the SQL REPL and run a basic query against the Trainer data we generated in the previous blog:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
InfluxDB 3.0 CLI.

&lt;span class="o"&gt;(&amp;gt;)&lt;/span&gt; sql

&lt;span class="o"&gt;(&lt;/span&gt;sql &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; SELECT &lt;span class="k"&gt;*&lt;/span&gt; FROM caught

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, I wouldn’t normally recommend querying without some form of time-based WHERE clause, but I wanted to highlight how the CLI can handle large datasets. It uses the ‘mode = chunk’ option from the Python Client Library to break large datasets into manageable Arrow batches. From there we have three options: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Press &lt;strong&gt;TAB&lt;/strong&gt; to see the next portion of data, if one exists.&lt;/li&gt;
&lt;li&gt;Press &lt;strong&gt;F&lt;/strong&gt; to save the current Arrow batch to a file type of our choosing (JSON, CSV, Parquet, ORC, Feather).&lt;/li&gt;
&lt;li&gt;Press &lt;strong&gt;CTRL-C&lt;/strong&gt; to return to the SQL REPL.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s take a look at option 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
| 3961 |       82 | Venusaur                  |        83 |   80 | 0003 |      12 |     7 |      80 | 2023-07-06 13:41:36.588000 | ash       | Grass    | Poison   |

| 3962 |       64 | Dratini                   |        45 |   41 | 0147 |       6 |     7 |      50 | 2023-07-06 14:30:32.519000 | jessie    | Dragon   |          | 



Press TAB to fetch next chunk of data, or F to save current chunk to a file

Enter the file name with full path &lt;span class="o"&gt;(&lt;/span&gt;e.g. /home/user/sample.json&lt;span class="o"&gt;)&lt;/span&gt;: ~/Desktop/all-trainer-data.csv

Data saved to ~/Desktop/all-trainer-data.csv.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a sample of the CSV file created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"attack","caught","defense","hp","id","level","num","speed","time","trainer","type1","type2"

49,"Bulbasaur",49,45,"0001",12,"1",45,2023-07-06 14:30:41.886000000,"ash","Grass","Poison"

62,"Ivysaur",63,60,"0002",7,"1",60,2023-07-06 14:30:32.519000000,"ash","Grass","Poison"

62,"Ivysaur",63,60,"0002",8,"1",60,2023-07-06 14:30:38.519000000,"ash","Grass","Poison"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
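Because the export is plain CSV, you can load it back with nothing more than the standard library. A small sketch using the sample rows above inlined as a string (in practice you would open the file the CLI wrote, e.g. `~/Desktop/all-trainer-data.csv`):

```python
import csv
import io

# The first few lines of the exported file, inlined for illustration.
sample = '''"attack","caught","defense","hp","id","level","num","speed","time","trainer","type1","type2"
49,"Bulbasaur",49,45,"0001",12,"1",45,2023-07-06 14:30:41.886000000,"ash","Grass","Poison"
62,"Ivysaur",63,60,"0002",7,"1",60,2023-07-06 14:30:32.519000000,"ash","Grass","Poison"'''

# DictReader maps each row onto the header columns.
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["caught"], rows[0]["attack"])  # Bulbasaur 49
```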



&lt;p&gt;Once we reach the end of our dataset, it prompts us to press &lt;strong&gt;ENTER&lt;/strong&gt; to drop back into the SQL REPL. Just remember, if you feel like you’re pressing &lt;strong&gt;TAB&lt;/strong&gt; forever, you can always drop out of the query with &lt;strong&gt;CTRL-C&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, let’s look at a more interesting example with the InfluxQL REPL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;(&lt;/span&gt;sql &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; &lt;span class="nb"&gt;exit&lt;/span&gt;

&lt;span class="o"&gt;(&amp;gt;)&lt;/span&gt; influxql

&lt;span class="o"&gt;(&lt;/span&gt;influxql &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; SELECT count&lt;span class="o"&gt;(&lt;/span&gt;caught&lt;span class="o"&gt;)&lt;/span&gt; FROM caught WHERE &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; now&lt;span class="o"&gt;()&lt;/span&gt; - 2d GROUP BY trainer

|    | iox::measurement   | &lt;span class="nb"&gt;time&lt;/span&gt;                | trainer   |   count |

|---:|:-------------------|:--------------------|:----------|--------:|

|  0 | caught             | 1970-01-01 00:00:00 | ash       |     625 |

|  1 | caught             | 1970-01-01 00:00:00 | brock     |     673 |

|  2 | caught             | 1970-01-01 00:00:00 | gary      |     645 |

|  3 | caught             | 1970-01-01 00:00:00 | james     |     664 |

|  4 | caught             | 1970-01-01 00:00:00 | jessie    |     663 |

|  5 | caught             | 1970-01-01 00:00:00 | misty     |     693 | 
 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;influxql &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; SELECT count&lt;span class="o"&gt;(&lt;/span&gt;caught&lt;span class="o"&gt;)&lt;/span&gt; FROM caught WHERE &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; now&lt;span class="o"&gt;()&lt;/span&gt; - 2d  GROUP BY &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;1d&lt;span class="o"&gt;)&lt;/span&gt;,trainer ORDER BY &lt;span class="nb"&gt;time&lt;/span&gt;

|    | iox::measurement   | &lt;span class="nb"&gt;time&lt;/span&gt;                | trainer   |   count |

|---:|:-------------------|:--------------------|:----------|--------:|

|  0 | caught             | 2023-07-05 00:00:00 | ash       |     nan |

|  1 | caught             | 2023-07-06 00:00:00 | ash       |     625 |

|  2 | caught             | 2023-07-07 00:00:00 | ash       |     148 |

|  3 | caught             | 2023-07-05 00:00:00 | brock     |     nan |

|  4 | caught             | 2023-07-06 00:00:00 | brock     |     673 |

|  5 | caught             | 2023-07-07 00:00:00 | brock     |     180 |

|  6 | caught             | 2023-07-05 00:00:00 | gary      |     nan |

|  7 | caught             | 2023-07-06 00:00:00 | gary      |     645 |

|  8 | caught             | 2023-07-07 00:00:00 | gary      |     155 |

|  9 | caught             | 2023-07-05 00:00:00 | james     |     nan |

| 10 | caught             | 2023-07-06 00:00:00 | james     |     664 |

| 11 | caught             | 2023-07-07 00:00:00 | james     |     157 |

| 12 | caught             | 2023-07-05 00:00:00 | jessie    |     nan |

| 13 | caught             | 2023-07-06 00:00:00 | jessie    |     663 |

| 14 | caught             | 2023-07-07 00:00:00 | jessie    |     144 |

| 15 | caught             | 2023-07-05 00:00:00 | misty     |     nan |

| 16 | caught             | 2023-07-06 00:00:00 | misty     |     693 |

| 17 | caught             | 2023-07-07 00:00:00 | misty     |     178 |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will save this one as a &lt;a href="https://www.influxdata.com/glossary/apache-parquet/"&gt;Parquet&lt;/a&gt; file for later.&lt;/p&gt;

&lt;h4&gt;
  
  
  Write
&lt;/h4&gt;

&lt;p&gt;Moving on from using the CLI for querying, let’s talk about the write functionality. This feature set isn’t as fleshed out as I would like it to be yet, but it covers the basics. We can drop into the write REPL and write data to InfluxDB using line protocol like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;(&lt;/span&gt;influxql &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; &lt;span class="nb"&gt;exit&lt;/span&gt;

&lt;span class="o"&gt;(&amp;gt;)&lt;/span&gt; write

&lt;span class="o"&gt;(&lt;/span&gt;write &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; caught,id&lt;span class="o"&gt;=&lt;/span&gt;0115,num&lt;span class="o"&gt;=&lt;/span&gt;1,trainer&lt;span class="o"&gt;=&lt;/span&gt;brock &lt;span class="nv"&gt;attack&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;125i,caught&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"KangaskhanMega Kangaskhan"&lt;/span&gt;,defense&lt;span class="o"&gt;=&lt;/span&gt;100i,hp&lt;span class="o"&gt;=&lt;/span&gt;105i,level&lt;span class="o"&gt;=&lt;/span&gt;13i,speed&lt;span class="o"&gt;=&lt;/span&gt;100i,type1&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Normal"&lt;/span&gt; 1688741473083000000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
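The write REPL forwards the line you type as line protocol, and the format itself is easy to assemble by hand: measurement, comma-separated tags, a space, comma-separated fields, a space, then a nanosecond timestamp. A pure-Python sketch of that grammar (not the client library's serializer, and it skips the escaping rules for special characters in tag values):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Assemble one line-protocol line: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_parts = []
    for k, v in fields.items():
        if isinstance(v, str):
            field_parts.append(f'{k}="{v}"')   # string fields are quoted
        elif isinstance(v, int):
            field_parts.append(f"{k}={v}i")    # integer fields carry an 'i' suffix
        else:
            field_parts.append(f"{k}={v}")     # floats are bare
    return f"{measurement},{tag_str} {','.join(field_parts)} {timestamp_ns}"

line = to_line_protocol(
    "caught",
    {"id": "0115", "num": "1", "trainer": "brock"},
    {"attack": 125, "caught": "KangaskhanMega Kangaskhan", "hp": 105},
    1688741473083000000,
)
print(line)
```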



&lt;p&gt;Next, let’s have a look at the &lt;code&gt;write_file&lt;/code&gt; feature. For this we need to drop out of the REPL entirely and use command-line flags when calling &lt;code&gt;influx3&lt;/code&gt;. Let’s load our count results into a new table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;(&lt;/span&gt;write &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; &lt;span class="nb"&gt;exit&lt;/span&gt;

&lt;span class="o"&gt;(&amp;gt;)&lt;/span&gt; &lt;span class="nb"&gt;exit

&lt;/span&gt;Exiting …

influx3 write_file &lt;span class="nt"&gt;--help&lt;/span&gt;

usage: influx3 write_file &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-h&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; FILE &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--measurement&lt;/span&gt; MEASUREMENT] &lt;span class="nt"&gt;--time&lt;/span&gt; TIME &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--tags&lt;/span&gt; TAGS]

options:

  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt;            show this &lt;span class="nb"&gt;help &lt;/span&gt;message and &lt;span class="nb"&gt;exit&lt;/span&gt;

  &lt;span class="nt"&gt;--file&lt;/span&gt; FILE           the file to import

  &lt;span class="nt"&gt;--measurement&lt;/span&gt; MEASUREMENT

                        Define the name of the measurement

  &lt;span class="nt"&gt;--time&lt;/span&gt; TIME           Define the name of the &lt;span class="nb"&gt;time &lt;/span&gt;column within the file

  &lt;span class="nt"&gt;--tags&lt;/span&gt; TAGS           &lt;span class="o"&gt;(&lt;/span&gt;optional&lt;span class="o"&gt;)&lt;/span&gt; array of column names which are tags. Format should be: tag1,tag2

influx3 write_file &lt;span class="nt"&gt;--file&lt;/span&gt; ~/Desktop/count.parquet &lt;span class="nt"&gt;--time&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="nt"&gt;--tags&lt;/span&gt; trainer &lt;span class="nt"&gt;--measurement&lt;/span&gt; summary

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
(influxql &amp;gt;) SELECT count, trainer, time  FROM summary

|    | iox::measurement   | time                |   count | trainer   |

|---:|:-------------------|:--------------------|--------:|:----------|

|  0 | summary            | 2023-07-05 00:00:00 |     nan | ash       |

|  1 | summary            | 2023-07-05 00:00:00 |     nan | brock     |

|  2 | summary            | 2023-07-05 00:00:00 |     nan | gary      |

|  3 | summary            | 2023-07-05 00:00:00 |     nan | james     |

|  4 | summary            | 2023-07-05 00:00:00 |     nan | jessie    |

|  5 | summary            | 2023-07-05 00:00:00 |     nan | misty     |

|  6 | summary            | 2023-07-06 00:00:00 |     625 | ash       |

|  7 | summary            | 2023-07-06 00:00:00 |     673 | brock     |

|  8 | summary            | 2023-07-06 00:00:00 |     645 | gary      |

|  9 | summary            | 2023-07-06 00:00:00 |     664 | james     |

| 10 | summary            | 2023-07-06 00:00:00 |     663 | jessie    |

| 11 | summary            | 2023-07-06 00:00:00 |     693 | misty     |

| 12 | summary            | 2023-07-07 00:00:00 |     148 | ash       |

| 13 | summary            | 2023-07-07 00:00:00 |     180 | brock     |

| 14 | summary            | 2023-07-07 00:00:00 |     155 | gary      |

| 15 | summary            | 2023-07-07 00:00:00 |     157 | james     |

| 16 | summary            | 2023-07-07 00:00:00 |     144 | jessie    |

| 17 | summary            | 2023-07-07 00:00:00 |     178 | misty     |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
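The `write_file` flag interface shown above maps onto a straightforward argument parser. As a minimal sketch of how those options might be declared (illustrative only; the real parser lives in the influxdb3-python-cli repository):

```python
import argparse

# Mirror of the flags from `influx3 write_file --help` (illustrative).
parser = argparse.ArgumentParser(prog="influx3 write_file")
parser.add_argument("--file", required=True, help="the file to import")
parser.add_argument("--measurement", help="name of the measurement")
parser.add_argument("--time", required=True,
                    help="name of the time column within the file")
parser.add_argument("--tags",
                    help="(optional) comma-separated tag column names, e.g. tag1,tag2")

args = parser.parse_args(
    ["--file", "count.parquet", "--time", "time",
     "--tags", "trainer", "--measurement", "summary"]
)
tags = args.tags.split(",") if args.tags else []
print(args.measurement, tags)  # summary ['trainer']
```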



&lt;h3&gt;
  
  
  Experimental feature (ChatGPT)
&lt;/h3&gt;

&lt;p&gt;With ChatGPT and OpenAI being all the rage these days, I looked to see if their Python package could benefit the CLI. Interestingly, it does… Because InfluxDB has been open source since its inception, ChatGPT has become pretty well-versed in building InfluxQL queries. Take a look at this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;(&lt;/span&gt;chatgpt &lt;span class="o"&gt;&amp;gt;)&lt;/span&gt; give me a list of the top 10 caught with an attack higher than 100 from caught

Run InfluxQL query: SELECT &lt;span class="k"&gt;*&lt;/span&gt; FROM caught WHERE attack &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 100 LIMIT 10

|    | iox::measurement   | &lt;span class="nb"&gt;time&lt;/span&gt;                       |   attack | caught                    |   defense |   hp |   &lt;span class="nb"&gt;id&lt;/span&gt; |   level |   num |   speed | trainer   | type1    | type2   |

|---:|:-------------------|:---------------------------|---------:|:--------------------------|----------:|-----:|-----:|--------:|------:|--------:|:----------|:---------|:--------|

|  0 | caught             | 2023-07-06 13:09:36.095000 |      110 | Dodrio                    |        70 |   60 | 0085 |      19 |     1 |     100 | jessie    | Normal   | Flying  |

|  1 | caught             | 2023-07-06 13:09:36.095000 |      125 | Pinsir                    |       100 |   65 | 0127 |       6 |     1 |      85 | brock     | Bug      |         |

|  2 | caught             | 2023-07-06 13:10:53.995000 |      130 | CharizardMega Charizard X |       111 |   78 | 0006 |       6 |     1 |     100 | brock     | Fire     | Dragon  |

|  3 | caught             | 2023-07-06 13:10:53.995000 |      150 | BeedrillMega Beedrill     |        40 |   65 | 0015 |      12 |     1 |     145 | jessie    | Bug      | Poison  |

|  4 | caught             | 2023-07-06 13:10:53.995000 |      102 | Nidoking                  |        77 |   81 | 0034 |      20 |     1 |      85 | gary      | Poison   | Ground  |

|  5 | caught             | 2023-07-06 13:10:53.995000 |      105 | Primeape                  |        60 |   65 | 0057 |      16 |     1 |      95 | misty     | Fighting |         |

|  6 | caught             | 2023-07-06 13:10:53.995000 |      120 | Golem                     |       130 |   80 | 0076 |       8 |     1 |      45 | ash       | Rock     | Ground  |

|  7 | caught             | 2023-07-06 13:10:53.995000 |      105 | Muk                       |        75 |  105 | 0089 |       5 |     1 |      50 | brock     | Poison   |         |

|  8 | caught             | 2023-07-06 13:10:53.995000 |      105 | Muk                       |        75 |  105 | 0089 |      19 |     1 |      50 | james     | Poison   |         |

|  9 | caught             | 2023-07-06 13:10:53.995000 |      105 | Muk                       |        75 |  105 | 0089 |      16 |     2 |      50 | james     | Poison   |         |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This feature currently uses GPT-3.5 and requires an OpenAI API token. If you would like instructions on how to use this feature, check out this part of the &lt;a href="https://github.com/InfluxCommunity/influxdb3-python-cli/blob/main/README.md#alpha-openai-chatgpt-support"&gt;README&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future hopes
&lt;/h2&gt;

&lt;p&gt;The future is bright for the Python CLI as our development team pushes forward with tooling for InfluxDB 3.0. For now, the scope is to keep it as a bolt-on tool for Python developers and those who want an easily extendable CLI. Here is my current laundry list for the project: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;Feature
   &lt;/td&gt;
   &lt;td&gt;Status
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Improve OpenAI functionality:
&lt;ul&gt;

&lt;li&gt;Upgrade to GPT-4

&lt;/li&gt;
&lt;li&gt;Add call functions

&lt;/li&gt;
&lt;li&gt;Extend to SQL
&lt;/li&gt;
&lt;/ul&gt;
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
&lt;td&gt;Find a better way to package and distribute. Currently looking into PyInstaller as an option.
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Extended write functionality.
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Provide post query exploration support (Pandas functions) 
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Integrate delta sharing
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;So there you have it, Part 2 done and dusted. I really enjoyed writing this blog series on both the Python Client Library and CLI. Having such a heavy hand in their creation makes writing about them far more exciting and far easier. I hope these blogs inspire you to contribute to our new community-based libraries and tooling. If you want to chat about how to get involved, you can reach me via &lt;a href="https://www.influxdata.com/slack"&gt;Slack&lt;/a&gt; or &lt;a href="https://community.influxdata.com/"&gt;Discourse&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>influxdb</category>
      <category>arrow</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Client Library Deep Dive: Python (Part 1)</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Thu, 03 Aug 2023 09:17:22 +0000</pubDate>
      <link>https://dev.to/jayclifford345/client-library-deep-dive-python-part-1-353b</link>
      <guid>https://dev.to/jayclifford345/client-library-deep-dive-python-part-1-353b</guid>
      <description>&lt;h2&gt;
  
  
  Working with the new InfluxDB 3.0 Python CLI and Client Library
&lt;/h2&gt;

&lt;p&gt;By Jay Clifford&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--byNyytKG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn0dadxngqgrxi74qzmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--byNyytKG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn0dadxngqgrxi74qzmh.png" alt='"banner"' width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Community Client libraries are back with &lt;a href="https://www.influxdata.com/products/influxdb-overview/"&gt;InfluxDB 3.0&lt;/a&gt;. If you would like an overview of each client library then I highly recommend checking out &lt;a href="https://www.influxdata.com/blog/querying-writing-influxdb-cloud-status-client-libraries/"&gt;Anais’s blog on their status&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this two-part blog series, we do a deep dive into the new &lt;a href="https://github.com/InfluxCommunity/influxdb3-python"&gt;Python Client Library&lt;/a&gt; and &lt;a href="https://github.com/InfluxCommunity/influxdb3-python-cli"&gt;CLI&lt;/a&gt;. By the end, you should have a good understanding of the current features, how the internals work, and my future ideas for both projects. From there my hope is that it gives you the opportunity to contribute to, and have your say in their future.&lt;/p&gt;

&lt;p&gt;In this post (Part 1), we will focus primarily on the Client Library because it underlies the Python CLI. &lt;/p&gt;

&lt;h2&gt;
  
  
  Python client library
&lt;/h2&gt;

&lt;p&gt;So, let's start off with the Python client library. The scope was simple: build a library that could write to and query InfluxDB 3.0. Because the write endpoint didn’t change in InfluxDB 3.0, we could bring forward much of the functionality from the V2 library, such as batch writes, data parsing, point objects, and much more. However, on the query side of things, we had to completely remake it. We wanted to focus on the capabilities of &lt;a href="https://arrow.apache.org/docs/format/Flight.html"&gt;Arrow Flight&lt;/a&gt; and bring support for both SQL and InfluxQL-based queries. &lt;a href="https://arrow.apache.org/docs/python/index.html"&gt;PyArrow&lt;/a&gt; also opened up better ecosystem support for libraries such as Pandas and Polars, but I’ll have more on this later. &lt;/p&gt;

&lt;p&gt;Let's build a simple Python application together that writes and queries InfluxDB 3.0.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install
&lt;/h3&gt;

&lt;p&gt;To install the client library (I recommend making a Python Virtual Environment first):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv ./.venv &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; –upgrade pip

&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;influxdb3-python

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This set of commands creates our Virtual Python Environment, activates it, updates our Python package installer, and, finally, installs the new client library.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a client
&lt;/h3&gt;

&lt;p&gt;In this section, we import our newly installed library and establish a client. I also discuss some configuration parameters and the reasoning behind them. &lt;/p&gt;

&lt;p&gt;Let's create a main.py file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;influxdb_client_3&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;InfluxDBClient3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Point&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;datetime&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;InfluxDBClient3&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"eu-central-1-1.aws.cloud2.influxdata.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="n"&gt;org&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"6a841c0c08328fb1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"pokemon-codex"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example shows a minimal configuration for the client. Like previous clients, it requires the following parameters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;token
   &lt;/td&gt;
   &lt;td&gt;This provides authentication for the client to read and write from InfluxDB Cloud &lt;a href="https://www.influxdata.com/products/influxdb-cloud/serverless/"&gt;Serverless&lt;/a&gt; or &lt;a href="https://www.influxdata.com/products/influxdb-cloud/dedicated/"&gt;Dedicated&lt;/a&gt;. Note: you need a token with read-and-write authentication if you wish to use both features.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;host
   &lt;/td&gt;
   &lt;td&gt;InfluxDB host — this should only be the domain without the protocol (https://) 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;org
   &lt;/td&gt;
&lt;td&gt;Cloud Serverless still requires the user’s organization ID for writing data to 3.0. Dedicated users can just use an arbitrary string.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;database
   &lt;/td&gt;
   &lt;td&gt;The database you wish to query and write from.
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I recommend creating a client on a per-database basis, though you can update the &lt;code&gt;_database&lt;/code&gt; instance variable if you only want to create one client.&lt;/p&gt;

&lt;p&gt;Next, let's take a look at the advanced parameters of the client:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;flight_client_options
   &lt;/td&gt;
   &lt;td&gt;This provides access to parameters for the flight query protocol. You can find configuration options &lt;a href="https://arrow.apache.org/docs/python/generated/pyarrow.flight.FlightClient.html"&gt;here&lt;/a&gt;.  
 
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/flight_options_example.py"&gt;Example&lt;/a&gt;.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;write_client_options
   &lt;/td&gt;
   &lt;td&gt;This provides access to the parameters used by the V2 write client, which you can find &lt;a href="https://github.com/influxdata/influxdb-client-python#writes"&gt;here&lt;/a&gt;. 
 
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/batching-example.py"&gt;Example.&lt;/a&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;**kwargs
   &lt;/td&gt;
   &lt;td&gt;Lastly, this provides access to the parameters used by the V2 client, which you can find &lt;a href="https://github.com/influxdata/influxdb-client-python#via-file"&gt;here&lt;/a&gt;.
&lt;p&gt;
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/batching-example.py"&gt;Example.&lt;/a&gt; (gzip compression)
   &lt;/p&gt;
&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let's continue our original example by discussing the write functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing data
&lt;/h3&gt;

&lt;p&gt;Now that we’ve established our client, in this section we look at the different methods you can use to write data to InfluxDB 3.0. Most will be familiar to you, as they follow the same ingestion methods as V2.&lt;/p&gt;

&lt;p&gt;Let's start off with basic point building:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# Continued from the Client's example \
&lt;/span&gt; \
&lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"caught"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"trainer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ash"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0006"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"num"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;\

                                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"caught"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"charizard"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;\

                                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"attack"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;\

                                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"defense"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"hp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;\

                                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"speed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;\

                                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"fire"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"flying"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;\

                                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;"Error writing point: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, you can see we build our line protocol using an instance of the Point class, which then translates into line protocol:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
caught,trainer=ash,id=0006,num=1 caught="charizard",level=10i,attack=30i,defense=40i,hp=200i,speed=10i,type1="fire",type2="flying" &amp;lt;timestamp&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also format this as an array of points:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="c1"&gt;# Adding first point
&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

    &lt;span class="n"&gt;Point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"caught"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"trainer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ash"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0006"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"num"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"caught"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"charizard"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"attack"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"defense"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"hp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"speed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"fire"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"flying"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Adding second point
&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

    &lt;span class="n"&gt;Point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"caught"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"trainer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ash"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"0007"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"num"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"caught"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bulbasaur"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"attack"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"defense"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"hp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;190&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"speed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"grass"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"type2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"poison"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also write data using dictionaries and other structured formats. One of my favorite ingest methods is via a Pandas DataFrame. &lt;/p&gt;
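&lt;p&gt;As a quick sketch of the dictionary shape before we get to the DataFrame example (hedged: this is the structure commonly accepted by InfluxDB Python clients, so confirm the exact schema against the client docs), a point expressed as a dictionary looks like this:&lt;/p&gt;

```python
from datetime import datetime, timezone

# Hypothetical point expressed as a dictionary. Confirm the exact shape the
# client accepts against the influxdb3-python documentation.
point = {
    "measurement": "caught",
    "tags": {"trainer": "ash", "id": "0006", "num": "1"},
    "fields": {"caught": "charizard", "level": 10, "hp": 200},
    "time": datetime.now(timezone.utc),
}

# client.write(record=point)  # assumes a configured InfluxDBClient3 instance
```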

&lt;p&gt;Let's take a look at an example utilizing this method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# Convert the list of dictionaries to a DataFrame
&lt;/span&gt;
&lt;span class="n"&gt;caught_pokemon_df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;set_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'timestamp'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Print the DataFrame
&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;caught_pokemon_df&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;caught_pokemon_df&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data_frame_measurement_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'caught'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

             &lt;span class="n"&gt;data_frame_tag_columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'trainer'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'num'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;"Error writing point: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example creates a Pandas DataFrame of our caught Pokemon for this session. We set the DataFrame index to the timestamp of when each Pokemon was caught, then pass the DataFrame plus the following write parameters to the ‘write()’ function:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;data_frame_measurement_name
   &lt;/td&gt;
   &lt;td&gt;The name of the measurement you wish to write your Pandas DataFrame into. 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;data_frame_tag_columns
   &lt;/td&gt;
   &lt;td&gt;A list of strings containing the column names you wish to make tags.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;data_frame_timestamp_column
   &lt;/td&gt;
   &lt;td&gt;Use this parameter to set the timestamp column if your index is not set to the timestamp. 
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
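&lt;p&gt;The DataFrame in the example above is built from a list of dictionaries keyed by a ‘timestamp’ column. A minimal, self-contained sketch (the rows and column names here are illustrative, mirroring the tags and fields used earlier):&lt;/p&gt;

```python
from datetime import datetime, timezone, timedelta
import pandas as pd

now = datetime.now(timezone.utc)

# Illustrative rows; one dictionary per caught Pokemon.
data = [
    {"timestamp": now, "trainer": "ash", "id": "0006", "num": "1",
     "caught": "charizard", "level": 10},
    {"timestamp": now + timedelta(seconds=1), "trainer": "ash", "id": "0007",
     "num": "2", "caught": "bulbasaur", "level": 12},
]

# Index on the timestamp so the client can use it as the point time.
caught_pokemon_df = pd.DataFrame(data).set_index("timestamp")
print(caught_pokemon_df)
```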

&lt;p&gt;Make sure to check out the full example &lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/pokemon-trainer/pandas-write.py"&gt;here&lt;/a&gt;. You can also find a batching example &lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/pokemon-trainer/basic-write.py"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Writing data from a file
&lt;/h4&gt;

&lt;p&gt;One of the most requested features for the previous client library was more ways to upload and parse different file formats. Leveraging the utilities of PyArrow, we now support uploading files in the following formats:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;CSV
   &lt;/td&gt;
   &lt;td&gt;
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/file-import/csv_write.py"&gt;Example here.&lt;/a&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;JSON
   &lt;/td&gt;
   &lt;td&gt;
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/file-import/json_write.py"&gt;Example here.&lt;/a&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Feather
   &lt;/td&gt;
   &lt;td&gt;
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/file-import/feather_write.py"&gt;Example here.&lt;/a&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;ORC
   &lt;/td&gt;
   &lt;td&gt;
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/file-import/orc_write.py"&gt;Example here.&lt;/a&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Parquet
   &lt;/td&gt;
   &lt;td&gt;
&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/file-import/parquet_write.py"&gt;Example here.&lt;/a&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Querying data
&lt;/h3&gt;

&lt;p&gt;Now that we've written some data into InfluxDB 3.0, let’s talk about how to query it back out. InfluxDB 3.0 provides a fully supported Apache Arrow Flight endpoint, which allows users to query using SQL or InfluxQL.&lt;/p&gt;

&lt;p&gt;Let's first take a look at a basic time series query in both SQL and InfluxQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;influxdb_client_3&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;InfluxDBClient3&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;InfluxDBClient3&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;

    &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"eu-central-1-1.aws.cloud2.influxdata.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="n"&gt;org&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"6a841c0c08328fb1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"pokemon-codex"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;sql&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'''SELECT * FROM caught WHERE trainer = 'ash' AND time &amp;gt;= now() - interval '1 hour' LIMIT 5'''&lt;/span&gt;

&lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;language&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'sql'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'all'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;influxql&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'''SELECT * FROM caught WHERE trainer = 'ash' AND time  &amp;gt; now() - 1h LIMIT 5'''&lt;/span&gt;

&lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;influxql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;language&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'influxql'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'pandas'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in this example, we used the same client to query with both SQL and InfluxQL. Let's take a quick look at the query parameters to see how they shape the returned result.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;query
   &lt;/td&gt;
   &lt;td&gt;This parameter currently accepts the string literal of your SQL or InfluxQL query. We hope to add prepared statements to this soon.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;language
   &lt;/td&gt;
   &lt;td&gt;This parameter accepts a string literal of either ‘sql’ or ‘influxql’.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;mode
   &lt;/td&gt;
   &lt;td&gt;There are currently five return modes:
&lt;ol&gt;
&lt;li&gt;‘all’: Returns all queried data as a &lt;a href="https://www.influxdata.com/blog/apache-arrow-basics-coding-apache-arrow-python/#:~:text=pyarrow%20client%20library.-,The%20basics,-In%20Apache%20Arrow"&gt;PyArrow Table&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;‘pandas’: Returns all queried data as a Pandas DataFrame.&lt;/li&gt;
&lt;li&gt;‘chunk’: Returns a Flight reader so you can iterate through large query results in smaller batches (&lt;a href="https://github.com/InfluxCommunity/influxdb3-python/blob/main/Examples/query_type.py"&gt;see example&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;‘reader’: Attempts to convert the stream to a RecordBatchReader.&lt;/li&gt;
&lt;li&gt;‘schema’: Returns the schema of the query payload.&lt;/li&gt;
&lt;/ol&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Future hopes
&lt;/h2&gt;

&lt;p&gt;Rome wasn’t built in a day, and there are plenty of quality-of-life improvements and new features to add. Here is a table outlining a few:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;Feature
   &lt;/td&gt;
   &lt;td&gt;Status
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Merge the Write API from the V2 Client to remove the external library dependency. 
   &lt;/td&gt;
   &lt;td&gt;In progress
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Prepared Statements for queries
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Arrow table writer for InfluxDB
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Improve Polars support
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Integrate delta sharing
   &lt;/td&gt;
   &lt;td&gt;TO DO
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Try it out for yourself
&lt;/h2&gt;

&lt;p&gt;We built the foundations of what I hope will be a great community-driven client library for InfluxDB 3.0 in Python. My call to action: if you haven’t already done so, try out the library and put it through its paces. There are many edge cases we might not be aware of, and we won’t find them without community help. I am eagerly awaiting your issues and feature requests.&lt;/p&gt;

</description>
      <category>python</category>
      <category>arrow</category>
      <category>influxdb</category>
      <category>pokemon</category>
    </item>
    <item>
      <title>OpenTelemetry Tutorial: Collect Traces, Logs &amp; Metrics with InfluxDB 3.0, Jaeger &amp; Grafana</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Tue, 30 May 2023 12:58:47 +0000</pubDate>
      <link>https://dev.to/jayclifford345/opentelemetry-tutorial-collect-traces-logs-metrics-with-influxdb-30-jaguar-grafana-2lon</link>
      <guid>https://dev.to/jayclifford345/opentelemetry-tutorial-collect-traces-logs-metrics-with-influxdb-30-jaguar-grafana-2lon</guid>
      <description>&lt;p&gt;By Jay Clifford &amp;amp; Jacob Marble&lt;/p&gt;

&lt;p&gt;Here at InfluxData, we recently announced &lt;a href="https://www.influxdata.com/products/influxdb-overview/"&gt;InfluxDB 3.0&lt;/a&gt;, which expands the number of use cases that are feasible with InfluxDB. One of the primary benefits of the new storage engine that powers InfluxDB 3.0 is its ability to store &lt;a href="https://www.influxdata.com/blog/tracing-influxdb-iox/"&gt;traces&lt;/a&gt;, metrics, events, and logs in a single database. &lt;/p&gt;

&lt;p&gt;Each of these types of time series data has unique workloads, which leaves some unanswered questions. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What schema should I follow?&lt;/li&gt;
&lt;li&gt;How do I convert my traces to line protocol?&lt;/li&gt;
&lt;li&gt;How does InfluxDB connect with the larger observability ecosystem?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Luckily this is where our work within OpenTelemetry comes into play. If you want to learn more about what OpenTelemetry is, we recommend this &lt;a href="https://www.influxdata.com/blog/getting-started-with-opentelemetry-observability/"&gt;blog&lt;/a&gt; by Charles Mahler. However, the aim of &lt;em&gt;this&lt;/em&gt; blog is to take you through a working example of OpenTelemetry and InfluxDB 3.0.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the demo
&lt;/h2&gt;

&lt;p&gt;Let's start with the fun part by running the demo and then discussing the theory behind it. Here is the demo we are going to deploy:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kSrKVMqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7pffv5fndoxgkphojz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kSrKVMqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7pffv5fndoxgkphojz2.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This demo uses Hot R.O.D. to simulate traces, logs, and metrics. We then use OpenTelemetry Collector to collect that data and write it to InfluxDB 3.0. Finally, we use Grafana and Jaeger UI to visualize this data in a highly summarized view.&lt;/p&gt;

&lt;p&gt;To run the demo you can use &lt;a href="https://killercoda.com/influxdata/course/demos/otel"&gt;this KillerCoda demo environment&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Simply create an account and follow the tutorial to run through the demo without worrying about a local installation. For local installs, we have all the info you need in a GitHub repo, so check out the &lt;a href="https://github.com/InfluxCommunity/influxdb-observability"&gt;repository readme&lt;/a&gt; for up-to-date installation instructions. &lt;/p&gt;
&lt;h3&gt;
  
  
  Walkthrough
&lt;/h3&gt;

&lt;p&gt;Let's run through the demo with Killercoda so you can see what to expect:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You will see the demo provisioning script ticking away on the first load. It shouldn’t take too long. You will know it finished loading once &lt;code&gt;InfluxDB OTEL Demo&lt;/code&gt; appears.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lj1NHzyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8cz4oswrr7pq9gbbnh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lj1NHzyu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8cz4oswrr7pq9gbbnh4.png" alt="Image description" width="800" height="693"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you don’t already have one, follow the steps to create &lt;a href="https://cloud2.influxdata.com/signup"&gt;a free InfluxDB 3.0 Cloud account&lt;/a&gt; for the demo.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_P8wEdkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2h5re1ahtvol3f88kvqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_P8wEdkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2h5re1ahtvol3f88kvqi.png" alt="Image description" width="800" height="1067"&gt;&lt;/a&gt; &lt;strong&gt;Make sure to follow the instructions for creating a bucket in InfluxDB called &lt;code&gt;otel&lt;/code&gt; and generating a read-and-write token for the new &lt;code&gt;otel&lt;/code&gt; bucket.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, we provide credentials for InfluxDB 3.0 so the demo can write and query from our &lt;code&gt;otel&lt;/code&gt; bucket. To do this, select the &lt;code&gt;Editor&lt;/code&gt; tab within Killercoda and navigate to &lt;code&gt;influxdb-observability/.env&lt;/code&gt;. This is where we update our connection credentials.&lt;br&gt;
&lt;strong&gt;Note: Make sure to update INFLUXDB_ADDR to point to your InfluxDB Cloud region. Also, remove any protocols from the address (e.g., HTTPS://).&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0JQyTHss--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw4h3hpd39ceuccuou9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0JQyTHss--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw4h3hpd39ceuccuou9z.png" alt="Image description" width="800" height="648"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once we finish updating the environment, we can start the demo. Simply click this command to spin up the demo: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5O0KNc5e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3fbftsyz0qildqt3i8gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5O0KNc5e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3fbftsyz0qildqt3i8gm.png" alt="Image description" width="800" height="202"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;--file&lt;/span&gt; demo/docker-compose.yml &lt;span class="nt"&gt;--project-directory&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now we get to the fun part! Let's generate some traces. First of all, let's open the HotROD application. Within a local installation, this runs via &lt;code&gt;localhost:8080&lt;/code&gt;. To access this address in Killercoda, follow the hyperlinks provided in the screenshot below.  &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--clY99C3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wp16jg0zp516ddg7nqiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--clY99C3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wp16jg0zp516ddg7nqiv.png" alt="Image description" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From there we can start generating traces by clicking on the different buttons. Each one simulates ordering a car for the indicated task, which triggers the background service calls and creates the trace. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tzch3B7e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nbhlv0kin3budwpqkw8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tzch3B7e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nbhlv0kin3budwpqkw8a.png" alt="Image description" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you head over to your InfluxDB 3.0 Cloud instance, you can explore the &lt;code&gt;otel&lt;/code&gt; bucket schema to see how we translate the OpenTelemetry data structure into InfluxDB’s line protocol, which consists of measurements, tags, and fields. &lt;strong&gt;(Note: We will cover line protocol more in-depth in the next blog.)&lt;/strong&gt; &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JzWjNsMs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/atpi5c9atgxjw72agubg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JzWjNsMs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/atpi5c9atgxjw72agubg.png" alt="Image description" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If we head back to our Killercoda instance and open up Grafana, we can explore our OpenTelemetry dashboard and configuration. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9p9J4EKj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzhy126n2o8g41zzyrrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9p9J4EKj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzhy126n2o8g41zzyrrq.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Grafana explained
&lt;/h2&gt;

&lt;p&gt;This section breaks down some of the key features of the Grafana dashboard. Let's start with the data sources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nKzdfHdS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rqqfotjn6dmzj8nw03ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nKzdfHdS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rqqfotjn6dmzj8nw03ip.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we connected to two data sources – &lt;a href="https://www.influxdata.com/glossary/apache-arrow-flight-sql/"&gt;Flight SQL&lt;/a&gt; and Jaeger. Both sources pull data directly from InfluxDB 3.0, but we will discuss how they work and their differences in more detail in the next blog. For now, you just need to know the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flight SQL – This is the direct SQL query interface for InfluxDB 3.0. It is great for general-purpose time series-based queries and metrics summaries.&lt;/li&gt;
&lt;li&gt;Jaeger – This functions as the bridging interface for metrics, logs, and traces between InfluxDB 3.0 and Grafana visualizations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look at our dashboard again and map our data sources to the different panels. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6LxMQBVw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5zdzrorf038de3wripk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6LxMQBVw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5zdzrorf038de3wripk.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We use Flight SQL to generate our general-purpose navigation and summary overviews:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Services&lt;/strong&gt; queries InfluxDB for all unique services found within the &lt;code&gt;otel&lt;/code&gt; bucket. We use the service names as data links for the rest of our visualization. For example, clicking &lt;code&gt;redis&lt;/code&gt; filters the summary results and trace list to only include the service &lt;code&gt;redis&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2cvo8xw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o3ttc1tje5ra73mee8zk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2cvo8xw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o3ttc1tje5ra73mee8zk.png" alt="Image description" width="786" height="614"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Duration&lt;/strong&gt; returns the duration in nanoseconds of each spanID over the selected time period. As part of our SQL query, we convert this value to seconds to improve readability. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--01d6Lb8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nj2evypf8mo40e1sjzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--01d6Lb8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nj2evypf8mo40e1sjzq.png" alt="Image description" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Rate&lt;/strong&gt; calculates the service error rate, as a percentage, based on the number of errors flagged in the &lt;code&gt;otel.status&lt;/code&gt; column. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w-nRovGZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fir8ihhvf4g2dxcr0bfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w-nRovGZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fir8ihhvf4g2dxcr0bfl.png" alt="Image description" width="772" height="664"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
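For intuition, here is a minimal Python sketch of the calculation behind the Error Rate panel: count flagged spans and divide by the total. The status string used here is an assumption for illustration, not the exact encoding stored in the otel.status column.

```python
# Sketch of the Error Rate panel's calculation: the percentage of spans
# flagged as errors. "error" is an illustrative status value, not the
# exact encoding used in the otel.status column.
def error_rate(statuses):
    if not statuses:
        return 0.0
    errors = sum(1 for s in statuses if s == "error")
    return 100.0 * errors / len(statuses)

print(error_rate(["ok", "error", "ok", "ok"]))  # 25.0
```

In the dashboard, the same division happens inside the SQL query, so only the final percentage travels back to Grafana.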

&lt;p&gt;We use Jaeger to drill into our traces with the following visualizations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Latency Histogram&lt;/strong&gt; creates a histogram based on the latency of traces within the service. This panel groups trace spans based on their detected latency range. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--acZ6nEg1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5kncj94kqzh883esaqle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--acZ6nEg1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5kncj94kqzh883esaqle.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traces&lt;/strong&gt; provides a table of traces associated with the selected service. The table includes trace name, start time, and duration. Users can select TraceIDs to generate the next two visualizations: Relationships and Trace. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UAfF0Af6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw8hg8lsdmfc7nzyunhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UAfF0Af6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw8hg8lsdmfc7nzyunhl.png" alt="Image description" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Relationships&lt;/strong&gt;: After selecting a TraceID in the Traces panel, users can navigate span relationships within the trace using the span tree. We’ll have more on this in the next blog. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ec5D38y7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b82dcu07hlegamf0en5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ec5D38y7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b82dcu07hlegamf0en5d.png" alt="Image description" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trace&lt;/strong&gt;: This panel allows users to navigate raw trace data for the selected TraceID and its associated log data. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6VHBlUhh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pe7pznguwrk292yykzlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6VHBlUhh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pe7pznguwrk292yykzlu.png" alt="Image description" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We hope this tutorial provides you with a solid foundation to learn more about OpenTelemetry and how InfluxDB 3.0 will play a pivotal role in future observability solutions. Stick around for the next blog, where we will delve into the theory of OpenTelemetry, breaking down each component in the demo architecture and discussing data schemas. Until then, play with the &lt;a href="https://killercoda.com/influxdata/course/demos/otel"&gt;demo&lt;/a&gt;, fork the &lt;a href="https://github.com/InfluxCommunity/influxdb-observability"&gt;repo&lt;/a&gt;, and see if you can apply these components to your own use case. If you have any questions, please do not hesitate to reach out to us on &lt;a href="https://influxcommunity.slack.com/"&gt;Slack&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>otel</category>
      <category>influxdb</category>
      <category>jaeger</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Querying InfluxDB IOx Using the New Flight SQL Plugin for Grafana</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Mon, 20 Feb 2023 12:33:57 +0000</pubDate>
      <link>https://dev.to/jayclifford345/querying-influxdb-iox-using-the-new-flight-sql-plugin-for-grafana-1nd3</link>
      <guid>https://dev.to/jayclifford345/querying-influxdb-iox-using-the-new-flight-sql-plugin-for-grafana-1nd3</guid>
      <description>&lt;h2&gt;
  
  
  A quick start guide to installation, configuration, and usage
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/grafana/"&gt;Grafana&lt;/a&gt; has been a staple visualization tool used alongside InfluxDB since its inception. With the &lt;a href="https://www.influxdata.com/blog/announcing-general-availability-new-database-engine"&gt;release of InfluxDB Cloud powered by IOx&lt;/a&gt;, there is now a new way to integrate InfluxDB and Grafana: &lt;strong&gt;&lt;a href="https://arrow.apache.org/blog/2022/02/16/introducing-arrow-flight-sql/"&gt;Flight SQL&lt;/a&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Two of our engineers, Brett and Helen, have been working hard to create a new Grafana plugin called Flight SQL. This open-source plugin allows users to perform &lt;a href="https://www.influxdata.com/products/sql/"&gt;SQL&lt;/a&gt; queries directly against InfluxDB IOx and other storage engines compatible with &lt;a href="https://www.influxdata.com/glossary/apache-datafusion/"&gt;Apache DataFusion&lt;/a&gt;. This blog post provides a quick start guide to installing, configuring, and utilizing the data source plugin. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YHMZUAQN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk6vbhhwwxe3kmiekc84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YHMZUAQN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk6vbhhwwxe3kmiekc84.png" alt="dashboard" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: If you are looking for an InfluxDB OSS tutorial with either Flux or InfluxQL, check out the following &lt;a href="https://www.influxdata.com/blog/getting-started-influxdb-grafana/"&gt;blog&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing the plugin
&lt;/h2&gt;

&lt;p&gt;We are currently releasing the Flight SQL plugin for Grafana as an experimental package. To install it, check out the most up-to-date instructions &lt;a href="https://docs.influxdata.com/influxdb/cloud-iox/visualize-data/grafana/"&gt;here&lt;/a&gt;. Since the package is currently experimental, it is unsigned and requires you to explicitly approve its installation (the instructions walk you through this).&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring the plugin
&lt;/h2&gt;

&lt;p&gt;To configure the FlightSQL plugin check out the following &lt;a href="https://docs.influxdata.com/influxdb/cloud-iox/visualize-data/grafana/?t=Docker#configure-the-flight-sql-datasource"&gt;documentation&lt;/a&gt;. My top tip here is to make sure you specify your host URL in the following manner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;us-east-1-1.aws.cloud2.influxdata.com:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;You will find that if you include any protocols or trailing paths after the port, the connection will timeout.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you complete the steps in the documentation above, you should see the following connection success message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rbaFRkTq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4n5geczphz9o23ro2cxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rbaFRkTq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4n5geczphz9o23ro2cxc.png" alt="Image description" width="800" height="858"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start examples
&lt;/h2&gt;

&lt;p&gt;If you would like to skip some of the manual setup and are familiar with Docker, I also created a repository called &lt;a href="https://github.com/InfluxCommunity/InfluxDB-IOx-Quick-Starts"&gt;InfluxDB-IOx-Quick-Starts&lt;/a&gt;. This repository contains a series of Telegraf and Grafana examples deployed using Docker Compose. To use this repository, follow these steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Clone the repository:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clone https://github.com/InfluxCommunity/InfluxDB-IOx-Quick-Starts.git 
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an environment file within the repository&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;InfluxDB-IOx-Quick-Starts &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;touch&lt;/span&gt; .env 

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the following environment variables to the file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export INFLUX_HOST=&amp;lt;INSERT_HOST&amp;gt; 

export INFLUX_TOKEN=&amp;lt;INSERT_TOKEN&amp;gt;

export INFLUX_ORG=&amp;lt;INSERT_ORG&amp;gt;

export INFLUX_BUCKET=&amp;lt;INSERT_BUCKET_NAME&amp;gt;

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;Note: remember to include only the host:port, like so:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;us-east-1-1.aws.cloud2.influxdata.com:443
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;Save Changes.&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Source your newly created environment file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now you can deploy both Telegraf and Grafana using docker-compose:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; system-monitoring/docker-compose.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the quick start is running, you can access Grafana at &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;. You can log in with the default &lt;strong&gt;username&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt;: &lt;strong&gt;admin&lt;/strong&gt; and &lt;strong&gt;admin&lt;/strong&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Dashboard overview
&lt;/h2&gt;

&lt;p&gt;Now that we have installed and configured the plugin, let's take a tour of an example Grafana dashboard using the Flight SQL data source. Linux System is a beloved dashboard within the InfluxDB community, so we will use it as an example of how to convert Flux to SQL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Global variables for Flight SQL
&lt;/h3&gt;

&lt;p&gt;Before we move on to the query conversions, it is worth mentioning that there are a series of global Grafana variables you can use to make your queries dynamic and streamlined. Here is a non-exhaustive list:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Variable&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Description&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;$__timeRange(time)
   &lt;/td&gt;
   &lt;td&gt;This variable allows you to dynamically set the query time range. For instance, if you selected to see the last 15 minutes of data from the drop-down you would see an equivalent example to the conversion column.
&lt;p&gt;
&lt;strong&gt;Example Conversion:&lt;/strong&gt; 
time &amp;gt;= ’2023-01-01T00:00:00Z’ and time &amp;lt;= ’2023-01-01T01:00:00Z’
   &lt;/p&gt;
&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;$__dateBin(time)
   &lt;/td&gt;
   &lt;td&gt;This variable is a dynamic short form for creating an interval window, and is useful for aggregation. The interval is set based on the dashboard specification.  

&lt;p&gt;
&lt;strong&gt;Example Conversion:&lt;/strong&gt; 
date_bin(interval ’30 second’, time, timestamp ’1970-01-01T00:00:00Z’)
   &lt;/p&gt;
&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;$__timeGroup(time, hour)
   &lt;/td&gt;
   &lt;td&gt;As we know, &lt;code&gt;dateBin&lt;/code&gt; is effectively a window period. This gives us a way to reduce the amount of data returned via aggregations or selectors. &lt;code&gt;timeGroup&lt;/code&gt; differs by producing projections that can be grouped on.
&lt;p&gt;
&lt;strong&gt;Example Conversion:&lt;/strong&gt; 
datepart(’minute’, time),datepart(’hour’, time),datepart(’day’, time),datepart(’month’, time),datepart(’year’, time);
   &lt;/p&gt;
&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
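To make these conversions concrete, here is a rough Python sketch of how such macros could be expanded into plain SQL before the query reaches the data source. The expansion strings are simplified assumptions based on the conversion examples above, not Grafana's actual implementation.

```python
# Naive macro expansion, mirroring the conversion examples above.
# The exact strings Grafana substitutes are simplified assumptions here.
def expand_macros(sql, start, end, interval="30 second"):
    sql = sql.replace(
        "$__timeRange(time)",
        f"time BETWEEN '{start}' AND '{end}'",
    )
    sql = sql.replace(
        "$__dateBin(time)",
        f"date_bin(interval '{interval}', time, timestamp '1970-01-01T00:00:00Z')",
    )
    return sql

query = "SELECT $__dateBin(time), avg(usage_user) FROM cpu WHERE $__timeRange(time) GROUP BY time"
print(expand_macros(query, "2023-01-01T00:00:00Z", "2023-01-01T01:00:00Z"))
```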

&lt;p&gt;You can find a full variable list by clicking &lt;strong&gt;Show Query Help&lt;/strong&gt; while constructing your query: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cfPquT-4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwlq0utvj7z6h9z5x6cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cfPquT-4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwlq0utvj7z6h9z5x6cb.png" alt="Image description" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the list of hosts
&lt;/h3&gt;

&lt;p&gt;The InfluxDB dashboard introduced a dashboard variable that allows you to filter based on the hostname tag. Let's take a look at the conversion.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Flux&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;FlightSQL&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;

import "influxdata/influxdb/v1"
v1.measurementTagValues(bucket: v.bucket, measurement: "cpu", tag: "host")

   &lt;/td&gt;
   &lt;td&gt;

SELECT distinct (host) FROM cpu 
WHERE $__timeRange(time)

   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Aggregation Window (Average)
&lt;/h3&gt;

&lt;p&gt;One of the most commonly used functions within Flux is the aggregate window. This allows us to window our time series data by a specific interval (e.g. 30 seconds, 2 minutes, 1 year) and then perform an aggregator (mean, mode, median, etc.) or a selector (max, min, first, last, etc.) function on the windowed data points.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Flux&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Flight SQL&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;

from(bucket: v.bucket)
  |&amp;gt; range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |&amp;gt; filter(fn: (r) =&amp;gt; r._measurement == "cpu")
  |&amp;gt; filter(fn: (r) =&amp;gt; r._field == "usage_user" or r._field == "usage_system" or r._field == "usage_idle")
  |&amp;gt; filter(fn: (r) =&amp;gt; r.cpu == "cpu-total")
  |&amp;gt; filter(fn: (r) =&amp;gt; r.host == v.linux_host)
  |&amp;gt; aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)


   &lt;/td&gt;
   &lt;td&gt;

SELECT
  $__dateBin(time) ,
  avg(usage_user) AS 'usage_user',
  avg(usage_system) AS 'usage_system',
  avg(usage_idle) AS 'usage_idle'
FROM cpu
WHERE host='${linux_host}' AND cpu='cpu-total' AND  $__timeRange(time)
GROUP BY time

   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
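Under the hood, both aggregateWindow and the date_bin + avg pairing do the same thing: bucket points into fixed time windows and aggregate each bucket. A minimal Python sketch of that idea (timestamps in seconds and a 30-second window, as an illustrative stand-in for the dashboard interval):

```python
from collections import defaultdict
from statistics import mean

# Bucket timestamps into fixed windows, then average the values in each
# window. This mirrors what date_bin + avg + GROUP BY time does in SQL.
def aggregate_window(points, window=30):
    buckets = defaultdict(list)
    for t, value in points:
        buckets[(t // window) * window].append(value)
    return {start: mean(vals) for start, vals in sorted(buckets.items())}

points = [(0, 10.0), (10, 20.0), (35, 30.0), (50, 50.0)]
print(aggregate_window(points))  # {0: 15.0, 30: 40.0}
```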

&lt;h3&gt;
  
  
  Calculating the derivative
&lt;/h3&gt;

&lt;p&gt;Now let’s not beat around the bush: not all time-based queries are simple in SQL. The derivative of a value with respect to time, also known as its rate of change, tells us how fast the value is changing at a given point. Calculating it requires both the previous value and the current one. Let's look at an example: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Flux&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Flight SQL&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
from(bucket: v.bucket)
  |&amp;gt; range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |&amp;gt; filter(fn: (r) =&amp;gt; r._measurement == "diskio")
  |&amp;gt; filter(fn: (r) =&amp;gt; r._field == "read_bytes" or r._field == "write_bytes")
  |&amp;gt; filter(fn: (r) =&amp;gt; r.host == v.linux_host)
  |&amp;gt; derivative(unit: v.windowPeriod, nonNegative: false)
   &lt;/td&gt;
   &lt;td&gt;
SELECT time, (read_bytes_delta_v / delta_t_ns) * 1000000000 as read_bytes, (write_bytes_delta_v / delta_t_ns) * 1000000000 as write_bytes
FROM
(
SELECT
  (lag(read_bytes, 1) OVER (ORDER BY time))  - read_bytes  as read_bytes_delta_v,
   (lag(write_bytes, 1) OVER (ORDER BY time))  - write_bytes  as write_bytes_delta_v,
  (lag(cast(time as bigint), 1) OVER (ORDER BY time)) - cast (time as bigint) as delta_t_ns,
  time
FROM
diskio
WHERE host='${linux_host}' AND $__timeRange(time)
) as sq
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For these types of examples, we plan to provide workarounds like this where possible and slowly develop a series of custom SQL functions to handle these complex time series calculations. &lt;/p&gt;
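The lag-based workaround above boils down to simple arithmetic: for each point, divide the change in value by the change in time to get a rate. A minimal Python sketch of that calculation, with illustrative sample values rather than real diskio data:

```python
# Lag-based derivative: pair each point with the previous one and divide
# the delta in value by the delta in time (a rate in units per second).
# This mirrors the lag() window-function subquery in the SQL above.
def derivative(points):
    rates = []
    prev_t, prev_v = points[0]
    for t, v in points[1:]:
        rates.append((t, (v - prev_v) / (t - prev_t)))
        prev_t, prev_v = t, v
    return rates

# read_bytes sampled every 10 seconds (illustrative values)
print(derivative([(0, 1000), (10, 3000), (20, 3500)]))  # [(10, 200.0), (20, 50.0)]
```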

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this blog post provides enough insight into the new Flight SQL plugin for Grafana to start trialing it. My call to action for you is to start testing the plugin and leaving your feedback within the plugin &lt;a href="https://github.com/influxdata/grafana-flightsql-datasource"&gt;repository&lt;/a&gt;. This will help us improve the overall usability and feature set of the plugin. I would also like to take the time to thank Brett and Helen for their efforts in making an open-source plugin for Flight SQL.&lt;/p&gt;

&lt;p&gt;Come join us on &lt;a href="https://www.influxdata.com/slack"&gt;Slack&lt;/a&gt; and the &lt;a href="https://community.influxdata.com/"&gt;forums&lt;/a&gt;. Share your thoughts — I look forward to seeing you there! &lt;/p&gt;

</description>
      <category>grafana</category>
      <category>monitoring</category>
      <category>sql</category>
      <category>arrow</category>
    </item>
    <item>
      <title>Intro to Py-Arrow</title>
      <dc:creator>Jay Clifford</dc:creator>
      <pubDate>Mon, 06 Feb 2023 11:57:12 +0000</pubDate>
      <link>https://dev.to/jayclifford345/intro-to-py-arrow-4h2</link>
      <guid>https://dev.to/jayclifford345/intro-to-py-arrow-4h2</guid>
      <description>&lt;p&gt;So by now, you are probably aware that InfluxData has been busy building the next generation of the InfluxDB storage engine. If you dig a little deeper, you will start to uncover some concepts that might be foreign to you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apache Parquet&lt;/li&gt;
&lt;li&gt;Apache Arrow&lt;/li&gt;
&lt;li&gt;Arrow Flight&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These open-source projects are some of the core building blocks that make up the new storage engine. For the most part, you won’t need to worry about what’s under the hood. But if you are like me and want a more practical understanding of these projects, then join me on my journey of discovery.&lt;/p&gt;

&lt;p&gt;The first component we are going to dig into is Apache Arrow. My colleague Charles gave a great high-level overview, which you can find &lt;a href="https://www.influxdata.com/blog/how-apache-arrow-changing-big-data-ecosystem/"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;In short:&lt;br&gt;
&lt;em&gt;“Arrow manages data in arrays, which can be grouped in tables to represent columns of data in tabular data. Arrow also provides support for various formats to get those tabular data in and out of disk and networks. The most commonly used formats are Parquet (You will be exposed to this concept quite a bit).”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For performance reasons, our developers used Rust to code InfluxDB’s new storage engine. I personally like to learn new coding concepts in Python, so we will be making use of the pyarrow client library.&lt;/p&gt;
&lt;h2&gt;
  
  
  The basics
&lt;/h2&gt;

&lt;p&gt;In Apache Arrow, you have two primary data containers/classes: &lt;a href="https://arrow.apache.org/docs/python/data.html"&gt;Arrays and Tables&lt;/a&gt;. We will dig more into what these are later, but let’s first write a quick snippet of code for creating each:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pyarrow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;

&lt;span class="c1"&gt;# Create a array from a list of values
&lt;/span&gt;&lt;span class="n"&gt;animal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;"sheep"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"cows"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"horses"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"foxes"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int8&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;year&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="c1"&gt;# Create a table from the arrays
&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_arrays&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;animal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;year&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'animal'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'count'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'year'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So in this example, you can see we constructed three arrays of values: animal, count, and year. We can combine these arrays to form the columns of a table. The results of running this code look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;animal: string
count: int8
year: int16
----
animal: [["sheep","cows","horses","foxes"]]
count: [[12,5,2,1]]
year: [[2022,2022,2022,2022]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now that we have a table to work with, let’s see what we can do with it. The first primary feature of Arrow is to provide facilities for saving and restoring your tabular data (most commonly into the Parquet format, which will feature heavily in future blogs).&lt;/p&gt;

&lt;p&gt;Let’s save and load our newly created table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pyarrow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pyarrow.parquet&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pq&lt;/span&gt;

&lt;span class="c1"&gt;# Create a array from a list of values
&lt;/span&gt;&lt;span class="n"&gt;animal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;"sheep"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"cows"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"horses"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"foxes"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int8&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;year&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="c1"&gt;# Create a table from the arrays
&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_arrays&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;animal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;year&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'animal'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'count'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'year'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Save the table to a Parquet file
&lt;/span&gt;&lt;span class="n"&gt;pq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;write_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'example.parquet'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Load the table from the Parquet file
&lt;/span&gt;&lt;span class="n"&gt;table2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read_table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'example.parquet'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, to finish the basics, let’s try out a compute function (&lt;a href="https://arrow.apache.org/docs/python/generated/pyarrow.compute.CountOptions.html#pyarrow.compute.CountOptions"&gt;value_counts&lt;/a&gt;). We can apply compute functions to arrays and tables, which then allows us to apply transformations to a dataset. We will cover these in greater detail in the next section but let’s start with a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pyarrow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pyarrow.compute&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pc&lt;/span&gt;

&lt;span class="c1"&gt;# Create a array from a list of values
&lt;/span&gt;&lt;span class="n"&gt;animal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;"sheep"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"cows"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"horses"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"foxes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"sheep"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int8&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;year&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="c1"&gt;# Create a table from the arrays
&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_arrays&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;animal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;year&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'animal'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'count'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'year'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;count_y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value_counts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'animal'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;count_y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we import the pyarrow.compute library as pc and use the built-in value_counts function. This lets us count the occurrences of each distinct value within a given array. Here we count up the animals, which produces the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- child 0 type: string
  [
    "sheep",
    "cows",
    "horses",
    "foxes"
  ]
-- child 1 type: int64
  [
    2,
    1,
    1,
    1
  ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  A practical example
&lt;/h2&gt;

&lt;p&gt;Rather than listing every datatype and compute function, I thought I would show you a more realistic example of using Apache Arrow with InfluxDB 3.0.&lt;/p&gt;

&lt;p&gt;So the plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Query InfluxDB 3.0 using the new Python client library.&lt;/li&gt;
&lt;li&gt;Use a new function to save the returned table to disk as a series of partitioned Parquet files.&lt;/li&gt;
&lt;li&gt;In a second script, reload the partitions and perform a series of basic aggregations on our Arrow table.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s take a look at the code:&lt;br&gt;
create_parquet.py&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;''&lt;/span&gt;
&lt;span class="n"&gt;host&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'eu-central-1-1.aws.cloud2.influxdata.com'&lt;/span&gt;    
&lt;span class="n"&gt;org&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'Jay-IOx'&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'factory'&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;InfluxDBClient3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;InfluxDBClient3&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                         &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                         &lt;span class="n"&gt;org&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;org&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                         &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 


&lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SELECT vibration FROM machine_data WHERE time &amp;gt;= now() - 1h GROUP BY machineID"&lt;/span&gt;
&lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;language&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"influxql"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Saving to parquet files..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="c1"&gt;# partitioning of your data in smaller chunks
&lt;/span&gt;&lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;write_dataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"machine_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"parquet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                 &lt;span class="n"&gt;partitioning&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;partitioning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="n"&gt;pa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"iox::measurement"&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
                &lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One function here may be unfamiliar:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;write_dataset(… partitioning=ds.partitioning(…)): this method partitions our table into Parquet files based on the values within the ‘iox::measurement’ column, producing a tree of directories. Partitioning helps separate large datasets into more manageable assets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s now take a look at the second script, which works with our saved Parquet files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pyarrow.dataset&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ds&lt;/span&gt;

&lt;span class="c1"&gt;# Loading back the partitioned dataset will detect the chunks                
&lt;/span&gt;&lt;span class="n"&gt;machine_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"machine_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"parquet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;partitioning&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"iox::measurement"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;machine_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Convert to a table
&lt;/span&gt;&lt;span class="n"&gt;machine_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;to_table&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;machine_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Grouped Aggregation example
&lt;/span&gt;&lt;span class="n"&gt;aggregation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;group_by&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"machineID"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;aggregate&lt;/span&gt;&lt;span class="p"&gt;([(&lt;/span&gt;&lt;span class="s"&gt;"vibration"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"mean"&lt;/span&gt;&lt;span class="p"&gt;),(&lt;/span&gt;&lt;span class="s"&gt;"vibration"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;),(&lt;/span&gt;&lt;span class="s"&gt;"vibration"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="n"&gt;to_pandas&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;             
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;aggregation&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                                            
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this script we use two functions that may be familiar if you work with Pandas or other query engines: group_by and aggregate. We use them to group our data points by the machineID column and apply a mathematical aggregate to each group (mean, max, min). This generates a new Arrow table containing the aggregations, which we then convert to a Pandas DataFrame for readability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this blog empowers you to start digging deeper into Apache Arrow and helps you understand why we decided to invest in the future of Apache Arrow and its sub-projects. I also hope it gives you the foundation to start exploring how you can build your own analytics applications on this framework. InfluxDB’s new storage engine underlines its commitment to the greater ecosystem. For instance, the ability to export Parquet files lets us analyze our data in platforms such as &lt;a href="https://rapidminer.com/"&gt;RapidMiner&lt;/a&gt; and other analytical tools.&lt;/p&gt;

&lt;p&gt;My call to action for you is to check out the code &lt;a href="https://github.com/InfluxCommunity/Apache-Arrow-Tutorial"&gt;here&lt;/a&gt; and discover some of the other processor functionality Apache Arrow offers. A lot of the content coming up will be around Apache Parquet, so if there are any products/platforms that use Parquet that you would like us to talk about let us know. Come join us on &lt;a href="https://www.influxdata.com/slack"&gt;Slack&lt;/a&gt; and the &lt;a href="https://community.influxdata.com/"&gt;forums&lt;/a&gt;. Share your thoughts — I look forward to seeing you there!&lt;/p&gt;

</description>
      <category>apache</category>
      <category>python</category>
      <category>timeseries</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
