<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lauri Hänninen</title>
    <description>The latest articles on DEV Community by Lauri Hänninen (@lahannin).</description>
    <link>https://dev.to/lahannin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F827433%2F697e73d0-8894-4a6d-b581-b6f5292aae16.png</url>
      <title>DEV Community: Lauri Hänninen</title>
      <link>https://dev.to/lahannin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lahannin"/>
    <language>en</language>
    <item>
      <title>Analytics as Code: Managing Analytics Solutions Like Any Other Software</title>
      <dc:creator>Lauri Hänninen</dc:creator>
      <pubDate>Tue, 08 Mar 2022 21:11:53 +0000</pubDate>
      <link>https://dev.to/lahannin/analytics-as-code-managing-analytics-solutions-like-any-other-software-10fh</link>
      <guid>https://dev.to/lahannin/analytics-as-code-managing-analytics-solutions-like-any-other-software-10fh</guid>
      <description>&lt;p&gt;DevOps and CI/CD principles revolutionized software development. Anyone who wants to deliver high-quality software frequently and reliably uses these best practices. We use version control, write automated tests, and automatically deliver code from initial development to production to meet the demands of today’s world, where agility and speed have become critical competitive advantages.&lt;/p&gt;

&lt;p&gt;These best practices are possible because we control the source code. If we couldn’t work with the code, we couldn’t make incremental changes to respond to the growing expectations and demands of our end users. But when we think of our analytics platforms, we’re not really able to access and manage the code we create when we build analytics with them.&lt;/p&gt;

&lt;p&gt;So the obvious question is, if managing the underlying code and taking advantage of software development best practices have taken software development to a new level, why not apply the same techniques to analytics as well?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Analytics as Code?
&lt;/h2&gt;

&lt;p&gt;Analytics as Code is the management of analytics using human- and machine-readable configuration files. This means that our analytics solutions — connectors, semantic layer, dashboards, metrics, visualizations, user management, and other analytical objects — are transformed into a manageable piece of code. And this code should be treated exactly the same way as any other application source code.&lt;/p&gt;
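&lt;p&gt;To make “human- and machine-readable” concrete, here is a minimal sketch in Python (the field names are illustrative, not any platform’s actual schema). A person can read the definition at a glance, and tooling can parse and validate it before deployment:&lt;/p&gt;

```python
import json

# An illustrative metric definition as it might live in version control.
# Field names are hypothetical, not any specific platform's schema.
metric_source = """
{
  "id": "total_sales",
  "title": "Total Sales",
  "expression": "SUM(price * quantity)"
}
"""

metric = json.loads(metric_source)

# Machine-readable: a CI step can validate required fields before deploying.
required = {"id", "title", "expression"}
missing = required - metric.keys()
assert not missing, f"metric is missing fields: {missing}"

print(f"{metric['title']}: {metric['expression']}")
# prints: Total Sales: SUM(price * quantity)
```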

&lt;p&gt;The configuration files that define our analytics must be integrated into our version control systems to track, review, and monitor changes. With CI/CD platforms and testing tools, we can automate the integration and testing stages and deploy analytics to end-users faster, with higher quality and lower error rates.&lt;/p&gt;
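&lt;p&gt;As a rough illustration of such a pipeline, the following GitHub Actions workflow validates analytics configuration on every push and deploys it from the main branch. The two scripts are hypothetical placeholders for whatever validation and deployment tooling your platform provides:&lt;/p&gt;

```yaml
# Sketch only: validate-analytics.sh and deploy-analytics.sh are
# hypothetical placeholders, not real tools.
name: analytics-ci
on:
  push:
    paths:
      - "analytics/**"
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate analytics configuration files
        run: ./scripts/validate-analytics.sh
      - name: Deploy analytics to production
        if: github.ref == 'refs/heads/main'
        run: ./scripts/deploy-analytics.sh
```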

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4zRJgej__YUC2SFl2SK_1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4zRJgej__YUC2SFl2SK_1w.png" alt="High-level continuous integration process scheme. Adapted from [DataOps &amp;amp; Headless BI: the perfect fit](https://medium.com/gooddata-developers/dataops-headless-bi-the-perfect-fit-2654a923ac01)" width="663" height="541"&gt;&lt;/a&gt;&lt;/p&gt;
High-level continuous integration process scheme. Adapted from https://medium.com/gooddata-developers/dataops-headless-bi-the-perfect-fit-2654a923ac01



&lt;p&gt;This allows us to quickly innovate and experiment with new insights and make them available to end-users at an ever-increasing rate. We can minimize the time and effort required to turn requirements into solutions and improve and reuse them throughout the organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  From a manual process to an easy-to-manage, reusable piece of code
&lt;/h2&gt;

&lt;p&gt;Traditionally, all parts of our analytics are defined using the graphical user interface of the analytics platform. But because of a lack of openness and flexibility, the underlying code that we manually generate by clicking and dragging and dropping cannot be exported and managed outside the solution. This “what happens on the platform stays on the platform” approach has begun to limit analytics creation, management, and deployment. And it’s no longer a scalable solution as we strive to respond to today’s fast-paced world of analytics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlcso99ws1okc8664ghd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlcso99ws1okc8664ghd.png" alt="The traditional “what happens on the platform stays on the platform” approach." width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;
The traditional "what happens on the platform stays on the platform" approach.



&lt;p&gt;Analytics as Code is based on modern analytics tools that support the import and export of all underlying metadata — in a declarative format — and provide open APIs to automate the ongoing delivery process. When we can export human-readable configuration files from our entire analytics solution, we can use both the platform interface and our favorite IDEs to manage the code and leverage best software practices. As a result, analytics becomes an easy-to-manage, reusable piece of code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ue7nsgkdm54v90d7pih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ue7nsgkdm54v90d7pih.png" alt="When analytical tools provide open APIs and support the import and export of all underlying metadata." width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;
When analytical tools provide open APIs and support the import and export of all underlying metadata.



&lt;h2&gt;
  
  
  Examples of Analytics as Code configuration files
&lt;/h2&gt;

&lt;p&gt;Below is a metric configuration file exported from an example analytics platform — &lt;a href="https://hub.docker.com/r/gooddata/gooddata-cn-ce" rel="noopener noreferrer"&gt;GoodData.CN Community Edition&lt;/a&gt; — via open APIs. As we can see, the &lt;strong&gt;Total Sales&lt;/strong&gt; configuration is both human- and machine-readable:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
Metric config file
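&lt;p&gt;To give a feel for the format, a declarative metric definition is typically a small YAML or JSON document along these lines (a rough sketch; the field names are illustrative rather than GoodData’s exact schema):&lt;/p&gt;

```yaml
# Illustrative sketch, not GoodData's exact declarative schema.
id: total_sales
title: Total Sales
description: Total income from all sales orders
maql: SELECT SUM({fact/price} * {fact/quantity})
format: "$#,##0"
```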




&lt;p&gt;In this exported configuration file for the &lt;strong&gt;Total Sales by Year&lt;/strong&gt; visualization, we reference the created &lt;strong&gt;Total Sales&lt;/strong&gt; metric and slice it by year. The visualization type, a &lt;strong&gt;column chart&lt;/strong&gt;, is also specified in the file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
Visualization config file




&lt;p&gt;Once we have a visualization, we can create a dashboard around it. In the following dashboard configuration file, we specify the layout and reference the created visualization.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
Dashboard config file




&lt;p&gt;Below we see what the created dashboard looks like on the platform. If we make any changes to the configuration files above — e.g., update the metric, change the type of visualization, or add a new visualization to the dashboard — we can import the files back to the platform, and the solution will update accordingly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rgsfpzj1gjk3qnrbcn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rgsfpzj1gjk3qnrbcn6.png" alt="Created dashboard — GoodData.CN Community Edition" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;
Created dashboard — GoodData.CN Community Edition



&lt;p&gt;If you are interested, the complete configuration of this simple example — data connector, physical data model, logical data model, users and user groups, and all previously displayed objects — can be found here: &lt;a href="https://gist.github.com/Lahannin/9e1f2be43d0067814dabab62bf209d2a" rel="noopener noreferrer"&gt;Configuration file&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Analytics as Code
&lt;/h2&gt;

&lt;p&gt;Analytics as Code makes managing and deploying analytics more efficient by dividing analytics into reusable code snippets and utilizing the same principles we use to scale up our other software. Here are some of the benefits that Analytics as Code offers:&lt;/p&gt;

&lt;h3&gt;
  
  
  Versioning
&lt;/h3&gt;

&lt;p&gt;When we use configuration files, we can version the entire analytics solution and each object in it. Thus, all parts of our analytics are subject to source control, just like any other code.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD and Collaboration
&lt;/h3&gt;

&lt;p&gt;Our data engineers and analysts can work simultaneously with different parts of the solution — semantic layer, metrics, dashboards, or anything else — and write automated tests to ensure that the logic we use works as it should. They don’t have to worry about breaking the work of others when they push updated versions into production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reusability
&lt;/h3&gt;

&lt;p&gt;We can divide our analytics into modular code components, so our analytical objects become reusable code snippets that can be shared among teams. There is no need to re-create visualizations or metrics for different use cases, as we can reuse existing configuration files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency
&lt;/h3&gt;

&lt;p&gt;Because the configuration files serve as a single source of truth, Analytics as Code ensures consistency across the organization. Everything works the way we intend every time we deploy or update our analytics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Speed and Quality
&lt;/h3&gt;

&lt;p&gt;We can make incremental changes to the code and quickly deploy updated analytics versions. The faster we develop and deploy our analytics, the higher the quality because we can deploy smaller snippets of code that are much easier to test. And to complete the process, we can quickly gather feedback on changes and respond to them immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automation
&lt;/h3&gt;

&lt;p&gt;Declarative configuration files, along with open APIs, allow us to automate tedious manual tasks such as tenant (de)provisioning and the creation of dashboards, metrics, and visualizations. They also make it possible to programmatically change the configuration of our analytics solution.&lt;/p&gt;
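&lt;p&gt;As a rough sketch of what such automation can look like, a script can push an updated declarative configuration through an HTTP API (the endpoint path and file name below are hypothetical, shown only to illustrate the pattern):&lt;/p&gt;

```shell
# Hypothetical endpoint and file name, for illustration only.
curl -X PUT "https://analytics.example.com/api/layout/workspaces/demo" \
  -H "Authorization: Bearer $ANALYTICS_TOKEN" \
  -H "Content-Type: application/json" \
  --data @demo-workspace.json
```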

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The concept of Analytics as Code is simple: we should treat our analytics the same way as any other software. This approach complements the functionality offered by our analytics platforms and helps us move away from the current situation, where we are at the mercy of those platforms in terms of how we build and manage our analytics.&lt;/p&gt;

&lt;p&gt;It’s time to turn our analytics into an easy-to-manage, reusable piece of code while leveraging software development best practices. By doing so, we can scale our analytics like modern applications and deliver data into people’s hands faster and more reliably, so they can use it for what it is intended: making better decisions.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author:&lt;/strong&gt; I'm Lauri Hänninen, Product Marketing Lead at Trezor. I specialize in translating complex technology, from crypto hardware security to B2B SaaS, into stories people actually understand.&lt;/p&gt;

&lt;p&gt;You can find my full professional portfolio at &lt;a href="https://laurihanninen.com/" rel="noopener noreferrer"&gt;laurihanninen.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>datascience</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Headless BI: Metric Standardization in Action</title>
      <dc:creator>Lauri Hänninen</dc:creator>
      <pubDate>Tue, 08 Mar 2022 19:36:53 +0000</pubDate>
      <link>https://dev.to/lahannin/headless-bi-metric-standardization-in-action-5f3p</link>
      <guid>https://dev.to/lahannin/headless-bi-metric-standardization-in-action-5f3p</guid>
      <description>&lt;p&gt;Metric standardization is a hot topic at the moment. Companies are deploying various solutions — metrics stores, metrics layers, and headless BI platforms — to provide consistent metrics to all of their data tools to avoid &lt;a href="https://lahannin.medium.com/danger-zone-inconsistent-metrics-at-work-306f09051a4" rel="noopener noreferrer"&gt;the danger zone of inconsistency&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This article provides a simple example of metric standardization, where different data consumers — an SQL client, a data science IDE, a BI platform, and a React application — access a headless BI platform, consume the same metrics, and achieve consistent results.&lt;/p&gt;

&lt;h4&gt;
  
  
  Table of Contents:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;What is headless BI?&lt;/li&gt;
&lt;li&gt;
Setting up the headless BI platform

&lt;ul&gt;
&lt;li&gt;GoodData.CN CE&lt;/li&gt;
&lt;li&gt;GoodData Foreign Data Wrapper&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Consuming the standardized revenue metric

&lt;ul&gt;
&lt;li&gt;SQL clients&lt;/li&gt;
&lt;li&gt;Data science IDEs&lt;/li&gt;
&lt;li&gt;BI platforms&lt;/li&gt;
&lt;li&gt;React applications&lt;/li&gt;
&lt;li&gt;Comparing the results&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Summary&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is headless BI?
&lt;/h2&gt;

&lt;p&gt;Headless BI means we separate the analytical backend and computing from consumption. This decoupling allows us to expose the universal semantic layer to multiple data tools via APIs and standard protocols.&lt;/p&gt;

&lt;p&gt;Because all data consumers thus have access to a single source of metrics, our data engineers, analysts, and end-users can work with consistent metrics — with the same meaning for everyone — with the tools of their choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the headless BI platform
&lt;/h2&gt;

&lt;p&gt;This article uses &lt;a href="https://hub.docker.com/r/gooddata/gooddata-cn-ce" rel="noopener noreferrer"&gt;GoodData.CN Community Edition&lt;/a&gt; to introduce the concept of headless BI. GoodData.CN CE runs on our local machines as a container, and we will configure it with the GoodData Foreign Data Wrapper (FDW) needed for the headless BI use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  GoodData.CN CE
&lt;/h3&gt;

&lt;p&gt;To follow this article, you can download GoodData &lt;a href="https://github.com/gooddata/gooddata-python-sdk" rel="noopener noreferrer"&gt;Python SDK&lt;/a&gt;, which contains a docker-compose file, and run the following command in the root folder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker-compose up -d 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The docker-compose command starts both the GoodData.CN Community Edition and GoodData FDW containers and loads predefined analytical objects — data connector, semantic model, metrics, visualizations, and dashboard — into GoodData.CN.&lt;/p&gt;

&lt;p&gt;Once the containers are running, let’s go to &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt; and log in to the platform.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User: demo@example.com
Password: demo123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Below is the logical data model for the &lt;strong&gt;Demo&lt;/strong&gt; workspace created by the docker-compose setup. Later, this model and the &lt;strong&gt;Revenue&lt;/strong&gt; metric are exposed to external data tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymq4bjvmtlld23kph8nv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymq4bjvmtlld23kph8nv.png" alt="Logical Data Model — GoodData.CN (image by author)" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;
Logical Data Model — GoodData.CN (image by author)



&lt;p&gt;The predefined analytical objects also contain a &lt;strong&gt;Revenue&lt;/strong&gt; metric. The metric builds on another metric — &lt;strong&gt;Order Amount&lt;/strong&gt;, which calculates the income of all orders — and counts revenue only from delivered orders (order status is neither &lt;strong&gt;Returned&lt;/strong&gt; nor &lt;strong&gt;Canceled&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlfevj07084yqxu9w4qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlfevj07084yqxu9w4qm.png" alt="Revenue metric—GoodData.CN (image by author)" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;
Revenue metric—GoodData.CN (image by author)



&lt;p&gt;Below is the &lt;strong&gt;Order Amount&lt;/strong&gt; metric used in the &lt;strong&gt;Revenue&lt;/strong&gt; metric:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95bj74kg346n4f2evzbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95bj74kg346n4f2evzbc.png" alt="Order Amount metric — GoodData.CN (image by author)" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;
Order Amount metric — GoodData.CN (image by author)



&lt;p&gt;On the &lt;strong&gt;Analyze&lt;/strong&gt; tab, we can create a simple table that slices revenue by region. The results will serve as a benchmark, as we will re-create them with different data tools in the following sections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp855kmwrznvvhjbpv47j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp855kmwrznvvhjbpv47j.png" alt="Revenue by Region — GoodData.CN (image by author)" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;
Revenue by Region — GoodData.CN (image by author)


&lt;h3&gt;
  
  
  GoodData Foreign Data Wrapper
&lt;/h3&gt;

&lt;p&gt;GoodData Foreign Data Wrapper is a PostgreSQL foreign data wrapper extension. It is built on top of &lt;a href="https://multicorn.org/" rel="noopener noreferrer"&gt;Multicorn&lt;/a&gt;, and it makes GoodData.CN’s metrics, calculations, and data available in PostgreSQL as tables.&lt;/p&gt;

&lt;p&gt;We can connect to the running PostgreSQL:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;From the console using psql --host localhost --port 2543 --user gooddata gooddata (the trailing argument is the database name)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From any other client using JDBC string: jdbc:postgresql://localhost:2543/gooddata&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Username: gooddata
    Password: gooddata123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once connected, we can work with the GoodData Foreign Data Wrapper. First, we need to define our GoodData.CN server in PostgreSQL.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
GoodData.CN server in PostgreSQL
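&lt;p&gt;A rough sketch of that server definition (the option names and the wrapper class path are illustrative; check the FDW documentation for the exact interface):&lt;/p&gt;

```sql
-- Sketch only: option names and the wrapper class path are illustrative.
CREATE SERVER gooddata_cn
  FOREIGN DATA WRAPPER multicorn
  OPTIONS (
    wrapper 'gooddata_fdw.GoodDataForeignDataWrapper',
    host 'https://localhost:3000',
    token 'YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz'
  );
```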





&lt;p&gt;Next, we will import the entire semantic model into a special &lt;strong&gt;compute pseudo-table&lt;/strong&gt;. Running SELECTs against this table triggers the computation of analytics on the GoodData.CN server based on the columns specified in the SELECT.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The compute is called pseudo-table for a reason. It does not adhere to the relational model. The columns that you SELECT map to facts, metrics and labels in your semantic model. Computing results for the select will automatically aggregate results on the columns that are mapped to labels in your semantic model. In other words cardinality of the compute table changes based on the columns that you SELECT.&lt;br&gt;
 ― &lt;a href="https://gooddata-fdw.readthedocs.io/en/latest/foreign_tables.html" rel="noopener noreferrer"&gt;GoodData Foreign Data Wrapper Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
Import semantic model into the pseudo-table
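&lt;p&gt;The import itself follows standard PostgreSQL foreign-schema syntax, roughly like this (the schema and server names are illustrative):&lt;/p&gt;

```sql
-- Sketch only: schema and server names are illustrative.
CREATE SCHEMA demo;
IMPORT FOREIGN SCHEMA "demo"
  FROM SERVER gooddata_cn
  INTO demo;

-- Selecting from demo.compute now triggers computation on the server.
```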




&lt;h2&gt;
  
  
  Consuming the standardized revenue metric
&lt;/h2&gt;

&lt;p&gt;Currently, the revenue metric is used only on the GoodData.CN platform. Let’s see how to access the semantic model and consume the metric with other data tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  SQL clients
&lt;/h3&gt;

&lt;p&gt;First, the SQL client — &lt;a href="https://github.com/dbeaver/dbeaver" rel="noopener noreferrer"&gt;DBeaver&lt;/a&gt; in this case — needs to be connected to GoodData FDW.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Connection type: PostgreSQL
Host: localhost
Port: 2543
Database: gooddata
Username: gooddata
Password: gooddata123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx74fralt8z8uqqxz7f8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx74fralt8z8uqqxz7f8.png" alt="Database connection — DBeaver (image by author)" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;
Database connection — DBeaver (image by author)



&lt;p&gt;Once the connection is ready, we can write an SQL query to calculate the same Revenue by Region results created earlier in GoodData.CN.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select customers_region, revenue from demo.compute;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hr0p4abzau8al3l7uf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hr0p4abzau8al3l7uf1.png" alt="Revenue by Region — DBeaver (image by author)" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;
Revenue by Region — DBeaver (image by author)


&lt;h3&gt;
  
  
  Data science IDEs
&lt;/h3&gt;

&lt;p&gt;To work with &lt;a href="https://jupyter.org/" rel="noopener noreferrer"&gt;Jupyter&lt;/a&gt;, let’s start the notebook server from the command line:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ jupyter notebook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Next, we will use ipython-sql to connect to the FDW, run the same SQL query used with DBeaver, and print the Revenue by Region results.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
Revenue by Region — Jupyter (image by author)
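&lt;p&gt;The notebook cells behind that result look roughly like this, using ipython-sql magics with the same FDW connection details as before (a sketch; it assumes the ipython-sql extension is installed and the FDW container is running):&lt;/p&gt;

```python
# Jupyter notebook cells (ipython-sql magics); assumes a running FDW.
%load_ext sql
%sql postgresql://gooddata:gooddata123@localhost:2543/gooddata

# The same query as in DBeaver, returning the standardized Revenue metric.
%sql select customers_region, revenue from demo.compute
```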





&lt;h3&gt;
  
  
  BI platforms
&lt;/h3&gt;

&lt;p&gt;As a BI platform, this article uses &lt;a href="https://hub.docker.com/r/metabase/metabase/" rel="noopener noreferrer"&gt;Metabase&lt;/a&gt;, and we will run it locally as a container. The following command starts the Metabase container (note that the host port is mapped from 3000 to 12345 because GoodData.CN CE already uses port 3000):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -d -p 12345:3000 --name metabase metabase/metabase
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Once the Metabase container is running, we need to create a Docker network to connect it with the FDW container because — as you guessed — everything is still running locally.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create network
$ docker network connect network metabase
$ docker network connect network gooddata-fdw-container-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Next, we can connect Metabase to the FDW pseudo table with the following details and credentials:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host: host.docker.internal
Port: 2543
Database name: gooddata
Username: gooddata
Password: gooddata123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftorpaajzh8cbn55cikc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftorpaajzh8cbn55cikc8.png" alt="Database connection—Metadata (image by author)" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;
Database connection—Metadata (image by author)



&lt;p&gt;When the connection is complete, we can again use the same SQL query to compute the Revenue by Region results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffag3ad6v1maln6w8hlho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffag3ad6v1maln6w8hlho.png" alt="Revenue by Region — Metadata (image by author)" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;
Revenue by Region — Metadata (image by author)


&lt;h3&gt;
  
  
  React applications
&lt;/h3&gt;

&lt;p&gt;For this part, I created a React application using the &lt;a href="https://github.com/gooddata/gooddata-ui-sdk" rel="noopener noreferrer"&gt;GoodData.UI&lt;/a&gt; accelerator toolkit. It is a CLI-based tool that guides you through creating the application step by step in your terminal. The generated application is ready to use with minimal or no additional configuration on our side.&lt;/p&gt;

&lt;p&gt;To start with the React project, we run the below command in the terminal and follow the instructions provided by the CLI.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx --ignore-existing [@gooddata/create-gooddata-react-app](http://twitter.com/gooddata/create-gooddata-react-app) --backend tiger my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Once the build is finished, we need to go to the generated directory and start the app with the yarn start command.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd my-app
yarn start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then, we follow the directions on the main page and make the following edits to the src/constants.js file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
constants.js config — React app





&lt;p&gt;Next, we will generate human-readable JavaScript identifiers for the data model objects, which will later be used in the code. First, we export the GoodData.CN authentication token environment variable (the token is the same for all GoodData.CN CE installations), and then run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TIGER_API_TOKEN=YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz
yarn refresh-ldm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once the human-readable names for the data model objects and other metadata are generated, we can edit the &lt;strong&gt;src/routes/Home.js&lt;/strong&gt; file to match the following code to create a Revenue by Region table:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
Home.js code for Revenue by Region table — React app





&lt;p&gt;When we return to the browser and go to the &lt;strong&gt;Home tab&lt;/strong&gt;, we see the embedded Revenue by Region results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjexjtj6v7v7ro5as6g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjexjtj6v7v7ro5as6g4.png" alt="Revenue by Region — React app (image by author)" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;
Revenue by Region — React app (image by author)



&lt;h3&gt;
  
  
  Comparing the results
&lt;/h3&gt;

&lt;p&gt;For ease of comparison, I combined all the results of the previous steps into the image below. As we can see, all tools accessed the same semantic model, consumed the same &lt;strong&gt;Revenue&lt;/strong&gt; metric, and calculated exactly the same results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyjss4bqywgpt6kug9p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyjss4bqywgpt6kug9p0.png" alt="Standardized Revenue metric across various data tools (image by author)" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
Standardized Revenue metric across various data tools (image by author)



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The purpose of this article was to show how easy it is to start standardizing metrics. Standardization means that all our metrics are defined in one place and can be consumed by different data tools, such as SQL clients, data science IDEs, BI platforms, and applications.&lt;/p&gt;

&lt;p&gt;With headless BI, standardization is achieved by decoupling the analytical backend and computing from consumption and exposing the semantic layer via APIs and standard protocols. Thus, we can work with consistent metrics — with a shared understanding of what our data means — using the tools familiar to us.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the Author:&lt;/strong&gt; I'm Lauri Hänninen, Product Marketing Lead at Trezor. I specialize in translating complex technology, from crypto hardware security to B2B SaaS, into stories people actually understand.&lt;/p&gt;

&lt;p&gt;You can find my full professional portfolio at &lt;a href="https://laurihanninen.com/" rel="noopener noreferrer"&gt;laurihanninen.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>tutorial</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
