<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daemonxiao</title>
    <description>The latest articles on DEV Community by Daemonxiao (@daemonxiao).</description>
    <link>https://dev.to/daemonxiao</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F927225%2F2131c55c-39e2-486f-bd5e-b25533d8807c.png</url>
      <title>DEV Community: Daemonxiao</title>
      <link>https://dev.to/daemonxiao</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/daemonxiao"/>
    <language>en</language>
    <item>
      <title>A simple way to import TiSpark into Databricks to load TiDB data</title>
      <dc:creator>Daemonxiao</dc:creator>
      <pubDate>Fri, 16 Sep 2022 07:30:43 +0000</pubDate>
      <link>https://dev.to/cloud-ecosystem/a-simple-way-to-import-tispark-into-databricks-to-load-tidb-data-5fb1</link>
      <guid>https://dev.to/cloud-ecosystem/a-simple-way-to-import-tispark-into-databricks-to-load-tidb-data-5fb1</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.pingcap.com/tidb/stable/overview"&gt;&lt;strong&gt;TiDB&lt;/strong&gt;&lt;/a&gt; is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pingcap/tispark"&gt;&lt;strong&gt;TiSpark&lt;/strong&gt;&lt;/a&gt; is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer the complex OLAP queries. It takes advantage of both the Spark platform and the distributed TiKV cluster and seamlessly glues to TiDB, the distributed OLTP database, to provide a Hybrid Transactional/Analytical Processing (HTAP) solution to serve as a one-stop solution for both online transactions and analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.databricks.com/getting-started/introduction/index.html"&gt;&lt;strong&gt;Databricks&lt;/strong&gt;&lt;/a&gt; is a cloud-based collaborative data science, data engineering, and data analytics platform that combines the best of data warehouses and data lakes into a lakehouse architecture.&lt;/p&gt;

&lt;p&gt;With Databricks' flexible and powerful extensibility, you can install TiSpark in Databricks and benefit from its advantages, such as faster reads and writes and transaction support. This article shows how to use TiSpark in Databricks to handle TiDB data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy a TiDB cluster in your own environment
&lt;/h2&gt;

&lt;p&gt;TiDB provides a tool named &lt;a href="https://docs.pingcap.com/tidb/dev/tiup-documentation-guide"&gt;&lt;strong&gt;TiUP&lt;/strong&gt;&lt;/a&gt; to &lt;a href="https://docs.pingcap.com/tidb/dev/quick-start-with-tidb"&gt;&lt;strong&gt;quickly build a test cluster&lt;/strong&gt;&lt;/a&gt; on a single machine.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
Your machine must have a &lt;strong&gt;Public IP&lt;/strong&gt; that Databricks can access. In addition, this article uses a single-instance TiDB cluster, so you don't need to configure hosts for TiDB, PD, and TiKV.&lt;/p&gt;
&lt;/blockquote&gt;
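&lt;p&gt;Before deploying, it can be worth confirming that the ports Databricks will need, TiDB's SQL port (4000 by default) and PD's client port (2379 by default), are reachable from outside. The snippet below is a minimal Python sketch of such a check; the IP address in the comment is only a placeholder.&lt;/p&gt;

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace 203.0.113.10 with your machine's public IP):
#   is_reachable("203.0.113.10", 4000)  # TiDB SQL port
#   is_reachable("203.0.113.10", 2379)  # PD client port
```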

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install TiUP.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Source your shell profile to declare the global environment variables.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source ${your_shell_profile}
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start the TiDB cluster.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tiup playground --host 0.0.0.0
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;After the TiDB cluster is fully deployed, you will see output like the following.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CLUSTER START SUCCESSFULLY, Enjoy it ^-^
To connect TiDB: mysql --comments --host 127.0.0.1 --port 4001 -u root -p (no password)
To connect TiDB: mysql --comments --host 127.0.0.1 --port 4000 -u root -p (no password)
To view the dashboard: http://127.0.0.1:2379/dashboard
PD client endpoints: [127.0.0.1:2379 127.0.0.1:2382 127.0.0.1:2384]
To view Prometheus: http://127.0.0.1:9090
To view Grafana: http://127.0.0.1:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Query the &lt;strong&gt;PD address&lt;/strong&gt; via the MySQL client and record the &lt;strong&gt;PD address&lt;/strong&gt; shown in the INSTANCE column. This address, which is usually an intranet IP, is used for communication between internal instances.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -P 4000 -h 127.0.0.1
mysql&amp;gt; select * from INFORMATION_SCHEMA.CLUSTER_INFO;
+---------+-----------------+-----------------+-------------+------------------------------------------+---------------------------+-----------------+-----------+
| TYPE    | INSTANCE        | STATUS_ADDRESS  | VERSION     | GIT_HASH                                 | START_TIME                | UPTIME          | SERVER_ID |
+---------+-----------------+-----------------+-------------+------------------------------------------+---------------------------+-----------------+-----------+
...
| pd      | 172.*.*.*:2379  | 172.*.*.*:2379  | 6.1.0       | d82f4fab6cf37cd1eca9c3574984e12a7ae27c42 | 2022-07-13T13:25:54+08:00 | 2h31m44.004814s |         0 |
...
+---------+-----------------+-----------------+-------------+------------------------------------------+---------------------------+-----------------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Import sample data.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tiup bench tpcc --warehouses 1 prepare
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the import result.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -P 4000 -h 127.0.0.1
mysql&amp;gt; use test;
mysql&amp;gt; show tables;
+----------------+
| Tables_in_test |
+----------------+
| customer       |
| district       |
| history        |
| item           |
| new_order      |
| order_line     |
| orders         |
| stock          |
| warehouse      |
+----------------+
9 rows in set (0.00 sec)
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
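&lt;p&gt;If you prefer to check the import programmatically, the ASCII table that &lt;code&gt;show tables&lt;/code&gt; prints is easy to parse. The following Python sketch is a toy helper for illustration, assuming output shaped like the block above; it extracts the table names and compares them against the expected TPC-C tables.&lt;/p&gt;

```python
# The nine tables that `tiup bench tpcc prepare` creates.
TPCC_TABLES = {"customer", "district", "history", "item", "new_order",
               "order_line", "orders", "stock", "warehouse"}

def parse_show_tables(output: str) -> set:
    """Extract table names from the ASCII table that `show tables` prints."""
    names = set()
    for line in output.splitlines():
        line = line.strip()
        # Data rows look like "| customer       |"; the +---+ borders are skipped.
        if line.startswith("|") and line.endswith("|"):
            cell = line.strip("|").strip()
            if cell and cell != "Tables_in_test":  # skip the header row
                names.add(cell)
    return names

# Usage: parse_show_tables(captured_output) == TPCC_TABLES
```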

&lt;h2&gt;
  
  
  Step 2: Install TiSpark in Databricks
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt; &lt;br&gt;
A Databricks account, used to log in to the Databricks workspace. If you don't have one, click &lt;a href="https://databricks.com/try-databricks?_ga=2.149545268.366754959.1655811799-681974717.1650447133"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt; to sign up for a free trial.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Databricks supports custom &lt;a href="https://docs.databricks.com/clusters/init-scripts.html#example-cluster-scoped-init-scripts"&gt;&lt;strong&gt;init scripts&lt;/strong&gt;&lt;/a&gt;: shell scripts that run during the startup of each cluster node, before the Apache Spark driver or worker JVM starts. Here we use an init script to install TiSpark.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://accounts.cloud.databricks.com/login"&gt;&lt;strong&gt;Log in&lt;/strong&gt;&lt;/a&gt; to Databricks and open your workspace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new &lt;strong&gt;Python NoteBook&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the following script into the notebook.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dbutils.fs.mkdirs("dbfs:/databricks/scripts/")
dbutils.fs.put(
"/databricks/scripts/tispark-install.sh",
"""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/tispark-assembly-3.2_2.12-3.1.0.jar https://github.com/pingcap/tispark/releases/download/v3.1.0/tispark-assembly-3.2_2.12-3.1.0.jar
""", 
True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach the notebook to a Spark cluster and &lt;strong&gt;run the cell&lt;/strong&gt;. The script used to install TiSpark will then be stored in &lt;strong&gt;&lt;a href="https://docs.databricks.com/data/databricks-file-system.html"&gt;DBFS&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Compute&lt;/strong&gt; on the sidebar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose a cluster that you want to run with TiSpark.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Edit&lt;/strong&gt; to configure the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Configuration&lt;/strong&gt; panel, set &lt;strong&gt;Databricks Runtime Version&lt;/strong&gt; to "10.4 LTS".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In &lt;strong&gt;Advanced options&lt;/strong&gt;, add &lt;code&gt;dbfs:/databricks/scripts/tispark-install.sh&lt;/code&gt; to Init Scripts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GF2nqBjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwsf3ud07zdoaobh5d76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GF2nqBjB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwsf3ud07zdoaobh5d76.png" alt="Advanced options" width="825" height="424"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 3: Set TiSpark Configurations in Spark Config
&lt;/h2&gt;

&lt;p&gt;After setting &lt;strong&gt;Init Scripts&lt;/strong&gt;, you need to add some configurations for TiSpark in &lt;strong&gt;Spark Config&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add the following configuration. &lt;code&gt;{pd address}&lt;/code&gt; is the PD address we recorded in &lt;strong&gt;Step 1&lt;/strong&gt;. For more information about TiSpark configuration, see the &lt;strong&gt;&lt;a href="https://github.com/pingcap/tispark/blob/master/docs/configuration.md"&gt;TiSpark Configurations list&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spark.sql.extensions org.apache.spark.sql.TiExtensions
spark.tispark.pd.addresses {pd address}
spark.sql.catalog.tidb_catalog org.apache.spark.sql.catalyst.catalog.TiCatalog
spark.sql.catalog.tidb_catalog.pd.addresses {pd address}
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Optional) If the &lt;code&gt;{pd address}&lt;/code&gt; of your TiDB cluster differs from the &lt;strong&gt;Public IP&lt;/strong&gt; of the machine, you need to add an extra configuration to build a host mapping between &lt;code&gt;{pd address}&lt;/code&gt; (which is equivalent to the &lt;strong&gt;Intranet IP&lt;/strong&gt;) and the &lt;strong&gt;Public IP&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spark.tispark.tikv.host_mapping {pd address}:{Public IP}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cZXjrgqM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7iej3v7ymx9ecpkx4h3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cZXjrgqM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7iej3v7ymx9ecpkx4h3f.png" alt="Advanced options" width="880" height="555"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Confirm and restart&lt;/strong&gt; to apply the configuration.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
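&lt;p&gt;As a side note on the mapping format: a host-mapping value takes the form &lt;code&gt;{intranet IP}:{public IP}&lt;/code&gt;, and, to the best of our knowledge, multiple pairs can be given separated by semicolons. The Python sketch below parses that assumed format, purely for illustration.&lt;/p&gt;

```python
def parse_host_mapping(raw: str) -> dict:
    """Parse a host-mapping string such as
    '192.168.0.2:8.8.8.8;192.168.0.3:9.9.9.9'
    into {intranet_ip: public_ip}.

    The semicolon-separated multi-pair form is an assumption
    made for illustration, not a definitive specification.
    """
    mapping = {}
    for pair in filter(None, raw.split(";")):
        intranet, public = pair.split(":")
        mapping[intranet.strip()] = public.strip()
    return mapping

print(parse_host_mapping("192.168.0.2:8.8.8.8;192.168.0.3:9.9.9.9"))
# {'192.168.0.2': '8.8.8.8', '192.168.0.3': '9.9.9.9'}
```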

&lt;h2&gt;
  
  
  Step 4: Handle your TiDB data in Databricks with TiSpark
&lt;/h2&gt;

&lt;p&gt;After the cluster with TiSpark starts, you can create a new notebook, attach it to the cluster, and operate on TiDB data with TiSpark in Databricks directly.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;strong&gt;Scala notebook&lt;/strong&gt; and attach it to the Spark cluster with TiSpark.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;tidb_catalog&lt;/code&gt; to enable TiSpark in the SparkSession.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spark.sql("use tidb_catalog")
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;SELECT&lt;/code&gt; SQL to read TiDB data.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spark.sql("select * from test.stock limit 10").show
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gyi040pO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kffcmvvgkqkya7tjszck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gyi040pO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kffcmvvgkqkya7tjszck.png" alt="query result" width="880" height="512"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use Spark DataSource API to write TiDB data.&lt;/p&gt;

&lt;p&gt;a. Because TiSpark doesn't support DDL, you need to create the table in TiDB before writing from Databricks. Here, use the MySQL client to create a &lt;code&gt;best_stock&lt;/code&gt; table in your own environment.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -uroot -P4000 -h127.0.0.1
mysql&amp;gt; use test;
mysql&amp;gt; create table best_stock (s_i_id int(11), s_quantity int(11));
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;b. Set TiDB options, such as address, password, port and so on.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Descriptions of fields&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;tidb.addr:&lt;/strong&gt; The TiDB address. It is the same as the PD IP we recorded in &lt;strong&gt;Step 1&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;tidb.password:&lt;/strong&gt; The password of the TiDB user.&lt;br&gt;
&lt;strong&gt;tidb.port:&lt;/strong&gt; The port of TiDB.&lt;br&gt;
&lt;strong&gt;tidb.user:&lt;/strong&gt; The user used to connect to the TiDB cluster.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val **tidbOptions**: Map[String, String] = Map(
"tidb.addr" -&amp;gt; "{tidb address}",
"tidb.password" -&amp;gt; "",
"tidb.port" -&amp;gt; "4000",
"tidb.user" -&amp;gt; "root")
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;c. Select data and write back to TiDB with the specified option.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Descriptions of fields&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;format:&lt;/strong&gt; Specify "tidb" for TiSpark.&lt;br&gt;
&lt;strong&gt;database:&lt;/strong&gt; The destination database of the write.&lt;br&gt;
&lt;strong&gt;table:&lt;/strong&gt; The destination table of the write.&lt;br&gt;
&lt;strong&gt;mode:&lt;/strong&gt; The data source writing mode.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val DF = spark.sql("select s_i_id, s_quantity from test.stock where s_quantity&amp;gt;99 ")
DF.write
.format("tidb")
.option("database", "test")
.option("table", "best_stock")
.options(tidbOptions)
.mode("append")
.save()
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ebyFBP4x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/reqabnutft1oqvwg8ctx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ebyFBP4x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/reqabnutft1oqvwg8ctx.png" alt="insert" width="880" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;d. Check the written data with &lt;code&gt;SELECT&lt;/code&gt; SQL.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spark.sql("select * from test.best_stock limit 10").show
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oAFCWaDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vsismgjrfw3kasnrwjb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oAFCWaDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vsismgjrfw3kasnrwjb3.png" alt="check result" width="880" height="514"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;DELETE&lt;/code&gt; SQL to delete TiDB data and check with &lt;code&gt;SELECT&lt;/code&gt; SQL.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spark.sql("delete from test.best_stock where s_quantity &amp;gt; 99")
spark.sql("select * from test.best_stock").show
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IErlaIYo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y49kdp1rf5w8vsw5ingg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IErlaIYo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y49kdp1rf5w8vsw5ingg.png" alt="check result" width="880" height="311"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we used TiSpark in Databricks to access TiDB data. The key steps are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install TiSpark in Databricks via init scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set TiSpark Configurations in Databricks.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are interested in TiSpark, you can find more information on our &lt;a href="https://docs.pingcap.com/tidb/stable/tispark-overview"&gt;&lt;strong&gt;TiSpark homepage&lt;/strong&gt;&lt;/a&gt;. You are welcome to share ideas or submit a PR on the &lt;a href="https://github.com/pingcap/tispark"&gt;&lt;strong&gt;TiSpark GitHub repository&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
