<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rodel E. Dagumampan</title>
    <description>The latest articles on DEV Community by Rodel E. Dagumampan (@rdagumampan).</description>
    <link>https://dev.to/rdagumampan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F307789%2F98a11f20-dc51-4999-83f8-293f78bc13af.jpeg</url>
      <title>DEV Community: Rodel E. Dagumampan</title>
      <link>https://dev.to/rdagumampan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rdagumampan"/>
    <language>en</language>
    <item>
      <title>Released yuniql v1.2.25. Multi-tenant support, Oracle and largest set of bug fixes</title>
      <dc:creator>Rodel E. Dagumampan</dc:creator>
      <pubDate>Tue, 22 Feb 2022 06:14:18 +0000</pubDate>
      <link>https://dev.to/rdagumampan/released-yuniql-v1225-multi-tenant-support-oracle-and-largest-set-of-bug-fixes-3olo</link>
      <guid>https://dev.to/rdagumampan/released-yuniql-v1225-multi-tenant-support-oracle-and-largest-set-of-bug-fixes-3olo</guid>
      <description>&lt;p&gt;Announcing release of &lt;a href="https://github.com/rdagumampan/yuniql"&gt;yuniql&lt;/a&gt; &lt;a href="https://github.com/rdagumampan/yuniql/releases/tag/v1.2.25"&gt;v1.2.25&lt;/a&gt;. yuniql is a database version control and migration tool. Absolute zero runtime dependencies and use plain sql script files.&lt;/p&gt;

&lt;p&gt;Every release is special, and this one is no exception. The pandemic period presented a rare opportunity to review more PRs, follow up on reported issues, build prioritized features and fix major bugs. In fact, this may be the release with the most bug fixes, a testament to our growing usage and feedback from the community. We can only thank you all for your issue tickets and contributions. Here are the highlights:&lt;/p&gt;

&lt;h4&gt;
  
  
  Multi-tenant database support
&lt;/h4&gt;

&lt;p&gt;The idea behind this popular request is to perform schema versioning on a shared database where objects are segmented by schema. This enables developers and teams to get a dedicated schema version and evolve independently while using a single installation. This is especially important when teams are not permitted to create and destroy databases during dev/test. Get started here &lt;a href="https://yuniql.io/docs/get-started/"&gt;https://yuniql.io/docs/get-started/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;yuniql run -a --meta-schema dev01 --platform sqlserver
yuniql run -a --meta-schema dev02 --platform sqlserver
yuniql run -a --meta-schema dev03 --platform sqlserver

yuniql run -a --meta-schema sales --platform sqlserver
yuniql run -a --meta-schema inventory --platform sqlserver
yuniql run -a --meta-schema manufacturing --platform sqlserver
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
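&lt;p&gt;Each run with a distinct &lt;code&gt;--meta-schema&lt;/code&gt; keeps its version-tracking metadata inside that schema, so tenants can sit at different versions and evolve independently. One quick way to see the tenant segmentation (the LIKE filter below is illustrative; check your database for the actual yuniql metadata table name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- list yuniql metadata tables across tenant schemas (filter is illustrative)
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '%yuniql%'
ORDER BY TABLE_SCHEMA;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;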



&lt;h4&gt;
  
  
  Support Oracle Enterprise &amp;amp; Autonomous Database
&lt;/h4&gt;

&lt;p&gt;This release introduces support for Oracle. The platform tests have been executed using Oracle Enterprise 12c in a Docker container and Oracle Autonomous Database. You can explore this feature and get started here &lt;a href="https://yuniql.io/docs/get-started-oracle/"&gt;https://yuniql.io/docs/get-started-oracle/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;docker run --rm -dit --name oracle  -p 1521:1521  store/oracle/database-enterprise:12.2.0.1-slim

SETX YUNIQL_PLATFORM "oracle"
&lt;/span&gt;&lt;span class="gp"&gt;SETX YUNIQL_CONNECTION_STRING "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCLCDB.localdomain)));&lt;/span&gt;User &lt;span class="nv"&gt;Id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sys&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;Password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Oradoc_db1&lt;span class="p"&gt;;&lt;/span&gt;DBA &lt;span class="nv"&gt;Privilege&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SYSDBA&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="s2"&gt;"
&lt;/span&gt;&lt;span class="go"&gt;SETX YUNIQL_WORKSPACE "C:\temp\yuniql\samples\basic-oracle-sample"

yuniql run --debug
yuniql list --debug
yuniql erase --force --debug
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Quick connectivity check with &lt;code&gt;yuniql check&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Can I resolve or ping the database server? Can I establish a SQL/ODBC connection to the master or catalog database? Will my connection string work if I run the migrations on this database server? Does my service account have permission to establish a session? I hope this simple feature helps in diagnosing migration issues quickly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;yuniql check -c "Server=DESKTOP-ULR8GDO;&lt;/span&gt;&lt;span class="nv"&gt;Database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;helloyuniql&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;Trusted_Connection&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="s2"&gt;" -d --platform sqlserver
&lt;/span&gt;&lt;span class="go"&gt;
INF   2022-02-21 23:23:11Z   Verifying ping connectivity to server/cluster DESKTOP-ULR8GDO...
INF   2022-02-21 23:23:11Z   Ping connectivity to server/cluster DESKTOP-ULR8GDO - Successful
INF   2022-02-21 23:23:11Z   Verifying sql/odbc connectivity to master/catalog on DESKTOP-ULR8GDO...
INF   2022-02-21 23:23:11Z   Sql/odbc connectivity to master/catalog on DESKTOP-ULR8GDO - Successful
INF   2022-02-21 23:23:11Z   Verifying sql/odbc connectivity to database helloyuniql on DESKTOP-ULR8GDO...
INF   2022-02-21 23:23:11Z   Sql/odbc connectivity to database helloyuniql on DESKTOP-ULR8GDO - Successful

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Drop your dev/test database with &lt;code&gt;yuniql destroy&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Warning: Needless to say, don't use this in production. Just don't. This feature enables testers to rapidly deploy test databases and destroy them when the test execution completes. Where &lt;code&gt;yuniql erase&lt;/code&gt; is not sufficient, &lt;code&gt;yuniql destroy&lt;/code&gt; might suffice. It's quite simple: you prepare the destroy scripts yourself and handle the necessary steps and sequence to drop the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;docker run --rm -dit -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=P@ssw0rd!" -p 1400:1433 -d mcr.microsoft.com/mssql/server:2017-latest

&lt;/span&gt;&lt;span class="gp"&gt;SETX YUNIQL_CONNECTION_STRING "Server=localhost,1400;&lt;/span&gt;&lt;span class="nv"&gt;Database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;helloyuniql&lt;span class="p"&gt;;&lt;/span&gt;User &lt;span class="nv"&gt;Id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SA&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;Password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;P@ssw0rd!&lt;span class="s2"&gt;"
&lt;/span&gt;&lt;span class="go"&gt;SETX YUNIQL_WORKSPACE "C:\temp\yuniql\samples\basic-sqlserver-sample"
SETX YUNIQL_PLATFORM "sqlserver" 

yuniql run -a --debug
yuniql list --debug
yuniql destroy --force --debug
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
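&lt;p&gt;The destroy scripts are plain SQL files you author yourself. As a rough, hypothetical sketch only (this content is not shipped with yuniql; adapt the handling and sequence to your server), a SqlServer destroy script could force off active sessions before dropping the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- hypothetical destroy script; never run against production
ALTER DATABASE [helloyuniql] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [helloyuniql];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;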



&lt;h3&gt;
  
  
  Breaking changes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;None&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  New features and enhancements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Support multi-tenant database&lt;/li&gt;
&lt;li&gt;Support Oracle platform&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yuniql check&lt;/code&gt;: Basic connectivity check to the target database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yuniql destroy&lt;/code&gt;: Drops database using custom prepared scripts.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yuniql apply&lt;/code&gt;: A synonym for &lt;code&gt;yuniql run&lt;/code&gt; to avoid conflict with &lt;code&gt;docker run&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;trace-silent&lt;/code&gt;: Logs to the console window only; no log files are produced.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;trace-sensitive-data&lt;/code&gt;: Prints secrets and connection strings to the console window and log files.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;trace-directory&lt;/code&gt;: Sets the directory where log files are created.&lt;/li&gt;
&lt;li&gt;Many bug fixes and code improvements&lt;/li&gt;
&lt;/ul&gt;
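&lt;p&gt;As a sketch of how the new tracing options could be combined, assuming they surface as like-named CLI options (check &lt;code&gt;yuniql run -h&lt;/code&gt; for the exact syntax; note that &lt;code&gt;trace-sensitive-data&lt;/code&gt; writes secrets in clear text):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;yuniql run -a --trace-silent
yuniql run -a --trace-sensitive-data
yuniql run -a --trace-directory "C:\temp\yuniql\logs"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;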

&lt;h3&gt;
  
  
  Bug fixes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Directory names like v10.0.0 and higher versions are not taken as latest #249&lt;/li&gt;
&lt;li&gt;Failed database updates should ideally fail with a non-zero exit code #198&lt;/li&gt;
&lt;li&gt;MSSQL/New version: SUSER_SNAME() throw an exception: String or binary data would be truncated. #201&lt;/li&gt;
&lt;li&gt;Providing table name for the schema version table cause error #202&lt;/li&gt;
&lt;li&gt;Running &lt;code&gt;yuniql erase&lt;/code&gt; raises the error &lt;strong&gt;This NpgsqlTransaction has completed; it is no longer usable&lt;/strong&gt;; the process deletes the tables in PostgreSQL (HyperScale) but flags that error. #204&lt;/li&gt;
&lt;li&gt;Error when setting up multi-tenant on shared databases instance #207&lt;/li&gt;
&lt;li&gt;Environment-aware migration not working. #208&lt;/li&gt;
&lt;li&gt;-- environment argument does not work #214&lt;/li&gt;
&lt;li&gt;CSV import towards SQLServer with a uniqueidentifier column #239&lt;/li&gt;
&lt;li&gt;Fixed failing bulk import when destination tables use Pascal case&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Refactorings
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Revise all test scripts to CREATE TABLE instead of CREATE PROC #221&lt;/li&gt;
&lt;li&gt;Run platform tests in single database and not creating database per test case #220&lt;/li&gt;
&lt;li&gt;Why Core is linked to SqlServer ??? #241&lt;/li&gt;
&lt;li&gt;Dynamically extract manifest data using reflection
&lt;a href="https://github.com/rdagumampan/yuniql/commit/74cba91c4ef954d0493052a68a6fd0e8db6441e6"&gt;https://github.com/rdagumampan/yuniql/commit/74cba91c4ef954d0493052a68a6fd0e8db6441e6&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Credits
&lt;/h3&gt;

&lt;p&gt;Special thanks to @bobi591, @wahmedswl, &lt;a class="mentioned-user" href="https://dev.to/zwdor20"&gt;@zwdor20&lt;/a&gt;, @black-eagle17, @automada, @gfnavarro for your significant contributions in this release. Also for everyone filing issue tickets and asking questions, thanks for all your feedback.&lt;/p&gt;

</description>
      <category>yuniql</category>
      <category>database</category>
      <category>devops</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>Run environment-aware database migrations with yuniql</title>
      <dc:creator>Rodel E. Dagumampan</dc:creator>
      <pubDate>Mon, 01 Jun 2020 11:15:35 +0000</pubDate>
      <link>https://dev.to/rdagumampan/run-environment-aware-database-migrations-with-yuniql-522l</link>
      <guid>https://dev.to/rdagumampan/run-environment-aware-database-migrations-with-yuniql-522l</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Let me start this again by saying, I am writing this both to create awareness of &lt;a href="https://yuniql.io" rel="noopener noreferrer"&gt;yuniql&lt;/a&gt;, an open source database DevOps tool I am maintaining, and to share some experiences that I hope will help our community make software better, faster, safer. &lt;/p&gt;

&lt;p&gt;The solutions here work with SqlServer, PostgreSql and MySql. :) Now let's get to the meat of the story...&lt;/p&gt;

&lt;p&gt;One of the challenges when working with databases is the variation of scripts and schema definitions driven by compliance with existing governance and practical constraints. &lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-database queries 🍵
&lt;/h3&gt;

&lt;p&gt;Some organizations require database names to reflect the environment where they are created. Let’s say we have HR and PAYROLL databases, each with DEV, TEST and PROD environments. As a result, in the DEV environment we have &lt;code&gt;HRDB-DEV&lt;/code&gt; and &lt;code&gt;PAYROLLDB-DEV&lt;/code&gt;. The script below certainly cannot be applied to the TEST and PROD databases, as the database names are suffixed with the environment code DEV.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;vw_employee_timesheets&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;employee_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;position&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;checkin_time_utc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;checkout_time_utc&lt;/span&gt; 
    &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;HRDB&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;DEV&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;
    &lt;span class="k"&gt;INNER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;PAYROLLDB&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;DEV&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;timesheets&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;
    &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;employee_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;employee_id&lt;/span&gt;
&lt;span class="k"&gt;GO&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Environment-specific scripts 🍈
&lt;/h3&gt;

&lt;p&gt;In another case, we may want to create partitioned tables best suited for big data in the TEST and PROD environments. Observe the relatively complex configuration of the same table when created in PROD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;FUNCTION&lt;/span&gt; &lt;span class="n"&gt;RangePartFunction&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;RANGE&lt;/span&gt; &lt;span class="k"&gt;RIGHT&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'20200101'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'20200201'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="n"&gt;SCHEME&lt;/span&gt; &lt;span class="n"&gt;RangePartScheme&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="n"&gt;RangePartFunction&lt;/span&gt;
&lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;PartBefore2020&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Part202001&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Part202002&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;timesheets&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;timesheet_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;IDENTITY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;employee_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;checkin_time_utc&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;checkout_timeutc&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;PK_timesheets&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="n"&gt;CLUSTERED&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;timesheet_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;ASC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;RangePartScheme&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;checkin_time_utc&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;CLUSTERED&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;IDX_Part&lt;/span&gt; &lt;span class="k"&gt;On&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;timesheets&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="nb"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;RangePartScheme&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;checkin_time_utc&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But wait, this may not be necessary for local development and testing, where only a minimal set of sample data is loaded. A simpler model like this may just work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;timesheets&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;timesheet_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;IDENTITY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;employee_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;checkin_time_utc&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;checkout_timeutc&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;PK_timesheets&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="n"&gt;CLUSTERED&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;timesheet_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;ASC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;PRIMARY&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Solutions
&lt;/h2&gt;

&lt;p&gt;yuniql addresses this in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;via token replacement&lt;/li&gt;
&lt;li&gt;via environment-reserved directories/folders. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To begin with, let's create a new repository. A yuniql repository is nothing but a git-ready repository that holds all your script files. For a guide on how to get started with yuniql, please visit &lt;a href="https://yuniql.io/docs/" rel="noopener noreferrer"&gt;https://yuniql.io/docs/&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install yuniql cli&lt;/span&gt;
dotnet tool &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; yuniql.cli

&lt;span class="c"&gt;# prepare your repository&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;c:&lt;span class="se"&gt;\t&lt;/span&gt;emp
md yuniql-environment-aware
&lt;span class="nb"&gt;cd &lt;/span&gt;c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware

&lt;span class="c"&gt;# initialize your repository&lt;/span&gt;
yuniql init

c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&amp;gt;yuniql init
INF   2020-05-31T...   Created script directory c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\_&lt;/span&gt;init
INF   2020-05-31T...   Created script directory c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\_&lt;/span&gt;pre
INF   2020-05-31T...   Created script directory c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\v&lt;/span&gt;0.00
INF   2020-05-31T...   Created script directory c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\_&lt;/span&gt;draft
INF   2020-05-31T...   Created script directory c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\_&lt;/span&gt;post
INF   2020-05-31T...   Created script directory c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\_&lt;/span&gt;erase
INF   2020-05-31T...   Created file c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\R&lt;/span&gt;EADME.md
INF   2020-05-31T...   Created file c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\D&lt;/span&gt;ockerfile
INF   2020-05-31T...   Created file c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware&lt;span class="se"&gt;\.&lt;/span&gt;gitignore
INF   2020-05-31T...   Initialized c:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\y&lt;/span&gt;uniql-environment-aware.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Token replacement with yuniql 🍒
&lt;/h3&gt;

&lt;p&gt;When using yuniql, we can specify tokens to be replaced during the migration run. For the given case, we use &lt;code&gt;ENVIRONMENT&lt;/code&gt; as the token key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dbo&lt;/span&gt;&lt;span class="p"&gt;].[&lt;/span&gt;&lt;span class="n"&gt;vw_employee_timesheets&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="s1"&gt;'hello yuniql!'&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;--SELECT E.employee_id, E.first_name, E.last_name, E.position, T.checkin_time_utc, T.checkout_time_utc &lt;/span&gt;
    &lt;span class="c1"&gt;--FROM [HRDB-${ENVIRONMENT}].[dbo].[employees] E&lt;/span&gt;
    &lt;span class="c1"&gt;--INNER JOIN [PAYROLLDB-${ENVIRONMENT].[dbo].[timesheets] T&lt;/span&gt;
    &lt;span class="c1"&gt;--ON E.employee_id = t.employee_id&lt;/span&gt;

&lt;span class="k"&gt;GO&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, during the migration run, we pass the token key/value pair. yuniql inspects all tokens in the script files and replaces them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="s2"&gt;"ENVIRONMENT=DEV"&lt;/span&gt;
yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="s2"&gt;"ENVIRONMENT=TEST"&lt;/span&gt;
yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="s2"&gt;"ENVIRONMENT=PROD
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FTz9TiM4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FTz9TiM4.png" alt="Imgur"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, we can pass multiple tokens in a single call.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="s2"&gt;"ENVIRONMENT=DEV,USERNAME=rdagumampan,SOURCE=AzDevOpsTask"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
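&lt;p&gt;Tokens are not limited to environment names. As a hypothetical illustration (the table and columns below are invented for the example), the extra tokens could annotate baseline data inserted during the migration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- hypothetical use of the extra tokens inside a version script
INSERT INTO [dbo].[deployment_log] (environment, deployed_by, deployed_from)
VALUES ('${ENVIRONMENT}', '${USERNAME}', '${SOURCE}');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;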



&lt;h3&gt;
  
  
  Environment-reserved directories with yuniql 🍋
&lt;/h3&gt;

&lt;p&gt;Sometimes it is simpler to group all scripts in a single environment-reserved directory. While this forces us to duplicate script files, it can also help us stay organized. In this sample, let’s create &lt;code&gt;_development&lt;/code&gt;, &lt;code&gt;_test&lt;/code&gt; and &lt;code&gt;_production&lt;/code&gt; directories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPwfnvge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPwfnvge.png" alt="Imgur"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we call &lt;code&gt;yuniql run&lt;/code&gt;, we can pass &lt;code&gt;--environment&lt;/code&gt; to request an environment-aware migration. yuniql discovers all directories, sorts them and prepares them for execution. When an environment-reserved directory is present, it picks only the matching directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--environment&lt;/span&gt; development
yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--environment&lt;/span&gt; &lt;span class="nb"&gt;test
&lt;/span&gt;yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--environment&lt;/span&gt; production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Parting words ㊗️
&lt;/h2&gt;

&lt;p&gt;Whoah! You have reached this far! Thanks! 🍻&lt;/p&gt;

&lt;p&gt;Environment-aware migrations using token replacement and environment-reserved directories address the variations of scripts demanded by internal enterprise policies and guidelines. You may also find other use cases for token replacement, such as annotating scripts and baseline data.&lt;/p&gt;

&lt;p&gt;P.S. Please support yuniql by clicking &lt;a href="https://github.com/rdagumampan/yuniql" rel="noopener noreferrer"&gt;GitHub Star!&lt;/a&gt; For a young project like this, a star can help capture more user experiences and improve the tool in its early stage. &lt;a href="https://github.com/rdagumampan/yuniql" rel="noopener noreferrer"&gt;https://github.com/rdagumampan/yuniql&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>database</category>
      <category>devops</category>
      <category>dotnet</category>
      <category>dotnetcore</category>
    </item>
    <item>
      <title>Seed your SqlServer, PostgreSql and MySql databases with CSV files using yuniql migrations</title>
      <dc:creator>Rodel E. Dagumampan</dc:creator>
      <pubDate>Sun, 31 May 2020 22:18:23 +0000</pubDate>
      <link>https://dev.to/rdagumampan/seeding-sqlserver-postgresql-and-mysql-databases-with-csv-files-using-yuniql-migrations-3ki7</link>
      <guid>https://dev.to/rdagumampan/seeding-sqlserver-postgresql-and-mysql-databases-with-csv-files-using-yuniql-migrations-3ki7</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Let me start by saying that I am writing this both to create awareness of a tool I have built and to share experiences that I hope will help the community make software better, faster and safer. :)&lt;/p&gt;

&lt;p&gt;In the past 8 months, I have been building &lt;strong&gt;&lt;a href="https://github.com/rdagumampan/yuniql" rel="noopener noreferrer"&gt;project yuniql&lt;/a&gt;&lt;/strong&gt; &lt;em&gt;(yuu-nee-kel)&lt;/em&gt;, an open source schema versioning and database migration tool made with .NET Core. From the beginning, one of the primary use cases we tried to address was seeding our databases, both for environment-specific deployments and for supporting integration tests. Every fresh database carries an initial set of data such as lookup tables and baseline data. Running integration and regression tests requires snapshots of data, or samples, to support the scenario being tested.&lt;/p&gt;

&lt;p&gt;While we can always script out the initial data, a more efficient way is to just drop CSV files into a folder and have them imported into the destination tables during deployment, test execution or application startup. Using CSV files also allows our Test Engineers and Business SMEs to prepare data themselves and deliver it to the development teams so it can be fed into regression tests.&lt;/p&gt;

&lt;p&gt;Such was the birth of yuniql's bulk import support for CSV files as seed data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seeding database with CSV files and yuniql
&lt;/h2&gt;

&lt;p&gt;In this article, I am using a sample database from &lt;a href="https://sqltutorial.org" rel="noopener noreferrer"&gt;sqltutorial.org&lt;/a&gt;. Let's prepare a baseline schema version of the database and seed the &lt;code&gt;regions&lt;/code&gt;, &lt;code&gt;countries&lt;/code&gt;, &lt;code&gt;location&lt;/code&gt;, &lt;code&gt;departments&lt;/code&gt; and &lt;code&gt;jobs&lt;/code&gt; tables. Then we will create a new version for the &lt;code&gt;employees&lt;/code&gt; and &lt;code&gt;dependents&lt;/code&gt; tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FLoDNnGY.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FLoDNnGY.png" alt="yuniql-database-migrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install yuniql CLI 👉
&lt;/h3&gt;

&lt;p&gt;We begin by getting the &lt;code&gt;yuniql cli&lt;/code&gt; installed. We can install the yuniql CLI in several ways: via Chocolatey, as a .NET global tool, or directly from the released packages. Choose what works for you. &lt;/p&gt;

&lt;p&gt;Option 1: Install as a .NET Core global tool. Requires the .NET Core 3.0 SDK installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet tool &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; yuniql.cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Option 2: Install via the Chocolatey package manager. Requires choco installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;choco &lt;span class="nb"&gt;install &lt;/span&gt;yuniql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Option 3: Use PowerShell and append to &lt;code&gt;PATH&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Invoke-WebRequest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Uri&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;https://github.com/rdagumampan/yuniql/releases/download/latest/yuniql-cli-win-x64-latest.zip&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-OutFile&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s2"&gt;"c:\temp\yuniql-win-x64-latest.zip"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Expand-Archive&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c:\temp\yuniql-win-x64-latest.zip"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-DestinationPath&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c:\temp\yuniql-cli"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;Env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;";c:\temp\yuniql-cli"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Option 4: Download the zipped package containing the yuniql.exe file and extract it to our workspace directory. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rdagumampan/yuniql/releases/download/latest/yuniql-cli-win-x64-latest.zip" rel="noopener noreferrer"&gt;https://github.com/rdagumampan/yuniql/releases/download/latest/yuniql-cli-win-x64-latest.zip&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Initialize schema repository, call &lt;code&gt;yuniql init&lt;/code&gt; 🍨
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yuniql init 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;yuniql init&lt;/code&gt; is issued, a baseline directory structure is created. Each version directory represents an atomic version of our database, executed in an all-or-nothing fashion. Utility directories are also created to cover pre- and post-migration activities. A typical starter project workspace looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwXQp4S2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwXQp4S2.png" alt="yuniql-database-migrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Create baseline schema structure 🍭
&lt;/h3&gt;

&lt;p&gt;Our next step is to prepare our Baseline version. A Baseline version is the &lt;code&gt;v0.00&lt;/code&gt; of our database schema and initial data. The baseline helps create full visibility of our database schema's evolution. Here, we prepare a set of &lt;code&gt;CREATE TABLE&lt;/code&gt; scripts in the &lt;code&gt;setup-tables.sql&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FGVr1dB2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FGVr1dB2.png" alt="yuniql-database-migrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Prepare and add CSV files in the baseline 🍹
&lt;/h3&gt;

&lt;p&gt;Let's now prepare the CSV files. Each CSV file name represents the fully qualified name of the destination table. The file &lt;code&gt;dbo.regions.csv&lt;/code&gt; will be bulk loaded into the &lt;code&gt;dbo.regions&lt;/code&gt; table. We may use another schema, or keep it simple with &lt;code&gt;regions.csv&lt;/code&gt;, which will target the default &lt;code&gt;dbo&lt;/code&gt; schema.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F2XFL1VT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F2XFL1VT.png" alt="yuniql-database-migrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Let's do this, call &lt;code&gt;yuniql run&lt;/code&gt; 🏃
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"ACCEPT_EULA=Y"&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"MSSQL_SA_PASSWORD=P@ssw0rd!"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 1400:1433 &lt;span class="nt"&gt;-d&lt;/span&gt; mcr.microsoft.com/mssql/server:2017-latest
SETX YUNIQL_CONNECTION_STRING &lt;span class="s2"&gt;"Server=localhost,1400;Database=helloyuniql;User Id=SA;Password=P@ssw0rd!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lets verify now by deploying our database locally. Here, we can use SQL Server container to speed up the process. In principle, we just need a database server we can connect to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;yuniql run&lt;/code&gt; is issued the first time, it inspects the target database and creates the required table to track the versions applied. The &lt;code&gt;-a&lt;/code&gt; flag tells &lt;code&gt;yuniql&lt;/code&gt; to also create the database. All script files in the &lt;code&gt;_init&lt;/code&gt; directory and its child directories are executed only this first time. The order of execution is: &lt;code&gt;_init&lt;/code&gt;, &lt;code&gt;_pre&lt;/code&gt;, &lt;code&gt;vx.xx&lt;/code&gt;, &lt;code&gt;vx.xx+N&lt;/code&gt;, &lt;code&gt;_draft&lt;/code&gt;, &lt;code&gt;_post&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;All CSV files are bulk loaded after the script files in each version have been executed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FFyHTlHN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FFyHTlHN.png" alt="yuniql-database-migrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's verify how the data looks now in SQL Server. Notice that yuniql created &lt;code&gt;dbo.__yuniqldbversions&lt;/code&gt; to keep a record of the versions already applied. This ensures the CSV files will not be imported again the next time we call &lt;code&gt;yuniql run&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FJb0iZPh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FJb0iZPh.png" alt="yuniql-database-migrations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where do we go from here? 🚧
&lt;/h2&gt;

&lt;p&gt;This is one of the first steps toward continuous delivery of changes to our databases. We can take this further by committing this workspace into a &lt;code&gt;git&lt;/code&gt; repository and establishing CI/CD pipelines to run the migrations against different environments. yuniql is also released as a &lt;a href="https://marketplace.visualstudio.com/items?itemName=rdagumampan.yuniql-azdevops-extensions&amp;amp;ssr=false" rel="noopener noreferrer"&gt;free Azure DevOps Task&lt;/a&gt; and as a Docker image. &lt;/p&gt;

&lt;h2&gt;
  
  
  What about other database platforms? ❄️
&lt;/h2&gt;

&lt;p&gt;PostgreSql- and MySql-compatible scripts can be executed in the same way with the &lt;code&gt;--platform&lt;/code&gt; parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--platform&lt;/span&gt; postgresql
yuniql run &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;--platform&lt;/span&gt; mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The latest release of yuniql supports CSV import for SQL Server, PostgreSql and MySql. I am now maturing the implementations for Snowflake DW, Azure Synapse Analytics and CockroachDB, which are equally exciting data platforms. Contributors are welcome! :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Parting words ㊗️
&lt;/h2&gt;

&lt;p&gt;Using CSV files to bulk import baseline data and test samples is an effective approach to seeding databases. CSV files also allow non-SQL users to help developers prepare datasets that support the software development process. As yuniql leverages the native bulk import APIs of the target platform, such as &lt;code&gt;SqlBulkCopy&lt;/code&gt; for SqlServer, we can import large CSV files pretty fast.&lt;/p&gt;

&lt;p&gt;Thanks if you have reached this far!!! 🍻 This is my first DEV post and I guess I had fun with emojis :D The source for this article can be &lt;a href="https://github.com/rdagumampan/yuniql-tutorials/tree/master/yuniql-csv" rel="noopener noreferrer"&gt;cloned here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;P.S. Please support yuniql by clicking &lt;a href="https://github.com/rdagumampan/yuniql" rel="noopener noreferrer"&gt;GitHub Star!&lt;/a&gt; For a young project like this, a star can help capture more user experiences and improve the tool in its early stage. &lt;a href="https://github.com/rdagumampan/yuniql" rel="noopener noreferrer"&gt;https://github.com/rdagumampan/yuniql&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Thanks thanks!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sql</category>
      <category>database</category>
      <category>dotnet</category>
    </item>
  </channel>
</rss>
