<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jeremy Gunzburg</title>
    <description>The latest articles on DEV Community by Jeremy Gunzburg (@goozenburger).</description>
    <link>https://dev.to/goozenburger</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F67912%2F0164211d-3bcf-4e8e-a052-c38aa37f1b92.png</url>
      <title>DEV Community: Jeremy Gunzburg</title>
      <link>https://dev.to/goozenburger</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/goozenburger"/>
    <language>en</language>
    <item>
      <title>Automating restoration of production databases into test using AWS RDS</title>
      <dc:creator>Jeremy Gunzburg</dc:creator>
      <pubDate>Tue, 08 May 2018 02:49:29 +0000</pubDate>
      <link>https://dev.to/goozenburger/automating-restoration-of-production-databases-into-test-using-aws-rds-56f4</link>
      <guid>https://dev.to/goozenburger/automating-restoration-of-production-databases-into-test-using-aws-rds-56f4</guid>
      <description>&lt;p&gt;I have recently been working on some automation to regularly restore all of our live databases from AWS RDS into copies for use as a test environment.  This includes some data scrubbing to remove sensitive information, real email addresses, etc, but allows testing against a production- like environment.&lt;/p&gt;

&lt;p&gt;We have MS-SQL and Aurora MySQL databases, and I have made use of the AWS APIs for this task.  In my case I used the .Net APIs and built a console app to call them.  While the APIs are well documented and quite easy to use, finding the right combination of calls still took some time.&lt;/p&gt;

&lt;p&gt;Firstly, the API call to restore a DB from a snapshot will, quite rightly, refuse to overwrite an existing instance.  However, in this case we want to overwrite the existing environment each time we restore, so I had to build that into my app.&lt;/p&gt;

&lt;p&gt;First, call &lt;strong&gt;DescribeDBInstances&lt;/strong&gt; with the name of the target instance.  If an instance is found, call &lt;strong&gt;DeleteDBInstance&lt;/strong&gt; to delete it.  Be aware that DescribeDBInstances throws a DBInstanceNotFoundException if the instance does not exist, so you need to catch this and move on, as this is a perfectly valid scenario.&lt;/p&gt;
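&lt;p&gt;My app used the .Net SDK, but the same check-then-delete flow translates directly to any of the AWS SDKs.  Here is a minimal sketch using boto3 (the Python SDK); the function name and the skip-final-snapshot choice are mine, and boto3's name for this error is DBInstanceNotFoundFault:&lt;/p&gt;

```python
def delete_instance_if_exists(rds, instance_id):
    """Delete the target RDS instance if it is present.

    'rds' is an RDS API client, e.g. boto3.client("rds").  A missing
    instance is a perfectly valid scenario, so we catch the not-found
    error and simply report that there was nothing to delete.
    """
    try:
        rds.describe_db_instances(DBInstanceIdentifier=instance_id)
    except rds.exceptions.DBInstanceNotFoundFault:
        return False  # nothing to delete
    rds.delete_db_instance(
        DBInstanceIdentifier=instance_id,
        SkipFinalSnapshot=True,  # it's a disposable test copy
    )
    return True
```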

&lt;p&gt;One thing to note with all of these calls is that the API will trigger a task in the cloud, but not wait for it to complete.  So your DB instance will exist with status "Deleting" for a while.  I did not find a way to subscribe to events that would tell you when the deletion has completed, so I implemented some simple polling, calling DescribeDBInstances every 2 minutes until the instance no longer exists.&lt;/p&gt;
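&lt;p&gt;A rough sketch of that polling loop, again as boto3-style Python (the sleep function is injectable purely so the loop can be tested without waiting; names are mine):&lt;/p&gt;

```python
import time

def wait_until_instance_gone(rds, instance_id, poll_seconds=120, sleep=time.sleep):
    """Poll DescribeDBInstances until the instance no longer exists.

    The delete call only starts a task in the cloud, so the instance
    sits in a "deleting" status for a while before disappearing.
    """
    while True:
        try:
            rds.describe_db_instances(DBInstanceIdentifier=instance_id)
        except rds.exceptions.DBInstanceNotFoundFault:
            return  # gone at last
        sleep(poll_seconds)
```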

&lt;p&gt;Once you are sure the target instance does not exist, you can call &lt;strong&gt;RestoreDBInstanceFromDBSnapshot&lt;/strong&gt; to do the restore.  This assumes you have a snapshot available, and the best bet is to have regular snapshots generated from the live DB automatically.  You'll need to find the latest snapshot for the source DB, using &lt;strong&gt;DescribeDBSnapshots&lt;/strong&gt;.  Retrieve all available snapshots by DB instance name, then sort descending by creation date to find the latest.&lt;/p&gt;
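&lt;p&gt;Sketched in boto3-style Python, finding the latest available snapshot and restoring from it looks something like this (function names are mine):&lt;/p&gt;

```python
def latest_snapshot(rds, source_instance_id):
    """Return the most recent available snapshot of the source instance."""
    resp = rds.describe_db_snapshots(DBInstanceIdentifier=source_instance_id)
    available = [s for s in resp["DBSnapshots"] if s["Status"] == "available"]
    if not available:
        raise RuntimeError("no available snapshot for " + source_instance_id)
    # Sort descending by creation time; the first entry is the latest.
    available.sort(key=lambda s: s["SnapshotCreateTime"], reverse=True)
    return available[0]

def restore_latest(rds, source_instance_id, target_instance_id):
    """Restore the target instance from the source's latest snapshot."""
    snap = latest_snapshot(rds, source_instance_id)
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=target_instance_id,
        DBSnapshotIdentifier=snap["DBSnapshotIdentifier"],
    )
```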

&lt;p&gt;Once again, you'll need to wait for the restore to complete, so I did some more polling using DescribeDBInstances, this time waiting until the target instance existed with a status of "available".&lt;/p&gt;
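&lt;p&gt;The same polling idea, now waiting for the instance to reach "available" (a boto3-style Python sketch with an injectable sleep; names are mine).  Note the instance may not exist at all for the first few polls:&lt;/p&gt;

```python
import time

def wait_until_instance_available(rds, instance_id, poll_seconds=120, sleep=time.sleep):
    """Poll until the restored instance exists with status 'available'."""
    while True:
        try:
            resp = rds.describe_db_instances(DBInstanceIdentifier=instance_id)
            status = resp["DBInstances"][0]["DBInstanceStatus"]
            if status == "available":
                return
        except rds.exceptions.DBInstanceNotFoundFault:
            pass  # not created yet; keep waiting
        sleep(poll_seconds)
```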

&lt;p&gt;Once you have a newly restored database, you may need to apply security groups, and you will probably want to change the master user password, so test and live don't have the same credentials.  This can all be achieved using &lt;strong&gt;ModifyDBInstance&lt;/strong&gt;.  One gotcha I found is to ensure that "ApplyImmediately" is set to true, if you need to subsequently connect to the DB to complete further tasks.  Otherwise AWS will wait for a maintenance window to apply the changes, and you don't want that.&lt;/p&gt;
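&lt;p&gt;As a boto3-style Python sketch of that modify step (the parameter set shown is an assumption about what you'll want to change; the key point is ApplyImmediately):&lt;/p&gt;

```python
def apply_test_settings(rds, instance_id, new_password, security_group_ids):
    """Change credentials and security groups on the restored copy.

    ApplyImmediately=True is the gotcha: without it, AWS queues the
    change for the next maintenance window, which is no good if you
    need to connect straight away to run scrubbing scripts.
    """
    rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        MasterUserPassword=new_password,
        VpcSecurityGroupIds=security_group_ids,
        ApplyImmediately=True,
    )
```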

&lt;p&gt;Next, you can apply some data scrubbing or whatever you want to do to turn your prod data into test ready data.  For this, you can simply connect to the database using whatever method you prefer, and run your scripts.&lt;/p&gt;

&lt;p&gt;Now, this is all well and good, but what if you have databases in multiple regions?  Say you have a UK database which you want to replicate as a test environment, but all your testing happens in Australia, so you'd rather your test DB sat in Aus for better latency.  Well, you can't restore a snapshot across regions, but you can copy snapshots across regions.  Simply call the &lt;strong&gt;CopyDBSnapshot&lt;/strong&gt; command, connecting to the target region and passing the "DBSnapshotArn" of the snapshot in the source region that you wish to copy.&lt;br&gt;
Then, once it is copied (check the status using DescribeDBSnapshots in the target region), you can proceed as above.&lt;/p&gt;
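&lt;p&gt;A boto3-style Python sketch of the cross-region copy.  The client must be connected to the &lt;em&gt;target&lt;/em&gt; region, and the source snapshot is identified by its full ARN; the function name and example identifiers are mine:&lt;/p&gt;

```python
def copy_snapshot_to_region(target_rds, source_snapshot_arn, target_name, source_region):
    """Copy a snapshot from another region into this client's region.

    'target_rds' must be a client connected to the target region,
    e.g. boto3.client("rds", region_name="ap-southeast-2").  boto3
    also accepts SourceRegion, which it uses to handle the pre-signed
    URL needed when the snapshot is encrypted.
    """
    target_rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=source_snapshot_arn,
        TargetDBSnapshotIdentifier=target_name,
        SourceRegion=source_region,
    )
```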

&lt;p&gt;So that covers single instances in MS-SQL, but what about Aurora DB clusters?  It's a bit more complicated and a bit less intuitive.&lt;/p&gt;

&lt;p&gt;First, call &lt;strong&gt;RestoreDBClusterFromSnapshot&lt;/strong&gt;.  The main gotcha here is that you need to specify the engine, even though the documentation implies you can leave it out.  In my case, Engine = "aurora-mysql".&lt;/p&gt;
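&lt;p&gt;Sketched in boto3-style Python, with the engine supplied explicitly (the function name is mine):&lt;/p&gt;

```python
def restore_cluster_from_snapshot(rds, cluster_id, snapshot_id):
    """Restore an Aurora cluster from a cluster snapshot.

    Engine must be supplied explicitly, even though the documentation
    reads as if it were optional.
    """
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier=cluster_id,
        SnapshotIdentifier=snapshot_id,
        Engine="aurora-mysql",
    )
```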

&lt;p&gt;As above, you can poll for availability (call &lt;strong&gt;DescribeDBClusters&lt;/strong&gt;) then call &lt;strong&gt;ModifyDBCluster&lt;/strong&gt; to apply security groups and password changes.&lt;/p&gt;

&lt;p&gt;Here is where it's a bit unintuitive.  If you were to do this via the web dashboard, the cluster and an instance within it would be restored; however, the API does not do that second step.  So you end up with an empty cluster, which seems useless, but the cluster contains all the info you need to spin up instances within it.&lt;br&gt;
So you now need to call &lt;strong&gt;CreateDBInstance&lt;/strong&gt;, passing in the following parameters:&lt;br&gt;
DBClusterIdentifier = (target cluster)&lt;br&gt;
DBInstanceIdentifier = (target instance name)&lt;br&gt;
Engine = "aurora-mysql"&lt;br&gt;
PubliclyAccessible = true (note this defaults to false, even if the original you restored from was public.  This caught me out and I wasted some time wondering why I couldn't connect).&lt;/p&gt;
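&lt;p&gt;The parameters above can be sketched as a boto3-style Python call.  Note that CreateDBInstance also requires an instance class, which the list above omits; the class shown is an assumption, as is the function name:&lt;/p&gt;

```python
def add_instance_to_cluster(rds, cluster_id, instance_id):
    """Spin up an instance inside a freshly restored (empty) cluster.

    PubliclyAccessible defaults to False even if the source instance
    was public, hence the explicit True.  The DBInstanceClass here is
    an assumption; pick a size appropriate to your test workload.
    """
    rds.create_db_instance(
        DBClusterIdentifier=cluster_id,
        DBInstanceIdentifier=instance_id,
        Engine="aurora-mysql",
        DBInstanceClass="db.t3.medium",
        PubliclyAccessible=True,
    )
```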

&lt;p&gt;So there you go.  Everything you need to know to set up a restore of production AWS databases into a test environment.  Simply schedule your app to run using whichever build server or scheduling tool you like, and away you go.  You can restore a new environment, every night, at the start of a sprint, or whenever you wish.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>sqlserver</category>
      <category>aurora</category>
    </item>
    <item>
      <title>Exporting scripts from SQL Server for use with DBUp</title>
      <dc:creator>Jeremy Gunzburg</dc:creator>
      <pubDate>Wed, 18 Apr 2018 00:21:50 +0000</pubDate>
      <link>https://dev.to/goozenburger/exporting-scripts-from-sql-server-for-use-with-dbup-1j3</link>
      <guid>https://dev.to/goozenburger/exporting-scripts-from-sql-server-for-use-with-dbup-1j3</guid>
      <description>&lt;p&gt;Recently I have started a new job, and one of my first tasks is to help improve the database development and deployment process.  They are moving away from having all changes managed in the dev database, and pushing changes using SQL Compare.   Thankfully they have already started this process, with database delta scripts being written and deployed using DbUp.  This can work quite well and is a technique I have used for managing and deploying databases previously.&lt;/p&gt;

&lt;p&gt;The next step to improving the process is having all stored procedures, functions and views scripted out of the database and living as part of the codebase.  Any changes should be made to the script, which now has a history in source control, and all of these scripts should be applied to the database with each deployment.&lt;/p&gt;

&lt;p&gt;SQL Server Management Studio does provide tools to assist with this, but I found it took me a while to get all the right settings to export things the way I wanted, in a way that would allow DbUp to repeatedly apply these changes without error.&lt;/p&gt;

&lt;p&gt;Right-click the database, then select Tasks, Generate Scripts.&lt;br&gt;
Select which object types you want (procs, functions, views, etc.)&lt;br&gt;
Save to file, single file per object, and specify the target folder.  I have a separate folder for stored procs, views, and functions, and I export each type separately.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm4gr82m69vdv1h0gn0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm4gr82m69vdv1h0gn0m.png" alt="screenshot" width="581" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ensure ANSI text is selected so git can diff the files easily.  This is what tripped me up the first time.&lt;/p&gt;

&lt;p&gt;Click Advanced.&lt;br&gt;
Set Script USE DATABASE = false&lt;/p&gt;

&lt;p&gt;For stored procs:&lt;br&gt;
Check for object existence = true&lt;br&gt;
Script CREATE&lt;/p&gt;

&lt;p&gt;This creates CREATE + ALTER scripts for all procs (you don't need to script DROP).  This has the advantage of not dropping existing procs, in case you have custom permissions created on them.&lt;/p&gt;

&lt;p&gt;For views and functions, we want to do something similar, but scripting CREATE with checking for existence generates the CREATE as a string, which is ugly, harder to maintain, and only creates the object once, rather than altering the view/function once it exists.  I believe this is due to some of the intricacies of how views and functions are handled by SQL Server, whereas you can easily create a dummy stored proc and immediately alter it.&lt;br&gt;
Here is an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzc5am4kwndvgp4ajtogm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzc5am4kwndvgp4ajtogm.PNG" alt="script" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is obviously not that useful.  But we can't always drop and create, as this won't work if the view/function does not exist (initial run), and we can't always just create, as this is not re-runnable.&lt;br&gt;
So: do two generate passes.  The first for only the DROP, checking for existence.&lt;br&gt;
The second for the CREATE, not checking for existence.  Untick overwrite, and in Advanced, set Append to file = true.&lt;/p&gt;

&lt;p&gt;This gives you a nice set of files, one per view or function, that includes a conditional DROP if exists, and then a create, so is re-runnable.&lt;/p&gt;

&lt;p&gt;For a good overview of how to integrate these newly exported scripts with your projects and have DbUp run them on every execution, go here: &lt;a href="http://wengier.com/reviewable-sprocs/" rel="noopener noreferrer"&gt;http://wengier.com/reviewable-sprocs/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dbup</category>
      <category>sqlserver</category>
    </item>
  </channel>
</rss>
