<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luke Bearl</title>
    <description>The latest articles on DEV Community by Luke Bearl (@lukebearl).</description>
    <link>https://dev.to/lukebearl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2624%2FKXdagB2D.jpg</url>
      <title>DEV Community: Luke Bearl</title>
      <link>https://dev.to/lukebearl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lukebearl"/>
    <language>en</language>
    <item>
      <title>MS SQL Server Backups to S3 – On Linux!</title>
      <dc:creator>Luke Bearl</dc:creator>
      <pubDate>Wed, 06 Dec 2017 06:53:11 +0000</pubDate>
      <link>https://dev.to/lukebearl/ms-sql-server-backups-to-s3--on-linux-dan</link>
      <guid>https://dev.to/lukebearl/ms-sql-server-backups-to-s3--on-linux-dan</guid>
      <description>

&lt;p&gt;This was originally written on &lt;a href="https://lukebearl.com/2017/12/ms-sql-server-backups-to-s3-on-linux/"&gt;Luke's personal blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Today I'm going to go over what's necessary to do full and transaction log backups for SQL Server Express on Linux. One of the big limitations of SQL Server Express is that it doesn't include SQL Server Agent, so most of the maintenance tasks that would normally be designed and scheduled from within SSMS need to be rethought. Thankfully Microsoft released &lt;code&gt;sqlcmd&lt;/code&gt; for Linux, which makes it pretty easy to do the backups as simple bash scripts scheduled through cron.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;This post isn't going to go through all of the steps to install SQL Server and the associated tools, but Microsoft has done a great job of documenting that on their docs site. In order to push the backups to S3 we will need the &lt;code&gt;s3cmd&lt;/code&gt; tool:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install s3cmd
s3cmd --configure
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You'll need an IAM identity with at least enough permissions to write to the S3 bucket you designate in the scripts. When &lt;code&gt;s3cmd --configure&lt;/code&gt; prompts you, enter the access key and secret key and specify the region you want to default to.&lt;/p&gt;
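
&lt;p&gt;Before relying on the configuration, it's worth sanity-checking it by round-tripping a small file through the bucket. This is just a sketch using the same &lt;code&gt;&amp;lt;BUCKET_NAME&amp;gt;&lt;/code&gt; placeholder as the scripts below:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "test" &amp;gt; /tmp/s3-test.txt
s3cmd put /tmp/s3-test.txt "s3://&amp;lt;BUCKET_NAME&amp;gt;/s3-test.txt"
s3cmd del "s3://&amp;lt;BUCKET_NAME&amp;gt;/s3-test.txt"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If the put fails, fix your IAM permissions now rather than discovering the problem when the first scheduled backup silently fails.&lt;/p&gt;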

&lt;h2&gt;The Scripts&lt;/h2&gt;

&lt;p&gt;In order to do the backups, two scripts are necessary: one for the full backups and one for the transaction log backups. I've opted for a very simple structure since I only care about one database. It shouldn't be very hard to modify the scripts to generate backups for each database, but I'll leave that as an exercise for the reader :).&lt;/p&gt;

&lt;h3&gt;Full Database Backups (fullBackup.sh)&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TIMESTAMP=$(date +"%F")
BACKUP_DIR="/var/opt/mssql/backup/$TIMESTAMP"
SA_USER="SA"
SA_PASS="&amp;lt;Your_SA_User_Password&amp;gt;"

mkdir -p "$BACKUP_DIR"

chown -R mssql:mssql $BACKUP_DIR

sqlcmd -S localhost -Q "BACKUP DATABASE [&amp;lt;DBNAME&amp;gt;] TO DISK = N'$BACKUP_DIR/&amp;lt;DBNAME&amp;gt;.bak' WITH NOFORMAT, NOINIT, SKIP, NOREWIND, STATS=10" -U $SA_USER -P $SA_PASS

s3cmd put "$BACKUP_DIR/&amp;lt;DBNAME&amp;gt;.bak" "s3://&amp;lt;BUCKET_NAME&amp;gt;/$TIMESTAMP/&amp;lt;DBNAME&amp;gt;.bak"
rm -f "$BACKUP_DIR/&amp;lt;DBNAME&amp;gt;.bak"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
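
&lt;p&gt;A backup you've never verified is a backup you don't really have. SQL Server can check that a &lt;code&gt;.bak&lt;/code&gt; file is readable without actually restoring it; a quick sketch (same placeholders as above, not part of the original script):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlcmd -S localhost -U SA -P "&amp;lt;Your_SA_User_Password&amp;gt;" -Q "RESTORE VERIFYONLY FROM DISK = N'/var/opt/mssql/backup/&amp;lt;DATE&amp;gt;/&amp;lt;DBNAME&amp;gt;.bak'"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You could fold this into fullBackup.sh before the &lt;code&gt;s3cmd put&lt;/code&gt; step so a corrupt backup never makes it to S3.&lt;/p&gt;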



&lt;h3&gt;Transaction Log Backups (logBackup.sh)&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATESTAMP=$(date +"%F")
TIMESTAMP=$(date +"%H%M%S")
BACKUP_DIR="/var/opt/mssql/backup/$DATESTAMP/logs/$TIMESTAMP"
SA_USER="SA"
SA_PASS="&amp;lt;Your_SA_User_Password&amp;gt;"

mkdir -p "$BACKUP_DIR"

chown -R mssql:mssql $BACKUP_DIR

sqlcmd -S localhost -Q "BACKUP LOG [&amp;lt;DBNAME&amp;gt;] TO DISK = N'$BACKUP_DIR/&amp;lt;DBNAME&amp;gt;_log.bak' WITH NOFORMAT, NOINIT, SKIP, NOREWIND, NOUNLOAD, STATS=5" -U SA -P $SA_PASS

s3cmd put "$BACKUP_DIR/&amp;lt;DBNAME&amp;gt;_log.bak" "s3://&amp;lt;BUCKET_NAME&amp;gt;/$DATESTAMP/logs/$TIMESTAMP/&amp;lt;DBNAME&amp;gt;_log.bak"

rm -f "$BACKUP_DIR/&amp;lt;DBNAME&amp;gt;_log.bak"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
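
&lt;p&gt;One assumption worth calling out: &lt;code&gt;BACKUP LOG&lt;/code&gt; only works when the database is using the FULL (or BULK_LOGGED) recovery model. If your database was created in SIMPLE mode, switch it first; a one-time step, sketched with the same placeholders:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlcmd -S localhost -U SA -P "&amp;lt;Your_SA_User_Password&amp;gt;" -Q "ALTER DATABASE [&amp;lt;DBNAME&amp;gt;] SET RECOVERY FULL"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Note that the log chain only starts after the first full backup taken in FULL recovery mode, so run fullBackup.sh once before scheduling the log backups.&lt;/p&gt;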



&lt;p&gt;Then schedule them in cron:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 0 * * * /root/bin/fullBackup.sh
*/15 * * * * /root/bin/logBackup.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With this schedule, full backups are taken at midnight and transaction log backups are taken every 15 minutes.&lt;/p&gt;
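
&lt;p&gt;For completeness, restoring from these backups means replaying the most recent full backup followed by each log backup in order, then bringing the database online. A rough sketch (paths and names are placeholders; repeat the &lt;code&gt;RESTORE LOG&lt;/code&gt; line for every log file in sequence):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlcmd -S localhost -U SA -P "&amp;lt;Your_SA_User_Password&amp;gt;" -Q "RESTORE DATABASE [&amp;lt;DBNAME&amp;gt;] FROM DISK = N'/path/to/&amp;lt;DBNAME&amp;gt;.bak' WITH NORECOVERY"
sqlcmd -S localhost -U SA -P "&amp;lt;Your_SA_User_Password&amp;gt;" -Q "RESTORE LOG [&amp;lt;DBNAME&amp;gt;] FROM DISK = N'/path/to/&amp;lt;DBNAME&amp;gt;_log.bak' WITH NORECOVERY"
sqlcmd -S localhost -U SA -P "&amp;lt;Your_SA_User_Password&amp;gt;" -Q "RESTORE DATABASE [&amp;lt;DBNAME&amp;gt;] WITH RECOVERY"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It's worth doing a practice restore to a scratch server at least once; that exercises both the S3 download path and the backup files themselves.&lt;/p&gt;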

&lt;h2&gt;S3 Lifecycles&lt;/h2&gt;

&lt;p&gt;While the scripts do a good job of cleaning up after themselves locally, S3 will (by design) never delete your data unless you specifically tell it to. S3 has a nifty feature called "Lifecycles" which allows us to specify rules for object retention (it's a powerful feature that can be used for a number of other things as well). To access it, go to the AWS Console and open your S3 bucket. Follow these steps to set up object retention:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the Management Tab&lt;/li&gt;
&lt;li&gt;Select Lifecycle&lt;/li&gt;
&lt;li&gt;Click + Add lifecycle rule&lt;/li&gt;
&lt;li&gt;Name the rule something descriptive ("Expire all files"). Leave the prefix blank&lt;/li&gt;
&lt;li&gt;Leave Configure transition blank&lt;/li&gt;
&lt;li&gt;In Expiration set the following options: &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lo8AYrql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lukebearl.com/wp-content/uploads/2017/12/aws-lifecycle-create.png" alt="S3 Lifecycle Creation"&gt;
&lt;/li&gt;
&lt;li&gt;Click Save&lt;/li&gt;
&lt;/ol&gt;
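
&lt;p&gt;If you'd rather stay on the command line, newer versions of &lt;code&gt;s3cmd&lt;/code&gt; can set a bucket-wide expiration lifecycle directly. I believe the invocation looks like the following, but check &lt;code&gt;s3cmd --help&lt;/code&gt; for your version before relying on it:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3cmd expire "s3://&amp;lt;BUCKET_NAME&amp;gt;" --expiry-days=14
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;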

&lt;h2&gt;That's All&lt;/h2&gt;

&lt;p&gt;At this point we have full and transaction log backups configured and being pushed off-site to Amazon S3. With the lifecycle rule above, these backups are soft-deleted after 7 days and fully deleted after 14 days.&lt;/p&gt;


</description>
      <category>mssql</category>
      <category>linux</category>
      <category>sqlserver</category>
      <category>s3</category>
    </item>
    <item>
      <title>The Bus Factor</title>
      <dc:creator>Luke Bearl</dc:creator>
      <pubDate>Sun, 07 May 2017 20:15:54 +0000</pubDate>
      <link>https://dev.to/lukebearl/the-bus-factor</link>
      <guid>https://dev.to/lukebearl/the-bus-factor</guid>
      <description>

&lt;p&gt;&lt;em&gt;This article originally appears on &lt;a href="https://lukebearl.com/2017/05/the-bus-factor/"&gt;Luke's blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I've been meaning to write this for quite a while, as the bus factor is something I've (literally) run into in my career. For those of you not familiar with it, the "Bus Factor" is an informal measure of a project's resilience to the loss of one or more key members. It's basically the programming version of the old adage "Don't put all your eggs in one basket".&lt;/p&gt;

&lt;h2&gt;Story Time&lt;/h2&gt;

&lt;p&gt;Some years ago I was a software development intern at a large company in Milwaukee, Wisconsin. The team I was on was broken into a U.S. development team, an offshore dev team in India, and an offshore QA team in China. We had scrum meetings at 8 AM every morning so that the US and Indian teams could all participate together. One day we got word that one of the senior-most developers had literally been hit by a bus while crossing the street (thankfully he made a full recovery, but it certainly slowed down that part of the team, as he was out for 8 weeks or so).&lt;/p&gt;

&lt;h2&gt;How to Reduce the Bus Factor&lt;/h2&gt;

&lt;p&gt;I'm sure entire books can be (and probably have been) written on the subject of reducing the bus factor and spreading knowledge throughout the entire team. Spreading knowledge really is the key element in bus factor reduction.&lt;/p&gt;

&lt;p&gt;How many people currently work on a team where one or two people are basically the wizards whose secret spells make critical things happen (like deployments, provisioning infrastructure assets, SSL certificates, or any of the million other things that need to be done to make software work)? I know I've worked on several teams where that happened. I've also worked with people who wanted to increase the bus factor because they thought it gave them better job security (a notion I strongly disagree with).&lt;/p&gt;

&lt;p&gt;In my experience, one of the best ways to reduce the bus factor is to maintain an internal wiki where developers and administrators document the process for anything they are going to do more than once (and sometimes it's good to document things that are only being done once as well). Another great idea is to regularly schedule cross-training: n+1 isn't only a good idea for infrastructure; developers and admins should have a bit of redundancy as well.&lt;/p&gt;

&lt;h2&gt;Ethics&lt;/h2&gt;

&lt;p&gt;I personally feel that all engineers have an ethical responsibility to be transparent in what they do. I never want to be the only person capable of doing something; instead, I do my best to make sure that anything I do which may ever need to be done again is documented at least well enough that someone else can piece it together. Doing this ensures that if I am ever hit by a bus, the rest of my team won't have to reverse-engineer the magical incantations I've developed over the years.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;At the end of the day, reducing the bus factor is good for your team. You never know when you or one of your colleagues will no longer be available to work (they might be hit by a bus, or it might be something more mundane like taking a new job, or leaving for a few months on sabbatical or parental leave). As an engineer and a member of a team, you have an ethical obligation to share the processes and techniques you've developed with your colleagues, and to learn their processes and techniques in return.&lt;/p&gt;


</description>
      <category>softwaredevelopment</category>
      <category>lifelessons</category>
    </item>
  </channel>
</rss>
