<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Christian</title>
    <description>The latest articles on DEV Community by Christian (@klauenboesch).</description>
    <link>https://dev.to/klauenboesch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F51804%2F8a8278cd-4285-4348-af39-011f779ac593.jpg</url>
      <title>DEV Community: Christian</title>
      <link>https://dev.to/klauenboesch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/klauenboesch"/>
    <language>en</language>
    <item>
      <title>PHP 8 is amazing. I still prefer C#.</title>
      <dc:creator>Christian</dc:creator>
      <pubDate>Fri, 08 Jan 2021 12:00:00 +0000</pubDate>
      <link>https://dev.to/klauenboesch/php-8-is-amazing-i-still-prefer-c-1lkk</link>
      <guid>https://dev.to/klauenboesch/php-8-is-amazing-i-still-prefer-c-1lkk</guid>
      <description>&lt;p&gt;With the end of 2020 came the &lt;a href="https://www.php.net/releases/8.0/en.php"&gt;release of PHP 8&lt;/a&gt; and also the &lt;a href="https://github.com/dotnet/core/blob/master/release-notes/5.0/5.0.0/5.0.0.md"&gt;release of .Net 5.0.0&lt;/a&gt;, the former also being discussed on &lt;a href="https://news.ycombinator.com/item?id=25220674"&gt;HackerNews&lt;/a&gt; (and &lt;a href="https://news.ycombinator.com/item?id=24866190"&gt;here&lt;/a&gt;, &lt;a href="https://news.ycombinator.com/item?id=24235440"&gt;here&lt;/a&gt; or &lt;a href="https://news.ycombinator.com/item?id=24320024"&gt;here&lt;/a&gt;). I write a lot of code in PHP, while frequently using C# with .Net Framework and recently more often .Net Core – maybe half/half. I admire PHP and I think PHP 8 proves that the folks designing and developing PHP care much about the language and the future of PHP in general. In this blog post, I would like to show why I think PHP matters and why I still prefer C#.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s new in PHP 8?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, there are named arguments, which make it much easier to call functions with many parameters and make call sites more readable. As the PHP 8 release announcement shows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;htmlspecialchars($string, double_encode: false);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;PHP 8 also supports class/method attributes, which may replace the often-used PHPDoc annotations for configuring the behaviour of classes or methods. They look a bit silly with their #[] syntax, but that appears to be a concession to backwards compatibility (as # introduces a one-line comment).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#[Route("/api/posts/{id}", methods: ["GET"])]&lt;br&gt;
public function foo()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then there’s constructor property promotion, which removes the need to manually assign constructor arguments to class properties. There are union types, which allow a strictly typed parameter to accept more than a single type. There are match expressions, which often remove the need to write a switch statement. And there’s the nullsafe operator, which lets you chain through properties and methods that may be null:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$country = $session?-&amp;gt;user?-&amp;gt;getAddress()?-&amp;gt;country;&lt;/code&gt;&lt;/p&gt;
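&lt;p&gt;To make the other features concrete, here is a minimal, hypothetical sketch combining constructor property promotion, a union type and a match expression (the Temperature class is made up for illustration):&lt;/p&gt;

```php
<?php
// Hypothetical PHP 8 sketch: constructor property promotion,
// a union type (int|float) and a match expression.
class Temperature
{
    public function __construct(private int|float $celsius) {}

    public function label(): string
    {
        return match (true) {
            $this->celsius < 0  => 'freezing',
            $this->celsius < 25 => 'mild',
            default             => 'hot',
        };
    }
}

echo (new Temperature(-3))->label(); // freezing
```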

&lt;p&gt;There are also some other improvements under the hood, like just-in-time compilation, a lot of work on the type system and error handling, and various syntax tweaks. Basically, PHP 8 is a major release with respect to language design and less so regarding functionality (except maybe for JIT compilation). It fixes a lot of legacy that has been around for a long time and allows the language to grow – and get even better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is PHP really worth it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, absolutely. PHP is an amazing language: it provides an easy entry, simple deployment and wide support among good webhosters. It is stable, reliable and usually just works. PHP is often laughed at for its language design. However, PHP has a vibrant community and one of the largest publicly available &lt;a href="https://packagist.org/"&gt;package repositories&lt;/a&gt;. PHP is easy to install, easy to use and quite easy to deploy, as an application consists only of files that need to be copied to a webserver – there’s no platform-specific compilation.&lt;/p&gt;

&lt;p&gt;The most often heard arguments against PHP are its sloppiness around types (strict types are not mandatory), its forgiveness (as long as the syntax is OK, the semantics are “adaptable”) and its bad language design (way too much legacy).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then why prefer C#?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let me give some arguments for why I prefer the language design of C# over that of PHP. For context: I mostly use both languages for web-based things, so it’s usually JSON in and JSON out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good dependency management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While I think PHP does the job (and does it very well), I have come to like C#, especially with .Net Core, mostly for the design of the language itself. That said, dependency management is a point for PHP: Composer, PHP’s semi-official package manager, is one of the best solutions for dependency resolution. It is perfectly capable of resolving transitive dependencies, which makes it very easy for package maintainers to define their requirements and for package users to require a package and validate whether it’s actually compatible with their other dependencies. In .Net, assembly binding and transitive dependencies often caused &lt;a href="https://www.erikheemskerk.nl/transitive-nuget-dependencies-net-core-got-your-back/"&gt;trouble, issues and build problems&lt;/a&gt;. With .Net Core, this got a lot easier thanks to a simpler dependency-definition format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strict typing and compile-time syntax validation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the quirks of an interpreted language like PHP is that syntax validation is either done statically or when actually running the code. This can lead to issues that get uncovered quite late (at best when running automated tests, at worst in production). With C# being a compiled language, at least the syntax is validated at compile time. This does not mean that your code will magically always work; it just means that the most basic issues are caught early.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;// This is valid syntax in PHP. Does it make sense? No.&lt;br&gt;
function foobar() : void {}&lt;br&gt;
$val = foobar();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Obviously, your preferred IDE may point out that this code does not make sense. But that’s not the point. In a large codebase it gets very hard to keep an eye on all such issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streams&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PHP does not have a proper stream abstraction. It does have &lt;a href="https://www.php.net/manual/en/language.types.resource.php"&gt;resources&lt;/a&gt;, which are a kind of special variable that holds a reference to an external resource. If you search for how to read an image (or any file) from a remote host and write it to a local file, you’re &lt;a href="https://stackoverflow.com/questions/909374/copy-image-from-remote-server-over-http"&gt;pointed&lt;/a&gt; to something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$imageString = file_get_contents("http://example.com/image.jpg");&lt;br&gt;
$save = file_put_contents('Image/saveto/image.jpg',$imageString);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;C# uses &lt;a href="https://support.microsoft.com/en-za/help/2512241/how-to-upload-and-download-files-from-a-remote-server-in-asp-net"&gt;some more code&lt;/a&gt; for that, but then: what type does $imageString actually have? It is not a resource, it is a string. And if, say, you want to load a 100 MB file from a remote server, how much memory do you need? At least 100 MB. That is why I prefer using some derivative of the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/system.io.stream?view=net-5.0"&gt;Stream&lt;/a&gt; class, which allows me to read, write, seek and so on with a quite low memory footprint. Having worked on projects which do image processing on webservers with often quite large images (up to 50 or sometimes 100 MB), streams are a welcome gift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generics, Enums and all the missing parts of a nice language&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PHP actually does have a &lt;a href="https://wiki.php.net/rfc/enum"&gt;proposal&lt;/a&gt; to implement enums at a future point. It also has a &lt;a href="https://wiki.php.net/rfc/generics"&gt;four-year-old proposal&lt;/a&gt; for implementing generics. Do you need enums and generics to get the job done? Absolutely not. Do they make the job easier? Absolutely yes. Especially with basically every database providing an enum datatype, it would be nice to have a real enum in PHP instead of using a string. There are great &lt;a href="https://github.com/myclabs/php-enum"&gt;libraries&lt;/a&gt; for that, and they work quite well. And that brings me to my last argument:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proper type deserialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Searching the web for how to “deserialize in PHP to a custom class”, the first &lt;a href="https://stackoverflow.com/questions/5397758/json-decode-to-custom-class"&gt;Stackoverflow link that pops up&lt;/a&gt; proposes something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$data = json_decode($json, true);&lt;br&gt;
$class = new Foobar();&lt;br&gt;
foreach ($data as $key =&amp;gt; $value) $class-&amp;gt;{$key} = $value;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Obviously, this does not work with nested classes or strict types (or somehow it does, thanks to the lax typing in PHP?). Other proposals look much fancier, or suggest using stdClass (PHP’s generic empty class) and then PHPDoc type-hints to hint the type to the IDE. I prefer the C# way, which is perfectly capable of deserializing nested objects into their proper types.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FooObj object = JsonConvert.DeserializeObject&amp;lt;FooObj&amp;gt;(jsonString);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In today’s world of distributed (or micro-) services, data is often passed from one service to another. The receiving service gets a JSON string and deserializes it into a shared class definition. With that, you get strict types for free, and modifying one side (changing the contract) also lets you track the change for the receiving service. Without proper deserialization into actual class types, you’re just guessing a lot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would I still use PHP?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, for sure. With &lt;a href="https://symfony.com/"&gt;Symfony&lt;/a&gt;, one of – if not the – most advanced PHP frameworks, it’s easy to write webservices. With &lt;a href="https://www.doctrine-project.org/"&gt;Doctrine&lt;/a&gt;, an ORM, it’s quite easy to read from and write to databases and map objects (however, I still prefer something like a passive record pattern). PHP has a large, active community and a lot of open source projects which help you in many cases. PHP is absolutely evolving, and I think if it keeps going in the same direction as it did with PHP 8, it will have a place in the language landscape for a long time.&lt;/p&gt;

&lt;p&gt;PHP started off in a (web) landscape that was very different from what we know today. PHP kept (and still keeps) a lot of legacy around. That can be a good thing (others don’t, looking at the &lt;a href="https://stackify.com/net-ecosystem-demystified/"&gt;.Net ecosystem&lt;/a&gt; and its many UI frameworks), but it can also be so entrenched that, at some point, shedding it will &lt;a href="https://make.wordpress.org/core/2020/11/23/wordpress-and-php-8-0/"&gt;break half of the internet&lt;/a&gt; by making the language incompatible with WordPress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With .Net Core 3 and the latest release, .Net 5, Microsoft shows that it cares about its .Net platform, and that the platform is perfectly capable of competing with well-known languages for web APIs like PHP, Python and Java. PHP 8 proved that the team around PHP is willing to improve the language in significant ways and to cut off old tails. I think PHP 8 is amazing and does its job perfectly. However, I think C# just provides the better language design for today’s API-based world.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This blog post was originally posted on my company’s blog at &lt;a href="https://en.globalelements.ch/2021/01/08/php-8-is-amazing-i-still-prefer-c/"&gt;https://en.globalelements.ch/2021/01/08/php-8-is-amazing-i-still-prefer-c/&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>c</category>
      <category>php</category>
      <category>opinion</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Backing up MySQL in AWS using a read-replica</title>
      <dc:creator>Christian</dc:creator>
      <pubDate>Sat, 11 Apr 2020 16:35:55 +0000</pubDate>
      <link>https://dev.to/klauenboesch/backing-up-mysql-in-aws-using-a-read-replica-3ia6</link>
      <guid>https://dev.to/klauenboesch/backing-up-mysql-in-aws-using-a-read-replica-3ia6</guid>
      <description>&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;We have several databases on AWS using RDS (Relational Database Service) and would like to back them up automatically. AWS provides „snapshots“ for that, but they have several limits (for example, we can’t use them to restore a local database, and it’s a bit harder to manage hundreds of them). Backing up a database is basically a simple task. However, we want to back up the database with as little impact as possible on the users (customers) of this database, so we can’t just connect to it and run a backup command.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;We use the AWS CLI and IAM to create a read replica of an existing RDS DB instance (in this case MySQL). We use IAM to authenticate against this RDS instance and then back up the databases from a pre-defined list. After the backup completes, we remove the read replica. The backup is compressed as .tar.gz. The whole process is automated using bash.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;For this to succeed, you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an RDS instance with IAM authentication enabled&lt;/li&gt;
&lt;li&gt;a bash terminal (we use a lot of Ubuntu/Linux these days)&lt;/li&gt;
&lt;li&gt;the AWS CLI (v1 is enough)&lt;/li&gt;
&lt;li&gt;jq for JSON parsing (see &lt;a href="https://stedolan.github.io/jq/"&gt;https://stedolan.github.io/jq/&lt;/a&gt; )&lt;/li&gt;
&lt;li&gt;a database user on the RDS instance that can authenticate with the AWSAuthenticationPlugin (see &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;an IAM role that is able to create a read replica of the database you want to backup and that can connect to the read replica using IAM (see &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html&lt;/a&gt; )&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 0: Environment
&lt;/h3&gt;

&lt;p&gt;I’ll use several environment variables throughout this tutorial. I’ll assume you have the AWS CLI configured and that the IAM user has enough access rights to create a read replica and delete it again afterwards.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The unique identifier of the RDS read replica
RDS_REPLICA_IDENTIFIER=
# The unique identifier of the RDS instance to back up
RDS_SOURCE_IDENTIFIER=
# The database username the IAM user is allowed to use; I'll use "backup" here
DB_USER=
# Will hold the describe-db-instances response for the replica (see step 3)
ENDPOINT=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: Create a read replica
&lt;/h3&gt;

&lt;p&gt;First, we create a read replica using the AWS CLI. We make the replica publicly accessible so that we can reach it from our computer. Use $RDS_REPLICA_IDENTIFIER for the unique name of the replica and $RDS_SOURCE_IDENTIFIER for the unique name of the instance you want to back up. We’ll add a tag here, but that’s not necessary. You might also consider upsizing/downsizing your read replica depending on the size of your master instance (a powerful replica might not cost that much for the short time it runs, but can improve backup speed).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rds create-db-instance-read-replica \
        --db-instance-identifier $RDS_REPLICA_IDENTIFIER \
        --enable-iam-database-authentication \
        --source-db-instance-identifier $RDS_SOURCE_IDENTIFIER \
        --publicly-accessible \
        --tags Key=purpose,Value=backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Wait until the replica is ready
&lt;/h3&gt;

&lt;p&gt;It will take some time until the read replica is ready and can be accessed using the MySQL CLI. So let’s use some bash magic and wait until the read replica reaches the „available“ state. We use jq here to extract the status from the JSON response of the AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STATUS=creating
while [[ "$STATUS" != "available" ]]
do
        sleep 30
        echo " Checking..."
        STATUS=$( aws rds describe-db-instances \
                --db-instance-identifier $RDS_REPLICA_IDENTIFIER | jq -r ".DBInstances[0].DBInstanceStatus" )
        echo " -&amp;gt; Status is ${STATUS}"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If that went through smoothly (probably after 5 to 10 minutes), let’s move forward and find the information needed to connect to our instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Describe the endpoint
&lt;/h3&gt;

&lt;p&gt;We will need at least a hostname, a username, a password and a port to connect to the MySQL read replica. So let’s gather that information; I’ll again use jq to extract it from the JSON response of the AWS CLI. The last line generates a temporary access token for our IAM user so that we can back up the database. Be careful: the token expires after 900 s (15 min), so if your backups take longer, make sure you can renew the token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENDPOINT=$( aws rds describe-db-instances \
         --db-instance-identifier $RDS_REPLICA_IDENTIFIER )
DB_HOST=$( echo $ENDPOINT | jq -r ".DBInstances[0].Endpoint.Address" )
DB_PORT=$( echo $ENDPOINT | jq -r ".DBInstances[0].Endpoint.Port" )
DB_PASS=$( aws rds generate-db-auth-token \
   --hostname ${DB_HOST} \
   --port ${DB_PORT} \
   --region eu-central-1 \
   --username ${DB_USER} )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Doing the backup
&lt;/h3&gt;

&lt;p&gt;Now we are able to back up our databases. Let’s again use some bash magic to dump not just one but several databases using mysqldump.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES=(
    # add/remove database names as you wish
    "db1"
    "db2"
)
for DATABASE in "${DATABASES[@]}"
do
    echo "-- Exporting database &amp;lt;${DATABASE}&amp;gt;"
    mysqldump \
            --enable-cleartext-plugin \
            -h${DB_HOST} \
            -u${DB_USER} \
            -p${DB_PASS} \
            --port=${DB_PORT} \
            --databases $DATABASE | \
            sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' &amp;gt; ${RDS_SOURCE_IDENTIFIER}_${DATABASE}.sql
    echo " -&amp;gt; Dump ${DATABASE} complete, wait a few seconds..."
    # Experience tells, let's wait a few secs or you'll get errors
    sleep 3
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script will dump the databases from our RDS instance one by one and store them as SQL files in the current directory. Depending on your use case you might want to &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html"&gt;configure mysqldump with one of its many options&lt;/a&gt;. There are quite a few important options there to make your backups fast, reliable and complete.&lt;/p&gt;

&lt;p&gt;Also, don’t forget to compress the SQL files and move them somewhere safe. We usually upload them to S3/Glacier, a very cost-effective storage option. As SQL files are plain text, they compress very well, so storing them will not cost much.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -czvf "${RDS_SOURCE_IDENTIFIER}.tar.gz" ./*.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
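&lt;p&gt;A possible follow-up sketch: date-stamp the archive, verify its integrity and (optionally) ship it to S3. The bucket name and storage class below are assumptions, not part of the original setup.&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo input so this sketch runs standalone; in the real flow the .sql
# files come from the mysqldump loop above.
RDS_SOURCE_IDENTIFIER="${RDS_SOURCE_IDENTIFIER:-demo-instance}"
echo "-- demo dump" > "${RDS_SOURCE_IDENTIFIER}_db1.sql"

# Date-stamped archive, integrity-checked before shipping it off.
ARCHIVE="${RDS_SOURCE_IDENTIFIER}_$(date +%Y%m%d).tar.gz"
tar -czf "${ARCHIVE}" ./*.sql
gzip -t "${ARCHIVE}" && echo "archive OK"

# Hypothetical bucket/prefix; the GLACIER storage class keeps long-term storage cheap.
# aws s3 cp "${ARCHIVE}" "s3://example-backup-bucket/rds/" --storage-class GLACIER
```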



&lt;h3&gt;
  
  
  Step 5: Removing the read replica
&lt;/h3&gt;

&lt;p&gt;Last but not least, let’s remove the read replica so it won’t incur any more costs on our AWS account. We remove the read replica with the same command as we would use for a regular RDS instance, so be careful here and make sure your IAM user does not have the rights to remove the master, only the replica. We use the same logic as above to wait for the deletion to finish. Make sure to skip the final snapshot, as final snapshots are not available for read replicas (but the API expects you to provide the option).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rds delete-db-instance \
        --db-instance-identifier $RDS_REPLICA_IDENTIFIER \
        --skip-final-snapshot

STATUS=deleting
while [ "$STATUS" == "deleting" ];
do
        sleep 30
        echo "   Checking..."
        STATUS=$( ( aws rds describe-db-instances \
                --db-instance-identifier $RDS_REPLICA_IDENTIFIER || \
                echo '{"DBInstances":[{"DBInstanceStatus":"gone"}]}' ) | \
                jq -r ".DBInstances[0].DBInstanceStatus" )
        echo "   -&amp;gt; Status is ${STATUS}"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AWS CLI will fail as soon as the instance is gone, so we trick the script by returning some dummy JSON that indicates the instance is gone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Backup is an important task and should be done properly. I demonstrated how to automate backing up an AWS RDS (MySQL) instance using the AWS CLI and some bash. I used a read replica to keep the impact on the production system low, as the backup happens from a „detached“ database. This is a very cost-effective and performant option.&lt;/p&gt;

&lt;p&gt;Going further, I would recommend detaching the read replica (making it independent from the master) and adding some better error handling. I also did not demonstrate where to store the backup files (we usually move them to S3/Glacier), as this depends on your needs, your storage options and cost.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>blog</category>
      <category>devops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Why use Infrastructure as a Code</title>
      <dc:creator>Christian</dc:creator>
      <pubDate>Mon, 18 Mar 2019 00:00:34 +0000</pubDate>
      <link>https://dev.to/klauenboesch/why-use-infrastructure-as-a-code-3793</link>
      <guid>https://dev.to/klauenboesch/why-use-infrastructure-as-a-code-3793</guid>
      <description>&lt;p&gt;I believe that Infrastructure-as-a-Code (also known as IaaC) becomes a central aspect of future infrastructure management. In this article I will outline some advantages and disadvantages of IaaC.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Advantages of IaaC&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;First, I believe that documentation which is asynchronous to the task itself tends to get outdated and ignored. If your job requires you to document a change in a wiki page (or something similar), you might easily forget to track that change. This results in differences between the effective and the assumed implementation, which can lead to service failures, security problems or compliance issues.&lt;/p&gt;

&lt;p&gt;Second, I believe in automation. Automating regular tasks makes executing them easier, safer and more resilient. Automation requires a structured definition of the required output, and with IaaC that output is defined in some form of code. Automation makes it easier to compare a desired state with an effective state, and computers tend to be less error-prone than humans.&lt;/p&gt;

&lt;p&gt;And third, I believe in disaster recovery. A backup is useless if one has never tried to restore it. The same applies to infrastructure. Knowing that a brand-new and up-to-date infrastructure is just a click away, one can have disaster recovery in place at no additional cost.&lt;/p&gt;

&lt;p&gt;But there are also some counterpoints to IaaC.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Disadvantages of IaaC&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The first point is quite obvious: one size does not fit all. With a heterogeneous IT landscape, there might not be a tool available fitting all the purposes and hardware sets one might have. Most IaaC toolsets tend to support „the cloud“ only, so not having your application in the cloud might break the idea of using IaaC for everything.&lt;/p&gt;

&lt;p&gt;Second, merging with existing processes is not easy. Assuming you already have all your applications in the cloud, extracting the information from your provider and generating the sources required for IaaC is something almost all toolsets can do for you. What none of the toolsets can do is fit easily into established processes and procedures. These will not merely possibly require some adjustment – they will require it for sure. And having to automate a complex process is not going to be easy.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Providers&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;There are several providers and toolsets for IaaC. We at Global Elements believe in the strengths of &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;, one of the earliest providers of IaaC. With Terraform, you write code in a domain-specific language that is very close to JSON.&lt;/p&gt;
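&lt;p&gt;To give an impression of that language, here is a minimal, hypothetical Terraform definition – the resource names, bucket name and region below are placeholders:&lt;/p&gt;

```hcl
# Hypothetical example: a single S3 bucket managed through Terraform's HCL.
provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "backups" {
  bucket = "example-backup-bucket"

  tags = {
    purpose = "backup"
  }
}
```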

&lt;p&gt;Like virtually all IaaC providers, Terraform uses something called „state“. The state describes the last applied version of the infrastructure definition. It is used to compare the last applied state, the desired state and the effective state. This ensures that manual changes are detected (and then overridden) when the desired state is applied to the infrastructure, which in turn requires engineers to always write their changes into the Terraform definition.&lt;/p&gt;

&lt;p&gt;Terraform has support for almost all of the well-known cloud providers, including Amazon, Microsoft, Google, Cloudflare, DigitalOcean and &lt;a href="https://www.terraform.io/docs/providers/"&gt;many more&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Summary
&lt;/h4&gt;

&lt;p&gt;By using Infrastructure-as-a-Code, you receive a structured and automatically documented definition of your current infrastructure. You can detect (and override) manual changes by applying the desired state to the current state, and by this process you make sure that everyone is forced to use the defined IaaC toolset. IaaC is nothing new, but it is still not widely adopted. There are some disadvantages when you have a hybrid infrastructure, but IaaC toolsets still provide more advantages than drawbacks.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>terraform</category>
      <category>iaac</category>
      <category>aws</category>
    </item>
    <item>
      <title>An Introduction to Graylog</title>
      <dc:creator>Christian</dc:creator>
      <pubDate>Mon, 07 Jan 2019 20:50:08 +0000</pubDate>
      <link>https://dev.to/klauenboesch/an-introduction-to-graylog-79o</link>
      <guid>https://dev.to/klauenboesch/an-introduction-to-graylog-79o</guid>
      <description>&lt;p&gt;Graylog, recently released in &lt;a href="https://www.graylog.org/post/announcing-graylog-v2-5" rel="noopener noreferrer"&gt;version 2.5&lt;/a&gt;, is an alternative to the well-known ELK stack (Elasticsearch, Logstash, Kibana). In comparison to the ELK-stack, Graylog uses MongoDB as a storage backend for settings and authentication, and leverages Elasticsearch as a document store.&lt;/p&gt;

&lt;p&gt;This post is going to be a part of a series that will explore Graylog in detail. Stay tuned!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh0halczqtihv76wn4sx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh0halczqtihv76wn4sx.png" width="300" height="203"&gt;&lt;/a&gt;The sample dashboard as shown in the graylog documentation.&lt;/p&gt;

&lt;p&gt;If you’re looking for an easy-to-use application that is still quite powerful and customizable – and, on top of that, &lt;a href="https://github.com/Graylog2/graylog2-server" rel="noopener noreferrer"&gt;Open Source&lt;/a&gt; – Graylog might be your solution. Additionally, compared to the „classic“ ELK stack, Graylog provides a fully-fledged authentication backend and can integrate with any LDAP directory (for example, Active Directory).&lt;/p&gt;

&lt;p&gt;The key concept of Graylog is the input, which is nothing more than a definition of „how to receive messages“. Graylog supports the well-known Syslog format as well as the GELF format, a JSON definition maintained by Graylog itself. GELF is supported over both UDP and TCP, which makes Graylog quite flexible – delivering log messages over the internet is not an issue at all, as the TCP connection supports TLS for encrypted transfer. Graylog can also easily be configured to act as a relay and forward all messages (or only those matching a pattern) to another instance.&lt;/p&gt;

&lt;p&gt;Inputs are routed into streams, which represent collections of messages. Streams can be configured to be filled with messages matching a pattern (e.g. a regular expression). If you ever need to extract information from a log message, extractors come to help. Extractors allow you to, well, extract data from a message by applying regular expressions and then convert the data into various formats, like dates or IP addresses.&lt;/p&gt;

&lt;p&gt;If that is not enough, Graylog provides a concept called pipelines. Pipelines basically allow you to „code“ a custom, complex process for how an incoming log message is handled. This can include modifying and routing a message. A classic example would be a message that is routed into a stream based on an IP address, but where the IP address must be removed from the message before it is stored (e.g. for GDPR compliance).&lt;/p&gt;
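&lt;p&gt;As an impression of Graylog’s pipeline rule language, the GDPR example above might look roughly like this (the field name src_ip and the rule title are assumptions for illustration):&lt;/p&gt;

```
rule "strip client ip before storage"
when
  has_field("src_ip")
then
  remove_field("src_ip");
end
```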

&lt;p&gt;Having implemented Graylog in multiple projects, we would &lt;a href="https://globalelements.ch/kontakt/" rel="noopener noreferrer"&gt;love to assist you&lt;/a&gt; on your next project requiring a scalable, centralized and powerful logging application.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://globalelements.ch/2019/01/07/an-introduction-to-graylog/" rel="noopener noreferrer"&gt;An Introduction to Graylog&lt;/a&gt; first appeared on &lt;a href="https://globalelements.ch" rel="noopener noreferrer"&gt;Global Elements GmbH&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>blog</category>
      <category>devops</category>
      <category>english</category>
      <category>graylog</category>
    </item>
  </channel>
</rss>
