<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ry</title>
    <description>The latest articles on DEV Community by Ry (@ryboflavin42).</description>
    <link>https://dev.to/ryboflavin42</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F180222%2Fda8d5ccd-6518-4de9-bf11-72b4c4fe23ad.jpg</url>
      <title>DEV Community: Ry</title>
      <link>https://dev.to/ryboflavin42</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ryboflavin42"/>
    <language>en</language>
    <item>
      <title>Testing Angular 10 Interceptors with Spectator and Jest</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Wed, 23 Sep 2020 13:42:28 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/testing-angular-10-interceptors-with-spectator-and-jest-3f4i</link>
      <guid>https://dev.to/ryboflavin42/testing-angular-10-interceptors-with-spectator-and-jest-3f4i</guid>
      <description>&lt;p&gt;There are many reasons to switch from Karma and Jasmine to Jest when Testing Angular:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jest runs faster than Karma and Jasmine&lt;/li&gt;
&lt;li&gt;Jest supports snapshot testing&lt;/li&gt;
&lt;li&gt;Jest runs tests in parallel&lt;/li&gt;
&lt;li&gt;Jest does not require a browser for testing&lt;/li&gt;
&lt;li&gt;many more...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, what's missing are examples of how to write Angular unit tests in Jest, particularly testing Angular HTTP Interceptors.&lt;/p&gt;

&lt;h2&gt;Setting up Angular, Spectator, and Jest&lt;/h2&gt;

&lt;p&gt;For the purpose of this article, we will assume that you have an Angular project already set up with Spectator and Jest. If not, I will provide you with some links on how to set up Angular with these libraries.&lt;/p&gt;

&lt;h3&gt;Jest&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p7nrxVEb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/jest.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p7nrxVEb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/jest.jpeg" alt="" class="wp-image-5275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the focus of this post is NOT on how to convert Angular from Karma and Jasmine to Jest, below is a list of resources on how to do this conversion yourself. You can also use my &lt;a href="https://github.com/djchi82/angular-jest-spectator-interceptor-test"&gt;GitHub project&lt;/a&gt; as a template. &lt;em&gt;I should mention that Jest can be a bit quirky if you are used to other testing frameworks, but these quirks are worth it.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.amadousall.com/how-to-set-up-angular-unit-testing-with-jest/"&gt;How to Set Up Angular Unit Testing with Jest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://itnext.io/testing-angular-applications-with-jest-and-spectator-c05991579807"&gt;Testing Angular applications with Jest and Spectator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://itnext.io/how-i-do-configure-jest-to-test-my-angular-8-project-2bd84a21d725"&gt;How I do configure Jest to test my Angular 8 Project&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thymikee/jest-preset-angular"&gt;https://github.com/thymikee/jest-preset-angular&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@ole.ersoy/unit-testing-angular-with-jest-b65888ff33f6"&gt;Unit Testing Angular with Jest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/angular-in-depth/migrate-your-angular-library-to-jest-faba9579591a"&gt;Migrate your Angular library to Jest&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Spectator&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--egT-hKrn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/1_vwethP80ObAvhrW29WaGEw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--egT-hKrn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/1_vwethP80ObAvhrW29WaGEw.png" alt="" class="wp-image-5276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spectator is an amazing library that reduces the wordy boilerplate code for setting up Angular unit tests to only a few lines. It has a few quirks, but they are absolutely worth it for the value it provides.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ngneat/spectator"&gt;https://github.com/ngneat/spectator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://netbasal.com/spectator-v4-a-powerful-tool-to-simplify-your-angular-tests-bd65a0bf317e"&gt;Spectator V4: A Powerful Tool to Simplify Your Angular Tests!&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://engineering.datorama.com/spectator-for-angular-or-how-i-learned-to-stop-worrying-and-love-the-spec-2aa8521c8488"&gt;Spectator for Angular or: How I Learned to Stop Worrying and Love the Spec&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;A Couple of Things&lt;/h3&gt;

&lt;p&gt;The major thing to keep in mind when using Spectator and Jest together is that Spectator imports should come from the &lt;code&gt;@ngneat/spectator/jest&lt;/code&gt; package.&lt;/p&gt;

&lt;pre class="code"&gt;import {createHttpFactory, HttpMethod, SpectatorHttp} from '@ngneat/spectator/jest';&lt;/pre&gt;

&lt;p&gt;Below is the final devDependencies section of &lt;code&gt;package.json&lt;/code&gt;.&lt;/p&gt;

&lt;pre class="code"&gt;{&lt;br&gt;
  …&lt;br&gt;
   "devDependencies": {&lt;br&gt;
    "@angular-devkit/build-angular": "~0.901.7",&lt;br&gt;
    "@angular/cli": "~9.1.7",&lt;br&gt;
    "@angular/compiler-cli": "~9.1.9",&lt;br&gt;
    "@ngneat/spectator": "^5.13.3",&lt;br&gt;
    "@types/jest": "^26.0.13",&lt;br&gt;
    "@types/node": "^12.11.1",&lt;br&gt;
    "codelyzer": "^5.1.2",&lt;br&gt;
    "jest": "^26.4.2",&lt;br&gt;
    "jest-preset-angular": "^8.3.1",&lt;br&gt;
    "protractor": "~7.0.0",&lt;br&gt;
    "ts-node": "~8.3.0",&lt;br&gt;
    "tslint": "~6.1.0",&lt;br&gt;
    "typescript": "~3.8.3"&lt;br&gt;
  }&lt;br&gt;
}&lt;/pre&gt;



&lt;h2&gt;Angular 10 Interceptor Unit Test&lt;/h2&gt;



&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IFwr9NGM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/1_69L1XNoNowh5KuJMbGMPlg.png" alt="" class="wp-image-5277"&gt;



&lt;p&gt;For this example, we will be testing an HTTP interceptor that logs HttpErrorResponses to the console.&lt;/p&gt;



&lt;pre class="wp-block-preformatted code"&gt;
import { Injectable } from '@angular/core';
import {
  HttpRequest,
  HttpHandler,
  HttpEvent,
  HttpInterceptor, HttpErrorResponse, HttpResponse
} from '@angular/common/http';
import { Observable, throwError} from 'rxjs';
import {catchError, tap} from 'rxjs/operators';

/**
 * Intercepts HttpRequests and logs any http responses of 3xx+
 * In the future we can make this a conditional retry based on the status code.
 *
 */
@Injectable({ providedIn: 'root' })
export class HttpErrorInterceptor implements HttpInterceptor {

  constructor() {}

  intercept(req: HttpRequest&amp;lt;any&amp;gt;, next: HttpHandler): Observable&amp;lt;HttpEvent&amp;lt;any&amp;gt;&amp;gt; {
    return next.handle(req).pipe(tap(() =&amp;gt; {}),
      catchError((error) =&amp;gt; {
        if (error instanceof HttpErrorResponse) {
          if (error.error &amp;amp;&amp;amp; error.error.message) {
            console.log('status: ' + error.status + '\nmessage: ' + error.error.message);
          } else {
            console.log(error);
          }
        }
        return throwError(error);
      })
    );
  }
}
&lt;/pre&gt;

&lt;p&gt;This code intercepts an HttpRequest from the application and logs the response to the console when an HttpErrorResponse is returned. The HttpHandler executes the request via &lt;code&gt;next.handle&lt;/code&gt;. We then create a pipe in order to &lt;code&gt;tap&lt;/code&gt; the response for processing. &lt;em&gt;Note: tap is an &lt;a href="https://rxjs-dev.firebaseapp.com/api/operators/tap"&gt;RxJS operator&lt;/a&gt; that allows us to inspect the data without changing the actual data in the pipe.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this case, we catch the HttpErrorResponse, allowing any non-error HttpResponse to pass through. Once the response is caught, we can inspect the error message and log it to the console. &lt;em&gt;Note that in this case we are expecting a custom body in the HttpResponse.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;The Unit Test&lt;/h3&gt;

&lt;p&gt;In this unit test, we will check that a response with a 2xx status passes through and that an error response is thrown. &lt;em&gt;For more advanced testing, the console could be mocked so we could check that console.log has been called; that is out of scope for this article.&lt;/em&gt;&lt;/p&gt;

&lt;pre class="wp-block-preformatted code"&gt;import {HttpErrorInterceptor} from './http-error.interceptor';&lt;br&gt; import {createHttpFactory, HttpMethod, SpectatorHttp} from '@ngneat/spectator/jest';&lt;br&gt; import {async} from '@angular/core/testing';&lt;br&gt; import {of, throwError} from 'rxjs';&lt;br&gt; import {HttpErrorResponse, HttpRequest, HttpResponse} from '@angular/common/http';&lt;br&gt; describe('HttpErrorInterceptor', () =&amp;gt; {&lt;br&gt; let spectator: SpectatorHttp&amp;lt;HttpErrorInterceptor&amp;gt;;&lt;br&gt;   const createHttp = createHttpFactory({&lt;br&gt;     service: HttpErrorInterceptor&lt;br&gt;   });&lt;br&gt; beforeEach(() =&amp;gt; {&lt;br&gt;     spectator = createHttp();&lt;br&gt;   });&lt;br&gt; test('Http error', async(() =&amp;gt; {&lt;br&gt;     const mockHandler = {&lt;br&gt;       handle: jest.fn(() =&amp;gt; throwError(&lt;br&gt;         new HttpErrorResponse({status: 500, error: {message: 'This is an error'}})))&lt;br&gt;     };&lt;br&gt;     spectator.service.intercept(new HttpRequest&amp;lt;unknown&amp;gt;(HttpMethod.GET, '/thing'), mockHandler)&lt;br&gt;       .subscribe((response) =&amp;gt; {&lt;br&gt;         fail('Expected error');&lt;br&gt;       }, (error =&amp;gt; {&lt;br&gt;         expect(error).toBeTruthy();&lt;br&gt;       }));&lt;br&gt; }));&lt;br&gt; test('Http success', async(() =&amp;gt; {&lt;br&gt;     const mockHandler = {&lt;br&gt;       handle: jest.fn(() =&amp;gt; of(new HttpResponse({status: 500})))&lt;br&gt;     };&lt;br&gt;     spectator.service.intercept(new HttpRequest&amp;lt;unknown&amp;gt;(HttpMethod.GET, '/thing'), mockHandler)&lt;br&gt;       .subscribe((response) =&amp;gt; {&lt;br&gt;         expect(response).toBeTruthy();&lt;br&gt;       }, (error =&amp;gt; {&lt;br&gt;         fail('Expected Successful');&lt;br&gt;       }));&lt;br&gt;   }));&lt;br&gt; });&lt;/pre&gt;

&lt;p&gt;The key here is 1) how the handler is mocked and 2) how we test the interceptor's response.&lt;/p&gt;

&lt;h4&gt;Mocking the HttpHandler&lt;/h4&gt;

&lt;p&gt;The first confusing thing when testing the interceptor is how to mock the HttpHandler. Since Jasmine is removed, &lt;code&gt;mock&lt;/code&gt; and &lt;code&gt;spyOn&lt;/code&gt; are off the table. You may notice that &lt;code&gt;jest.mock&lt;/code&gt; exists, but it doesn't function as expected. This is one of those little Jest quirks I mentioned; &lt;code&gt;&lt;a href="https://jestjs.io/docs/en/mock-functions"&gt;jest.mock&lt;/a&gt;&lt;/code&gt; is used to mock a module, not an object. In this case, we will build an object that looks like the HttpHandler interface and mock the method it declares. Below is the HttpHandler interface. As you can see, it has only one method.&lt;/p&gt;

&lt;pre class="wp-block-preformatted code"&gt;export declare abstract class HttpHandler {&lt;br&gt;     abstract handle(req: HttpRequest): Observable&amp;lt;HttpEvent&amp;lt;any&amp;gt;&amp;gt;;&lt;br&gt; }&lt;/pre&gt;

&lt;p&gt;This is easily mocked with &lt;code&gt;jest.fn()&lt;/code&gt;:&lt;/p&gt;

&lt;pre class="wp-block-preformatted code"&gt;const mockHandler = {&lt;br&gt;       handle: jest.fn(() =&amp;gt; throwError(&lt;br&gt;         new HttpErrorResponse({status: 500, error: {message: 'This is an error'}})))&lt;br&gt;     };&lt;/pre&gt;

&lt;p&gt;In the error case, we will instruct the method to throw an HttpErrorResponse and create a custom object for the response body/error.&lt;/p&gt;

&lt;p&gt;In the happy path case the mock looks like the following:&lt;/p&gt;

&lt;pre class="wp-block-preformatted code"&gt;const mockHandler = {&lt;br&gt;       handle: jest.fn(() =&amp;gt; of(new HttpResponse({status: 200})))&lt;br&gt;     };&lt;/pre&gt;

&lt;h4&gt;Testing the Interceptor's Response&lt;/h4&gt;

&lt;p&gt;Now that we have the HttpHandler mocked, how do we actually test that the interceptor does anything? The key is to declare an input parameter on the &lt;code&gt;.subscribe&lt;/code&gt; lambda and assert on it.&lt;/p&gt;

&lt;pre class="wp-block-preformatted code"&gt;spectator.service.intercept(new HttpRequest&amp;lt;unknownn&amp;gt;(HttpMethod.GET, '/thing'), mockHandler)&lt;br&gt;       .subscribe((response) =&amp;gt; {&lt;br&gt;         expect(response).toBeTruthy();&lt;br&gt;       }, (error =&amp;gt; {&lt;br&gt;         fail('Expected Successful');&lt;br&gt;       }));&lt;/pre&gt;

&lt;p&gt;In this case, we are checking that the interceptor passed the response through as normal and did not throw an error.&lt;/p&gt;

&lt;h4&gt;Spectator and Unit Testing Fiddly Bits&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A8_JvqSE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/sirterry-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A8_JvqSE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/09/sirterry-1.jpg" alt="" class="wp-image-5278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some might note that the code uses Spectator's &lt;code&gt;createHttpFactory&lt;/code&gt; instead of &lt;code&gt;createServiceFactory&lt;/code&gt;. In this scenario, both work exactly the same. I'm using &lt;code&gt;createHttpFactory&lt;/code&gt; in anticipation of adding an HTTP retry.&lt;/p&gt;

&lt;p&gt;It is also important to note that this interceptor doesn't actually modify the response, so these tests are a bit weak. They are meant as a basic framework to get you started with testing interceptors. If you have an interceptor that modifies the HttpResponse using &lt;code&gt;map&lt;/code&gt;, you will be able to specify the input using the mocked HttpHandler and test the output in the subscribe portion of the interceptor call.&lt;/p&gt;
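&lt;p&gt;&lt;em&gt;To make that last point concrete, here is a small sketch of the map scenario. It is deliberately framework-free TypeScript (no Angular or RxJS imports), with hypothetical stand-in names, so it only illustrates the shape of mocking the handler and asserting on the mapped output in the subscribe callback.&lt;/em&gt;&lt;/p&gt;

```typescript
// Hypothetical, dependency-free stand-ins for the Angular types used above.
interface HttpResponseLike { status: number; body: string; }
// A tiny synchronous "observable": a function that accepts a subscriber callback.
type TinyObservable = (subscriber: (res: HttpResponseLike) => void) => void;
interface HandlerLike { handle: () => TinyObservable; }

// The interceptor-under-test: rewrites the body, the way an Angular
// interceptor would use the rxjs map operator inside next.handle(req).pipe(...).
function interceptUppercase(next: HandlerLike): TinyObservable {
  return (subscriber) =>
    next.handle()((res) =>
      subscriber({ status: res.status, body: res.body.toUpperCase() }));
}

// The mock handler plays the jest.fn() role here: it emits a canned response,
// which is the "input" we control for the test.
const mockHandler: HandlerLike = {
  handle: () => (subscriber) => subscriber({ status: 200, body: "hello" }),
};

// "Subscribe" with an input parameter and capture what the interceptor emitted.
const received: HttpResponseLike[] = [];
interceptUppercase(mockHandler)((res) => received.push(res));
// received[0].body is now "HELLO" while the status passes through unchanged.
```

&lt;p&gt;&lt;em&gt;In a real Jest test the capture-and-assert step would instead be an &lt;code&gt;expect(...)&lt;/code&gt; inside the subscribe lambda, exactly as in the success test above.&lt;/em&gt;&lt;/p&gt;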

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Using Spectator and Jest with Angular 10 is a very powerful combination. The trick is to either have a full understanding of Jest and Spectator or to have a ready source of examples to draw from. I hope this article gives you a rough understanding of how to use Jest in concert with Spectator to test Angular HttpInterceptors. The keys here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using &lt;code&gt;jest.fn()&lt;/code&gt; to mock the HttpHandler's &lt;code&gt;handle&lt;/code&gt; function&lt;/li&gt;
&lt;li&gt;Adding the input variable to the subscribe lambda for testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Github source: &lt;a href="https://github.com/djchi82/angular-jest-spectator-interceptor-test"&gt;https://github.com/djchi82/angular-jest-spectator-interceptor-test&lt;/a&gt;&lt;/p&gt;

</description>
      <category>angular</category>
      <category>javascript</category>
      <category>testing</category>
    </item>
    <item>
      <title>Hacking your To-Do List for More Productivity and Fulfillment</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Wed, 27 May 2020 13:17:11 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/hacking-your-to-do-list-for-more-productivity-and-fulfillment-4h8g</link>
      <guid>https://dev.to/ryboflavin42/hacking-your-to-do-list-for-more-productivity-and-fulfillment-4h8g</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_6fSz8Yo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/businessman-3492380_1920-1024x466.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_6fSz8Yo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/businessman-3492380_1920-1024x466.jpg" alt="" class="wp-image-5184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the previous article of this two-part series, &lt;a href="https://tenmilesquare.com/hacking-your-life-with-task-types/"&gt;Hacking Your Life with Task Types&lt;/a&gt;, I discussed the four quadrants of the Eisenhower Matrix and how to use them to determine a task's importance and urgency. In this article, I will introduce you to a few task management strategies that I've learned and implemented to ensure I'm doing what is most important to me. These strategies are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture Sheet&lt;/li&gt;
&lt;li&gt;Chunking&lt;/li&gt;
&lt;li&gt;Life Categories&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Capture Sheet&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4oWjEjjk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/list-372766_1920-1024x768.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4oWjEjjk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/list-372766_1920-1024x768.jpg" alt="" class="wp-image-5185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The capture sheet is much like a to-do list but expands on it, allowing it to be used as a base for planning. Most to-do lists are short-term or project-focused: things that need to be done soon, often filled with urgent tasks. A capture sheet contains all tasks, plans, and ideas across all your projects and timelines.&lt;/p&gt;

&lt;h3&gt;Why is having a capture sheet important? &lt;/h3&gt;

&lt;p&gt;Capture sheets keep us from forgetting tasks that we might think are important to accomplish in the future. No matter how good a memory we think we have, we can only juggle so many thoughts at once. Keeping everything in our heads might cause us to forget an important but less urgent task.&lt;/p&gt;

&lt;p&gt;Keeping everything in our heads also puts unneeded stress on our brains, which uses energy and impacts our capacity to think clearly and concentrate. The first time I made a capture sheet, a sense of euphoria came over me. It was like a weight was lifted and I could be more present with the immediate tasks.&lt;/p&gt;

&lt;p&gt;Capture sheets give us a platform for organizing, planning, and chunking our tasks.&lt;/p&gt;

&lt;h3&gt;What is a capture sheet?&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A dumping ground&lt;/strong&gt;: There are no bad thoughts or tasks on a capture sheet. All ideas are valid because you thought of them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impermanent&lt;/strong&gt;: Because the capture sheet is an idea dumping ground, it can be thrown away or altered. Expect to have many iterations of capture sheets, either weekly, monthly, or even versions of the same sheet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy to access&lt;/strong&gt;: The purpose of the capture sheet is to get the idea or task out of your head as quickly as possible. Phone apps are generally a go-to for easy access, but personally I use a journal that I keep close by. A journal lets me easily see many items and write notes, timestamps, levels of importance, and categories. Sometimes I'll use my phone to capture a thought if my journal is not nearby, and I transcribe the idea into my journal when I get back home.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidence booster&lt;/strong&gt;: Keeping old capture sheets and reviewing them over time allows us to see how far we've come. It tracks our thoughts and shows how we've evolved as we worked through important tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Satisfying&lt;/strong&gt;: &lt;em&gt;Let's face it&lt;/em&gt;, physically scratching items off a list is satisfying.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Chunking&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YsEapYUe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/iceland-2111809_1920-1024x681.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YsEapYUe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/iceland-2111809_1920-1024x681.jpg" alt="" class="wp-image-5187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a list of items, we need a way not to feel overwhelmed by all we have to do. Chunking is a great way to handle that overwhelm: it allows us to group tasks into bite-size pieces that we can plan and work with.&lt;/p&gt;

&lt;h3&gt;How to Chunk&lt;/h3&gt;

&lt;p&gt;There are many ways to chunk tasks, and they work differently for different people, situations, and projects. Sometimes chunking is a combination of approaches, and the same task can exist in different chunks. The important thing to remember when chunking is to keep it as simple as possible. Here are a few ways to chunk tasks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project&lt;/strong&gt;: Chunking tasks by project or goal is the most common way to get started. It allows us to see what needs to be done in order to reach an objective. Once tasks are grouped by project, they can be ordered by sequence, importance, or urgency. A trick I employ when organizing a project chunk is to ask myself, "What task can I do that will make all other tasks easier or irrelevant?" This helps you make the most of your precious time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Importance&lt;/strong&gt;: Sometimes chunking tasks by importance is a good way to see which task will be most impactful on your life. This is often a great way to organize tasks across multiple projects in order to plan your immediate schedule.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Urgency&lt;/strong&gt;: This is a consequence-based way to look at tasks. It is also a way to organize tasks across multiple projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Life Categories&lt;/strong&gt;: Life categories are areas of your life that are important to you and typically consist of multiple projects and goals. They can be thought of as a larger chunk of goals. Life categories are discussed later in this article.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Up and Down Chunking&lt;/h3&gt;

&lt;p&gt;Up and down chunking is a way of organizing your tasks to make them detailed enough to accomplish without being overwhelming. After grouping tasks at the project level, you may find that not all the tasks sit at the same logical level. For example, when planning a trip to a foreign country, you may end up with a list of tasks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Buy tickets&lt;/li&gt;
&lt;li&gt;Buy travel clothes&lt;/li&gt;
&lt;li&gt;Get passport renewed&lt;/li&gt;
&lt;li&gt;Research trip&lt;/li&gt;
&lt;li&gt;Buy guidebooks&lt;/li&gt;
&lt;li&gt;Ask friends about the location&lt;/li&gt;
&lt;li&gt;Research airfare&lt;/li&gt;
&lt;li&gt;Plan activities&lt;/li&gt;
&lt;li&gt;Book hotel&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the above list, the tasks sit at different levels of the project. Items 5, 6, 7, and 8 can be seen as sub-tasks of "researching the trip" (item 4). Up-chunking these four items into a larger chunk makes organizing the project easier. If we look at this project in larger chunks, we might end up with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Research Trip &lt;/li&gt;
&lt;li&gt;Book trip &lt;/li&gt;
&lt;li&gt;Prepare for travel &lt;/li&gt;
&lt;li&gt;Travel&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This list of four items is easier to manage over the life of the travel project; four items are less overwhelming than nine or more. Being less overwhelmed helps ensure that we will follow through with the project. As we begin to work through these four larger chunks, we can break each one into smaller chunks to plan over the course of days or weeks.&lt;/p&gt;

&lt;h2&gt;Life Categories&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rZhOx87O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/hands-1838659_1920-1024x685.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rZhOx87O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/05/hands-1838659_1920-1024x685.jpg" alt="" class="wp-image-5186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Life categories is the idea that our lives are made up of many parts, or categories, that need to be managed. Some of these categories, such as finances, career, and family, may be obvious, but often we are not aware of all of our life categories. These unknown categories often take the shape of multiple related goals, and fulfilling those goals brings us peace, joy, accomplishment, and fulfillment. Being able to identify them and plan meaningful (important) activities is the key to mitigating burnout and frustration.&lt;/p&gt;

&lt;p&gt;For example, if you are focusing solely on career and finances and begin to feel burnt out, it might be because you are not honoring your need for relationships or physical health. Often we can be so locked into a particular goal that we deem everything else unimportant. So we grind, obsess, and burn out. If we step back and take care of our other needs, we find that attaining the main goal becomes easier and more enjoyable.&lt;/p&gt;

&lt;p&gt;So how might this manifest? In the previous article, I mentioned making time for important, non-urgent tasks. Sometimes we label a task as unimportant, such as playing a board game or going for a swim, when in actuality it is important to us because it brings us joy and fulfillment.&lt;/p&gt;

&lt;h3&gt;Categories&lt;/h3&gt;

&lt;p&gt;The following list offers some suggested categories to consider. For simplicity's sake, keep the number of categories small and add more as you master them or feel them evolve. Remember to up-chunk them to keep them manageable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finance&lt;/li&gt;
&lt;li&gt;Career/Mission&lt;/li&gt;
&lt;li&gt;Health/Fitness&lt;/li&gt;
&lt;li&gt;Personal Growth&lt;/li&gt;
&lt;li&gt;Recreation/Fun&lt;/li&gt;
&lt;li&gt;Friends/Social&lt;/li&gt;
&lt;li&gt;Family&lt;/li&gt;
&lt;li&gt;Romance&lt;/li&gt;
&lt;li&gt;Contribution/Spiritual&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;How we manage tasks can lead to a more fulfilled and productive life, especially if we take a holistic approach to managing both the tasks and our life. The Eisenhower Matrix, capture sheets, chunking, and life categories are a few great approaches. There are many more strategies, however, and how we use and approach these strategies may change with time.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>leadership</category>
      <category>management</category>
    </item>
    <item>
      <title>Finding Diacritics in a Postgres Table</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Fri, 20 Mar 2020 13:39:05 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/finding-diacritics-in-a-postgres-table-4kcc</link>
      <guid>https://dev.to/ryboflavin42/finding-diacritics-in-a-postgres-table-4kcc</guid>
      <description>&lt;p&gt;Recently I had the task to find all the Diacritics in a database table. I had to find an easy way to select every row that has a diacritic in the name. The solution was surprisingly simple, but a bit unintuitive. In this article, I will provide you with that solution as well as additional information about how Java and Postgres treat diacritics and other characters.&lt;/p&gt;

&lt;h2&gt;What is a diacritic?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vaFjv90T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/diacriti.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vaFjv90T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/diacriti.jpg" alt="" class="wp-image-5037"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A diacritic is a mark or accent on a letter indicating that the letter should be pronounced differently. Diacritics are often referred to as accents. The marked letter and the unmarked letter are logically the same letter in the language. Examples of diacritics are&lt;/p&gt;

&lt;pre class="wp-block-preformatted"&gt;áàâüñçå&lt;/pre&gt;

&lt;p&gt;It is important to understand that some characters we might consider diacritics are not actually diacritics. This is because, in their respective alphabets, they are distinct letters. Examples are&lt;/p&gt;

&lt;pre class="wp-block-preformatted"&gt;Æ æ Ø ø Ł ł&lt;/pre&gt;

&lt;p&gt;When dealing with diacritics, it's important to understand how the system will treat certain characters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NPC2n0ie--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/Homer_British_Museum.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NPC2n0ie--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/Homer_British_Museum.jpg" alt="" class="wp-image-5038" width="419" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How does Java deal with Diacritics?&lt;/h2&gt;

&lt;p&gt;Java handles diacritics in a strict fashion. For these examples, we will use the &lt;code&gt;Normalizer.normalize&lt;/code&gt; utility from the JDK's &lt;code&gt;java.text&lt;/code&gt; package. We will see that "Æ æ Ø ø Ł ł" are not replaced with a similar English analog.&lt;/p&gt;

&lt;p&gt;We will use the following method for testing&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt;&lt;strong&gt;public static &lt;/strong&gt;String stripDiacritics(String input){&lt;br&gt;    &lt;strong&gt;if &lt;/strong&gt;( !Normalizer.&lt;em&gt;isNormalized&lt;/em&gt;(input, Normalizer.Form.&lt;strong&gt;&lt;em&gt;NFD&lt;/em&gt;&lt;/strong&gt;) ) {&lt;br&gt;        &lt;strong&gt;return &lt;/strong&gt;Normalizer.&lt;em&gt;normalize&lt;/em&gt;(input, Normalizer.Form.&lt;strong&gt;&lt;em&gt;NFD&lt;/em&gt;&lt;/strong&gt;).replaceAll(&lt;strong&gt;"&lt;/strong&gt;&lt;strong&gt;\\&lt;/strong&gt;&lt;strong&gt;p{M}+"&lt;/strong&gt;, &lt;strong&gt;""&lt;/strong&gt;);&lt;br&gt;    }&lt;br&gt;    &lt;strong&gt;return &lt;/strong&gt;input;&lt;br&gt;}&lt;/pre&gt;

&lt;p&gt;Testing this method will result in the following&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt; &lt;a class="comment-mentioned-user" href="https://dev.to/test"&gt;@test&lt;/a&gt;
&lt;br&gt;&lt;strong&gt;public void &lt;/strong&gt;testStripDiacritic(){&lt;br&gt;    &lt;em&gt;assertEquals&lt;/em&gt;(&lt;strong&gt;"aabbsser3ssa35ds"&lt;/strong&gt;, DiacriticUtility.&lt;em&gt;stripDiacritics&lt;/em&gt;(&lt;strong&gt;"aabbsser3ssa35ds"&lt;/strong&gt;));&lt;br&gt;    &lt;em&gt;assertEquals&lt;/em&gt;(&lt;strong&gt;"This is a funky String"&lt;/strong&gt;, DiacriticUtility.&lt;em&gt;stripDiacritics&lt;/em&gt;(&lt;strong&gt;"Tĥïŝ ĩš â fůňķŷ Šťŕĭńġ"&lt;/strong&gt;));&lt;br&gt;    &lt;em&gt;assertEquals&lt;/em&gt;(&lt;strong&gt;"Ø or Ł"&lt;/strong&gt;, DiacriticUtility.&lt;em&gt;stripDiacritics&lt;/em&gt;(&lt;strong&gt;"Ø or Ł"&lt;/strong&gt;));&lt;br&gt;    &lt;em&gt;assertEquals&lt;/em&gt;(&lt;strong&gt;"the German umlauts: a, o, u"&lt;/strong&gt;, DiacriticUtility.&lt;em&gt;stripDiacritics&lt;/em&gt;(&lt;strong&gt;"the German umlauts: ä, ö, ü"&lt;/strong&gt;));&lt;br&gt;    &lt;em&gt;assertEquals&lt;/em&gt;(&lt;strong&gt;"manana"&lt;/strong&gt;, DiacriticUtility.&lt;em&gt;stripDiacritics&lt;/em&gt;(&lt;strong&gt;"mañana"&lt;/strong&gt;));&lt;br&gt;    &lt;em&gt;assertEquals&lt;/em&gt;(&lt;strong&gt;"Æ æ Æ æ or Æ æ"&lt;/strong&gt;, DiacriticUtility.&lt;em&gt;stripDiacritics&lt;/em&gt;(&lt;strong&gt;"Ǣ ǣ Ǽ ǽ or Æ æ"&lt;/strong&gt;));&lt;br&gt;}&lt;/pre&gt;

&lt;p&gt;Notice that characters with diacritics, such as "ůňķŷ", are replaced with "unky", but "Æ æ" is not replaced with "AE ae".&lt;/p&gt;
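
&lt;p&gt;If you do need those letters mapped to English analogs, one option is to chain explicit replacements after the normalization step. The sketch below is my own extension of the method above, not part of the JDK; the class name and replacement table are hypothetical:&lt;/p&gt;

```java
import java.text.Normalizer;

public class ExtendedDiacriticUtility {

    // Hypothetical replacement table for letters that NFD cannot decompose,
    // because they are distinct letters rather than base characters with marks
    private static final String[][] REPLACEMENTS = {
            {"Æ", "AE"}, {"æ", "ae"},
            {"Ø", "O"},  {"ø", "o"},
            {"Ł", "L"},  {"ł", "l"}
    };

    public static String stripDiacritics(String input) {
        // Decompose and strip combining marks, as in the article's method
        String result = Normalizer.normalize(input, Normalizer.Form.NFD)
                .replaceAll("\\p{M}+", "");
        // Then apply the explicit per-letter replacements
        for (String[] pair : REPLACEMENTS) {
            result = result.replace(pair[0], pair[1]);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(stripDiacritics("Ǣ ǣ Ǽ ǽ or Æ æ"));
    }
}
```

&lt;p&gt;Because NFD first decomposes "Ǣ" and "Ǽ" into "Æ" plus a combining mark, this version maps the whole sample down to "AE ae AE ae or AE ae".&lt;/p&gt;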

&lt;h2&gt;How does Postgres handle diacritics?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oC_8nap8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/database.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oC_8nap8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/database.jpg" alt="" class="wp-image-5039" width="224" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Postgres treats diacritics the same way Java does but handles the other special characters differently. In Postgres, we use the &lt;em&gt;unaccent&lt;/em&gt; extension.&lt;/p&gt;

&lt;p&gt;To install &lt;em&gt;unaccent&lt;/em&gt;, run the following.&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt;&lt;strong&gt;CREATE EXTENSION &lt;/strong&gt;unaccent;&lt;br&gt;&lt;/pre&gt;

&lt;p&gt;Now let's run a simple query to see how characters are treated.&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt;&lt;strong&gt;select &lt;/strong&gt;&lt;em&gt;unaccent&lt;/em&gt;(&lt;strong&gt;'Tĥïŝ ĩš â fůňķŷ Šťŕĭńġ | łŁØø | ä, ö, ü | Ǣ ǣ Ǽ ǽ æ or Æ æ'&lt;/strong&gt;) &lt;br&gt;&lt;strong&gt;FROM &lt;/strong&gt;blog.diacritic_blog &lt;strong&gt;LIMIT&lt;/strong&gt;1;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt;This is a funky String | lLOo | a, o, u | Ǣ ǣ Ǽ ǽ ae or AE ae&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ACz43wEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/picard-facepalm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ACz43wEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/picard-facepalm.jpg" alt="" class="wp-image-5040" width="470" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the expected diacritics are resolved, but so are "Æ æ ł Ł Ø ø", which Java left untouched. Interestingly enough, Postgres's unaccent does not resolve "Ǣ ǣ Ǽ ǽ" to "Æ æ Æ æ" or to "AE ae AE ae".&lt;/p&gt;

&lt;h2&gt;Finding Diacritics in a Postgres Table&lt;/h2&gt;

&lt;p&gt;As we saw above, there are a lot of special cases in how Postgres handles diacritics and special characters. The approach below does not account for those edge cases, but it gives you a basic way to query a table for rows containing diacritics. You can add regexes and additional where clauses to the query as needed.&lt;/p&gt;

&lt;p&gt;This example will use the following schema and data:&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt;&lt;strong&gt;CREATE TABLE &lt;/strong&gt;blog.diacritic_blog(&lt;br&gt;  &lt;strong&gt;id bigserial NOT NULL&lt;/strong&gt;,&lt;br&gt;  &lt;strong&gt;name varchar&lt;/strong&gt;(255) &lt;strong&gt;NOT NULL&lt;/strong&gt;,&lt;br&gt;  &lt;strong&gt;CONSTRAINT &lt;/strong&gt;pk_diacritic_id &lt;strong&gt;PRIMARY KEY &lt;/strong&gt;(&lt;strong&gt;id&lt;/strong&gt;)&lt;br&gt;);&lt;br&gt;&lt;br&gt;&lt;strong&gt;INSERT INTO &lt;/strong&gt;blog.diacritic_blog(&lt;strong&gt;name&lt;/strong&gt;) &lt;strong&gt;values &lt;/strong&gt;(&lt;strong&gt;'aabbsser3ssa35ds'&lt;/strong&gt;);&lt;br&gt;&lt;strong&gt;INSERT INTO &lt;/strong&gt;blog.diacritic_blog(&lt;strong&gt;name&lt;/strong&gt;) &lt;strong&gt;values &lt;/strong&gt;(&lt;strong&gt;'Tĥïŝ ĩš â fůňķŷ Šťŕĭńġ'&lt;/strong&gt;);&lt;br&gt;&lt;strong&gt;INSERT INTO &lt;/strong&gt;blog.diacritic_blog(&lt;strong&gt;name&lt;/strong&gt;) &lt;strong&gt;values &lt;/strong&gt;(&lt;strong&gt;'Ø or Ł or łŁØø'&lt;/strong&gt;);&lt;br&gt;&lt;strong&gt;INSERT INTO &lt;/strong&gt;blog.diacritic_blog(&lt;strong&gt;name&lt;/strong&gt;) &lt;strong&gt;values &lt;/strong&gt;(&lt;strong&gt;'the German umlauts: ä, ö, ü'&lt;/strong&gt;);&lt;br&gt;&lt;strong&gt;INSERT INTO &lt;/strong&gt;blog.diacritic_blog(&lt;strong&gt;name&lt;/strong&gt;) &lt;strong&gt;values &lt;/strong&gt;(&lt;strong&gt;'mañana'&lt;/strong&gt;);&lt;br&gt;&lt;strong&gt;INSERT INTO &lt;/strong&gt;blog.diacritic_blog(&lt;strong&gt;name&lt;/strong&gt;) &lt;strong&gt;values &lt;/strong&gt;(&lt;strong&gt;'Ǣ ǣ Ǽ ǽ æ or Æ æ'&lt;/strong&gt;);&lt;/pre&gt;

&lt;p&gt;The following query will return 5 rows:&lt;/p&gt;

&lt;pre class="wp-block-preformatted highlight code highlighter"&gt;&lt;strong&gt;SELECT name, unaccent(name)&lt;/strong&gt;&lt;em&gt; &lt;/em&gt;&lt;strong&gt;FROM &lt;/strong&gt;blog.diacritic_blog &lt;strong&gt;where name &lt;/strong&gt;!= &lt;em&gt;unaccent&lt;/em&gt;(&lt;strong&gt;name&lt;/strong&gt;);&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;OUTPUT:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table class="wp-block-table"&gt;&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tĥïŝ ĩš â fůňķŷ Šťŕĭńġ&lt;/td&gt;
&lt;td&gt;This is a funky String&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ø or Ł or łŁØø&lt;/td&gt;
&lt;td&gt;O or L or lLOo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;the German umlauts: ä, ö, ü&lt;/td&gt;
&lt;td&gt;the German umlauts: a, o, u&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mañana&lt;/td&gt;
&lt;td&gt;manana&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ǣ ǣ Ǽ ǽ æ or Æ æ&lt;/td&gt;
&lt;td&gt;Ǣ ǣ Ǽ ǽ ae or AE ae&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Finding all the diacritics in a Postgres table is simple, if a bit unintuitive. However, handling diacritics and non-English alphabet characters is a minefield. Hopefully, you now have a basic understanding of diacritics and how Java and Postgres treat these characters. &lt;/p&gt;

&lt;p&gt;If you'd like to see a complete treatment of how other languages (such as JavaScript or Python) treat these characters, please reach out to me via Twitter &lt;a href="https://twitter.com/ryboflavin42"&gt;@ryboflavin42&lt;/a&gt; and let me know which languages you'd like to see.&lt;/p&gt;

&lt;p&gt;The examples from this article can be found on GitHub at &lt;a href="https://github.com/djchi82/DiacriticBlogProject"&gt;https://github.com/djchi82/DiacriticBlogProject&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>java</category>
      <category>database</category>
    </item>
    <item>
      <title>Hacking Your Life with Task Types</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Tue, 03 Mar 2020 21:08:29 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/hacking-your-life-with-task-types-18f8</link>
      <guid>https://dev.to/ryboflavin42/hacking-your-life-with-task-types-18f8</guid>
      <description>&lt;p&gt;I want to share an idea that has changed my life by mitigating burnout and feeling more fulfilled day-to-day, month-to-month, and year-to-year. The idea is task typing. &lt;/p&gt;

&lt;p&gt;In my next two blog posts, I will introduce you to the idea and show you a few ways to use it to improve performance, happiness, fulfillment, and productivity.&lt;/p&gt;

&lt;h2&gt;4 Types of Tasks&lt;/h2&gt;

&lt;p&gt;In essence, there are 4 categories that every task falls into. Those categories are formed by two axes: importance and urgency. This is sometimes called the Eisenhower Matrix. In the figure below, the x-axis describes a task's urgency and the y-axis describes a task's importance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yr5gQaru--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/02/task-graph.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yr5gQaru--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/02/task-graph.jpg" alt="" class="wp-image-5094" width="429" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Quadrant I: Urgent - Important Tasks&lt;/h3&gt;

&lt;p&gt;A task that is Urgent and Important often consumes our life. These tasks show up as urgent requests from our bosses, from loved ones, and from the unexpectedness of life. They are drop-everything, do-it-now tasks, and they cause us to lose our rhythm. Often we perceive dire consequences if these tasks aren't done immediately or in a timely manner. We often describe these tasks as "firefighting".&lt;/p&gt;

&lt;p&gt;An example of an urgent - important task might be the production system going down due to a bug. We need to drop everything, get production running again, and fix the bug. In our personal life, this might be an unexpected bill or an accident.&lt;/p&gt;

&lt;h3 id="mce_32"&gt;Quadrant II: Non-Urgent - Important Tasks&lt;/h3&gt;

&lt;p&gt;A task that is non-urgent - important is one that we deem important, often because it moves us towards our goals. However, these tasks often take a backseat to more urgent tasks. In many cases, these tasks are not done because they take energy to complete, and we are tired from all the "firefighting", so we look at Quadrant III or IV tasks to feel good about completing something or for distraction.&lt;/p&gt;

&lt;p&gt;An example of a non-urgent - important task might be refactoring a component that is slowing the system down, or creating a report feature for the user to free up operations time. In our personal lives, these show up as tasks we need to complete in order to reach a goal.&lt;/p&gt;

&lt;h3 id="mce_35"&gt;Quadrant III: Non-Urgent - Non-Important Tasks&lt;/h3&gt;

&lt;p&gt;Quadrant III tasks are tasks that distract us from our objectives. These are things like pointless meetings, unnecessary paperwork, or playing a game. We often use these tasks for distraction or instant gratification. Non-urgent - non-important tasks are ones that, if left undone, will have little impact on the overall project or vision.&lt;/p&gt;

&lt;p&gt;An example of this type of task might be calling a meeting to discuss a feature and inviting people who are not needed. In our personal life, these tasks can show up as endless cleaning or watching TV.&lt;/p&gt;

&lt;h3 id="mce_41"&gt;Quadrant IV: Urgent - Non-Important Tasks&lt;/h3&gt;

&lt;p&gt;Urgent - non-important tasks are tasks that have a deadline but carry few consequences if they are not completed. &lt;/p&gt;

&lt;p&gt;An example of this task might be replying to emails or chat messages. In our personal life, they might show up as chores.&lt;/p&gt;

&lt;h2&gt;Where We Live&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mo-kxWFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/02/tormentor-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mo-kxWFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/02/tormentor-1.jpg" alt="" class="wp-image-5095" width="478" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most of us live in Quadrants I and III. Either we are handling things that need to happen right now to avoid consequences, or we are living in distraction or exhaustion. The secret is to begin eliminating quadrant III and IV tasks to create space to complete more quadrant II tasks. Doing more quadrant II tasks allows us to get ahead of the things that cause quadrant I tasks, and to feel more fulfilled and energized.&lt;/p&gt;

&lt;p&gt;Understanding the 4 task types and how they may be showing up in your business or life is the first step to taking control and making changes. In the next post, I will describe some systems and philosophies to help eliminate quadrant III and IV tasks and create the space to start feeling more fulfilled and take back your time and energy.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Resources&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Most of this content is adapted from third-party sources. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://store.tonyrobbins.com/products/the-time-of-your-life"&gt;https://store.tonyrobbins.com/products/the-time-of-your-life&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.eisenhower.me/eisenhower-matrix/"&gt;https://www.eisenhower.me/eisenhower-matrix/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.franklincovey.com/the-7-habits.html"&gt;https://www.franklincovey.com/the-7-habits.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>management</category>
      <category>beginners</category>
    </item>
    <item>
      <title>4 Ways to Learn Object-Oriented Design</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Sun, 23 Feb 2020 15:10:33 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/4-ways-to-learn-object-oriented-design-4c1e</link>
      <guid>https://dev.to/ryboflavin42/4-ways-to-learn-object-oriented-design-4c1e</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zZz_TX2u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/sketch-4748895_640.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zZz_TX2u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/sketch-4748895_640.jpg" alt="" class="wp-image-5073" width="225" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking back over 2019 I realized I've had the privilege to work on a diverse set of software projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architecting and coding an Enterprise Transform and Load (ETL) project in Python and Django&lt;/li&gt;
&lt;li&gt;Building an SFTP platform using AWS lambda, Node, and Vue&lt;/li&gt;
&lt;li&gt;Adding major features and refactoring a long-term enterprise Java project &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While reviewing these projects, one common reason they were considered successes stuck out: their object-oriented design.&lt;/p&gt;

&lt;p&gt;A key feature of each of these projects was the loose nature of the specific requirements and how details of inputs and behaviors would change as we learned more about the domain. Having a good design made making changes and testing a breeze. For example, by designing the &lt;a href="https://tenmilesquare.com/data-transform-and-load-overview/"&gt;ETL pipeline to use a canonical,&lt;/a&gt; a requirement change in the Python ETL project that should have taken a week to implement could be made and tested in less than a day.&lt;/p&gt;

&lt;p&gt;This got me thinking about how I learned object-oriented design. Some of the answers are obvious, but others are less so. I want to share these approaches with you.&lt;/p&gt;

&lt;h2&gt;1. Courses&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2EDx-C4m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/woman-2701154_640.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2EDx-C4m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/woman-2701154_640.jpg" alt="" class="wp-image-5072" width="355" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My earliest object-oriented design training was from taking an employer-sponsored training course with Training Hott. The course covered the most common &lt;a href="http://www.blackwasp.co.uk/gofpatterns.aspx"&gt;Gang of Four&lt;/a&gt; design patterns. It went on to teach how to organize a set of requirements into components and analyze them to see what design patterns could be used to build an application. To be honest, I don't remember much from this course, as I didn't have much opportunity to use the techniques and patterns in my daily job.&lt;/p&gt;

&lt;p&gt;Today there are a ton of different online resources for learning design patterns. Going the online route is cheaper than live training and often more hands-on, but it requires self-discipline to finish the course. Lastly, the barrier to entry for creating an online course is low, so the quality of a course may sometimes suffer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/courses/search/?src=ukw&amp;amp;q=software%20design"&gt;Udemy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.coursera.org/courses?query=design%20patterns"&gt;Coursera&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/learning/search?keywords=design%20patterns"&gt;Linkedin Learning&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;2. Books&lt;/h2&gt;

&lt;p&gt;Another obvious way of learning object-oriented design is from books. Books are not only a great way to learn, but make great reference material when you've forgotten details and need a reminder. In my opinion, the secret of using books is to understand how the material is going to be presented and how you learn. The canonical design patterns book is the Gang of Four book, &lt;a href="https://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612/ref=asc_df_0201633612/?tag=hyprod-20&amp;amp;linkCode=df0&amp;amp;hvadid=312280575053&amp;amp;hvpos=1o8&amp;amp;hvnetw=g&amp;amp;hvrand=6962496824570411812&amp;amp;hvpone=&amp;amp;hvptwo=&amp;amp;hvqmt=&amp;amp;hvdev=c&amp;amp;hvdvcmdl=&amp;amp;hvlocint=&amp;amp;hvlocphy=9011972&amp;amp;hvtargid=aud-801381245258:pla-395340045790&amp;amp;psc=1"&gt;Design Patterns: Elements of Reusable Object-Oriented Software&lt;/a&gt;. However, it can be a tough read, especially for beginners. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZM6jYpMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/Screen-Shot-2020-01-26-at-2.36.41-PM-1024x780.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZM6jYpMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/Screen-Shot-2020-01-26-at-2.36.41-PM-1024x780.png" alt="" class="wp-image-5067" width="457" height="347"&gt;&lt;/a&gt;Taken from:&lt;a href="https://martinfowler.com/bliki/GangOfFour.html"&gt; &lt;/a&gt;&lt;a href="https://martinfowler.com/bliki/GangOfFour.html"&gt;https://martinfowler.com/bliki/GangOfFour.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my experience, the Head First books are an excellent place to start or to refresh yourself. They are broken down into digestible chunks with coding and thought exercises at the end of each section, sort of like a &lt;em&gt;Highlights for Coders&lt;/em&gt;. Two great books to start with are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Head-First-Design-Patterns-Brain-Friendly/dp/0596007124"&gt;Head First Design Patterns&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.amazon.com/Head-First-Object-Oriented-Analysis-Design/dp/0596008678"&gt;Head First Object-Oriented Analysis and Design&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;O'Reilly also publishes great books on the subject and often teaches the subjects by building a sample application with the concepts.&lt;/p&gt;

&lt;h2&gt;3. Reading and Mimicking Code&lt;/h2&gt;

&lt;p&gt;One of my favorite methods of learning new design patterns is digging into the source code of my favorite frameworks and libraries. The Spring Framework has taught me more about design than any course or book. This method may be difficult for beginners, as some frameworks, such as Spring, are highly abstracted, and it can be difficult to understand why classes are broken apart. &lt;/p&gt;

&lt;p&gt;A great way to begin diving into an abstract framework is to try to extend a portion of it. Often frameworks are set up to let users extend interfaces and abstract classes in order to modify the behavior of a particular component. The Spring documentation is very detailed and often walks the reader through the how and why of a component's design.&lt;/p&gt;
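
&lt;p&gt;To make that concrete, here is a minimal, hypothetical sketch of the kind of extension point frameworks commonly expose (none of these class names come from Spring or any real framework): an abstract base class owns the workflow and leaves one step open for users to override.&lt;/p&gt;

```java
// Hypothetical framework base class: the framework calls process(),
// and subclasses customize only the transform step (template method pattern).
abstract class AbstractMessageProcessor {
    public final String process(String input) {
        String validated = validate(input);
        return transform(validated);
    }

    protected String validate(String input) {
        if (input == null) throw new IllegalArgumentException("input required");
        return input.trim();
    }

    // The extension point the framework leaves open to users
    protected abstract String transform(String input);
}

// A user extension that modifies behavior without touching framework code
class UpperCaseProcessor extends AbstractMessageProcessor {
    @Override
    protected String transform(String input) {
        return input.toUpperCase();
    }
}

public class FrameworkExtensionDemo {
    public static void main(String[] args) {
        System.out.println(new UpperCaseProcessor().process("  hello  "));
    }
}
```

&lt;p&gt;Writing a small extension like this forces you to read the base class closely, which is exactly where the design lessons live.&lt;/p&gt;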

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ymPDsEih--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/gaiman.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ymPDsEih--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/gaiman.jpg" alt="" class="wp-image-5069" width="207" height="207"&gt;&lt;/a&gt;“Most of us find our own voices only after we’ve sounded like a lot of other people.”&lt;br&gt;&lt;br&gt;― &lt;strong&gt;Neil Gaiman﻿&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another way is to read and extend the code of your team. One of my biggest jumps in understanding how to create reusable components came from reading and mimicking the code of the smartest people on my team. Seeing how and why code is abstracted, and asking them questions about it, will often give you insight into a different way of thinking and designing. &lt;/p&gt;

&lt;h2&gt;4. Refactoring&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dOBnKaML--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/refactoring-i1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dOBnKaML--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://tenmilesquare.com/wp-content/uploads/2020/01/refactoring-i1.png" alt="" class="wp-image-5071"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Picking up a refactoring task, or refactoring your own code before you commit, is a great way to improve your understanding of design patterns. As you read the code, you begin to see where it is duplicated, confusing, or hard to test. Refactoring forces you to remove yourself from the actual code and think abstractly about how it fits into the application. Ask questions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When this is refactored how can I verify it behaves the same way?&lt;/li&gt;
&lt;li&gt;What tests are missing from the original code?&lt;/li&gt;
&lt;li&gt;How can I componentize the code?&lt;/li&gt;
&lt;li&gt;How can I eliminate multiple or nested ifs and loops?&lt;/li&gt;
&lt;li&gt;How can I combine two methods or classes?&lt;/li&gt;
&lt;li&gt;How can I test a particular piece of logic inside a long method or function with minimal setup?&lt;/li&gt;
&lt;li&gt;If I were to support this code for 3 years, what would make it easier to support?&lt;/li&gt;
&lt;li&gt;If the requirements were to change, or the business wanted to change the behavior, how could I make it configurable?&lt;/li&gt;
&lt;/ul&gt;
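
&lt;p&gt;As a small illustration of where these questions lead, here is a hypothetical before-and-after sketch: the nested ifs are flattened into guard clauses, which makes every exit condition explicit and independently testable. The scenario and method names are invented for this example.&lt;/p&gt;

```java
public class RefactorDemo {
    // Before: nested ifs bury the happy path three levels deep
    static String shippingLabelBefore(boolean paid, boolean inStock, String address) {
        if (paid) {
            if (inStock) {
                if (address != null) {
                    return "SHIP TO: " + address;
                } else {
                    return "ERROR: missing address";
                }
            } else {
                return "ERROR: out of stock";
            }
        } else {
            return "ERROR: unpaid";
        }
    }

    // After: guard clauses handle each failure up front,
    // leaving the happy path as the last, unindented line
    static String shippingLabelAfter(boolean paid, boolean inStock, String address) {
        if (!paid) return "ERROR: unpaid";
        if (!inStock) return "ERROR: out of stock";
        if (address == null) return "ERROR: missing address";
        return "SHIP TO: " + address;
    }

    public static void main(String[] args) {
        System.out.println(shippingLabelAfter(true, true, "42 Main St"));
    }
}
```

&lt;p&gt;Answering "how can I verify it behaves the same way?" here is easy: both versions can be run against the same inputs and compared.&lt;/p&gt;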

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I believe that proper design allows developers to easily maintain, extend, change, and test code, and it is often overlooked. On top of becoming a better coder and creating better code, I believe you also begin to appreciate code aesthetically and get more satisfaction from the work.&lt;/p&gt;

</description>
      <category>design</category>
      <category>java</category>
      <category>python</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Why do beginner hate Java so much</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Fri, 04 Oct 2019 21:34:40 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/why-do-beginner-hate-java-so-much-f00</link>
      <guid>https://dev.to/ryboflavin42/why-do-beginner-hate-java-so-much-f00</guid>
      <description>&lt;p&gt;I'm getting to the point where I can start being considered an old guy. My primary language is Java. The more I engage with the community, the more I find dislike and frustration with Java. To me, it was my first serious language and I found it great as it told me everything I wanted to know about the new APIs and third-party libraries. It also didn't devastate my global environment by installing third-party libraries as would Node or Python. It's all a binary that you add to a classpath instead of installing it to the OS.&lt;/p&gt;

&lt;p&gt;Working with JavaScript and Python I'm mostly frustrated by &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;magic string configurations&lt;/li&gt;
&lt;li&gt;not knowing what is being passed into and returned by APIs&lt;/li&gt;
&lt;li&gt;and not being able to trust third-party libraries
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm curious&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What frustrates you most about Java?&lt;/li&gt;
&lt;li&gt;What frustrates you most about learning Java?&lt;/li&gt;
&lt;li&gt;What is your perception about Java?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm really starting to think that Java just needs an image overhaul in how it is learned and approached. I ask myself, what could I create to show its versatility and effectiveness as a backend language?&lt;/p&gt;

</description>
      <category>java</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to Debug Node Serverless Using JetBrains WebStorm</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Thu, 03 Oct 2019 14:46:43 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/how-to-debug-node-serverless-using-jetbrains-webstorm-70k</link>
      <guid>https://dev.to/ryboflavin42/how-to-debug-node-serverless-using-jetbrains-webstorm-70k</guid>
      <description>&lt;p&gt;One of the most useful tools in a developer's quiver is the debugger. The debugger allows a developer to not only step through code and track down bugs, but it is useful as a way to profile data structures. I find the ability to profile data structures to be extremely useful when working with scripting languages such as Python and Node. &lt;/p&gt;

&lt;p&gt;Recently I was working on a Node serverless project and had no idea what fields existed on the serverless lambda objects (event, context, callback). When I went looking for how to debug serverless, I struggled to find a solution that detailed debugging serverless in JetBrains WebStorm. The following will get you started debugging Node serverless using JetBrains WebStorm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6gn9ogj6bbyaka8edvpw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6gn9ogj6bbyaka8edvpw.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a ton of tutorials on how to install node, serverless, WebStorm, so I've assumed you've already taken care of that. For the purpose of this tutorial, we will be using macOS Mojave. Some locations may vary depending on your OS.&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;Create a new node configuration: In the toolbar click &lt;strong&gt;Run&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Edit Configurations...&lt;/strong&gt;
&lt;/li&gt;&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fopfcizgwu6isl5tat04t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fopfcizgwu6isl5tat04t.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Create a new node configuration by clicking &lt;strong&gt;+&lt;/strong&gt; and then selecting &lt;strong&gt;Node.js&lt;/strong&gt; from the dropdown&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftenmilesquare.com%2Fwp-content%2Fuploads%2F2019%2F10%2Fnew-config-1024x633.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftenmilesquare.com%2Fwp-content%2Fuploads%2F2019%2F10%2Fnew-config-1024x633.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Fill in the configuration details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: anything you want&lt;/li&gt;
&lt;li&gt;Working directory: this will default to the root of your project. Be sure it points to the directory with your serverless.js file&lt;/li&gt;
&lt;li&gt;JavaScript file: this should point to the serverless binary, typically /usr/local/bin/sls. If you do not know where sls is installed, you can find it by typing &lt;code&gt;which sls&lt;/code&gt; in the terminal&lt;/li&gt;
&lt;li&gt;Application parameters: 'offline'. Be sure to add any additional parameters you might need, such as '-s local'&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftenmilesquare.com%2Fwp-content%2Fuploads%2F2019%2F10%2FFIllin-config-1024x651.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftenmilesquare.com%2Fwp-content%2Fuploads%2F2019%2F10%2FFIllin-config-1024x651.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you launch the Configuration as debug, the WebStorm debugger will automatically be hooked into the node process.&lt;/p&gt;

&lt;p&gt;This debug configuration may be obvious to a seasoned node developer, but if you're a language transplant like me, you may need help getting started with debugging serverless using WebStorm. This configuration will definitely help you get started understanding the framework and squashing those pesky scripting bugs.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>serverless</category>
      <category>webstorm</category>
      <category>debugging</category>
    </item>
    <item>
      <title>Reusability and Sustainability Using a Canonical - Enterprise Transform and Load Part 3</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Thu, 11 Jul 2019 14:45:44 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/reusability-and-sustainability-using-a-canonical-enterprise-transform-and-load-part-3-5252</link>
      <guid>https://dev.to/ryboflavin42/reusability-and-sustainability-using-a-canonical-enterprise-transform-and-load-part-3-5252</guid>
      <description>&lt;h2&gt;Overview&lt;/h2&gt;

&lt;p&gt;In the last article in my Extract Transform and Load (ETL) series, &lt;a href="https://tenmilesquare.com/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation/"&gt; Submission Information Packages and High-level Validation&lt;/a&gt;, I covered the most important part of this process: &lt;strong&gt;the data&lt;/strong&gt;. The data format will drive 80 percent of the design decisions for the ETL pipeline. In this article, I introduce the concept of &lt;strong&gt;Canonical Transformation&lt;/strong&gt; in this context as a way to create a reusable and easy-to-maintain pipeline.&lt;/p&gt;

&lt;p&gt;The term canonical may be new to most readers, so what is it? &lt;/p&gt;

&lt;blockquote class="wp-block-quote"&gt;&lt;p&gt;"Authoritative or standard; conforming to an accepted rule or procedure. When referring to programming, canonical means conforming to well-established patterns or rules." &lt;br&gt;—&lt;a href="https://www.webopedia.com/TERM/C/canonical.html"&gt;Webopedia&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;In essence, it is a set of common objects that can be manipulated, submitted, and used by various components. In the context of an ETL pipeline, the canonical allows multiple data types and endpoints to converge into a set of objects to which a common set of downstream processing logic can be applied. The consistency of downstream processing allows an enterprise to create, maintain, and monitor a single set of business rules, identification criteria, and persistence logic. The benefits of implementing the Canonical Transformation are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduction in maintenance&lt;/li&gt;
&lt;li&gt;Easy business logic changes&lt;/li&gt;
&lt;li&gt;Data monitoring&lt;/li&gt;
&lt;li&gt;Quality control&lt;/li&gt;
&lt;li&gt;Decoupling the database from the data formats and endpoints &lt;/li&gt;
&lt;li&gt;Decoupling pipeline components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main factors when designing the Canonical Transformation step are the canonical objects and mapping routines.&lt;/p&gt;

&lt;h2&gt;Canonical Objects&lt;/h2&gt;

&lt;p&gt;Most canonical objects are unremarkable. They are data holders. However, determining the canonical design can be difficult, particularly if there are multiple data formats. The key is to start with some good questions to guide the design.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What are the system’s submission input formats? (&lt;a href="https://tenmilesquare.com/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation/"&gt;See the last article&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;What is the database design?&lt;/li&gt;
&lt;li&gt;Will the system need to disseminate data? If so what format and how?&lt;/li&gt;
&lt;li&gt;How will data be indexed for search?&lt;/li&gt;
&lt;li&gt;What are the system’s business rules?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most likely, the first question will drive the canonical design, and it will be the only question covered in this article. As is often the case, the most complex or detailed input format will influence most of the canonical design. Often, these formats have been designed by a consortium of experts and data partners. These are people with domain knowledge and a good understanding of the data, so typically they know how to represent it. In other words, it is the industry-standard format. Also, it is typically easier to map simple data to a more complex format than it is to map complex data to a simpler format.&lt;/p&gt;

&lt;p&gt;Oftentimes when working with an industry-standard format, the schemas are complex enough to handle edge cases that most consumers will not use or need. This is why some companies design simpler formats in CSV or use a UI to service smaller data partners (&lt;a href="https://tenmilesquare.com/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation/"&gt;See submission Input&lt;/a&gt;). When a simpler format is created, data structures are reduced, or an array is simplified to a single item or limited to a certain number of items. When going from simple to complex, the decision to keep data is easy. However, when going from complex to simple, a decision to truncate data needs to be made. Where there is no corresponding data structure, the decision may be to not map it at all; but when many objects (an array) are converted to a single field, the question of which item should be kept needs to be answered.&lt;/p&gt;
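&lt;p&gt;The complex-to-simple decision above can be sketched in a few lines. The routine below maps a contributor array from a hypothetical complex format down to a single author field; the field names and the prefer-the-flagged-primary-entry rule are illustrative assumptions, and the actual truncation rule is a business decision:&lt;/p&gt;

```python
# Hypothetical sketch: mapping a complex format's contributor array down to
# a simple format's single "author" field. Field names are illustrative.

def map_primary_contributor(contributors):
    """Pick the one contributor to keep when the target format allows only one.

    The truncation rule here (prefer the entry flagged primary, otherwise
    fall back to the first listed) is a business decision, not a technical one.
    """
    if not contributors:
        return None
    for contributor in contributors:
        if contributor.get("primary"):
            return contributor["name"]
    return contributors[0]["name"]
```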

&lt;h3&gt;Pruning the Canonical Bush&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_BAlRDmI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xfec155vwha4hoglleck.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_BAlRDmI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xfec155vwha4hoglleck.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Simply creating the canonical based on the industry standard may work, but this approach may leave the system with a messy canonical and could require pruning. The first thing to ask in such a case is: “Do I have a business case for the data being truncated?” If the business owners/domain experts agree that the data is not needed for the system's purpose, then it is okay to remove the fields from the canonical.  It is important to make clear to business owners that re-ingesting the data may be difficult, time-consuming, or in some cases impossible at a later date.&lt;/p&gt;

&lt;p&gt;Another case for pruning the canonical is unused structures. In more verbose standards, metadata envelopes or communication sequence histories are specified. These data structures typically hold data about the transaction, and that data is often transient. These are good candidates for pruning. It is important to note that if the system requires a high level of data provenance and auditing, then this data may need to be kept.&lt;/p&gt;

&lt;h3&gt;Poorly designed standards&lt;/h3&gt;

&lt;p&gt;In some cases, the industry standard may be poorly designed. An example of this is when a data structure is overly abstracted. This typically happens when there are various types of the same data structure. For example, a digital media standard may represent videos, music, podcasts, and e-books. The standard may abstract all these digital media types into a structure called Work. However, a video may contain fields that an e-book wouldn’t have, and vice versa. If the structure were designed correctly, with correct sub-type inheritance, this may not be an issue. However, if the design tries to collect all the different types into the abstract Work object, the standard can become confusing. In these scenarios, there are two approaches. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave the abstract data structure as is and write business rules to validate each type. This would be done if the system is designed to inform submitters of any violations in their data.&lt;/li&gt;
&lt;li&gt;Boil down the data structures into concrete types. In this case, the system is deliberately truncating data that does not make sense for the type. The one issue with this approach is that data partners may not completely understand the data standard or may conflate the meaning of two different data fields. Using the above example, a book may have an author and music may have a composer. These words have very similar meanings and a data partner may populate the composer field for a book’s author. &lt;/li&gt;
&lt;/ol&gt;
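&lt;p&gt;The second approach might be sketched as follows. The Work, EBook, and Video types and their fields are hypothetical stand-ins for a real standard's structures; the point is that each concrete type keeps only the fields that make sense for it, including handling the author/composer conflation mentioned above:&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical sketch of approach 2: boiling an over-abstracted "work"
# structure down into concrete canonical types. Field names are illustrative.

@dataclass
class Work:
    title: str
    release_year: int

@dataclass
class EBook(Work):
    author: str
    page_count: int

@dataclass
class Video(Work):
    director: str
    runtime_minutes: int

def from_standard(record: dict) -> Work:
    """Map an abstract record to a concrete type, deliberately dropping
    fields that make no sense for that type."""
    common = dict(title=record["title"], release_year=record["year"])
    if record["type"] == "ebook":
        # A submitter may conflate "composer" with "author"; prefer author.
        return EBook(author=record.get("author") or record.get("composer", ""),
                     page_count=record.get("pages", 0), **common)
    if record["type"] == "video":
        return Video(director=record.get("director", ""),
                     runtime_minutes=record.get("runtime", 0), **common)
    raise ValueError(f"unknown work type: {record['type']}")
```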

&lt;p&gt;Once the canonical is drafted, ask “does the design accurately represent the domain?” Repeatedly asking this question throughout the design process will help ensure that the canonical stays true to the system’s intention.&lt;/p&gt;

&lt;h2&gt;Mapping Routines&lt;/h2&gt;

&lt;p&gt;Mapping routines determine how the raw data gets unmarshalled into the canonical. Often there will be at least one mapping routine per data format. &lt;/p&gt;

&lt;p&gt;In some cases there may be multiple routines per data type. This is often seen with industry standards, particularly with more complex standards.  When a standard has been around for a while, data partners will have their own way to fill them out, using some data fields, not populating others, and using nodes in different ways.  In these cases, most of the data can be mapped by the parser/mapper, but will require different sub-routines to map the differences. &lt;/p&gt;

&lt;p&gt;Another multiple-routines-for-the-same-format scenario occurs when there are multiple versions of the same industry format. In both of these scenarios, a factory pattern can be used to determine the routine via submitter and version. Then simple inheritance can handle submitter-specific mappings.&lt;/p&gt;
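&lt;p&gt;A minimal sketch of that factory-plus-inheritance idea, with made-up format, version, and partner names: a registry keyed on (format, version, partner) picks the routine, falling back to the base routine for that format and version when no partner-specific override exists:&lt;/p&gt;

```python
# Hypothetical sketch of a mapping-routine factory. The format, version, and
# partner names, and the canonical fields, are illustrative assumptions.

class MappingRoutine:
    def map(self, raw: dict) -> dict:
        return {"title": raw.get("title")}

class IndustryV1Routine(MappingRoutine):
    def map(self, raw: dict) -> dict:
        canonical = super().map(raw)
        canonical["identifier"] = raw.get("id")
        return canonical

class PartnerAOverride(IndustryV1Routine):
    # Simple inheritance: partner A puts the identifier in a legacy field,
    # so only that mapping is overridden.
    def map(self, raw: dict) -> dict:
        canonical = super().map(raw)
        canonical["identifier"] = raw.get("legacy_id", canonical["identifier"])
        return canonical

ROUTINES = {
    ("industry-xml", "1.0", None): IndustryV1Routine,
    ("industry-xml", "1.0", "partner-a"): PartnerAOverride,
}

def routine_for(fmt: str, version: str, partner: str) -> MappingRoutine:
    """Factory: pick the partner-specific routine, else the format default."""
    cls = ROUTINES.get((fmt, version, partner)) or ROUTINES[(fmt, version, None)]
    return cls()
```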

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cske2iEl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xsd18qx0pnthtdj6puzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cske2iEl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xsd18qx0pnthtdj6puzg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Breaking out the mapping routines per format, version, and data partner decouples the logic for easy blackbox testing. Raw data goes in and the expected canonical comes out.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;The most important step for making an ETL pipeline reusable and easy to maintain is the canonical step. A well designed canonical will decouple the raw data from the system’s business logic and internal workings, allowing the code to be reused and reducing duplication. The next article in the Extract Transform and Load series will address business rules.&lt;/p&gt;

&lt;p&gt;Ten Mile Square has expert consultants and engineers with a proven track record of learning new domains, analyzing requirements, and designing the perfect canonical to meet your business’s needs. &lt;a href="/contact/"&gt;Reach out to us&lt;/a&gt; if you have any questions. We love to help.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>design</category>
      <category>data</category>
    </item>
    <item>
      <title>Extract Transform and Load Part 2 – Submission Information Packages and High-level Validation</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Fri, 28 Jun 2019 16:45:21 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation-106b</link>
      <guid>https://dev.to/ryboflavin42/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation-106b</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OVT0Ez-B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/f0etl0w2ae5a8ztybvfh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OVT0Ez-B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/f0etl0w2ae5a8ztybvfh.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first two major pieces of an Extract Transform and Load pipeline are the Submission Information Package (SIP) and high-level validation. The major components, as well as the benefits and usages of Extract Transform and Load, were discussed in the &lt;a href="https://tenmilesquare.com/data-transform-and-load-overview/"&gt;first article&lt;/a&gt; in the series. It is recommended that the reader be familiar with the overall Extract Transform and Load process before continuing. Data is important, and in this article, the subject is data.&lt;/p&gt;

&lt;h2&gt;Submission Information Package&lt;/h2&gt;

&lt;p&gt;The Submission Information Package, or SIP for short, is the actual data submitted from the data partner to the data ingest system. The term SIP is taken from the &lt;a href="https://www.oclc.org/research/publications/library/2000/lavoie-oais.html"&gt;Open Archival Information System&lt;/a&gt; (OAIS) standard, which defines it as &lt;/p&gt;

&lt;blockquote class="wp-block-quote"&gt;&lt;p&gt;Submission Information Package (SIP): An Information Package that is delivered by the Producer to the OAIS for use in the construction or update of one or more AIPs and/or the associated Descriptive Information.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;In other words, this is the data provided to the system for creating one or more records. For the purposes of this discussion, we will expand that definition and break it into two pieces: Raw Data and Data Endpoints.&lt;/p&gt;

&lt;h3&gt;Raw Data&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ut4-pj3W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/a3inbjizjwbjvyuh0r9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ut4-pj3W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/a3inbjizjwbjvyuh0r9t.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Raw data is the set of data files provided by a data partner to the ingest system. The data can be a single data file, multiple data files, or a package, which often contains the data files and a metadata or descriptor file. In an OAIS system, a SIP usually consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A set of files in various formats&lt;/li&gt;
&lt;li&gt;A descriptor file for each file&lt;/li&gt;
&lt;li&gt;A manifest of all the files in the package&lt;/li&gt;
&lt;li&gt;An authentication file or digital signature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In most ingest systems the data will just be one or more predefined data files.&lt;/p&gt;

&lt;p&gt;The questions to ask when it comes to raw data are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are there any particular standards used in the industry (industry SOPs, schemas, XSDs, format best practices)?&lt;/li&gt;
&lt;li&gt;What data types is the system ingesting (Excel, XML, JSON)?&lt;/li&gt;
&lt;li&gt;Do we need to create a standard? Or do we need to simplify an existing standard?&lt;/li&gt;
&lt;li&gt;Who will be submitting the data? Will it be one partner, or can multiple partners submit the same data types?&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Are there any particular standards used in the industry?&lt;/h4&gt;

&lt;p&gt;Typically, the first question, “Are there any particular standards used in the industry?”, is the best place to start. In a larger industry, a data transfer standard will often already exist, and it should be the starting point for this discussion. The best way to find out is to search online, ask industry experts, and look at digital retail sources for the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2MHo6KWp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rv43d2ngg9hsyfytzata.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2MHo6KWp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rv43d2ngg9hsyfytzata.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data types are often easily determined by answering the industry-standard question.  Typically, XML is used, but CSV and fixed-width file formats are also common. JSON is gaining popularity, but in industries where data quality is highly important this format is avoided, due to its free-form nature. However, tools such as JSON Schema are available to define a data structure standard for JSON. The adoption of such tools is limited, and implementations for certain programming languages may be unavailable or incomplete.&lt;/p&gt;

&lt;h4&gt;Do we need to create a standard or simplify an existing one?&lt;/h4&gt;

&lt;p&gt;The question “do we need to create a standard or simplify an existing one?” often comes from one of two situations: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No standards currently exist&lt;/li&gt;
&lt;li&gt;A standard exists, but it's &lt;ul&gt;
&lt;li&gt;overly complicated&lt;/li&gt;
&lt;li&gt;it no longer fits the industry's needs&lt;/li&gt;
&lt;li&gt;or less sophisticated data partners want to participate &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creating a new standard requires an understanding of the domain and facilitated communication between the data partners. Best practices when creating a new standard could fill a book and are out of scope for this article. &lt;/p&gt;

&lt;p&gt;The second situation is when a standard exists, but it's either overly complicated, no longer fits the industry's needs, or the ingest system is trying to allow smaller data partners to contribute. As with the first situation, care needs to be taken when creating a new standard. When building a system that is trying to allow small data partners to contribute, it is often the case that some data partners do not have the technical sophistication to implement a large standard. Often the solution is to create a UI that allows partners to manually enter data, or a CSV/Excel template that allows partners to bulk upload data. When simplifying a format, it is important that the simplified format covers, or makes mandatory, the basic required data of the larger format.&lt;/p&gt;

&lt;h4&gt;Who will be submitting the data?&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eGDUcyHJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yr19succc1d85frxwul1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eGDUcyHJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yr19succc1d85frxwul1.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last question, “who will be submitting the data?”, will dictate the answer to the standards question above, as well as the next section, Data Endpoints. &lt;/p&gt;

&lt;h3&gt;Data Endpoints&lt;/h3&gt;

&lt;p&gt;Data endpoints are how data partners send the data. There are two basic forms of data endpoints: push and pull. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Data retrieval by a data partner (the system sends responses) would also be considered a data endpoint, but is outside the scope of discussion for this article.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Push&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hTGCVQL1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rv5m33j364ihtshwud8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hTGCVQL1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rv5m33j364ihtshwud8x.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A push endpoint is one where a data partner initiates the request and sends the data to the system. Typically, these endpoints are web services and UI inputs. There are many articles about REST, web services, and UI design best practices, so this article will not touch on those. However, it is recommended that the endpoints be just that: endpoints. They should not dictate a whole new pipeline. Care should be taken to tie these endpoints into the data pipeline and reuse as many of the ETL components as possible.  Typically this means mapping the data into a canonical. Business logic/rules are where data quality is scrutinized, and keeping that logic in one component makes the system easier to maintain and ensures data quality.&lt;/p&gt;

&lt;p&gt;If a UI is offered to data partners, particularly if a CSV upload component is available, it is often good to also allow data partners to upload XML or JSON files via the UI. In addition, having a path that allows partners to validate their data without it being persisted to the database is useful when on-boarding new data partners.&lt;/p&gt;

&lt;h3&gt;Pull&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zdmd3Avd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sjlgvzmyyhmpych1pn0p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zdmd3Avd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sjlgvzmyyhmpych1pn0p.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A pull endpoint is one where the ingest system reaches out to various data partners to ask for data. These pull jobs need to be scheduled and executed by the system, periodically polling the partnering system. Examples of this are when the ingest system subscribes to a data feed such as RSS or Atom, or scrapes data from a data source. Note: most data scraping endpoints will not require a formal ingest pipeline, as the data is often unofficial and may not be of the best quality.&lt;/p&gt;

&lt;h4&gt;SFTP/Shared Directories&lt;/h4&gt;

&lt;p&gt;The one hybrid approach is SFTP or a shared directory. This requires both the data partner and the ingest system to do pulls and pushes. In this case, the data partner will push data to a shared location. Then the ingest system will poll the shared directory for new data to ingest.&lt;/p&gt;

&lt;p&gt;When pulling data from a shared location, best practices are extremely important. Things to consider are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sub-directories: These are used to break up data partner submissions into logical chunks. A best practice is to use a unique identifier or a timestamp as the subdirectory's name. The pulling system does not want to scan old data every time.&lt;/li&gt;
&lt;li&gt;Batch-complete files: The data partner needs a way to tell the pulling system that a data batch is ready for ingest. If the pulling system starts to scan while the pushing system is still transferring the data, it could result in a bad/incomplete batch. Often a blank file with a naming convention like 'batch.complete' is used. Be sure that the data partner knows to create the batch.complete file only after all data has been fully transferred.&lt;/li&gt;
&lt;li&gt;Data retention policies: Over time, a shared directory can get large, causing performance problems for the pulling system or maintenance costs for the directory's host. The question of what should be done with the data, and how long it should be stored in the directory after it is ingested, needs to be answered. Oftentimes the ingesting system will delete or move the data to another location after ingest is successfully completed. &lt;em&gt;Note: a successful completion can include pipeline errors if the data is invalid. But if an unknown system error occurs, such as the database being down, then the data shouldn't be removed. &lt;/em&gt; &lt;/li&gt;
&lt;/ul&gt;
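&lt;p&gt;The sub-directory and batch-complete conventions above can be sketched as a small polling routine. The directory layout and the 'batch.complete' marker name are the assumptions described in the list:&lt;/p&gt;

```python
import os

# Hypothetical sketch of the shared-directory pull: scan for batch
# subdirectories that contain a 'batch.complete' marker file, so batches
# still being transferred are never picked up.

def ready_batches(shared_dir):
    """Return batch subdirectories whose transfer is complete, oldest first."""
    batches = []
    for name in sorted(os.listdir(shared_dir)):
        batch = os.path.join(shared_dir, name)
        if os.path.isdir(batch) and os.path.exists(os.path.join(batch, "batch.complete")):
            batches.append(batch)
    return batches
```

After a batch is ingested, the caller would move or delete it per the retention policy, so old data is not rescanned.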

&lt;h2&gt;High-level Validation&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W5sw-64q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/26cg1viobs63s5uyzxhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W5sw-64q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/26cg1viobs63s5uyzxhy.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;High-level validation deals strictly with the questions&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the data from a trusted source (i.e. does the system know the submitter)?&lt;/li&gt;
&lt;li&gt;Is the data in a format that is known?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Is the data from a trusted source?&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AK7gytJW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ouy1s8t9fnuo4f92ei3s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AK7gytJW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ouy1s8t9fnuo4f92ei3s.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Determining if the data is from a trusted source is typically straightforward, and solutions are already well known. Examples of solutions to this question are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access Control Lists (ACLs)&lt;/li&gt;
&lt;li&gt;API Keys&lt;/li&gt;
&lt;li&gt;User login credentials&lt;/li&gt;
&lt;li&gt;FTP credentials&lt;/li&gt;
&lt;li&gt;SHA keys&lt;/li&gt;
&lt;li&gt;etc&lt;/li&gt;
&lt;/ul&gt;
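&lt;p&gt;As a minimal sketch of one of these options, an API-key check against an access-control list might look like the following. The partner names and keys are illustrative; a real system would keep keys in a secrets store rather than in code:&lt;/p&gt;

```python
import hmac

# Hypothetical trusted-source check: look up the partner's API key and
# compare in constant time to avoid timing side channels.

API_KEYS = {"partner-a": "s3cr3t-key-a"}  # illustrative only

def is_trusted(partner: str, api_key: str) -> bool:
    expected = API_KEYS.get(partner)
    if expected is None:
        return False
    return hmac.compare_digest(expected, api_key)
```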



&lt;p&gt;In more secure cases, digital signatures and certificates can be used to ensure the data wasn't tampered with and that chain of custody is maintained.&lt;/p&gt;

&lt;h3&gt;Is the data in a format that is known?&lt;/h3&gt;

&lt;p&gt;In this case we are talking about high-level validation, which does not care about business rules or fine-grained validation, such as whether a book includes a title and an author. However, some data specifications, such as XSD, include lower-level validations.&lt;/p&gt;

&lt;p&gt;Some examples of high-level validations for various data formats are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XML&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the data well-formed?&lt;/li&gt;
&lt;li&gt;Does the XSD validate? &lt;ul&gt;&lt;li&gt;It should be pointed out that not all XSD validation packages are created equal. If the XSD spec is from a later version, some XSD libraries in Python and Java may not catch all the rules. Note: most of the Java libraries will validate the spec. However, most Python XML processing libraries are tied to xmllint, which does not handle some of the more complicated validation rules. &lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
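&lt;p&gt;The well-formedness check is easy with Python's standard library, as sketched below. Full XSD validation is not shown, since it requires a schema-aware library such as lxml:&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

# Minimal high-level check: is the submitted XML well-formed?
# (Schema validation against an XSD would be a separate, later step.)

def is_well_formed(xml_bytes):
    try:
        ET.fromstring(xml_bytes)
        return True
    except ET.ParseError:
        return False
```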

&lt;p&gt;&lt;strong&gt;CSV/EXCEL&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do all the expected header columns exist?&lt;/li&gt;
&lt;li&gt;Does the column header text match the expected column header text?&lt;/li&gt;
&lt;li&gt;Are the columns in order?&lt;/li&gt;
&lt;li&gt;Is there at least one row of data?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;File format&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;&lt;li&gt;Does the file extension match the expected extension?&lt;ul&gt;&lt;li&gt;This one is really easy to spoof. However, when dealing with CSV or Excel most users are not sophisticated enough to change the file extension, since they are using a spreadsheet application. This check ensures they didn’t accidentally upload the wrong file.&lt;/li&gt;&lt;/ul&gt;

&lt;ul&gt;&lt;li&gt;Tools exist to inspect and determine file format regardless of the file extension; however, the footprint of these tools is often large and considered overkill for most data pipelines.&lt;/li&gt;&lt;/ul&gt;


&lt;/li&gt;&lt;/ul&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;When working with Extract Transform and Load systems, we are focusing on a system that ingests data. That means data is the most important part. Considering the data first can help give clarity to the downstream components. Be sure to answer the questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What kind of data will the system be ingesting?&lt;/li&gt;
&lt;li&gt;Who will provide this data?&lt;/li&gt;
&lt;li&gt;How will the data partners provide this data?&lt;/li&gt;
&lt;li&gt;How will the system verify the data is in a format that it understands?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information about Extract Transform and Load, be sure to read the &lt;a href="https://tenmilesquare.com/data-transform-and-load-overview/"&gt;Extract Transform and Load Overview&lt;/a&gt; article. Also be on the lookout for part 3 of this series, Canonical Transformation.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>design</category>
      <category>tutorial</category>
      <category>data</category>
    </item>
    <item>
      <title>Extract Transform and Load Overview</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Fri, 14 Jun 2019 13:24:14 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/extract-transform-and-load-overview-5a6k</link>
      <guid>https://dev.to/ryboflavin42/extract-transform-and-load-overview-5a6k</guid>
      <description>&lt;h2 class="subhead"&gt;Part 1 of the Data Ingest Series&lt;/h2&gt;

&lt;p&gt;The process of extract, transform, and load (ETL) goes by many names:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data acquisition&lt;/li&gt;
&lt;li&gt;Data Ingest&lt;/li&gt;
&lt;li&gt;Enterprise transform and load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But they are all about getting external data into the system.&lt;/p&gt;

&lt;p&gt;The problem that most businesses face is that there are no easy to follow best practices that can guide them in creating ETL pipelines. In this 6-part series, I will address this gap by laying out a simple and flexible set of best practices for creating or evaluating an ETL system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dNrYgCrr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rpihnun1xb5xiph0tkbv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dNrYgCrr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rpihnun1xb5xiph0tkbv.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The importance of good quality data&lt;/h2&gt;

&lt;p&gt;First, let’s discuss why ETL is important to the business. Data is frequently the most important asset in any business. In the information age, it’s easy to see why. Let’s look at these examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon captures buyer data to suggest new items to purchase.&lt;/li&gt;
&lt;li&gt;Facebook tracks user data to use and sell for targeted marketing.&lt;/li&gt;
&lt;li&gt;Google uses user search history to display Google ads.&lt;/li&gt;
&lt;li&gt;The music industry exchanges data in order to sell music, pay royalties, and exchange rights.&lt;/li&gt;
&lt;li&gt;Governments ingest data for threat assessment in order to enhance national security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The list goes on and on...&lt;/p&gt;

&lt;p&gt;So how we handle these data sources, and the quality of those sources, is of high importance for sales, governance, commerce, and in some cases life-and-death decision making.&lt;/p&gt;

&lt;h2&gt;Buy vs. build&lt;/h2&gt;

&lt;p&gt;This subject is deserving of its own blog article and can actually be quite controversial during the due diligence phase. I’m only going to briefly touch on the subject here in this series.&lt;/p&gt;

&lt;h3&gt;What are some of the criteria when evaluating a COTS ETL platform?&lt;/h3&gt;

&lt;p&gt;The common criteria when evaluating building something custom or buying Commercial off the Shelf (COTS) platforms are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Price.&lt;/strong&gt; How much is the license? How much is the cost of building from scratch? Are there maintenance contracts? Will I need to send my employees to external training? Will I need a third-party consultant on staff? What are the hosting costs?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance. &lt;/strong&gt;How difficult is it to upgrade? Are there solid upgrade paths? Where will I host the application? Is this a stand-alone/hosted install? Or can I deploy into the supporting app?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed of ingest.&lt;/strong&gt; How quickly do I need to process data? Custom-built solutions are often much faster than COTS products.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Case.&lt;/strong&gt; Will this application be used on one project or across the entire organization? (Be honest with this one. I’ve worked with countless COTS products bought and installed for a particular project where the organization says, “Let’s beta test this million-dollar product on one project, and if it works, we’ll roll it out to the rest of the organization.” Only they never do, and now they are stuck with a very expensive license.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Governance.&lt;/strong&gt; Do I need to maintain a separate set of users and roles? Can I limit who can access my data pipelines?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility.&lt;/strong&gt; How easy is it to add new input data types? Can I change the process flow?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Who will be developing it?&lt;/strong&gt; COTS vendors often claim that workflows (pipelines) can be created and managed by business users. In practice this requires special training, and a particular type of business user, to maintain. Sometimes companies end up assigning an existing developer to work on the pipeline anyway.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many great products out there that offer flexible pipelines with the kitchen sink included. This seems great! Buy a license, install, create a pipeline, and let’s go! However, consider the hidden costs of this approach.&lt;/p&gt;

&lt;h2&gt;Components&lt;/h2&gt;

&lt;p&gt;The majority of data pipelines can be boiled down to six main components. Some of these components can be reordered, and others may be optional in a particular use case. However, understanding and fully considering each component is the key to ensuring data quality and pipeline reusability.&lt;/p&gt;
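&lt;p&gt;To make the flow concrete before we break each component down, here is a minimal sketch of the six components as one pass over a submission. Everything here is hypothetical and simplified for illustration: the Book canonical object, the dict-based input format, and every function name.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Book:
    """Hypothetical canonical object that every input format maps to."""
    title: str
    isbn: str = ""

def validate_format(raw: dict) -> None:
    # 2. High-level validation: is this a shape we know how to ingest?
    if "rows" not in raw:
        raise ValueError("unknown submission format")

def to_canonical(raw: dict) -> list:
    # 3. Canonical transform: map every input row onto the Book object.
    return [Book(title=r.get("title", ""), isbn=r.get("isbn", "")) for r in raw["rows"]]

def passes_rules(book: Book) -> bool:
    # 4. Business rules: e.g. every book must have a title.
    return bool(book.title)

def identify(book: Book, store: dict) -> Book:
    # 5. Identification: reconcile against previously ingested data.
    return store.get(book.isbn, book)

def ingest(raw: dict, store: dict) -> list:
    # 1. The SIP is the raw submission itself (the `raw` dict here).
    validate_format(raw)
    books = [b for b in to_canonical(raw) if passes_rules(b)]
    books = [identify(b, store) for b in books]
    for b in books:
        store[b.isbn] = b  # 6. Persistence
    return books

store = {}
result = ingest({"rows": [{"title": "Dune", "isbn": "1"}, {"title": ""}]}, store)
```

&lt;p&gt;In this toy run the untitled row is dropped by the business rule, and only the valid record reaches the store. Each step is unpacked in the sections that follow.&lt;/p&gt;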

&lt;h3&gt;Submission Information Package (SIP)&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n78Igkbo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/x7oc0g5ch5q8p2ftm3aa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n78Igkbo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/x7oc0g5ch5q8p2ftm3aa.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The submission information package is the first step in any data pipeline. It is the data itself and how that data gets into the system, also known as the raw data and the data endpoints.&lt;/p&gt;

&lt;p&gt;Raw data is the data files being submitted to or ingested by the system. This could be multiple formats from multiple submitters. The questions to ask are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What data types are we ingesting (Excel, XML, JSON), and are there any particular standards used in the industry (industry SOPs, schemas, XSDs, format best practices)?&lt;/li&gt;
&lt;li&gt;Do we need to create a standard? Or do we need to simplify an existing standard?&lt;/li&gt;
&lt;li&gt;Who will be submitting the data? Will it be one partner, or can multiple partners submit the same data types?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The endpoints describe how the system gets the data. The questions to ask are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Will partners be submitting data to us (Push)?&lt;/li&gt;
&lt;li&gt;Or will we need to get the data from the partners (Pull)?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://tenmilesquare.com/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation/"&gt;SIPs and High-level validation will be discussed in depth in from Part 2 of this series.&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;High-level validation&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LUcLenAZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/0a3adrh4qumtszozq2am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LUcLenAZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/0a3adrh4qumtszozq2am.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;High-level validation is the second step of any ETL process. It is mainly concerned with ensuring that the data being ingested is in a format the system actually knows how to ingest. For example, it checks that an XML file conforms to its schema, or that a CSV/Excel file has the correct columns. It is not concerned with details of the data, such as whether a book has a title or a user has at least one account.&lt;/p&gt;
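&lt;p&gt;A minimal sketch of a high-level check for a CSV submission: accept or reject the file on its shape alone, without looking at a single record. The required column set here is hypothetical.&lt;/p&gt;

```python
import csv
import io

# Hypothetical required header for a book submission file.
EXPECTED_COLUMNS = {"title", "author", "isbn"}

def validate_csv_header(text: str) -> bool:
    """High-level validation: check the file's shape only,
    without inspecting any individual record."""
    reader = csv.reader(io.StringIO(text))
    header = set(next(reader, []))
    return EXPECTED_COLUMNS.issubset(header)

ok = validate_csv_header("title,author,isbn\nDune,Herbert,978-0441013593\n")
bad = validate_csv_header("name,price\nWidget,9.99\n")
```

&lt;p&gt;A file with the wrong columns fails here and never reaches the detailed business rules downstream, which keeps format errors cheap to detect and report.&lt;/p&gt;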

&lt;p&gt;&lt;a href="https://tenmilesquare.com/extract-transform-and-load-part-2-submission-information-packages-and-high-level-validation/"&gt;SIPS and High-level validation will be discussed in-depth in Part 2 of this series.&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Canonical Transform&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m8sB1KtG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hkyv99va5jm0fmznqnbq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m8sB1KtG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hkyv99va5jm0fmznqnbq.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the third step in any ETL process. The canonical transform is composed of two pieces: the canonical model and the data mapping process.&lt;/p&gt;

&lt;p&gt;The canonical form is a set of simple objects that all data formats can be mapped to. Often the canonical will take the form of the target database tables, or of the most complicated or most standardized input data format. Industry data standards, like OMOP in healthcare, also represent canonical data models.&lt;/p&gt;

&lt;p&gt;The data mapping process is the actual process of converting data from the input formats into the canonical model. For example, taking a book title from a CSV file and putting it in the title field of the Book canonical object.&lt;/p&gt;
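&lt;p&gt;The book-title example above might look like this: two mappers, one per input format, landing on the same canonical object. Both input layouts (the 'book_title' CSV column and the nested partner document) are hypothetical.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Book:
    """Canonical object; title and isbn are the fields every input maps to."""
    title: str
    isbn: str

def from_csv_row(row: dict) -> Book:
    # One partner's CSV uses a 'book_title' column (hypothetical layout).
    return Book(title=row["book_title"], isbn=row["isbn"])

def from_partner_json(doc: dict) -> Book:
    # A second partner nests the same information differently (also hypothetical).
    return Book(title=doc["book"]["name"], isbn=doc["book"]["id"])

a = from_csv_row({"book_title": "Dune", "isbn": "1"})
b = from_partner_json({"book": {"name": "Dune", "id": "1"}})
```

&lt;p&gt;Both submissions produce an identical canonical Book, so everything downstream of this step only ever sees one shape of data.&lt;/p&gt;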

&lt;p&gt;This step is important to the process because it allows for downstream processing to become reusable and easy to maintain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tenmilesquare.com/reusability-and-sustainability-using-a-canonical-enterprise-transform-and-load-part-3/"&gt;Canonical Transform will be discussed in-depth in Part 3 of this series.&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Business Rules&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a0zUf_ju--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/an0vbltf9barkjvz88x9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a0zUf_ju--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/an0vbltf9barkjvz88x9.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fourth step takes the canonical from step 3 and applies detailed validation rules to the data. These rules are important for maintaining database integrity. They can be simple validations, such as all books must have a title, or more complex rules, such as a book’s ownership shares must total 100%. Complex business rules usually focus on data relationships or aggregated data.&lt;/p&gt;
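&lt;p&gt;Both kinds of rule can be expressed as small predicates over the canonical data. A sketch of the two examples just mentioned, using hypothetical dict-shaped records:&lt;/p&gt;

```python
import math

def has_title(book: dict) -> bool:
    # Simple rule: every book must have a title.
    return bool(book.get("title"))

def shares_total_100(owners: list) -> bool:
    # Complex rule over aggregated data: ownership shares must total 100%.
    return math.isclose(sum(o["share"] for o in owners), 100.0)

book = {
    "title": "Dune",
    "owners": [{"party": "A", "share": 60.0}, {"party": "B", "share": 40.0}],
}
valid = has_title(book) and shares_total_100(book["owners"])
```

&lt;p&gt;Because the rules run against the canonical form, the same predicates apply no matter which input format the record arrived in.&lt;/p&gt;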

&lt;p&gt;Business rules may also encode what is important to the organization. Data standards are often designed to be flexible across many use cases; however, your organization may only care about a particular aspect of the standard. Business rules are how the system filters the needed data from the unneeded data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tenmilesquare.com/business-rules-extract-transform-and-load-part-4/"&gt;Business Rules will be discussed in-depth in Part 4 of this series.&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Identification&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aNcb6IM9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/mgxeo578e4iod2blnp99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aNcb6IM9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/mgxeo578e4iod2blnp99.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The identification step is the fifth step in the ETL process. It is how the system reconciles new data with previously ingested data. The process compares key identifiers against the known data set (a database, index, NoSQL store, external system, etc.). If a match is found (the data was identified), this step may also resolve how the data is merged; that is, if the data is known and there is a difference between the data store and the incoming data, how is it reconciled? Some use cases will require the data merging to happen in the persistence layer. This decision will ultimately be based on the business rules and the system’s requirements.&lt;/p&gt;
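&lt;p&gt;A sketch of matching on a key identifier (ISBN here) against an in-memory store of known records. The merge policy shown, keeping existing values and filling only missing fields, is just one possibility; the store contents and record shape are hypothetical.&lt;/p&gt;

```python
# Previously ingested data, keyed by the identifier (hypothetical contents).
known = {"978-0441013593": {"isbn": "978-0441013593", "title": "Dune", "pages": None}}

def identify_and_merge(incoming: dict, store: dict) -> dict:
    """Match on the key identifier; if found, keep existing values
    and fill only the fields the store is missing."""
    match = store.get(incoming["isbn"])
    if match is None:
        store[incoming["isbn"]] = incoming  # new (unidentified) record
        return incoming
    for field, value in incoming.items():   # known (identified) record: merge
        if match.get(field) is None:
            match[field] = value
    return match

rec = identify_and_merge({"isbn": "978-0441013593", "title": "Dune", "pages": 412}, known)
```

&lt;p&gt;The incoming record is identified, its missing page count is merged in, and no duplicate row is created.&lt;/p&gt;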

&lt;p&gt;Depending on the needs of the business and the type of data being ingested, this step may be skipped. Also, in some use cases it is important to identify the data before running the business rules, as the rules may vary based on whether the data is known (identified) or new (unidentified).&lt;/p&gt;

&lt;p&gt;The identification step is discussed in-depth in Part 5 of this series.&lt;/p&gt;

&lt;h3&gt;Persistence&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LkmZtgWp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/kpbqolhz8i5my669znzr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LkmZtgWp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/kpbqolhz8i5my669znzr.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the sixth step in the data ingest process. It focuses on storing the data in our datastores for downstream use and for future data identification.&lt;/p&gt;
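&lt;p&gt;One common persistence pattern is to upsert on the key identifier, so that re-running a submission updates records instead of duplicating them. A minimal sketch with SQLite and a hypothetical books table:&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (isbn TEXT PRIMARY KEY, title TEXT)")

def persist(book: dict) -> None:
    # Upsert on the key identifier so re-ingesting a submission is idempotent.
    con.execute("INSERT OR REPLACE INTO books (isbn, title) VALUES (:isbn, :title)", book)

persist({"isbn": "1", "title": "Dune"})
persist({"isbn": "1", "title": "Dune (revised)"})  # same key: updates, no duplicate
count = con.execute("SELECT COUNT(*) FROM books").fetchone()[0]
```

&lt;p&gt;Idempotent writes like this make the pipeline safe to re-run after a partial failure, which matters more than raw write speed in most ingest systems.&lt;/p&gt;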

&lt;p&gt;Persistence is discussed in-depth in Part 5 of this series.&lt;/p&gt;

&lt;h3&gt;Audit/Governance/Work Queue&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kz4M_gDJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/w6pm5iqex6pnt2uuojl9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kz4M_gDJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/w6pm5iqex6pnt2uuojl9.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The audit, governance, and work queue components are not steps in the overall ingest process, but they are important, though optional, components of an ETL system. These are the ‘kitchen sink’ features found in most COTS products. The way the data ingest pipelines are used and managed will ultimately determine whether any of these three components need to be implemented.&lt;/p&gt;

&lt;h4&gt;Audit&lt;/h4&gt;

&lt;p&gt;The audit component is multi-faceted and covers several important areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data provenance:&lt;/strong&gt; Knowing the source of data stored in the data store&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business intelligence:&lt;/strong&gt; This is a buzz term for analytics about the pipeline and the data streaming through it. It answers questions such as:
&lt;ul&gt;
&lt;li&gt;How much data is being ingested from a given source?&lt;/li&gt;
&lt;li&gt;How long are each of these steps taking to process?&lt;/li&gt;
&lt;li&gt;Where in the process does data fail?&lt;/li&gt;
&lt;li&gt;Where are our bottlenecks?&lt;/li&gt;
&lt;li&gt;How many records are being created from each submission?&lt;/li&gt;
&lt;/ul&gt;




&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Decision Audit:&lt;/strong&gt; Why decisions about the data were made.

&lt;ul&gt;
&lt;li&gt;Why was a record identified?&lt;/li&gt;
&lt;li&gt;Why did a data file fail a business rule?&lt;/li&gt;
&lt;li&gt;Why was a record persisted?&lt;/li&gt;
&lt;/ul&gt;




&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;Governance&lt;/h4&gt;

&lt;p&gt;Governance covers who can access, run, and report on our data ingest pipeline, and who can change the pipeline itself.&lt;/p&gt;

&lt;h4&gt;Work Queue&lt;/h4&gt;

&lt;p&gt;The work queue component is not often seen in ingest systems, but depending on the type of data being ingested, it can greatly improve data quality, data transparency, and data throughput.&lt;/p&gt;

&lt;p&gt;The audit, governance, and work queue pieces will be discussed in Part 6 of this series.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;The importance of data acquisition is undeniable, and the quality of the data ultimately determines the quality of the system. With these six basic components, just about any data ingest pipeline can be designed and implemented with assurance of data quality and pipeline reusability. In the next segment of this series we will discuss the Submission Information Package.&lt;/p&gt;

</description>
      <category>design</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Gratitude in your life</title>
      <dc:creator>Ry</dc:creator>
      <pubDate>Fri, 14 Jun 2019 12:54:04 +0000</pubDate>
      <link>https://dev.to/ryboflavin42/gratitude-in-your-life-8g1</link>
      <guid>https://dev.to/ryboflavin42/gratitude-in-your-life-8g1</guid>
      <description>&lt;p&gt;As a response to @Ryland G's "What gets you out of bed in the morning?" I want to know Do you practice gratitude? and if so  How do you practice gratitude?&lt;/p&gt;

&lt;p&gt;This practice has been one of the most impactful in my life. I attribute it to helping reduce my anxiety and increase my happiness. And, as we know, a happy person is a more productive person.&lt;/p&gt;

&lt;p&gt;I practice by either writing down or thinking about 3 things I'm grateful for each day. The important thing I've found through this practice is that you really need to feel emotion for each item.&lt;/p&gt;

&lt;h2&gt;How I divide the list of 3 &lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;1 thing that you are grateful for in your life&lt;/li&gt;
&lt;li&gt;1 small thing that you are grateful for (e.g. the wind blowing, the way the grass looks in the dew, etc.)&lt;/li&gt;
&lt;li&gt;1 thing about yourself that you are grateful for (e.g. your intelligence, or your heart, because it never stopped beating)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How do you practice? What are you grateful for?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>productivity</category>
      <category>lifestyle</category>
    </item>
  </channel>
</rss>
