<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: omtechblog</title>
    <description>The latest articles on DEV Community by omtechblog (@omtechblog).</description>
    <link>https://dev.to/omtechblog</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F701023%2Fd324cb37-f7b8-4cce-93b7-0c685957c1f7.png</url>
      <title>DEV Community: omtechblog</title>
      <link>https://dev.to/omtechblog</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/omtechblog"/>
    <language>en</language>
    <item>
      <title>Scaling your Skills: How size affects software development</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 16:02:51 +0000</pubDate>
      <link>https://dev.to/omtechblog/scaling-your-skills-how-size-affects-software-development-2eno</link>
      <guid>https://dev.to/omtechblog/scaling-your-skills-how-size-affects-software-development-2eno</guid>
      <description>&lt;p&gt;by Peter McCarthy – August 17, 2021&lt;/p&gt;

&lt;p&gt;In November 2020, OpenMarket was acquired by our one-time competitor Infobip. After many months, my team and I have started being exposed to Infobip’s engineering infrastructure, platforms, products and general approach to software engineering. Some of my previous conceptions of software development were blown away, as it became clear how the sheer scale of the company enabled such sophisticated and meticulously planned conventions. But while it is almost certainly best practice to have a log store and some interface for filtering logs, there’s something special about manually SSHing into a machine and simply tail -f’ing the file, and it’s these contrasts that leave me torn between the behemoth software powerhouse of IB and the humble yet agile technical setup of OM.&lt;/p&gt;

&lt;p&gt;Automation over People&lt;br&gt;
One of the things that struck me immediately was how much automation IB (Infobip) had in almost all facets of their CI/CD and monitoring. As mentioned previously, all logs are automatically streamed to a log store which can then be queried through an ElasticSearch interface. This one web application provides log filtering/streaming, setting up alerts, and generally getting a thorough overview of what a specific system (or instance of one) is doing and what’s going wrong. At OM (OpenMarket), in contrast, each service and instance logged to a file on that VM’s disk, which was then collected and archived on another machine. In short, to see what a program was doing, one would SSH in and simply run Bash commands on the log file(s). Naturally automation is part of what makes a good software engineer, and IB’s logging and metrics solution is exemplary. For a company of this scale, it is the ideal solution. However, manually inspecting enormous files has its merits too; not only does it require one to become proficient in Bash and learn how to extract the data you need from a large file, it also provides valuable experience with fundamental tools such as SSH and SSH keys, and with ownership of your own servers.&lt;/p&gt;
&lt;p&gt;Similarly, IB’s CI/CD platform is one of the most polished I’ve seen. What astounded me most is that it was all built in-house and works seamlessly with all of their projects, something I believed cloud providers like AWS were partly designed to offer. It really goes to show how much can be achieved when the resources are available to build and maintain an integration system at this scale. Jenkins automatically builds and releases new versions, and IB’s own deployment manager service sets up Docker containers, ports and inter-service communication with literally the click of a button. OM, on the other hand, generally had bespoke release and deployment processes for each project. Naturally the older ones were more of a challenge to standardise, but having to recollect from the README how to release and deploy an application becomes cumbersome and unnecessarily complex. That’s not to say there were no benefits: being exposed to so many different frameworks and plugins, not to mention how much one can take in about Git and its tagging system for releases, definitely builds skills that take one farther towards becoming a well-rounded engineer. It’s nice when something does what you want by itself, especially if you know what you want to do, but building and using it yourself teaches you exponentially more about software integration and gives you a much more substantial appreciation of how CI/CD can and should be approached.&lt;/p&gt;

&lt;p&gt;Setting the standard&lt;br&gt;
Using a microservices approach to software design means communication is the bedrock of the infrastructure. Services need to be able to discover others and forward data in a common language/encoding, taking other requirements like load balancing and queueing into account. While this can all be done using a common library to facilitate the communication, in a system of this scale a separate system to orchestrate data transfer between services is much more pragmatic, which is how IB has solved this problem. All network communication is done through this application, which acts as a kind of registry for services to ‘check in’. This type of communication abstraction is perfect for a large number of teams, many of which are creating microservices applications, who don’t need to worry about how to discover or ping a service. What makes this so seamless is the use of RPC (Remote Procedure Call) at the application-code level. This effectively means functionalities are viewed as simple library imports rather than API calls. E.g. one service has a method “double(int x)” which simply doubles the input x. Instead of calling this explicitly over HTTP, the user can simply import this service’s library and call the method from the application code.&lt;/p&gt;
&lt;p&gt;This is where IB’s standard service architecture comes in. New services are recommended to use IB’s standard project layout, which includes the functionality for RPC. That is, if a new project is needed, the functionality for RPC and this communications registry is included in the standard template. Thus all services elegantly communicate with each other without having to worry as much about the networking layer. This solution is a masterclass in abstraction, removing arguably unnecessary requirements from projects and streamlining the development of actual business features.&lt;/p&gt;
&lt;p&gt;While this works well for an organisation of this scale, again, for OpenMarket it would likely be more of a hindrance than a boon to productivity. IB’s approach is very inflexible, having a significantly larger number of moving parts than OM’s, and with those crucial infrastructure layers abstracted away, teams are less able to diagnose and troubleshoot issues, leaving it up to the networking team to solve. OM used a message-passing approach for its core SMS platform. This was simply a library that a new service would import and integrate if it wanted to communicate with other services using it. The library handled service discovery, encoding, load balancing and service redundancy, and with it all compacted into a single library, factors like networking were left more open to developers, making for a much more agile approach. As before, it meant team members needed to have some understanding of the networking layer that perhaps the IB approach doesn’t require; packet captures were a somewhat uncommon occurrence at OM, but even the need for them shows how open and flexible this solution was.&lt;/p&gt;
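
&lt;p&gt;OM’s library is proprietary, so purely as a toy illustration, the caller-side shape of such a message-passing library might look something like this (every name below is invented):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# toy sketch of a message-passing client library (hypothetical API)
import itertools
import json

class MessageBus:
    """Stand-in for a library that hides discovery, load balancing and encoding."""
    def __init__(self, instances):
        # a real library would discover instances; here they are given up front
        self._instances = itertools.cycle(instances)

    def send(self, service_name, payload):
        instance = next(self._instances)   # round-robin "load balancing"
        encoded = json.dumps(payload)      # one common encoding for all callers
        print("sending to", service_name, "at", instance, ":", encoded)

# the application code never touches sockets, encodings or host lists directly
bus = MessageBus(["10.0.0.5:7000", "10.0.0.6:7000"])
bus.send("sms-delivery", {"messageId": "abc123", "status": "DELIVERED"})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;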

&lt;p&gt;Summary &amp;amp; Final Thoughts&lt;br&gt;
What this transition has shown me so far is how much engineering needs to keep pace with organisational scale; an approach taken at a hundred engineers won’t necessarily be the most pragmatic at a thousand. It is a trade-off between flexibility and homogenisation. A small team of developers needs to be fast and agile (whatever works best for the team), while a relatively enormous one with a constantly shifting and evolving catalog of services needs to be robust and uniform (whatever works best for all the teams).&lt;/p&gt;
&lt;p&gt;While I believe there was room for alignment at OM, our message-passing approach was lightweight enough to be flexible, while still offering a common platform for a large number of our services. Obviously when resources are tight (which was the case at OM compared to a behemoth like IB), sleeker solutions are called for – and when they are in supply, entire teams focused on this infrastructure can streamline engineering to its maximum.&lt;/p&gt;
&lt;p&gt;My time at OM was invaluable experience for learning about things like SSH and manually setting up a project with logging, config and so on – the sorts of things that every project requires, but that you will not necessarily need to implement yourself. It impressed upon me a feeling of ownership, in the sense that the VM my application was running on was, for the most part, truly my responsibility, and I think that is fantastic experience, especially for newcomers to software engineering, as I was when I joined OM.&lt;br&gt;
At the same time, I believe IB is the next logical step in this learning process – once you understand and appreciate all the moving parts of your software, it should become the responsibility of another team to manage them for you. Your job as a developer becomes much more streamlined, focusing on your service’s code rather than its supporting factors. So while I think the direction of IB’s engineering infrastructure is on the right path, it always helps to know how some of the stuff works under the hood!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>SQLogs</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:54:31 +0000</pubDate>
      <link>https://dev.to/omtechblog/sqlogs-2087</link>
      <guid>https://dev.to/omtechblog/sqlogs-2087</guid>
      <description>&lt;p&gt;OpenMarket – June 3, 2021&lt;/p&gt;

&lt;p&gt;by Parker DeWilde&lt;/p&gt;

&lt;p&gt;I love logs!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Easy to add
Logger configurable to give you useful information for free: time, server, class, request id, etc.
Simple, easy to understand
For the dev team’s eyes, changes are low impact
Fantastic tool for debugging, troubleshooting, and general visibility
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Even better than logs, there are tools around logs that are awesome! My experience is with ELK (Elasticsearch, Logstash, Kibana), but there are other similar amazing tools such as Splunk, Graylog, and many others. Just NOT Cloudwatch.&lt;/p&gt;

&lt;p&gt;ELK is pretty much magic in my book. It reads huge volumes of logs, and then allows almost instant searching of them, returning vast amounts of results in a split second. It can break up your logs to multiple fields, and filter logs based on them. Its full text search works pretty darn well (as long as you are searching for entire tokens), and the UI in Kibana is very intuitive. I don’t even use json logs, which can make it even more powerful!&lt;/p&gt;

&lt;p&gt;Beyond the magic logging tools, there are still many questions that they are not great at answering:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Find all duplicate log lines
    If there are duplicate log lines, find out which request IDs are duplicate
    Find out how many distinct log lines were duplicated, and how many times each was duplicated, and return the 10 most duplicated along with count
Find all request IDs that went through step A but not step B
Find all requests that took more than 50ms*
Find the first and last log lines related to every user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;*This may be possible with JSON logging; I only use plain-text logging, and there I don’t think it is possible.&lt;/p&gt;

&lt;p&gt;Pretty much anything that needs to look at multiple log lines to find an answer is hard, as are things like range queries when you only have text indexing.&lt;/p&gt;

&lt;p&gt;If only there was a tool that was good at taking structured data, and doing things like joining multiple records, range queries, and flexibly grouping data…&lt;/p&gt;

&lt;p&gt;SQL! I love it almost as much as logs! 47 years later it’s still an amazingly accessible and versatile way to query structured data. There may be more sophisticated data models and query languages which all have their own pros and cons, but none as widely used and universally understood as SQL.&lt;/p&gt;

&lt;p&gt;So if I love SQL, and I love logs, why can’t I SQL my logs? Silly question, of course you can SQL your logs! You could probably bug ops to find some specific log management solution that supports SQL queries, or could export json logs to some DB that supports SQL syntax, but if you are like me, the need comes up so rarely that it might be faster and easier to just hack something up on your dev box.&lt;/p&gt;

&lt;p&gt;The first thing you will need to do is somehow convert your logs into a relational form. If you are using JSON, it’s probably pretty easy, just import the json, pick the fields you want, and away you go. If you are like me and use plaintext logs, you need to do a little more manual work, but conversion to relational form is generally pretty straightforward.&lt;/p&gt;
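
&lt;p&gt;For the JSON case, a minimal sketch might look like this (the field names here are made up – use whatever your logger actually emits):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

line_dicts = []
with open("example.log.json") as log_file:
    for line in log_file:
        record = json.loads(line)   # assuming one JSON object per line
        # pick out only the fields we care about
        line_dicts.append({
            "timestamp": record["timestamp"],
            "requestId": record["requestId"],
            "message": record["message"],
        })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;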

&lt;p&gt;My favorite way to do this is with Python. When I’m feeling really fancy, I have Python grab the logs directly from Elasticsearch using the python client. Other times, it may be easier to simply read log files directly.&lt;/p&gt;
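
&lt;p&gt;As a rough sketch of the Elasticsearch route (the cluster URL, index name and query below are invented – adjust them to your own setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch("http://localhost:9200")  # your cluster URL here
# scan() pages through every matching hit, unlike a plain search()
hits = scan(
    es,
    index="app-logs-2021.05.28",
    query={"query": {"match_phrase": {"message": "successfully logged in"}}},
)
lines = [hit["_source"]["message"] for hit in hits]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;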

&lt;p&gt;For the purpose of an example, let’s assume there have been complaints from customers about issues with their login sessions being reset, requiring them to log in multiple times. You are tasked with figuring out what is going on, as well as getting a good grasp on the scope of the issue.&lt;/p&gt;

&lt;p&gt;The first place to start is to isolate users with the issue. If their sessions are being reset, we would be expecting to see the same user logging in multiple times within a small window. There unfortunately is not a single log line that can help us out here, but our friend SQL can come to the rescue!&lt;/p&gt;

&lt;p&gt;Let’s say we have a log file that looks something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;021-05-28T09:11:11,501 INFO pool-1-thread-6 IdentityServiceClient 000-X6HAJ-U32KP-6PD6C-4HIUJ-LGN - User banner with account id 94404 successfully logged in
2021-05-28T09:13:03,469 INFO pool-0-thread-9 IdentityServiceClient 000-R992R-4KV1C-BEE3Y-FIGRD-GST - User spock with account id 16059 earned a gold star!
2021-05-28T09:14:15,727 INFO pool-2-thread-15 SandPounder 000-DZSXL-MT6K8-Y574Z-HLYOO-SPD - Pounded 5 pounds of sand.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each log line has the same elements:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Timestamp
Log Level
Thread
Class/Function
Request Id
Payload/message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You could just split the log lines into these parts and call it a day, but that would make it hard to query based on things like user for logins. I usually like to pre-process my logs to only get the lines I care about. This could be done in Python, but I find it easier to just grep to pull the log lines I want.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep "IdentityServiceClient" example.log | grep "successfully logged in" &amp;gt; example_filtered.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that I only have the lines I care about, and I know they will all be the same format, I need to parse them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import datetime, timezone
# open our file and iterate through the lines
line_dicts = []
with open("example_filtered.log", 'r') as log_file:
   for line in log_file:
       parts = line.split(' ') # space delimited for first fields
       time_string = parts[0]
       log_level = parts[1] # we already filtered so same for every line
       thread = parts[2] # not relevant to our investigation
       function = parts[3] # we already filtered so same for every line
       request_id = parts[4]
       # useful to have numeric form of time for doing math in sql
       epoch_millis = int(
           datetime.strptime(time_string, "%Y-%m-%dT%H:%M:%S,%f")
           .replace(tzinfo=timezone.utc)
           .timestamp() * 1000)
       # notice " - " before every payload. Split on first instance since
       #  string will not appear before the payload, but may appear inside
       payload = line.split(' - ', 1)[1]
       # system does not allow spaces in username/id,
       #  so let's just tokenize same way
       payload_parts = payload.split(' ')
       username = payload_parts[1]
       user_id = payload_parts[5]
       # lets keep the parts we care about in a dictionary
       line_dict = {
           "timestamp": time_string, "epochMillis": epoch_millis,
           "requestId": request_id, "username": username, "userId": user_id
       }
       line_dicts.append(line_dict)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we need to set up the SQL database. I usually just use SQLite, as it’s super easy to set up and powerful enough for most things I do with logs. Plus you can do it all from inside Python!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# continuing on from above file
import sqlite3
import os
DBNAME = 'techblog.db'
# Remove old DB if exists
try:
   os.remove(DBNAME)
except OSError:
   pass
# connect to the DB
with sqlite3.connect(DBNAME) as conn:
   # Get a cursor, which is required to use the connection
   c = conn.cursor()
   # setup the schema
   c.execute('''
   CREATE TABLE logins (
   timestamp TEXT,
   epochMillis INTEGER,
   requestId TEXT,
   username TEXT,
   userId TEXT)
   ''')
   # add the data - takes a string with ?'s and an iterable to fill in the ?'s
   for cur_line in line_dicts:
       c.execute('''
       INSERT INTO logins VALUES(?, ?, ?, ?, ?)
       ''', (cur_line['timestamp'], cur_line['epochMillis'],
             cur_line['requestId'], cur_line['username'], cur_line['userId']))
   # run the optimize command to ensure efficient queries in the future
   # this is sqlite specific -- for some reason it doesn't keep statistics
   # for the optimizer unless you tell it to
   c.execute('PRAGMA analysis_limit=400')
   c.execute('PRAGMA optimize')
   # commit, connection will close automagically due to using "with" to open
   conn.commit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we have our SQL database and are ready to do our investigation! I usually use a tool called DBeaver to look at my databases, though any sqlite-compatible client will work. You can also query the DB directly from Python (though I would only do that if automating something, as it is less ergonomic than a purpose-made database client).&lt;/p&gt;

&lt;p&gt;Connecting to SQLite in DBeaver is easy. New Database Connection -&amp;gt; SQLite -&amp;gt; Next -&amp;gt; Path set to the file you made (in my case techblog.db) -&amp;gt; Finish&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l6zEV7jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iv52uwn67m1moui8cmp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l6zEV7jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iv52uwn67m1moui8cmp1.png" alt="alt text"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You should be able to see the data in your database. I will show one example of how I might use this data, but as long as you know SQL, the world is your oyster!&lt;/p&gt;

&lt;p&gt;Recall we were having issues with users having their sessions expire and needing to log in again. We wanted to see whether this was affecting multiple users, as well as the scope of the issue. I wrote a little SQL script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT a.username, COUNT(*) as 'number of repeated logins within a minute'
FROM logins a, logins b -- simple full join
WHERE a.requestId  != b.requestId  -- don't want to match self
AND a.userId  == b.userId  -- same user logging in
AND a.epochMillis &amp;lt; b.epochMillis -- let’s ensure a is always before b
AND (b.epochMillis - a.epochMillis) &amp;lt; 60000 -- let’s only look at logins less than a minute apart
GROUP BY a.username -- get stats by user
ORDER BY COUNT(*) DESC -- highest counts first
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Which gives us output something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ynlYTJlq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09cyhmqmvhf165spssi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ynlYTJlq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09cyhmqmvhf165spssi2.png" alt="alt text"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Given we only have 7 users, this seems like a pretty big issue, yikes! Better start looking for a solution! Maybe those request IDs in our database could help us out…&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT COUNT ( DISTINCT username ) AS "total users in system"
FROM logins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I hope you can keep this as another tool in your toolbox. It does take a little time to set up, but sometimes it can give you something that would not be easy with a tool like Kibana. Coworkers have also shown me other cool tools, such as q – text as data, which allows you to run SQL directly against well-formatted text files. This isn’t something I use often, maybe once or twice a year, but when I need it, I’m glad that I have this technique at my disposal.&lt;/p&gt;

&lt;p&gt;My example logs and code are available as a gist for your convenience: &lt;a href="https://gist.github.com/pdewilde/86c4d3d1cc718cbf44cdeb09f3e66b56"&gt;https://gist.github.com/pdewilde/86c4d3d1cc718cbf44cdeb09f3e66b56&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Disability and neurodivergence in tech: options and accommodations</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:53:29 +0000</pubDate>
      <link>https://dev.to/omtechblog/disability-and-neurodivergence-in-tech-options-and-accommodations-2bcp</link>
      <guid>https://dev.to/omtechblog/disability-and-neurodivergence-in-tech-options-and-accommodations-2bcp</guid>
      <description>&lt;p&gt;by Susanne Escher – May 17, 2021&lt;/p&gt;

&lt;p&gt;To start off, I need to make a couple of disclaimers.&lt;/p&gt;

&lt;p&gt;Firstly, as this post is about disability, it is necessary to discuss language surrounding disability. There has recently been a lot of discourse about whether to use person-first language (“person with a disability”) or identity-first language (“disabled person”). I am going to go with my personal preference as an autistic person, as well as that of most disability self-advocates whose work I follow (such as Imani Barbarin and Jessica Kellgren-Fozard), and use identity-first language.&lt;/p&gt;

&lt;p&gt;Secondly, this will be more about how to make tech jobs more accessible to disabled tech professionals, rather than about making tech which is itself accessible. I will mostly go into examples, because I know that when I started working, I wanted to see more examples of accommodations that actually helped people like me. I’m mostly speaking about my own experiences, and hope that even something as informal as personal anecdote might still be useful to readers.&lt;/p&gt;

&lt;p&gt;With that out of the way, I will briefly introduce myself and explain why I decided to speak on this topic. I am an autistic data scientist and have recently also been diagnosed with ADHD. Additionally, I have a bunch of relatively minor physical ailments, which aren’t much of a problem in and of themselves, but combine with the autism/ADHD to result in some symptoms like fatigue which do affect my work life. Because of this combination of things, I do refer to myself as disabled, but I realise that some autistic people don’t – however, no matter what label a person chooses, the world is not built for autistic people, so accommodations might still be useful.&lt;/p&gt;

&lt;p&gt;When I left academia to enter the general job market 2 years ago, I knew that I would likely need accommodations of some sort, but did not know at all what those might look like. At the time, I’d come from academia where I could set my own working hours (usually working 4 days a week), but otherwise there was no help available aside from weekly mentoring sessions with a disability mentor. I was hoping that going into full-time employment would go smoothly.&lt;/p&gt;

&lt;p&gt;The first accommodation I asked for, from the very start of working at OpenMarket, was the ability to work slightly odd hours to be able to avoid rush hour on public transport. Crowded public transport can be very difficult for people with sensory difficulties such as autistic people, including myself, to the point of badly impacting productivity at work due to the stress of sensory overload. To begin with, I would work 8-7 (before rush hour till after rush hour) 3 days a week, and 8-3:30 (before rush hour to before rush hour) the other 2 days. This arrangement worked fairly well in helping me avoid sensory overload. Flexible scheduling can be useful for many reasons, for example if someone has very bad pain flareups it is important to be able to schedule around that.&lt;/p&gt;

&lt;p&gt;However, within the first few months I found that I was spending so much time at work that I would fall asleep while working (this is where the fatigue problem comes in), as well as being unable to keep up with my chores at home and stress-eating all day. After some deliberation, I ended up asking my manager for reduced work hours. He sent me to HR and they sorted it all out for me without any issue. I am incredibly thankful that this was an option. I now take Wednesdays off to allow me to recharge mid-week, and it has improved my productivity at work as well as my work-life balance and overall well-being. If part-time roles are not available, another option might be for two or more people to share a full-time position (this is referred to as job sharing). Wanting or needing to work part-time is of course not limited to disabled people but can also be sought by young parents, people who care for family members, or those who want to spend more time on goals that are not work-related.&lt;/p&gt;

&lt;p&gt;A more involved accommodation that OpenMarket is starting to roll out for employees in general over the next couple of months is executive function coaching. Executive functions are cognitive processes which affect the ability to do things we want to do, such as focus, the ability to get started on a new task, task breakdown and so on. These are often impaired in people with neurological or psychological conditions such as autism, ADHD, PTSD or depression, but will also vary greatly within the general population. The coaching consists of workshops which teach employees ways to improve executive functions.&lt;/p&gt;

&lt;p&gt;Also related to executive functions is extended task-planning. If someone struggles to break down tasks into manageable chunks due to their neurotype, it can be helpful for managers and team members to help with writing down step-by-step explanations to speed up the start-up process and progression of tasks for the affected employee. My current manager does this for me and it has been incredibly helpful.&lt;/p&gt;

&lt;p&gt;In summary, the accommodations I have received or am currently seeking at OpenMarket are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flexible work hours
Reduced work hours
Executive function coaching
Task planning assistance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I have also asked friends for their experiences with requesting accommodations at work. Here are some ideas from those conversations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A stable desk instead of hotdesking for autistic/neurodivergent employees
Working from home arrangements
Private office if noise or light level is an issue
Noise-cancelling headphones to help with distractions
A quiet room (OpenMarket had these before the pandemic!)
Fans, humidifiers, minifridges for medications
Text-to-speech software for both blind employees and those with difficulty concentrating on textual information
Standing desks, ergonomic keyboards and mice
Organizational software – Engineering teams in OpenMarket already use this
Training for managers to understand the needs of their disabled employees
Accessible offices and bathrooms
Written instructions
Various accommodations for people with sensory (sight or hearing) disabilities that are too numerous to actually list here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Of course, this list is nowhere near exhaustive and very much skewed toward the needs of neurodivergent employees due to the skew in my own experience and my circle of friends. But I hope my experience and the other ideas listed can still help readers who are looking to figure out their own accessibility needs or those of their employees or loved ones. I know that for myself, figuring these things out on my own was trying, and I have had conversations with friends who felt similarly when first entering mainstream employment.&lt;/p&gt;

&lt;p&gt;If you don’t know how to ask for accommodations, the best points of contact are likely your line manager, HR, and, if extra help, research or knowledge is needed, diversity, equity and inclusion officers or groups within the organization. Some companies may also have specific employees in charge of providing accommodations. In many countries, such as the UK and the US, there are laws in place that protect employees’ right to ask for reasonable accommodations, so it is worth asking for things that will help you improve your wellbeing and productivity at work. At OpenMarket in particular, I have had very positive experiences asking for accommodations, and HR as well as my current manager are not just willing but actually enthusiastic about helping me achieve my full potential.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How To Properly Process a Delivery Receipt</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:47:51 +0000</pubDate>
      <link>https://dev.to/omtechblog/how-to-properly-process-a-delivery-receipt-36bd</link>
      <guid>https://dev.to/omtechblog/how-to-properly-process-a-delivery-receipt-36bd</guid>
      <description>&lt;p&gt;by Parker DeWilde – April 1, 2021&lt;/p&gt;

&lt;p&gt;I was shocked to find out that not all of our customers were properly handling delivery receipts, but when I took a peek at our publicly available documentation I realized that it was missing some key steps. This article will hopefully help patch any holes and allow customers to handle DRs in the correct manner.&lt;/p&gt;

&lt;p&gt;Delivery receipts are the primary way to get feedback about what happened to a message you sent to a handset. Our V4 HTTP API allows you to specify a URL at which to receive a callback when your message gets delivered (or fails).&lt;/p&gt;

&lt;p&gt;The first thing you need in order to receive a delivery receipt is a server with an endpoint at the URL you specify for the DR callback, able to accept an HTTP POST request with a JSON body.&lt;/p&gt;

&lt;p&gt;For the purposes of this blog, I will create an example using Python with Flask, due to the brevity it allows. Other platforms can be used to the same effect. Me and PEP8 don’t get along, so bear with my formatting.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask, request
app = Flask(__name__)
# we will accept POSTs on the root context
@app.route('/', methods=['POST'])
def receiveDr():
 # JSON body will be automatically deserialized into Python data structures
 drJson = request.json
 return "OK"
# For dev purposes, we will use the built-in Flask web server. Production use cases should use a proper server such as gunicorn + nginx proxy
app.run(host='0.0.0.0', port=8080)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Most customers can get to this point correctly using our publicly available documentation, but the next part is where they usually mess up. Once we have the DR JSON, we need to do something with it. Apparently customers have been doing silly things like updating databases to mark the message as received or failed, triggering an email in case of a number no longer being deliverable, or feeding some sort of data visualization.&lt;/p&gt;

&lt;p&gt;These are all wrong. To find the correct way to handle a delivery receipt, you need to look at the name. It is a receipt. To handle it you need a thermal printer. Thanks to U.S.C. 4-01.2021 A.K.A. the KTRA (Keep The Receipts Act), you need to retain all delivery receipts in physical form for 7 years in case of an audit by the FMA (Federal Messaging Agency).&lt;/p&gt;

&lt;p&gt;The first step of course is to acquire a receipt printer. I would recommend an Epson TM-T88V Model M244A. It is widely used, has USB connectivity, and can be found on eBay for less than $100.&lt;/p&gt;

&lt;p&gt;I have timed a single receipt print at 1,040 ms, so to find the number of printers you will need, divide your TPS by 0.962. I would recommend looking at average TPS and adding a little extra, and then queueing up receipts to be printed at peak times. For example, a customer sending 1000 messages a second would want about 1040 printers in parallel to satisfy their load, with possibly an additional 100 to allow for redundancy as well as the addition of receipt paper as it runs out. Each receipt is 13.2 cm, and a roll of paper is 230 feet or 7010 cm, yielding about 531 receipts per roll. Given 1.040 seconds per receipt, you will need an employee to replace the paper every 9 minutes and 12 seconds. Assuming it takes 15 seconds to do so, you get a duty cycle of 97.35%, or an effective duration of 1,068 ms. You will need another employee for every 36 printers you have. For 1000 TPS, that means a staff of 30 paper-refillers.&lt;/p&gt;
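
&lt;p&gt;If you would rather not do the arithmetic by hand, here is a quick back-of-the-envelope calculator that roughly reproduces the numbers above (a sketch only – your receipt length and roll size may vary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math

PRINT_SECONDS = 1.040       # measured time to print one receipt
TPS = 1000                  # receipts (messages) per second to sustain
ROLL_CM, RECEIPT_CM = 7010, 13.2
SWAP_SECONDS = 15           # time for an employee to fit a fresh roll

printers = math.ceil(TPS * PRINT_SECONDS)                  # 1040 printers
receipts_per_roll = int(ROLL_CM / RECEIPT_CM)              # 531 receipts
roll_seconds = receipts_per_roll * PRINT_SECONDS           # ~552 s = 9 min 12 s
duty_cycle = roll_seconds / (roll_seconds + SWAP_SECONDS)  # ~97.35%
printers_per_refiller = roll_seconds // SWAP_SECONDS       # ~36 printers each
refillers = math.ceil(printers / printers_per_refiller)    # ~30 staff

print(printers, receipts_per_roll, round(duty_cycle, 4), refillers)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;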

&lt;p&gt;Once you have a receipt printer, you will need to plug it into your computer. Linux by default will not give your user permissions to access the device. Doing so is not too complicated:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lsusb
...
Bus 001 Device 009: ID 04b8:0202 Seiko Epson Corp. Interface Card UB-U05 for Thermal Receipt Printers [M129C/TM-T70/TM-T88IV]
...
$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The lsusb command will show your printer. We are interested in the vendor ID and the product ID, which are shown separated by a colon. In my case the vendor ID is 04b8 and the product ID is 0202.&lt;/p&gt;

&lt;p&gt;Once we have those, we need to create a group to add our user to, as well as a udev rule to give that group access to the device. Note that you need to put in the product id and vendor id for your printer which you got using lsusb.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo groupadd usbusers
$ sudo addgroup &amp;lt;YOUR_USERNAME_HERE&amp;gt; usbusers
$ sudo sh -c  'echo "SUBSYSTEM==\"usb\", ATTRS{idVendor}==\"04b8\", ATTRS{idProduct}==\"0202\", MODE=\"0664\", GROUP=\"usbusers\"" &amp;gt; /etc/udev/rules.d/99-escpos.rules'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will need to reload udev rules for them to take effect. The easiest way is to reboot your computer.&lt;/p&gt;

&lt;p&gt;That was the hard part! Now for a little Python. First install the python-escpos library. This is how you will issue commands to your thermal printer from Python.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from escpos.printer import *
# Substitute your vendor id and product id here!
p = Usb(0x04b8, 0x0202) # connect to printer, will fail if permissions not right
p.set() # reset the text properties
p.text("hello world\n")
p.cut() # cut the receipt with the automated knife if your printer supports it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should now have a receipt! It’s not quite a delivery receipt yet though! Almost there!&lt;/p&gt;

&lt;p&gt;Here is the entire code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from escpos.printer import * # use for connecting to thermal printer
from PIL import Image # use for opening and scaling images for logo
from flask import Flask, request # use for exposing API
# this scales any image to fit on your printer. 512 width works well for mine, you may need to choose a different value
def getImage(fileName):
 MAX_WIDTH = 512
 img = Image.open(fileName)
 wpercent = (MAX_WIDTH / float(img.size[0]))
 hsize = int((float(img.size[1]) * float(wpercent)))
 img = img.resize((MAX_WIDTH, hsize), Image.ANTIALIAS)
 return img
#printer setup
p = Usb(0x04b8, 0x0202)
p.set()
app = Flask(__name__)
@app.route('/', methods=['POST'])
def receiveDr():
 drJson = request.json # get json from post request
 # most fields are stored inside of a wrapper object, unwrap for convenience
 mtStatus = drJson['deliveryReceipt']['mtStatus']
 # print a logo at the top of the receipt
 # you will need the image in the same directory as the python script
 p.image(getImage('logo.png'))
 # tells printer to center align text we put in
 p.set(align='center')
 # tells the printer to print text
 p.text("\nDELIVERY RECEIPT\n\n")
 # helper function to handle formatting for us
 def printItem(first, second):
  p.set(align='left', bold=True)
  p.text(first)
  p.set(bold=False)
  p.text(str(second) + "\n")
 # print the parts of the DR, we will break into sections with a couple of dividers
 printItem("TICKET ID: ", mtStatus['ticketId'])
 printItem("DLVR DATE: ", mtStatus['deliveryDate'])
 printItem("DLR  CODE: ", mtStatus['code'])
 printItem("DLR  DESC: ", mtStatus['description'])
 printItem("NOTE    1: ", mtStatus['note1'])
 p.set(align='center')
 p.text("\n===DESTINATION===\n\n")
 printItem("DEST ADDR: ", mtStatus['destination']['address'])
 printItem("DST CNTRY: ", mtStatus['destination']['alpha2Code'])
 printItem("DST OP ID: ", mtStatus['destination']['mobileOperatorId'])
 p.set(align='center')
 p.text("\n===SOURCE===\n\n")
 printItem("SRC ADDR: ", mtStatus['source']['address'])
 printItem("SRC  TON: ", mtStatus['source']['ton'])
 # CODE128 barcodes need to be prefixed with ‘{B’, it seems to be a quirk of the library
 p.barcode('{BAPRILFOOLS', 'CODE128', function_type="B", pos='OFF')
 # cut the receipt
 p.cut()
 # return OK to OM -- 200 is implied unless we change the response code
 return "OK"
# development server for testing
if __name__ == "__main__":
 app.run(host='0.0.0.0', port=8080)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;All that is left to do is to call OpenMarket’s SMS API! Note that I am calling with campaign ID rather than source address, as branded messaging customers would. This is due to how my test account is set up. Most customers would specify a source address when calling our API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -X POST 'https://smsc.openmarket.com/sms/v4/mt' \
-H 'Authorization: Basic YXByaWw6Zm9vbHM6KQ==' \
-H 'Content-Type: application/json' \
--data-raw '{
    "mobileTerminate": {
        "options": {
            "campaignId": "MYTESTCAMPAIGN",
            "note1": "I'\''m making a note here: HUGE SUCCESS"
        },
        "destination": {
            "address": "13605556564"
        },
        "message": {
            "content": "Hello World!",
            "type": "text",
            "validityPeriod": 3599
        },
        "delivery": {
            "receiptRequested": "final",
            "url": "http://my-totally-real-url.tld:9088/"
        }
    }
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An example of it working. I am using Postman to make my API request. My phone number is obfuscated, but it is a working example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LpKQYoMH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hko7nynfvvipi2gw3n9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LpKQYoMH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hko7nynfvvipi2gw3n9e.png" alt="A printed OpenMarket receipt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy printing and April Fools!&lt;/p&gt;

&lt;p&gt;Credit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://ramonh.dev/2020/09/22/usb-device-linux-startup/
https://vince.patronweb.com/2019/01/11/Linux-Zjiang-POS58-thermal-printer/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Work From Home in the times of COVID-19 and after</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:46:55 +0000</pubDate>
      <link>https://dev.to/omtechblog/work-from-home-in-the-times-of-covid-19-and-after-4806</link>
      <guid>https://dev.to/omtechblog/work-from-home-in-the-times-of-covid-19-and-after-4806</guid>
      <description>&lt;p&gt;February 26, 2021&lt;br&gt;
(by Vaibhav Sheth, originally posted on LinkedIn.)&lt;/p&gt;

&lt;p&gt;Working from home is the new normal now, and most IT companies and employees are adapting to the need of the hour. Most companies have already implemented either mandatory or voluntary WFH, and this trend is here to stay for some more time; in all probability it may become a permanent norm for most IT organizations, and even other industries.&lt;/p&gt;

&lt;p&gt;This is probably the first time in their careers that employees at scale are working from home on a full-time basis over an extended time frame, and this can pose quite a challenge for both employees and employers.&lt;/p&gt;

&lt;p&gt;For employees, because they have to balance their work life with daily chores and keep track of many things: COVID-19 regulations which change week after week and impact basics like grocery and medical supplies, attending online classes with the kids or otherwise keeping them engaged, taking care of pets and the elderly, and so on. Most of us at some point in life must have dreamed of continuous WFH as a better work-life balance compared to the hassles of a daily commute to the workplace, but now we acknowledge that WFH does come with its own set of challenges.&lt;/p&gt;

&lt;p&gt;For employers, on the other hand, there is a different set of challenges: managing the logistics of shifting employees from their desks to WFH, handling business-enabling functions like physical security, operations and transport needs, taking care of information security, and last but not least the mammoth task of handling customer demands – mainly providing assurance about the company’s capability of adapting to the current situation while ensuring that business is not disrupted.&lt;/p&gt;

&lt;p&gt;While you will have received and read many articles on managing work from home in the times of a pandemic – highlighting the importance of a dedicated and comfortable workspace, communication and socializing, and not getting overwhelmed by the news – I would like to share a few tips and practices that can help in maintaining the balance between full-time work from home and work for home (as most of us call it now).&lt;br&gt;
1) A predictable and steady ‘office’ time&lt;/p&gt;

&lt;p&gt;Clearly defined work hours will help you transition between your home and work activities. A good starting point can be to follow your regular office times, i.e. if you reach the office at 9:30, start your work at the same time. The same goes for winding down for the day. This will enable you to put in the same amount of effort as during regular work hours. On a lighter note, this habit will also ensure you are not negatively impacted when the actual commute to the office starts again.&lt;/p&gt;

&lt;p&gt;Also, make sure that your lunch and other breaks are predictable. If there is a need to attend a call beyond office hours or to be available for collaboration with teams in different time zones, adjust the schedule accordingly.&lt;br&gt;
2) Maintain availability&lt;/p&gt;

&lt;p&gt;If you need to be away for some time due to an unexpected event, let your colleagues know and publish your availability on other channels such as SMS, WhatsApp or Slack. If you are not able to answer official queries during this time, use out-of-office replies and mention when you will be back and whether there is someone who can cover for you while you are away.&lt;br&gt;
3) Clear and concise communication&lt;/p&gt;
3) Clear and concise communication&lt;/p&gt;

&lt;p&gt;While in the office, we can always walk to a colleague’s or manager’s desk and have a face-to-face discussion. Such interactions are now being replaced by telephone and chat conversations. It is important to communicate in a clear and concise manner over emails and conference calls, so that there is no misunderstanding within the team, including between direct reports and managers. Seek help from HR and managers if you face any issues on this front.&lt;br&gt;
4) Getting acquainted with online collaboration and conference tools&lt;/p&gt;
4) Getting acquainted with online collaboration and conference tools&lt;/p&gt;

&lt;p&gt;Never has there been more need to collaborate and use online tools for business efficiency. Keep yourself well acquainted with all the online tools your company and even your customers are using. Video conferencing tools like Zoom and communication/collaboration tools like Slack come with lots of features which can be used for better efficiency. Let your company or team know if there are better online tools which could be used, even on a temporary basis, for better collaboration and communication. Also, make sure you have the right hardware and accessories for using these tools (good headphones, an extra monitor, a docking station, and a handy keyboard if you are not used to typing lengthy emails and notes on a laptop).&lt;/p&gt;

&lt;p&gt;5) Self-accountability&lt;/p&gt;

&lt;p&gt;This is the right time to build trust with your managers and senior leaders that work from home can be effective and productive if utilized in a proper manner. Keep track of your work-in-progress status &amp;amp; deliverables, plan ahead for the week, and keep a constant check on your calendar so that important meetings, especially the ones scheduled after office hours, are not missed. Don’t miss filling out the weekly time-sheet, and also maintain a log of activities performed. The last thing your manager would want to do is micro-management. If you were having 1:1 meetings with your manager in the office, make sure you conduct these catch-up calls on a regular basis.&lt;br&gt;
6) Online training&lt;/p&gt;
6) Online training&lt;/p&gt;

&lt;p&gt;Now is the right time to re-skill and renew yourself with technology and other aspects of your work, learning the soft and hard skills your role needs. There are many online learning websites available, and your company might even be providing online training. Choose what suits you best and start learning.&lt;/p&gt;

&lt;p&gt;7) Utilizing leave&lt;/p&gt;

&lt;p&gt;Yes, taking leave while working from home is absolutely necessary, and you should plan for it. This time can be used for family, learning new hobbies, reviving old ones, or even doing nothing (that is fine too). Make sure you plan your vacations and communicate them in advance.&lt;br&gt;
8) Last but not least: take care of your media intake&lt;/p&gt;
8) Last but not least: take care of your media intake&lt;/p&gt;

&lt;p&gt;Don’t get overwhelmed by too much news and statistics about the pandemic, and don’t think too much about what your connections are posting on social media. If they are baking a new cake recipe, you need not. You can have your own time at your own pace.&lt;/p&gt;

&lt;p&gt;Please let me know your thoughts – and what other practices do you follow for a better work-from-home experience?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>You’re never done improving</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:31:23 +0000</pubDate>
      <link>https://dev.to/omtechblog/you-re-never-done-improving-4b1a</link>
      <guid>https://dev.to/omtechblog/you-re-never-done-improving-4b1a</guid>
      <description>&lt;p&gt;by Tomasz Ptak – February 2, 2021&lt;/p&gt;

&lt;p&gt;Last week I had a socially-locked-down celebration in my home office: our team’s Continuous Integration infrastructure has become a bottleneck. Read on to see why I see this as success.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dv6lANFl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/az5200akhzr0zvrp7o9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dv6lANFl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/az5200akhzr0zvrp7o9e.png" alt="alt text"&gt;&lt;/a&gt; &lt;br&gt;
Source: &lt;a href="https://pixabay.com/photos/scarab-beetle-god-dung-beetle-2490586/"&gt;https://pixabay.com/photos/scarab-beetle-god-dung-beetle-2490586/&lt;/a&gt; &lt;br&gt;
Long long time ago in an OpenMarket far far away&lt;/p&gt;

&lt;p&gt;What would be your worst nightmare when it comes to building projects? Just think of one – or perhaps even twenty would not be enough for you? A few years ago I joined a young team of talented engineers maintaining a lot of the company’s heritage and building foundations for its future. What could possibly go wrong?&lt;/p&gt;

&lt;p&gt;One of my key strategies for developing software is to not trust my own work. Thankfully the engineering world has spoiled us with tools and wisdom on how to prove that the outcomes match intentions: test driven development, testing frameworks, CI/CD pipelines, automation, static code analysis, vulnerability scanners, practices, conferences, ideas, standards, you name it. We just needed to make sure it was put to use. And seeing the state of some of the many projects we owned, we needed that rather quickly.&lt;br&gt;
What really matters?&lt;/p&gt;

&lt;p&gt;I like to see the direction in which my work is heading: the effect I need to achieve is always a moving target, but the value it needs to bring, if communicated clearly and thought through, will guide you in the right direction. In practice it feels like using a compass in a maze – you need to be able to point at the exit but the shortest path will usually get you lost. The maze you need to walk through represents the projects you maintain. The more technical debt you face, the more difficult it is to get past. When you don’t have the technical debt, each project you own is a sprint lane.&lt;/p&gt;

&lt;p&gt;That’s why if I want to deliver value to the customer, I need to ensure I tackle the technical debt.&lt;/p&gt;

&lt;p&gt;If I have the direction and it’s clear, I can focus on where I’m heading. I can take time to ensure I get the feedback I need to prove my rights and find my wrongs. I can stop and think, I can look for better solutions, I can learn, improve and propagate the learnings back on other projects I am responsible for.&lt;/p&gt;

&lt;p&gt;All that will work if the same rules apply on the organisational level: I am challenged to deliver value and am empowered to understand it, buy into it and build my ways to deliver.&lt;br&gt;
What gets in the way?&lt;/p&gt;

&lt;p&gt;I like the way of thinking about technical debt that Dave Rupert has shared in his blog post: the technical debt is a lack of understanding. While a lot of the debt comes from cutting corners to deliver something sooner with an intention to get it right later and usually never getting back to it, this is just part of the story.&lt;/p&gt;

&lt;p&gt;It’s not just the corner cutting that leads to the debt. If engineers do not have the resources to maintain projects which are not actively developed, so that improvements can be propagated across all the projects, this leads to fragmentation, and fragmentation leads to loss of understanding. If we keep trying those latest hot tech things that are so cool to write, adopting each of them in a single application among many projects, this also leads to loss of understanding. If we don’t set team or organisational standards of working, this will also contribute to the problem. And if we don’t treat our development tools and staging environments (be it cloud, docker-based tests or twin servers in your data centre) as the most important project that we deliver – if we don’t touch them because they kind of work – this will contribute to the debt, and we will start tripping over our own feet.&lt;br&gt;
Azimuth&lt;/p&gt;

&lt;p&gt;As a team we are suffering from the tooling fragmentation that the heritage has brought. To make it worse, many of the attempts to get out of it have led to adding even more to it.&lt;/p&gt;

&lt;p&gt;Working across four versions of Java, two of Maven, a few of Gradle, Buck and a few shell scripts to build, with five deployment solutions (three of which were built in house as an attempt to unify everything), can give you an idea.&lt;/p&gt;

&lt;p&gt;The first step to fix it that we took was identifying the outliers and making them similar to the majority of projects. This does not solve all our problems but has helped us to perform step two: declaring the direction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3gHcZYFY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufns6oecgqjx9mzkbg2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3gHcZYFY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufns6oecgqjx9mzkbg2t.png" alt="alt text"&gt;&lt;/a&gt;&lt;br&gt;
A whiteboard featuring development technology choices&lt;/p&gt;

&lt;p&gt;We have met and written down the tools and solutions we use for various aspects of our work and settled on one for each. If we find an area we missed, we add it. If we want to make a change, we need to do it using the “one in – one out” strategy, but for now we strongly prefer the “one out – another one out” approach.&lt;/p&gt;

&lt;p&gt;Having a list of things is not enough though, and we have a lot of wiki pages to prove it, so we decided to go a step further: we have introduced gamification to our projects.&lt;/p&gt;

&lt;p&gt;Our state of project ownership is maintained as code. We provide a list of projects: which team owns each one, where the code is located, and what commands build and release it. Then we prepare a set of checks against each project: does it have a readme? Is it built with Maven or Gradle? And so on. If a given criterion is not satisfied, the project loses points. Finally, we have a scoreboard showing the best projects and worst offenders, which we look at every Friday before the standup. Friday is our fix-it day. If we change our choices, we change our checks and reevaluate the projects.&lt;/p&gt;
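
&lt;p&gt;To make that concrete, here is a minimal sketch of such a check-and-score loop in plain Java. The project fields, the two checks and the point values below are hypothetical, not our actual rules:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch of the check-and-score idea described above;
// fields, checks and point values are illustrative only.
public class ProjectScorer {

    record Project(String name, boolean hasReadme, String buildTool) {}

    // Each criterion is a named predicate over the project metadata.
    static final Map&amp;lt;String, Predicate&amp;lt;Project&amp;gt;&amp;gt; CHECKS = Map.of(
            "has a readme", Project::hasReadme,
            "built with Maven or Gradle",
            p -&amp;gt; p.buildTool().equals("maven") || p.buildTool().equals("gradle"));

    // A project starts with full marks and loses points per failed check.
    static int score(Project project) {
        int points = 100;
        for (Predicate&amp;lt;Project&amp;gt; check : CHECKS.values()) {
            if (!check.test(project)) {
                points -= 10;
            }
        }
        return points;
    }

    public static void main(String[] args) {
        List&amp;lt;Project&amp;gt; projects = List.of(
                new Project("billing-service", true, "gradle"),
                new Project("legacy-gateway", false, "buck"));
        // The "scoreboard": best projects and worst offenders.
        projects.forEach(p -&amp;gt; System.out.println(p.name() + ": " + score(p)));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
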
&lt;p&gt;The journey so far&lt;/p&gt;

&lt;p&gt;When I joined the company, my team used a very old “communal” build server for a few projects and had just set up its own Jenkins. That Jenkins was building only some of the projects, when it worked. Since then we have unblocked it, added new agents and added all the projects; removed outlying build solutions; added static code analysis and vulnerability scanners; added docker-based system and integration tests; parallelized some of the projects’ tests to speed up feedback; simplified the structure of the most actively developed projects to a main branch (no develop), Gradle to build them and tag-based versioning; decommissioned a few services and merged some to reduce the releasing burden; and introduced docker-based Jenkins agents so that builds do not rely on the state of the Jenkins machines.&lt;/p&gt;

&lt;p&gt;This has let us speed up our delivery cycle and given us a better starting point for migrating our services to the cloud, while still delivering value to our customers with sufficient trust that we’re not breaking anything else.&lt;/p&gt;

&lt;p&gt;It looks like we’ve now hit a limit of how many agents we can provision. What has enabled us to deliver at a scale and with confidence is now what is getting in our way. Looks like we’re starting a new cycle of improvement.&lt;br&gt;
Summary&lt;/p&gt;

&lt;p&gt;It’s easy to fall into the trap of chronic not-yet-done-ness when one invests time, effort and emotions in making one’s own and everybody else’s lives easier. The feeling is technically true, but it is important to understand that improving the ways of working is like breathing: we need the air to be of satisfactory quality, the lungs to be in working order, and we need to keep pumping.&lt;/p&gt;

&lt;p&gt;With support from the organisation, with enough self-care within the team and with resources to do the work, we can keep going while focusing on delivering what matters to our customers: quality empathetic interactions.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Cosmo’s Guide to Developer Builds and Deployment in the Cloud</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:30:21 +0000</pubDate>
      <link>https://dev.to/omtechblog/cosmo-s-guide-to-developer-builds-and-deployment-in-the-cloud-373k</link>
      <guid>https://dev.to/omtechblog/cosmo-s-guide-to-developer-builds-and-deployment-in-the-cloud-373k</guid>
      <description>&lt;p&gt;by Phil Jacobs – January 22, 2021&lt;/p&gt;

&lt;p&gt;Yes! My last merge request was accepted, the automated build kicked off, and I’m anxiously waiting for the continuous integration (CI) pipeline to complete.&lt;/p&gt;

&lt;p&gt;What are some of the steps that I could have taken?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Used solid unit tests, part of a test-driven development process
Added new code that calls a web service, and validated the request was accepted
Perhaps, saved a new field to a persistent data source
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;During my test-driven development process, I experimented with the new API I’m consuming. I fulfilled the non-functional requirements related to logging, tracing, authentication, analytics, metrics, and monitoring.  All of these added complexity to the small piece of code  I developed.&lt;/p&gt;

&lt;p&gt;Using your devbox, and your favorite toolset, were you able to validate your changes? It turns out that validating a simple change isn’t always easy to do, even with the use of containers.&lt;/p&gt;

&lt;p&gt;You should rely on the CI pipeline for an automated build, and execution of integration tests to validate a change. To get to a build that passes the integration tests, you evolve your deliverable. Let’s spend some time looking at your development effort in more detail.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You use your IDE to compile, if necessary, and kick off your unit tests.
To sanity test your work, you exercise your changes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In my case my microservice has multiple dependencies (see Figure 1). To exercise my change, I check that I have a recent Application UI on my devbox, make sure my datasource schema is current, and so forth. But there has to be a quicker way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7-i-zcQi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q2lm41y9f272ukkr7h1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7-i-zcQi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q2lm41y9f272ukkr7h1.png" alt="Application UI connects to SSO Service and Microservice, which uses Datasource"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1&lt;/p&gt;

&lt;p&gt;Using CI with a pipeline for developer builds and deployment, I kick off a build and deployment by committing and pushing my work to my feature branch. For this pipeline I’m using GitLab CI. Since my team is sharing the development environment in the cloud, our components are kept up to date.&lt;/p&gt;

&lt;p&gt;Our typical CI pipeline may have the stages Build and Release, see Figure 2. There are two jobs for the build, build-arm64 and build-x86_64.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mwNockUI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/perbrg7sap6d6sjf1qj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mwNockUI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/perbrg7sap6d6sjf1qj6.png" alt="A build pipeline with two build jobs feeding into a release job"&gt;&lt;/a&gt; &lt;br&gt;
Figure 2&lt;/p&gt;

&lt;p&gt;To incorporate feature branch build and deployment, I added the stage Deploy to the pipeline, with jobs to deploy our supported architectures, deploy-develop-arm64 and deploy-develop_x86_64.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eBieyUlp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iiwcfy0sj2f6dfk0xap0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eBieyUlp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iiwcfy0sj2f6dfk0xap0.png" alt="A build pipeline with a build job feeding into two deploy jobs"&gt;&lt;/a&gt; &lt;br&gt;
Figure 3&lt;/p&gt;

&lt;p&gt;In GitLab CI, jobs are defined in the .gitlab-ci.yml file in your repository. So that our jobs for the Deploy stage are only used for development deployments, I made extensive use of rules.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == null &amp;amp;&amp;amp; $CI_COMMIT_REF_NAME != "develop" &amp;amp;&amp;amp; $CI_COMMIT_REF_NAME != "master"'
      when: manual
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The GitLab CI variable CI_MERGE_REQUEST_TARGET_BRANCH_NAME is compared to null so that the job is not run for merge request pipelines. The CI_COMMIT_REF_NAME variable is compared to the dedicated branches we use for continuous integration and release; deployment should only be offered when the commit is not on one of those branches. Finally, the step is manual: the deployment occurs only if the developer wants the build deployed to the development environment.&lt;/p&gt;
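
&lt;p&gt;Put together, a deploy job using this rule might look like the sketch below. The stage layout and deploy script are assumptions for illustration, not our actual pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy-develop-arm64:
  stage: deploy
  script:
    # Illustrative deploy command; substitute your own tooling.
    - ./deploy.sh --arch arm64 --env develop
  rules:
    # Skip merge request pipelines and the dedicated branches;
    # everything else gets an optional, manually triggered deployment.
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == null &amp;amp;&amp;amp; $CI_COMMIT_REF_NAME != "develop" &amp;amp;&amp;amp; $CI_COMMIT_REF_NAME != "master"'
      when: manual
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;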

&lt;p&gt;In conclusion, adding a development deployment step automates a frequent developer task. The benefits: setup on your devbox is simpler, the automation is there for the whole team, tedious troubleshooting of devboxes is reduced, and you always have a known working state in a development environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mzms_LEb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd3ctfxwg0yno16xd0cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mzms_LEb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd3ctfxwg0yno16xd0cu.png" alt="a ginger cat"&gt;&lt;/a&gt;&lt;br&gt;
 Cosmo&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to enable 100+ developers to deploy cloud resources in a controlled fashion</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:19:22 +0000</pubDate>
      <link>https://dev.to/omtechblog/how-to-enable-100-developers-to-deploy-cloud-resources-in-a-controlled-fashion-3hjc</link>
      <guid>https://dev.to/omtechblog/how-to-enable-100-developers-to-deploy-cloud-resources-in-a-controlled-fashion-3hjc</guid>
      <description>&lt;p&gt;by Bryan Wood – December 21, 2020&lt;/p&gt;

&lt;p&gt;Governance strategy in the cloud is a great new challenge that often gets overlooked. I’ve seen lots of organizations open an AWS account and turn developers loose to learn and deploy production services, only to realize later that there are serious security consequences, cost ramifications, and infrastructure sprawl that they were not prepared to deal with.&lt;/p&gt;

&lt;p&gt;A full-blown cloud initiative at an already profitable company is too wide a topic to address in a single article like this, so let’s zoom in on this one specific concern and look at how we’ve addressed part of that initiative at OpenMarket.&lt;/p&gt;

&lt;p&gt;There’s a long list of cloud providers, and in each one you can deploy and configure resources with effectively infinite complexity, so writing down the standards you would like to enforce early on is a good idea, in order to have something to work towards. Start with, at the very least, a tagging standard.&lt;/p&gt;

&lt;p&gt;Keeping these standards in mind, it’s time to select tooling to manage your configurations and deploy your resources. I’m going to select Terraform for you. You can Google its strengths and shortcomings, but in short, it supports probably the widest variety of services across the widest variety of cloud providers. If you only need to deploy one thing, like Kubernetes, you might be better off choosing a tool that manages that specific toolchain or technology and its lifecycle. Our aim was to deploy any of the crazy number of services in AWS (or other cloud providers) that a developer might choose, and to manage the state of that service after it’s deployed. Terraform is a code-controlled way to do that.&lt;/p&gt;

&lt;p&gt;Architecture&lt;/p&gt;

&lt;p&gt;We’re going to work in AWS in this example, but most of the examples should translate well across cloud providers. Terraform is code, and like any code, you shouldn’t test it in production. We’ve found that developing terraform in a “Development” account removes the risk of accidentally clobbering production resources. We have a “Staging” account for good measure, as well as the “Production” account where we run our production workloads.&lt;/p&gt;

&lt;p&gt;For each project we follow an “Environment Branches” pattern in git to make deployment very simple. Contributors follow normal git contribution practices and changes end up in the master branch. Each environment branch (“dev”, “stg” or “prd”) has automation that will pick up that code and apply it to the corresponding account.&lt;/p&gt;

&lt;p&gt;We ensure that all resources in production have been deployed by terraform by only providing developers with read access to Staging and Production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yu7POAoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s33erwfpd7ir9lxtkbeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yu7POAoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s33erwfpd7ir9lxtkbeu.png" alt="alt text"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Terraform Modules&lt;/p&gt;

&lt;p&gt;At the beginning, we didn’t have a strong convention or enough people supporting the platform to enforce anything. As time went on, the desire for support and consistency became very real. We’ve been spending a lot of time developing our terraform modules so that developers will never have to request resources directly, only consume highly abstracted modules. If they need a different feature in a platform that they are deploying, they can submit a feature request to be finished by a Site Reliability Engineer.&lt;/p&gt;

&lt;p&gt;The Vision&lt;/p&gt;

&lt;p&gt;If we abstract these platforms enough, projects will read a little bit more like a bill of materials than actual code. Our goal is that nobody actually calling a module from a project will have to be particularly fluent in terraform. It might be that a savvy Technical Programme Manager could fill out the necessary requirements and get the resources deployed before their developers even need them.&lt;/p&gt;

&lt;p&gt;Example of calling an abstracted module for a new project:&lt;/p&gt;

&lt;p&gt;module "my_new_application" {&lt;br&gt;
  providers = {&lt;br&gt;
    aws = aws.us-west-2&lt;br&gt;
  }&lt;br&gt;
  source = "&lt;a href="mailto:git@git.company.com"&gt;git@git.company.com&lt;/a&gt;:cloud/om-modules.git/modules/beanstalk_uber_module?ref=v0.10"&lt;br&gt;
  has_mysql      = true&lt;br&gt;
  has_redis      = true&lt;br&gt;
  has_s3_bucket  = true&lt;br&gt;
  instance_types = "t3.micro"&lt;br&gt;
  tags = merge(local.common_tags,&lt;br&gt;
    {&lt;br&gt;
    Function = "awesome new microservice"&lt;br&gt;
  })&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;The module might have a bunch of other parameters with sane defaults, and documentation would make those options clear. I find the above easy to read, even for a non-developer.&lt;br&gt;
Some Lessons Learned&lt;br&gt;
It feels unsafe to deploy data services with data or state in them using Terraform. I’m afraid that a change is going to unexpectedly delete a resource and my data.&lt;/p&gt;

&lt;p&gt;In Terraform there are lots of useful “meta arguments”. Any resource block can have a “lifecycle” block, and inside it you can use the “prevent_destroy” parameter. You don’t want to use it too often, because it will keep “terraform destroy” commands from working and can break your pipelines in multiple places. It will, however, prevent a datasource from being accidentally destroyed, and it is a good idea to add it to resources that might contain important data.&lt;/p&gt;
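
&lt;p&gt;As a minimal sketch, assuming an AWS database resource (the resource type and arguments below are illustrative, and a real instance needs more configuration than shown):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_db_instance" "customer_data" {
  identifier     = "customer-db"
  engine         = "mysql"
  instance_class = "db.t3.micro"

  lifecycle {
    # Any plan that would destroy this resource now fails instead of applying.
    prevent_destroy = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
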
&lt;p&gt;What if I want resources configured differently in dev, stg, or prd environments?&lt;/p&gt;

&lt;p&gt;We handle this by having dev, stg, and prd .tfvars files. This makes it very simple to adjust values for parameters on a per-environment basis. Things that we might make different between environments include instance sizes, number of instances, tag values, or configurations like where you want your logs sent.&lt;/p&gt;
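
&lt;p&gt;For example, the per-environment files might differ only in sizing. The variable names and values below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# dev.tfvars
instance_types = "t3.micro"
instance_count = 1

# prd.tfvars
instance_types = "m5.large"
instance_count = 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;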

&lt;p&gt;Parting Shots&lt;/p&gt;

&lt;p&gt;This approach has some shortcomings. Without additional tooling we don’t have a good way of estimating costs or taking financial data into consideration before merging to an environment branch. Regardless, with consistent tagging, at least the financial data is visible to us as we consume cloud resources.&lt;/p&gt;

&lt;p&gt;By giving developers more freedom in our development environment and the ability to provision resources without terraform we have some untagged and unmanaged resources that can cause unnecessary spend as well as other issues. We use Cloud Custodian policies to mop up non-compliant resources.&lt;/p&gt;

&lt;p&gt;There are lots of management solutions for cloud providers and we’re constantly evaluating new options. As we’ve progressed, this pattern with Terraform has matured into something that we find supportable and flexible enough to allow us to leverage the vast array of services that modern cloud providers make available.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to send a text message (using netcat to send SMS over SMPP)</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:18:00 +0000</pubDate>
      <link>https://dev.to/omtechblog/how-to-send-a-text-message-using-netcat-to-send-sms-over-smpp-38dh</link>
      <guid>https://dev.to/omtechblog/how-to-send-a-text-message-using-netcat-to-send-sms-over-smpp-38dh</guid>
      <description>&lt;p&gt;Andy Balaam – November 4, 2020&lt;/p&gt;

&lt;p&gt;SMPP is a binary protocol used by phone companies to send text messages, otherwise known as SMS messages.&lt;/p&gt;

&lt;p&gt;It can work over TCP, so we can use netcat on the command line to send messages.&lt;/p&gt;

&lt;p&gt;A much better way to understand this protocol is to use Wireshark’s SMPP protocol support, but for this article we will do it the hard way.&lt;br&gt;
Setting up&lt;/p&gt;

&lt;p&gt;[Note: the netcat I am using is Ncat 7.70 on Linux.]&lt;/p&gt;

&lt;p&gt;The server that receives messages is called an SMSC. You may have your own one, but if not, you can use the CloudHopper one like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install make maven  # (or similar on non-Debian-derived distros)
git clone https://github.com/fizzed/cloudhopper-smpp.git
cd cloudhopper-smpp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you are a little slow, like me, I’d suggest making it wait a bit longer for bind requests before giving up on you. To do that, edit the main() method of src/test/java/com/cloudhopper/smpp/demo/ServerMain.java to add a line like this: configuration.setBindTimeout(500000); on about line 80, near the other similar lines. This will make it wait 500 seconds for you to send a BIND_TRANSCEIVER, instead of giving up after just 5 seconds.&lt;/p&gt;

&lt;p&gt;Once you’ve made that change, you can run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now you have an SMSC running!&lt;/p&gt;

&lt;p&gt;Leave that open, and go into another terminal, and type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkfifo tmpfifo
nc 0.0.0.0 2776 &amp;lt; tmpfifo | xxd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The mkfifo part creates a “fifo” – a named pipe through which we will send our SMPP commands.&lt;/p&gt;

&lt;p&gt;The nc part starts Ncat, connecting to the SMSC we started.&lt;/p&gt;

&lt;p&gt;The xxd part will take any binary data coming out of Ncat and display it in a more human-readable way.&lt;/p&gt;

&lt;p&gt;Leave that open too, and in a third terminal type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exec 3&amp;gt; tmpfifo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This makes everything we send to file descriptor 3 go into the fifo, and therefore into Ncat.&lt;/p&gt;

&lt;p&gt;Now we have a way of sending binary data to Ncat, which will send it on to the SMSC and print out any responses.&lt;/p&gt;

&lt;p&gt;Note: we will be using SMPP version 3.4 since it is in the widest use, even though it is not the newest.&lt;br&gt;
Terminology&lt;/p&gt;

&lt;p&gt;“SMPP” is the protocol we are speaking, which we are using over TCP/IP.&lt;/p&gt;

&lt;p&gt;An SMSC is a server (which receives messages intended for phones and sends back responses and receipts).&lt;/p&gt;

&lt;p&gt;We will be acting as an ESME or client (which sends messages intended for phones and receives responses and receipts).&lt;/p&gt;

&lt;p&gt;The units of information that are passed back and forth in SMPP are called “PDUs” (Protocol Data Units) – these are just bits of binary data that flow over the TCP connection between two computers.&lt;/p&gt;

&lt;p&gt;The spec talks about “octets” – this means 8-bit bytes.&lt;/p&gt;

&lt;p&gt;TLV stands for tag-length-value, but really it’s an optional extra piece of data included within a PDU.&lt;br&gt;
ENQUIRE_LINK&lt;/p&gt;

&lt;p&gt;First, we’ll check the SMSC is responding, by sending an ENQUIRE_LINK, which is used to ask the SMSC whether it’s there and working.&lt;/p&gt;

&lt;p&gt;Go back to the third terminal (where we ran exec) and type this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LEN16='\x00\x00\x00\x10'
ENQUIRE_LINK='\x00\x00\x00\x15'
NULL='\x00\x00\x00\x00'
SEQ1='\x00\x00\x00\x01'

echo -n -e "${LEN16}${ENQUIRE_LINK}${NULL}${SEQ1}" &amp;gt;&amp;amp;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Explanation: an ENQUIRE_LINK PDU consists of:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;4 bytes to say the length of the whole PDU in bytes. ENQUIRE_LINK PDUs are always 16 bytes, “00000010” in hex. I called this LEN16.
4 bytes to say what type of PDU this is. ENQUIRE_LINK is “00000015” in hex. I called this ENQUIRE_LINK.
4 bytes that are always zero in ENQUIRE_LINK. I called this NULL.
4 bytes that identify this request, called a sequence number. The response from the server will include this so we can match responses to requests. I called this SEQ1.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check back in the second terminal (where you ran nc). If everything worked, you should see something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00000000: 0000 0010 8000 0015 0000 0000 0000 0001  ................
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ignoring the first and last parts (which are how xxd formats its output), the response we receive is four 4-byte parts, very similar to what we sent:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;4 bytes to say the length of the whole PDU in bytes. Here it is “00000010” hex, or 16 decimal.
4 bytes to say what type of PDU this is. Here it is “80000015” in hex, which is the code for ENQUIRE_LINK_RESP.
4 bytes for the success status of the ENQUIRE_LINK_RESP. This is always “00000000”, which means success and is called ESME_ROK in the spec.
4 bytes that match the sequence number we sent. This is “00000001”, as we expected.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;BIND_TRANSCEIVER&lt;/p&gt;

&lt;p&gt;Now we can see that the SMSC is working, let’s “bind” to it. That means something like logging in: we convince the SMSC that we are a legitimate client, and tell it what type of connection we want, and, assuming it agrees, it will hold the connection open for us for as long as we need.&lt;/p&gt;

&lt;p&gt;We are going to bind as a transceiver, which means both a transmitter and receiver, so we can both send messages and receive responses.&lt;/p&gt;

&lt;p&gt;Send the bind request like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LEN32='\x00\x00\x00\x20'
BIND_TRANSCEIVER='\x00\x00\x00\x09'
NULL='\x00\x00\x00\x00'
SEQ2='\x00\x00\x00\x02'
SYSTEM_ID="sys\x00"
PASSWORD="pas\x00"
SYSTEM_TYPE='typ\x00'
INTERFACE_VERSION='\x34'
ADDR_TON='\x00'
ADDR_NPI='\x00'
ADDRESS_RANGE='\x00'

echo -n -e "${LEN32}${BIND_TRANSCEIVER}${NULL}${SEQ2}${SYSTEM_ID}${PASSWORD}${SYSTEM_TYPE}${INTERFACE_VERSION}${ADDR_TON}${ADDR_NPI}${ADDRESS_RANGE}" &amp;gt;&amp;amp;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Explanation: this PDU is 32 bytes long, so the first thing we send is “00000020” hex, which is 32.&lt;/p&gt;

&lt;p&gt;Then we send “00000009” for the type: BIND_TRANSCEIVER, 4 bytes of zeros, and a sequence number – this time I used 2.&lt;/p&gt;

&lt;p&gt;That was the header. Now the body of the PDU starts with a system id (basically a username), a password, and a system type (extra info about who you are). These are all variable-length null-terminated strings, so I ended each one with \x00.&lt;/p&gt;

&lt;p&gt;The rest of the body is some options about the types of phone number we will be sending from and sending to – I made them all “00” hex, which means “we don’t know”.&lt;/p&gt;

&lt;p&gt;If it worked, you should see this in the nc output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00000000: 0000 0021 8000 0009 0000 0000 0000 0002  ...!............
00000010: 636c 6f75 6468 6f70 7065 7200 0210 0001  cloudhopper.....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As before, the first 4 bytes are for how long the PDU is – 33 bytes – and the next 4 bytes are for what type of PDU this is – “80000009” is for BIND_TRANSCEIVER_RESP which is the response to a BIND_TRANSCEIVER.&lt;/p&gt;

&lt;p&gt;The next 4 bytes are for the status – these are zeroes which indicates success (ESME_ROK) again. After that is our sequence number (2).&lt;/p&gt;

&lt;p&gt;The next 15 bytes are the characters of the word “cloudhopper” followed by a zero – this is the system id of the SMSC.&lt;/p&gt;

&lt;p&gt;The next byte (“01”) – the last one we can see – is the beginning of a “TLV”, or optional part of the response. The xxd program actually delayed the last byte of the output, so we can’t see it yet, but it is “34”. Together, “0134” means “the interface version we support is SMPP 3.4”.&lt;br&gt;
SUBMIT_SM&lt;/p&gt;

&lt;p&gt;The reason why we’re here is to send a message. To do that, we use a SUBMIT_SM:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LEN61='\x00\x00\x00\x3d'
SUBMIT_SM='\x00\x00\x00\x04'
SEQ3='\x00\x00\x00\x03'
SERVICE_TYPE='\x00'
SOURCE_ADDR_TON='\x00'
SOURCE_ADDR_NPI='\x00'
SOURCE_ADDR='447000123123\x00'
DEST_ADDR_TON='\x00'
DEST_ADDR_NPI='\x00'
DESTINATION_ADDR='447111222222\x00'
ESM_CLASS='\x00'
PROTOCOL_ID='\x01'
PRIORITY_FLAG='\x01'
SCHEDULE_DELIVERY_TIME='\x00'
VALIDITY_PERIOD='\x00'
REGISTERED_DELIVERY='\x01'
REPLACE_IF_PRESENT_FLAG='\x00'
DATA_CODING='\x03'
SM_DEFAULT_MSG_ID='\x00'
SM_LENGTH='\x04'
SHORT_MESSAGE='hihi'

echo -n -e "${LEN61}${SUBMIT_SM}${NULL}${SEQ3}${SERVICE_TYPE}${SOURCE_ADDR_TON}${SOURCE_ADDR_NPI}${SOURCE_ADDR}${DEST_ADDR_TON}${DEST_ADDR_NPI}${DESTINATION_ADDR}${ESM_CLASS}${PROTOCOL_ID}${PRIORITY_FLAG}${SCHEDULE_DELIVERY_TIME}${VALIDITY_PERIOD}${REGISTERED_DELIVERY}${REPLACE_IF_PRESENT_FLAG}${DATA_CODING}${SM_DEFAULT_MSG_ID}${SM_LENGTH}${SHORT_MESSAGE}" &amp;gt;&amp;amp;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;LEN61 is the length in bytes of the PDU, SUBMIT_SM is the type of PDU, and SEQ3 is a sequence number, as before.&lt;/p&gt;

&lt;p&gt;SOURCE_ADDR is a null-terminated (i.e. it ends with a zero byte) string of ASCII characters saying who the message is from. This can be a phone number, or a name (but the rules about what names are allowed are complicated and region-specific). SOURCE_ADDR_TON and SOURCE_ADDR_NPI give information about what type of address we are providing – we set them to zero to mean “we don’t know”.&lt;/p&gt;

&lt;p&gt;DESTINATION_ADDR, DEST_ADDR_TON and DEST_ADDR_NPI describe the phone number we are sending to.&lt;/p&gt;

&lt;p&gt;ESM_CLASS tells the SMSC how to treat your message – we use “store and forward” mode, which means keep it and send it when you can.&lt;/p&gt;

&lt;p&gt;PROTOCOL_ID is different depending what type of SMSC you are using. We assume GSM here, and provide a value that works for GSM.&lt;/p&gt;

&lt;p&gt;PRIORITY_FLAG means how important the message is – we used “interactive”.&lt;/p&gt;

&lt;p&gt;SCHEDULE_DELIVERY_TIME is when to send – we say “immediate”.&lt;/p&gt;

&lt;p&gt;VALIDITY_PERIOD means how long should this message live before we give up trying to send it (e.g. if the user’s phone is off). We use “default” so the SMSC will do something sensible.&lt;/p&gt;

&lt;p&gt;REGISTERED_DELIVERY gives information about whether we want a receipt saying the message arrived on the phone. We say “yes please”.&lt;/p&gt;

&lt;p&gt;REPLACE_IF_PRESENT_FLAG tells it what to do if a duplicate of this message is sent to the SMSC before this one is delivered – the value we used means “don’t replace”.&lt;/p&gt;

&lt;p&gt;DATA_CODING states what character encoding you are using to send the message text – we used “Latin 1”, which means ISO-8859-1.&lt;/p&gt;

&lt;p&gt;SM_DEFAULT_MSG_ID allows us to use one of a handful of hard-coded standard messages – we say “no, use a custom one”.&lt;/p&gt;

&lt;p&gt;SM_LENGTH is the length in bytes of the “short message” – the actual text that the user will see on the phone screen.&lt;/p&gt;

&lt;p&gt;SHORT_MESSAGE is the short message itself – our message is all ASCII characters, but we could use any bytes and they will be interpreted as characters in ISO-8859-1 encoding.&lt;/p&gt;

&lt;p&gt;You should see a response in the other terminal like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00000020: 3400 0000 1180 0000 0400 0000 0000 0000  4...............
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The initial “34” is the left-over byte from the previous message as mentioned above. After that, we have:&lt;/p&gt;

&lt;p&gt;“00000011” for the length of this PDU (17 bytes).&lt;/p&gt;

&lt;p&gt;“80000004” for the type – SUBMIT_SM_RESP which tells us whether the message was accepted (but not whether it was received).&lt;/p&gt;

&lt;p&gt;“00000000” for the status – zero means “OK”.&lt;/p&gt;

&lt;p&gt;The last two bytes are chopped off again, but what we actually get back is:&lt;/p&gt;

&lt;p&gt;“00000003”, which is the sequence number, and then:&lt;/p&gt;

&lt;p&gt;“00” which is a null-terminated ASCII message ID: in this case the SMSC is saying that the ID it has given this message is “”, which is probably not very helpful! If this ID were not empty, it would help us later if we receive a delivery receipt, or if we want to ask about the message, or change or cancel it.&lt;br&gt;
DELIVER_SM&lt;/p&gt;

&lt;p&gt;If you stop the SMSC process (the one we started with make server) by pressing Ctrl-C, and start a different one with make server-echo, and then repeat the other commands (note you need to be quick because you only get 5 seconds to bind before it gives up on you – make similar changes to what we did in ServerMain to ServerEchoMain if this causes problems), you will receive a delivery receipt from the server, which looks like this:&lt;/p&gt;

&lt;p&gt;“0000003d” for the length of this PDU (59 bytes).&lt;/p&gt;

&lt;p&gt;“00000005” for the type (DELIVER_SM).&lt;/p&gt;

&lt;p&gt;“00000000” for the unused command status.&lt;/p&gt;

&lt;p&gt;“00000001” as a sequence number. Note, this is unrelated to the sequence number of the original message: to match it with the original message, we must use the message ID provided in the SUBMIT_SM_RESP.&lt;/p&gt;

&lt;p&gt;“0000003400” to mean we are using SMPP 3.4. (This is a null-terminated string of bytes.)&lt;/p&gt;

&lt;p&gt;“00” and “00” for the TON and NPI of the source address, followed by the source address itself, which is a null-terminated ASCII string: “34343731313132323232323200”. This translates to “447111222222”, which was the destination address of our original message. Note: some SMSCs switch the source and destination addresses like this in their delivery receipts, and some don’t, which makes life interesting.&lt;/p&gt;

&lt;p&gt;“00” and “00” for the TON and NPI of the destination address, followed by “34343730303031323331323300” for the address itself, which translates to “447000123123”, as expected.&lt;/p&gt;

&lt;p&gt;The DELIVER_SM PDU continues with much of the information repeated from the original message, and the SMSC is allowed to provide a short message as part of the receipt – in our example the cloudhopper SMSC repeats the original message. Some SMSCs use the short message to provide information such as the message ID and the delivery time, but there is no formal standard for how to provide it. Other SMSCs use a TLV to provide the message ID instead.&lt;/p&gt;

&lt;p&gt;Somewhere in the DELIVER_SM you should find some indication of whether the message was actually delivered to the phone. Often it’s in a TLV called “message state”, but it could also be in the message body. Bizarrely, a state of “4” is the normal code for “delivered successfully”.&lt;/p&gt;

&lt;p&gt;In order to complete the conversation, you should provide a DELIVER_SM_RESP, and then an UNBIND; hopefully, based on what we’ve done so far and the SMPP 3.4 standard, you can figure those out.&lt;/p&gt;
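
&lt;p&gt;As a starting point, here is an untested sketch of those two PDUs, following the same pattern as the earlier commands (it reuses the NULL and LEN16 variables from before). A DELIVER_SM_RESP is 17 bytes (a 16-byte header plus a one-byte empty message ID), and its sequence number must echo the one from the DELIVER_SM, which was “00000001” in our case; an UNBIND is a bare 16-byte header with type “00000006”:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# DELIVER_SM_RESP: 16-byte header + a one-byte (empty) message ID
LEN17='\x00\x00\x00\x11'
DELIVER_SM_RESP='\x80\x00\x00\x05'
SEQ_FROM_DELIVER_SM='\x00\x00\x00\x01'
MESSAGE_ID='\x00'
echo -n -e "${LEN17}${DELIVER_SM_RESP}${NULL}${SEQ_FROM_DELIVER_SM}${MESSAGE_ID}" &amp;gt;&amp;amp;3

# UNBIND: just a 16-byte header with its own sequence number
UNBIND='\x00\x00\x00\x06'
SEQ4='\x00\x00\x00\x04'
echo -n -e "${LEN16}${UNBIND}${NULL}${SEQ4}" &amp;gt;&amp;amp;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
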
&lt;p&gt;You did it&lt;/p&gt;

&lt;p&gt;SMPP is a binary protocol layered directly on top of TCP, which makes it slightly harder to work with by hand than the HTTP protocols with which many of us are more familiar, but I hope I’ve convinced you it’s possible to understand what’s going on without resorting to some kind of heavyweight debugging tool or library.&lt;/p&gt;

&lt;p&gt;Happy texting!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A tale of an event-driven platform</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:09:56 +0000</pubDate>
      <link>https://dev.to/omtechblog/a-tale-of-an-event-driven-platform-hjo</link>
      <guid>https://dev.to/omtechblog/a-tale-of-an-event-driven-platform-hjo</guid>
      <description>&lt;p&gt;by Julio Cesar Monroy – October 1, 2020&lt;/p&gt;

&lt;p&gt;Not so long ago, my team was tasked with creating new software that was capable of allowing our customers to model end-user interactions (known as conversations) in a very simple and flexible way.&lt;/p&gt;

&lt;p&gt;In a nutshell, the requirements of the system can be summed up as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Interactions are triggered by external actions, a user sending a text message, a broadcast to announce a new product, etc.
A conversation likely requires more information that should be retrieved from an external source, like variables for personalization.
The conversation lifespan could range from a few minutes to several days.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A fairly complex system that requires a high degree of integration with other internal and external tools – how do we solve the problem?&lt;br&gt;
Enter event-driven architecture (EDA)&lt;/p&gt;

&lt;p&gt;Per requirements, it was very clear that the system’s nature was asynchronous: reacting to incoming external actions, processing them, and, potentially, waiting for another external action to happen.&lt;/p&gt;

&lt;p&gt;Our main goal was to architect the system in a way that embraced that asynchronous nature while maintaining a high degree of resiliency and staying flexible enough to accommodate future enhancements without requiring any major re-work. We found that the event-driven paradigm really fit our needs.&lt;/p&gt;

&lt;p&gt;An event-driven architecture is modeled after events: an event is an action that has occurred, like a change of state or an incoming external action, and is significant enough that your system should process it.&lt;/p&gt;

&lt;p&gt;The main ideas of an event-driven architecture: there are event producers and event consumers; a common communication pattern is the fire-and-forget style; producers do not know anything about consumers and, similarly, consumers do not know anything about producers. In other words, a highly decoupled system.&lt;/p&gt;

&lt;p&gt;A great companion to an EDA is stream processing. To keep it simple, a stream can be seen as an unbounded list of events that can be consumed and processed in near real time.&lt;/p&gt;

&lt;p&gt;Kafka, Apache Pulsar, and NATS Streaming are projects worth checking out as the foundation for event streaming.&lt;/p&gt;

&lt;p&gt;Some of the nice benefits we got by following an event-driven architecture:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Highly decoupled services
High scalability
Events are immutable, with the proper handling, you can reconstruct the state of your system, at any point in time, just by replaying the events.
A single event can have multiple consumers, and more consumers can be added in the future without any change in the underlying system.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Events as first-class citizens&lt;/p&gt;

&lt;p&gt;One of the first tasks we had to do was to come up with a list of the core events.  For that, we used a technique called event storming. The basic idea is to decompose the business process into a set of domain events. Once we had this list of the initial events, we decided to base our events implementation on the CloudEvents specification.&lt;/p&gt;

&lt;p&gt;Our events are represented as JSON objects and serialized to a text format. We considered using binary formats, such as Apache Avro or Google’s Protocol Buffers; however, in the end, we settled on plain text for simplicity.&lt;/p&gt;

&lt;p&gt;A simple event definition&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "specversion" : "1.0",
   "type" : "my.type",
    "source" : "my-awesome-producer",
    "subject" : "123",
    "id" : "A234-1234-1234",
    "time" : "2018-04-05T17:31:00Z",
    "data" : { //custom attributes }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Events are organized into channels (aka topics). Each channel represents a certain domain and all events related to that domain land in the same channel.  Consumers show “interest” in certain events by subscribing to the proper channel and defining a filter by event type (to only consume the relevant events).&lt;/p&gt;
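
&lt;p&gt;As a hedged sketch of what such a consumer can look like using Kafka (one of the streaming options mentioned earlier): the topic name, group id and event type below are invented for illustration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

// Hypothetical consumer: subscribes to one channel (topic) and
// filters on the CloudEvents "type" attribute.
public class ConversationStartedConsumer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "conversation-consumers");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer&amp;lt;String, String&amp;gt; consumer = new KafkaConsumer&amp;lt;&amp;gt;(props)) {
            consumer.subscribe(List.of("conversation-events")); // the channel
            while (true) {
                for (var rec : consumer.poll(Duration.ofSeconds(1))) {
                    var event = mapper.readTree(rec.value());
                    // Show interest only in the relevant event type.
                    if ("conversation.started".equals(event.path("type").asText())) {
                        System.out.println("handling event " + event.path("id").asText());
                    }
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;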

&lt;p&gt;As of this writing, there are around 9 channels and 40 events distributed among those channels.  ~15 microservices act as producers/consumers. At some point all events are dumped into a long term, searchable storage where we can perform queries for troubleshooting or for analytical purposes. If required, we can also go back in time and replay the events.&lt;br&gt;
Testing in a distributed and eventual consistency world&lt;/p&gt;

&lt;p&gt;Our amazing QA team uses several tools for different purposes, from KarateDSL to test each microservice in isolation to Gatling for performance testing. However, our system operates under the eventual consistency model, which poses a challenge for integration testing. How do we solve it?&lt;/p&gt;

&lt;p&gt;In a simple and easy way, we treat the system as a black box: our testing scripts interact with the platform by injecting events. We know beforehand what the side effects are (in this case, which other events should be produced as a result of the processing). After the relevant events are injected, the scripts query the long-term event storage and compare the available events against a predefined list. If something is wrong, then we know exactly which events are missing or have wrong data, and we can take a deep dive into our other tooling to discover the problem.&lt;/p&gt;

&lt;p&gt;We also run the same testing in our production environment constantly – an effort to spot potential issues before our customers do. From time to time, we take samples of our event stream and run analytics over them. The results give us a better idea of our performance baseline and we can adjust our monitoring as needed.&lt;br&gt;
Conclusion&lt;/p&gt;

&lt;p&gt;Working with an asynchronous distributed system that operates under the eventual consistency model is not a walk in the park. Many of the rules that apply in the synchronous communication world are no longer valid (or are more difficult to apply), and there is a very good chance that bad choices will hit you back harder. However, if you take the time to really understand what event-driven architecture is all about, follow best practices (there is a lot of literature out there) and, most important, make sure an event-driven solution is a good fit for your problem, then the pros far outweigh the cons.&lt;/p&gt;

&lt;p&gt;Overall, we have been very happy with our choice to architect the system using an event-driven architecture.  It has allowed us to iterate the product quickly: the events first approach combined with domain-driven design techniques gave us some powerful tooling to better understand the business processes and to model the interactions. On the technical side, adding new features is (almost always) just a matter of adding events and new producers/consumers. Sometimes, the required events are already there, meaning just adding another consumer will do the trick.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Defense of Plain Old Java</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:09:01 +0000</pubDate>
      <link>https://dev.to/omtechblog/a-defense-of-plain-old-java-3d4c</link>
      <guid>https://dev.to/omtechblog/a-defense-of-plain-old-java-3d4c</guid>
      <description>&lt;p&gt;by Larry Hohm – September 1, 2020&lt;/p&gt;

&lt;p&gt;The Java ecosystem is exploding with frameworks. There are frameworks for dependency injection, web services, servlet containers, data access, persistence, configuration, user interfaces, and more. Like many Java developers, I have a love-hate relationship with frameworks. I love them when they make my life easier, and hate them when they make it more difficult.&lt;/p&gt;

&lt;p&gt;Some developers are eager to embrace frameworks, some are reluctant, and many are somewhere in the middle of the scale. I am generally reluctant to adopt frameworks, for a number of reasons. I will continue to use them, especially those preferred by my teammates. But I believe it is always worthwhile to think twice before adopting a framework. I say this out of deep respect for the power, flexibility, expressiveness, and simplicity of plain old Java.&lt;/p&gt;

&lt;p&gt;I offer one example to illustrate my viewpoint. Suppose you want to select a good strategy for implementing dependency injection in a new project. The three main alternatives these days are Spring, Guice, and plain old Java. Before weighing the pros and cons of Spring or Guice, let’s ask a simple question: why do we need a framework at all?&lt;/p&gt;

&lt;p&gt;(I realize that Spring Boot is used widely at OpenMarket and throughout the industry, and is favored by many thought leaders in our industry. And it is much, much, more than just a dependency injection framework. My discussion here is concerned only with the issue of dependency injection.)&lt;/p&gt;

&lt;p&gt;Dependency injection is a very simple idea. Your code injects one object (the dependency) into another object (the target). What could be simpler? This can be done with constructor injection or setter injection. It is hard to imagine anything simpler! Every Java developer understands constructors and setters. We all learn about them in elementary school. How could any framework make this task simpler? Why do we need a framework for dependency injection?&lt;/p&gt;
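
&lt;p&gt;To make the point concrete, here is all the “framework” that constructor injection requires (the class names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Database {} // stand-in for a real dependency

public class ReportService {
    private final Database database; // the dependency

    // Constructor injection: the caller simply hands us the dependency.
    public ReportService(Database database) {
        this.database = database;
    }
}

// Wiring it up is one line of plain old Java:
// ReportService service = new ReportService(new Database());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;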

&lt;p&gt;Some will suggest that if you need to inject many dependencies, a framework would be better. A constructor with lots of parameters can get ugly, because the order of the parameters is important, so the developer who is using the constructor might need to track down the source code or documentation for your class. It’s even worse if some of the parameters are optional.&lt;/p&gt;

&lt;p&gt;However, this problem can be solved easily using plain old Java. If you have two or three dependencies, it’s not much of a problem. If you have more, then you should probably think about applying the Single Responsibility Principle, and refactor your class into two or more classes. But even if you decide that you want one class with lots of dependencies, it is still easy to implement in plain old Java, using the Builder Pattern. (Joshua Bloch has a nice discussion of it.) For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MyDao {
    private final AuthClient authClient;
    private final Configuration config;
    private final DataSource dataSource;
    private MyDao(Builder builder) {
        this.authClient = builder.authClient;
        this.config = builder.config;
        this.dataSource = builder.dataSource;
    }
    public static Builder builder() {
        return new Builder();
    }
    . . .
    public static class Builder {
        private AuthClient authClient;
        private Configuration config;
        private DataSource dataSource;
        public Builder withAuthClient(AuthClient authClient) {
            this.authClient = authClient;
            return this;
        }
        public Builder withConfiguration(Configuration config) {
            this.config = config;
            return this;
        }
        public Builder withDataSource(DataSource dataSource) {
            this.dataSource = dataSource;
            return this;
        }
        public MyDao build() {
            return new MyDao(this);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;USAGE:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MyDao myDao = MyDao.builder()
    .withAuthClient(new AuthClient())
    .withConfiguration(getConfigurationFromSomewhere())
    .withDataSource(getDataSourceFromSomewhere())
    .build();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The builder pattern offers a hybrid approach to dependency injection: setter injection (with fluent setters) is used to inject dependencies into the builder, and constructor injection is used to inject the builder into the target class. This makes it easy to build immutable objects, and to guarantee that all required dependencies are set before constructing the target object. (There are plugins for IDEA that auto-generate builders, and the Lombok library also auto-generates them.)&lt;/p&gt;

&lt;p&gt;Suppose your project has dozens of objects that need to be initialized with dependency injection, and you want to have one central place in your code where all of the “wiring” takes place. Is a framework needed to handle this? Of course not. It is absolutely trivial to do this in plain old Java. Most web applications have a high-level object representing the entry point for the entire web app, such as a ServerMain or an ApplicationContext, and that is an obvious place to instantiate your objects with all of their dependencies.&lt;/p&gt;
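
&lt;p&gt;A sketch of what that wiring can look like, reusing the MyDao builder from above (ServerMain and the other class names here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical entry point: all of the "wiring" lives in one obvious place.
public class ServerMain {
    public static void main(String[] args) {
        Configuration config = getConfigurationFromSomewhere();
        DataSource dataSource = getDataSourceFromSomewhere();
        MyDao myDao = MyDao.builder()
                .withAuthClient(new AuthClient())
                .withConfiguration(config)
                .withDataSource(dataSource)
                .build();
        MyService service = new MyService(myDao);
        new WebServer(config, service).start();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;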

&lt;p&gt;Suppose your ServerMain or ApplicationContext gets bloated because of all the “wiring” needed. Is a framework needed to handle this situation? Of course not. It is absolutely trivial to break up your class into a few well-organized classes, all within one package that is responsible for wiring all components in your project.&lt;/p&gt;

&lt;p&gt;On the other hand, if we use a dependency injection framework, life is more complicated. For example, with the Spring framework, dependency injection is typically handled by Spring’s autowiring, which is often cited as an advantage of using Spring. But autowiring is not trivial. It comes in many flavors, such as byName, byType, constructor, and autodetect. And it is controlled by a number of annotations working together, including @Autowired, @Configuration, @Component, @Bean, @Service, and @Qualifier. These annotations have their own syntax, semantics, and nuances; they effectively comprise a domain specific language. It takes some studying to understand their complexities. Autowiring is significantly more complicated than constructor injection and setter injection, which require no annotations.&lt;/p&gt;

&lt;p&gt;Autowiring also complicates unit testing. If you are trying to test a class with autowired dependencies, simple JUnit tests won’t be enough. Your class depends on the Spring container to inject its dependencies, but simple JUnit tests are not running inside of a Spring container, so your autowired dependencies will be null. To make unit tests work, you need to use another Spring annotation, @RunWith, and perhaps @SpringBootTest. Or you could use another framework such as Mockito and the @InjectMocks annotation. And if your tests do not behave as expected, you might need to research the semantics of @RunWith or @InjectMocks. Your unit tests become tightly coupled to the framework.&lt;/p&gt;

&lt;p&gt;Spring’s autowiring is a classic example of taking something simple and making it complicated. If you search the web for “Spring autowiring”, you will find plenty of tutorials and user guides that explain how to use it; and you will also find plenty of discussions among people having trouble understanding how it works, getting it to work, and trouble-shooting it when it doesn’t work as expected. Occasionally, you will hear people talk about “autowiring hell”, when they are deep into trouble-shooting, and a bit frustrated by it. If you search the web for “Guice dependency injection” you will find similar results.&lt;/p&gt;

&lt;p&gt;By contrast, if you search the web for “constructor injection” or “setter injection”, you won’t find anyone who has had trouble with them. There is no need for lengthy tutorials and documentation. No one has trouble understanding them, implementing them, testing them, or trouble-shooting when something doesn’t work. Writing unit tests for classes that use constructor or setter injection could not be more straight-forward. You can inject mocks or stubs in exactly the same way you would inject real objects in your production code. You can write your own mock objects or use your favorite mocking framework. All of this can be done without the magic of mysterious annotations that add another layer of complexity to your code, and make your code tightly coupled to a framework. That is the beauty of plain old Java.&lt;/p&gt;
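
&lt;p&gt;For instance, a test for the MyDao class from earlier needs nothing beyond JUnit and whatever test doubles you prefer (the stub classes here are illustrative hand-written fakes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;

public class MyDaoTest {
    @Test
    public void buildsWithStubbedDependencies() {
        // Test doubles are injected exactly as real objects would be.
        MyDao myDao = MyDao.builder()
                .withAuthClient(new StubAuthClient())       // hand-written stub
                .withConfiguration(new StubConfiguration()) // illustrative
                .withDataSource(new StubDataSource())
                .build();
        assertNotNull(myDao);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;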

&lt;p&gt;It is instructive to compare a simple task, such as dependency injection, to a complicated task, such as JSON parsing and serialization. It would be tedious and error prone to write your own JSON parser and serializer. This is a task that cries out for a library or framework. The widely used Jackson library is a good example of an annotation-driven library that greatly simplifies our lives. It stands in stark contrast to Spring’s autowiring, which complicates our lives.&lt;/p&gt;

&lt;p&gt;Frameworks often promise to reduce the amount of boiler-plate code in your projects, which makes them enticing. But they often come with a price to be paid when the time comes for testing and trouble-shooting. It is often straight-forward to test and trouble-shoot plain old Java. It is often difficult to test and trouble-shoot code that is replete with framework annotations.&lt;/p&gt;

&lt;p&gt;Frameworks typically lead to tight coupling. In many cases, once you start using a framework, it creeps into every corner of your code, and your code quickly becomes tightly coupled to the framework. It would be difficult to remove a dependency on Spring from a project that uses it; doing so would basically require a complete rewrite.&lt;/p&gt;

&lt;p&gt;The next time you are tempted to reach for a framework, pause for a moment, take a breath, and ask yourself: what problem are you trying to solve with the framework? And how easy or difficult would it be to solve the problem with plain old Java?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>COVID-19 – Lockdown, not a Lockout</title>
      <dc:creator>omtechblog</dc:creator>
      <pubDate>Tue, 07 Sep 2021 14:56:38 +0000</pubDate>
      <link>https://dev.to/omtechblog/covid-19-lockdown-not-a-lockout-1poo</link>
      <guid>https://dev.to/omtechblog/covid-19-lockdown-not-a-lockout-1poo</guid>
      <description>&lt;p&gt;by Saffi Khan – August 10, 2020&lt;/p&gt;

&lt;p&gt;Across the globe, people are facing the reality of COVID-19. While we face individual difficulties in this crisis at home and at work, we all face a common threat. We are in this together and these are truly unprecedented times. Large sectors of the economy, such as leisure and tourism, remained completely shut down, but some industries saw a rise in recruitment, particularly in health and technology.&lt;br&gt;
Technology companies have shifted into high gear to accommodate the sudden demand for remote working. The change has been so abrupt and dramatic that it is calling into question how companies will ever go back to the way they previously operated.&lt;br&gt;
Keeping employees productive is key to making our services work for our customers. I will share some of the ways to make this as friction-less as possible and help create flow.&lt;br&gt;
Setting up work environment at home&lt;/p&gt;

&lt;p&gt;First things first. With a patchy network, it can be troublesome to stay connected to the internet. As most work applications are accessed online, an uninterrupted broadband connection is an absolute must to stay efficient at home especially while using text chat for work-related correspondence or Outlook to exchange emails, not to mention various other work-related applications.&lt;br&gt;
As in-person interaction is not possible, the closest one can get is via video call. Skype, Zoom and Jitsi offer video-conferencing options. It is important to ensure that your chosen communication platform becomes a primary way in which people connect. This is not just a place for remote workers, but office workers too.&lt;br&gt;
Keeping employees informed&lt;/p&gt;

&lt;p&gt;One of the challenges that remote workers face is that they lose out on the “water cooler” culture of an office. That is, they often miss out on the usual office gossip and often hear about the company news and changes last.&lt;br&gt;
Announcing company related news updates in broader communication channels on Slack keeps everyone informed about latest development and encourages employees to provide input and updates.&lt;br&gt;
Create your personal work schedule&lt;/p&gt;

&lt;p&gt;Try to find yourself a dedicated and comfortable spot to work that you can associate with your job and leave when you finish. It is also worthwhile creating a morning routine just like your work day. Freshen up, change into work clothes, take a 5-minute walk like a work commute and grab your cup of coffee or tea before you start. If possible, eat your meals away from your workstation.&lt;br&gt;
Giving yourself a break&lt;/p&gt;

&lt;p&gt;Working remotely for a prolonged period of time may make us feel like our professional and personal lives have become intermingled. If you are passionate about your job, you may feel tempted to work too much, and if you don’t have a strategy to take regular breaks, you will overwork yourself. It is important to have realistic expectations of yourself, as pandemic social distancing measures may mean that you have children out of school or daycare and a partner or roommate also trying to work from home. Breaks can help you balance all these elements.&lt;br&gt;
Keeping employees motivated&lt;/p&gt;

&lt;p&gt;As employees adjust to how, when and where they work, it is a great opportunity to create the best learning experiences possible under the circumstances. At OpenMarket, we have a great work culture where every “last Thursday of the month”, employees gather together to celebrate each team’s achievements and take part in social activities.&lt;br&gt;
What the future holds…&lt;/p&gt;

&lt;p&gt;For many companies, remote working is an interesting prospect: a way to grow, be more efficient and have more flexibility. If you have to move fast, it does not have to be perfect; it often takes months or even years to find the right balance, but in times like this with COVID-19, there simply isn’t the luxury of a long timeline.&lt;br&gt;
At OpenMarket, we have found that maintaining a collaborative and productive culture and empowering engineers with remote-friendly tools are effective ways to help employees move to remote work. As we move forward, it is going to be critical for all members of the tech community to work with the businesses they support to devise more specific long-term plans and strategies to handle an uncertain future.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
