<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Allen T.V&gt;</title>
    <description>The latest articles on DEV Community by Allen T.V&gt; (@allentv).</description>
    <link>https://dev.to/allentv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F177791%2Fbe9f5dd3-cf37-45a0-b94c-37c95a0d2dff.jpeg</url>
      <title>DEV Community: Allen T.V&gt;</title>
      <link>https://dev.to/allentv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/allentv"/>
    <language>en</language>
    <item>
      <title>Tracking redirects on Cloudflare with Google Analytics</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Sun, 28 Jan 2024 11:59:32 +0000</pubDate>
      <link>https://dev.to/allentv/tracking-redirects-on-cloudflare-with-google-analytics-3bfi</link>
      <guid>https://dev.to/allentv/tracking-redirects-on-cloudflare-with-google-analytics-3bfi</guid>
      <description>&lt;p&gt;I recently came across this interesting problem of wanting to track URL redirects with Google Analytics on Cloudflare and it got me spending some time into investigating how things work and a potential solution.&lt;/p&gt;

&lt;p&gt;Before I explain my solution, let me provide some context on the problem. I am working on a side project that requires directly invoking a third-party native mobile application when the user visits a certain URL on my domain. As the final touchpoint for the user is a native app, I don't have much control over tracking the user action of opening the app.&lt;/p&gt;

&lt;p&gt;A typical solution would be to introduce an interstitial page that loads all of your tracking and then redirects the user. But since the destination is a mobile app, I wanted to avoid loading another page in between: keeping the number of steps and intermediaries to a minimum keeps the latency as low as possible.&lt;/p&gt;

&lt;p&gt;As my domain is currently set up with Cloudflare, I started exploring options for computing at the edge, as that provides the lowest latency physically possible. Cloudflare has an offering called Workers, which is essentially compute at the edge, and it can hook into handling incoming requests for one or more routes through wildcards. So if I can capture the requests, extract the necessary metadata and send a request to Google Analytics, I should be able to have analytics on my URLs with low latency.&lt;/p&gt;
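&lt;p&gt;For illustration, here is roughly what such a route binding can look like in &lt;code&gt;wrangler.toml&lt;/code&gt; (the worker name, domain and wildcard pattern below are placeholders, not the ones from my project):&lt;/p&gt;

```toml
# Hypothetical wrangler.toml: bind the Worker to a wildcard route so it
# handles every request under that path on the zone.
name = "redirect-tracker"
main = "src/index.ts"
compatibility_date = "2024-01-01"

routes = [
  { pattern = "example.com/go/*", zone_name = "example.com" }
]
```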

&lt;p&gt;The first step was to create a Cloudflare Worker. That was pretty straightforward based on the documentation, which gives you good scaffolding for either plain JavaScript or TypeScript; I went with the latter for type safety and to detect potential issues early on. Using the wrangler CLI, I was able to build and deploy the project directly from my machine with very little hassle. I added environment variables directly in the dashboard and updated the deploy command to skip deleting these vars. The dashboard also gives you an option to encrypt secrets such as passwords and API secrets.&lt;/p&gt;

&lt;p&gt;The next step was to integrate with the Google Analytics API. After setting up a new property on Google Analytics 4, I obtained an API secret for the Measurement Protocol. As per the docs, this API supports server-to-server invocation and is meant for enriching existing events sent directly from the UI. In my case there is no web UI, so I will be using it solely for sending event data.&lt;/p&gt;

&lt;p&gt;Creating the event payload was tougher than I thought. It wasn't easy to find the right combination of attributes to get the events to show up on the realtime report view in Google Analytics. After going over multiple posts on StackOverflow and some trial and error, I was able to get the events to show up with the attribute data that I sent.&lt;/p&gt;
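&lt;p&gt;To make the shape concrete, here is a minimal sketch of such a call, shown in plain JavaScript for brevity (the measurement ID, API secret and &lt;code&gt;redirect&lt;/code&gt; event name are placeholders; the endpoint and the &lt;code&gt;client_id&lt;/code&gt;/&lt;code&gt;events&lt;/code&gt; payload shape follow the Measurement Protocol docs):&lt;/p&gt;

```javascript
// Build a GA4 Measurement Protocol payload for a redirect hit.
// client_id is required for events to show up in reports.
function buildRedirectEvent(clientId, fromUrl, toUrl) {
  return {
    client_id: clientId,
    events: [
      {
        name: "redirect", // hypothetical custom event name
        params: { from_url: fromUrl, to_url: toUrl },
      },
    ],
  };
}

// POST the payload from the Worker; credentials come from env vars.
// The /mp/collect endpoint and query params are from the GA4 docs.
async function sendEvent(measurementId, apiSecret, payload) {
  const qs = new URLSearchParams({
    measurement_id: measurementId,
    api_secret: apiSecret,
  });
  return fetch("https://www.google-analytics.com/mp/collect?" + qs.toString(), {
    method: "POST",
    body: JSON.stringify(payload),
  });
}
```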

&lt;p&gt;User location information is important for understanding where your users are from and how your marketing campaigns are performing, so that you can make a call on how to structure your content marketing strategy. Sending location information (city and country) directly to Google Analytics did not show up on the default user demographics dashboard, which was frustrating. After some investigation into the documentation, I found that the Measurement Protocol does not currently accept geolocation information for users.&lt;/p&gt;

&lt;p&gt;The workaround for viewing user location data is straightforward: attach the location information to each event and then create a custom dashboard where this information shows up with aggregates over timeframes such as 6 hours, 1 day, 3 days and so on.&lt;/p&gt;
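&lt;p&gt;In a Worker this is convenient because Cloudflare already attaches approximate geo data to each request via &lt;code&gt;request.cf&lt;/code&gt;. A sketch of attaching it as plain event parameters (the &lt;code&gt;user_city&lt;/code&gt;/&lt;code&gt;user_country&lt;/code&gt; names are my own, not reserved GA4 fields):&lt;/p&gt;

```javascript
// Merge Cloudflare's request.cf geo fields into the event params so a
// custom dashboard can aggregate on them. Parameter names here are
// illustrative; request.cf fields may be undefined on some requests.
function withLocation(params, cf) {
  return {
    ...params,
    user_city: cf?.city ?? "unknown",
    user_country: cf?.country ?? "unknown",
  };
}
```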

&lt;p&gt;If you have come across this problem before and would like to implement the same solution as described above, you can use this &lt;a href="https://github.com/allentv/cloudflare-worker-ga4"&gt;GitHub repository&lt;/a&gt; to deploy the changes. Just update the environment variables and deploy using wrangler. Good luck!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>cloudcomputing</category>
      <category>analytics</category>
      <category>edgecomputing</category>
    </item>
    <item>
      <title>Imposter Syndrome and Senior Engineers</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Sat, 16 Jul 2022 10:17:50 +0000</pubDate>
      <link>https://dev.to/allentv/imposter-syndrome-and-senior-engineers-2inh</link>
      <guid>https://dev.to/allentv/imposter-syndrome-and-senior-engineers-2inh</guid>
      <description>&lt;p&gt;Having been on the journey of software engineering a couple of years (10+) now, you reach a point where you take on more responsibilities as an Individual Contributor (IC) and your team starts looking up to you for answers especially for problems that others cannot find a solution to. This puts you in a spot where you start feeling that you cannot make any mistakes lest they don't consider you an expert anymore. Sometimes your team mates find novel solutions to problems that you feel you could have never come up with, no matter how hard you try, making you wonder deep down if you are really qualified to help the team get to the next level. You also find yourself doing Google foo to find answers to questions about programming that should have come to you naturally a few years ago but now you are not too sure.&lt;/p&gt;

&lt;p&gt;If you have been through the above as a senior engineer, let me assure you that there are others out there in the same boat, including me(!), who go through the same emotions and disbelief in their abilities from time to time. Going through these feelings is not a sign of weakness but a confirmation of increased self-awareness. Letting the feeling ride through without taking action is one way to deal with it, but that does not stop it from happening again, nor lower the intensity of the experience.&lt;/p&gt;

&lt;p&gt;What I have instead found useful is to dig a bit deeper and find the trigger that caused the experience. This can be a tricky process, as you may not immediately remember what happened, but as you revisit the experience during your daily mindfulness routine or mental breaks during the day, you become more aware of the details. This is a very personal experience, and so the triggers will differ from person to person.&lt;/p&gt;

&lt;p&gt;The technology landscape is changing rapidly, with newer tools and platforms cropping up pretty much every week. It is very hard to stay on top of everything all the time. New technologies are good for water cooler talk, but once you understand the basics, what seems like a novel idea often distills into something quite simple on closer inspection. I always stress the importance of getting to know your tools better, as they help you be more productive and open up avenues to write better code that is readable, performant and maintainable.&lt;/p&gt;

&lt;p&gt;A common trait that I have seen in software engineers is that they prefer to bury themselves in code and the nuances of how software should be developed, and thus lose sight of why the software is being built in the first place. In my early years, I had trouble understanding the big picture, as they call it, of how my work was going to impact people (clients). This was not due to a lack of empathy but rather to unfamiliarity with the business domain I was building the software for. Being able to understand and correlate the high-level goal when building software goes a long way in managing expectations about how fast things can be shipped that are immediately useful to users, rather than hiding them behind an alpha release. Early, actionable feedback helps engineers make data-driven tradeoffs on how the software should be built, aligning with the growth efforts of the organization.&lt;/p&gt;

&lt;p&gt;Lastly, let me mention the very important aspect of psychological safety. Being able to bring your whole self to work and not be criticised for your ideas and questions is missing in a lot of workplaces. A healthy flow of ideas opens up opportunities for cross-collaboration, improving camaraderie and encouraging early failures so that engineers take risks without reproach. Failing fast costs much less, in time and money, than failing much later. If you are missing this environment at work, please start conversations to bring about this change. It helps you build the interpersonal skills that go a long way towards an excellent career in technology!&lt;/p&gt;

</description>
      <category>career</category>
      <category>motivation</category>
      <category>watercooler</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Managing service accounts with Terraform for GCP</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Sun, 20 Feb 2022 12:37:31 +0000</pubDate>
      <link>https://dev.to/allentv/managing-service-accounts-with-terraform-for-gcp-78p</link>
      <guid>https://dev.to/allentv/managing-service-accounts-with-terraform-for-gcp-78p</guid>
      <description>&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; is currently the go-to tool for managing infrastructure through version control. There is somewhat of a learning curve but then it is fairly straightforward to provision new infrastructure. Being able to create a dependency graph and provide details about various components involved is a great way for explaining the nuances of an existing infrastructure to new engineers. Being able to express the infrastructure via code also helps with dissemination of information across multiple engineering teams and avoids having to over document things.&lt;/p&gt;

&lt;p&gt;One of the challenges that I have come across when working with Google Cloud Platform (GCP) is managing service accounts. Creating service accounts is straightforward, but managing keys is a different matter altogether, especially when you use the keys in different services.&lt;/p&gt;

&lt;p&gt;When you create a new JSON key for a service account, you can download the key directly from the UI, or you can manage it via Terraform (TF). If you go with the former approach, you will have to manage the keys yourself, especially around who has access. With TF, the keys are re-generated every time you run &lt;code&gt;terraform apply&lt;/code&gt;, and you would not have access to them to share with services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_service_account"&lt;/span&gt; &lt;span class="s2"&gt;"service_account"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;account_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service-account-id"&lt;/span&gt;
  &lt;span class="nx"&gt;display_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Service Account"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
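&lt;p&gt;For completeness, the TF-managed key route mentioned above uses the &lt;code&gt;google_service_account_key&lt;/code&gt; resource from the Google provider, roughly like this (a sketch based on the provider docs):&lt;/p&gt;

```terraform
# TF-managed key for the account above; the private key lives in TF
# state and is rotated whenever the resource is recreated.
resource "google_service_account_key" "key" {
  service_account_id = google_service_account.service_account.name
}
```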



&lt;p&gt;To deal with this problem of re-generation and to retain access, I went with a hybrid approach: using TF to &lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account"&gt;manage service accounts&lt;/a&gt; and then managing the keys myself. After the accounts are created, I use the Google IAM section to generate JSON key files for the service accounts that were just created. These keys are then stored in the same TF state bucket, which is private (by default), but at a location that is not mapped in the TF files. The location would be a path something like &lt;code&gt;/keys/sa/svc-microservice1.json&lt;/code&gt;, and the hierarchy can be any classification that makes sense for the team.&lt;/p&gt;

&lt;p&gt;A potential classification can take the form of service names, where each folder will have all of the service account keys used by that service. There could also be a separate folder for shared keys. The folder hierarchy does not actually matter, as the storage bucket does not have a concept of folders; the path is simply the name of the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;+/
+--keys
   +--sa
      +--microservice1
         +--svc-db.json
         +--svc-storage.json
      +--microservice2
         +--svc-tasks.json
         +--svc-storage.json
      +--shared
         +--svc-build.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As access to the TF state bucket is limited (private) and GCP maintains an automatic audit log of who accessed the files, it is relatively safe to keep the service account key files in the bucket. It also makes it easier for anyone apart from you to find the keys when needed, especially when you are not around.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>gcp</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>My story of publishing a Go package</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Sat, 09 May 2020 23:46:34 +0000</pubDate>
      <link>https://dev.to/allentv/my-story-of-publishing-a-go-package-6jp</link>
      <guid>https://dev.to/allentv/my-story-of-publishing-a-go-package-6jp</guid>
      <description>&lt;p&gt;I recently transferred to a new team at work and the tech stack was Golang. Python has been my language of choice for the past 5 years and so the thought making a change to Go, made me apprehensive. I was no stranger to the language. I had been learning different aspects of the language in the past year and also taught a beginners workshop at the Google Dublin office over a weekend in one of the Google Developer Group events.&lt;/p&gt;

&lt;p&gt;But all the learning is of no use unless you actually put it into practice. When the rubber meets the road, you really know the extent to which your learning process has prepped you. I had to look into adding REST endpoints to an existing web application that was serving mobile clients. There was also a new requirement for adding a third-party integration. All of these were crucial to the smooth working of the application, so having automated tests was key to reliably shipping code.&lt;/p&gt;

&lt;p&gt;So I started on the path of writing unit tests and learnt quite a bit in the process. Mitchell Hashimoto has an excellent talk about advanced testing in Go, which I highly recommend for writing scalable and easy-to-extend tests. I adopted some of the recommendations and found them easy to follow. There was also the case of the same boilerplate code being repeated in multiple places, which makes the case for using packages to abstract repeated functionality.&lt;/p&gt;

&lt;p&gt;I wrote a set of helper functions encapsulating common operations that I came across in tests, but thought I should extend it further and publish it as a package so that I could get some feedback from the community as well. So I started on the journey of publishing a package over the weekend, expecting the process to be hard based on what I had heard in the past from package publishers in other languages.&lt;/p&gt;

&lt;p&gt;Published Go packages can be searched for on &lt;code&gt;pkg.go.dev&lt;/code&gt;, which is a good way to get yourself recognised. The initial step is to create a GitHub repository, preferably with the prefix &lt;code&gt;go-&lt;/code&gt;, which is a convention. Next, create whatever package hierarchy you deem fit and start writing your code along with tests. In my case, the package was about creating unit test helpers, so I created a single package and added the helper functions I had in mind to different files under the same package.&lt;/p&gt;

&lt;p&gt;Once I was done with the code and tests, the next step was to add documentation so that others could read and adopt my code. I started with the standard comments on functions to add some context and then added a &lt;code&gt;doc.go&lt;/code&gt; for package-level documentation. Running &lt;code&gt;godoc&lt;/code&gt; is a good way to make sure that your package documentation will render correctly when accessed by others. I had &lt;code&gt;godoc&lt;/code&gt; running while I was adding documentation so that I could watch the changes on the fly.&lt;/p&gt;

&lt;p&gt;I wanted to add code examples for the different functions that I created, in a way that they would be part of the documentation rather than forcing users to read through the tests and figure things out themselves. Fortunately the Go authors already thought of this, and it is built into &lt;code&gt;godoc&lt;/code&gt;. In your test files (&lt;code&gt;*_test.go&lt;/code&gt;), add functions that start with &lt;code&gt;Example&lt;/code&gt; and do not take any arguments. For a function, it would be named &lt;code&gt;ExampleFunc&lt;/code&gt;; for a function with a struct receiver, it would be &lt;code&gt;ExampleStructName_Func&lt;/code&gt;. In the function body, add the code snippet and finish off with &lt;code&gt;// Output:&lt;/code&gt; followed by the expected output on one or more lines. When you run &lt;code&gt;go test ./...&lt;/code&gt;, the test tool runs the example functions and compares their output with the output mentioned in the comments; if it doesn't match, the test fails and an error is shown in the console. The examples themselves are rendered in the documentation. I found this process of self-documenting tests very interesting and sped through all the example test functions.&lt;/p&gt;
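&lt;p&gt;A skeletal example of the convention, with a hypothetical &lt;code&gt;Greet&lt;/code&gt; function standing in for a real helper (in a real package, the example would live in a &lt;code&gt;*_test.go&lt;/code&gt; file; the &lt;code&gt;main&lt;/code&gt; wrapper here is just to make the snippet self-contained):&lt;/p&gt;

```go
package main

import "fmt"

// Greet is a stand-in for a package function you want to document.
func Greet(name string) string {
	return "Hello, " + name + "!"
}

// ExampleGreet follows the godoc convention: the Example prefix, no
// arguments, and an Output comment that go test checks against stdout.
func ExampleGreet() {
	fmt.Println(Greet("Go"))
	// Output: Hello, Go!
}

func main() {
	ExampleGreet()
}
```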

&lt;p&gt;I recommend using a separate package name for your tests (&lt;code&gt;package abc_test&lt;/code&gt;, where &lt;code&gt;abc&lt;/code&gt; is the package under test). This makes sure that your tests do not access any private identifiers and replicates what users of your package will actually see.&lt;/p&gt;

&lt;p&gt;The last step is tagging. Make all your changes available on the &lt;code&gt;master&lt;/code&gt; branch and then create a release tag following semantic versioning. This makes sure that the correct versions are pulled down by Go when there are new releases.&lt;/p&gt;

&lt;p&gt;And that is all it takes to publish your Go package. Here is my first package: &lt;a href="https://pkg.go.dev/github.com/allentv/go-testhelpers@v0.1.1/unithelpers"&gt;https://pkg.go.dev/github.com/allentv/go-testhelpers@v0.1.1/unithelpers&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>package</category>
      <category>documentation</category>
    </item>
    <item>
      <title>Trying Flutter</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Thu, 02 Apr 2020 05:02:14 +0000</pubDate>
      <link>https://dev.to/allentv/trying-flutter-24pi</link>
      <guid>https://dev.to/allentv/trying-flutter-24pi</guid>
      <description>&lt;p&gt;I have been seeing more news about Flutter lately mostly through recommendations from YouTube and from my social feed, which is mostly about technology developments. Having seen some of the examples that were highlighted in Google I/O and the vision for future, I must say I am quite impressed so far. Having tried multiple mobile frameworks/libraries in the past like phonegap, jquery mobile, android, Ionic and react native, I was a bit skeptical as to how things would pan out. &lt;/p&gt;

&lt;p&gt;To have something to measure against, I thought of creating a simple mobile project that required some UI and some backend, which is the norm across most applications. The source code is available &lt;a href="https://github.com/allentv/calculator-app"&gt;here&lt;/a&gt; for reference. This took me a week to develop in my spare time, alongside a full-time job and raising a toddler. With this project, I was trying to understand how hard it would be to translate my ideas into an actual working product and whether there would be any showstoppers. Here are my findings.&lt;/p&gt;

&lt;p&gt;The development environment was easy to set up. VSCode has good extensions for Flutter, and the SDK was a breeze to install. Having integration built into the IDE, especially around debugging and UI updates, felt amazing. I had both the iOS and Android simulators running on my Mac along with VSCode and did not see a drop in performance at all, which speaks volumes about how well the tooling is done. Upgrades are fairly easy to do through the CLI, and it also has commands to check the status of SDKs and devices, which I find quite useful.&lt;/p&gt;

&lt;p&gt;You have to learn a new language that you probably would not have heard of, called Dart. Apparently it is used quite a bit within Google and is touted as easy to learn for those with some OOP experience. The last time I read about Dart was on the Sass blog, where they talked about rewriting the compiler in Dart for performance reasons and for the ability to generate JavaScript code automatically. So naturally I was quite curious about how the language is structured. Having used multiple programming languages in the past, I found learning Dart quite straightforward, and it was easy to get productive. Finding help was not too hard either, especially with the good examples on the dart.dev website.&lt;/p&gt;

&lt;p&gt;The UI layout was something to get used to. I had expected this part to be hard to grasp, having dealt with XML-based declarative layouts and code-based layout creation in the past, and I was skeptical as to how complicated things would be. The nesting of widgets did turn out to be quite a monstrosity to handle, since widgets deal with one thing at a time. That focus is good for understanding what can be achieved, but then you are faced with a large nesting tree that is not easy to read and understand. The way to reduce the cognitive load is to create your own reusable widgets, especially for those that have a similar structure, and then reuse them. This approach made my layout code more readable, especially for the buttons. The upside is that you can subclass the standard widgets and expect things to work correctly out of the box. I like the approach of having specialised component sets for both the Android and iOS platforms that try to keep the behaviours as close as possible to the native feature set.&lt;/p&gt;

&lt;p&gt;Expressivity is touted as one of the best things about Flutter, where the community tries to prove that design parity is not impossible. Customization is lauded, and getting unique user experiences into the hands of the user is key. I haven't had much time to explore customization in detail, though I would definitely like to try out some of the design concepts from Pinterest or Dribbble in the future.&lt;/p&gt;

&lt;p&gt;The mobile app responsiveness was quite impressive. I haven't noticed any lag at all, in rendering or interaction, and it felt like a native app even though the whole framework does a lot of custom work under the hood. The Flutter team has given talks about how they built the framework and what goes on under the hood, which are available on &lt;a href="https://www.youtube.com/channel/UCwXdFgeE9KYzlDdR7TG9cMw/"&gt;YouTube&lt;/a&gt;. Considering that everything you see on the screen is painted in real time at 60fps, it is quite a feat!&lt;/p&gt;

&lt;p&gt;The community around Flutter is growing every day, which for me is a sign of increasing adoption. The website pub.dev is a good place to find packages contributed by both the Flutter team and the community. The variety and availability of packages reassure me that my productivity won't be affected much when I build my next mobile app, as I expect to find a package that matches my needs. I learnt a good deal of Flutter just by watching YouTube tutorials and trying to understand the process by which folks built apps. It was nice to see how designs are coded to life with fairly good throughput, and it motivated me to try things out. The number of Google Developer Experts (GDEs) in Flutter is also increasing.&lt;/p&gt;

&lt;p&gt;Overall, I would definitely recommend looking into Flutter if you would like to try out mobile development without much hassle. If you are looking for something with strong native (hardware) integration, then this might not be the solution at this time. Happy Hacking!&lt;/p&gt;

</description>
      <category>flutter</category>
      <category>android</category>
      <category>ios</category>
      <category>dart</category>
    </item>
    <item>
      <title>Why I detest React Hooks</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Wed, 25 Mar 2020 23:54:03 +0000</pubDate>
      <link>https://dev.to/allentv/why-i-detest-react-hooks-20da</link>
      <guid>https://dev.to/allentv/why-i-detest-react-hooks-20da</guid>
      <description>&lt;p&gt;React Hooks has been the new hotness ever since it was introduced. I have heard many people discuss about how hooks help them write less code with the same functionality and how it is more performant since everything is now functions. There has also been many articles published online touting, we should ditch classes for functions altogether as less lines of code (LOC) is always better.&lt;/p&gt;

&lt;p&gt;What gets me is how folks think brevity is always better and that being clever with their code is somehow the best way to write it. I disagree on both fronts.&lt;/p&gt;

&lt;p&gt;Brevity should not come at the expense of clarity, as code is written for humans, not machines. Any code you write today will be encountered by you or someone else on your team again in the next 6 months. Being able to still understand the context behind that block of code and make changes confidently is what well-written code is all about.&lt;/p&gt;

&lt;p&gt;I always prefer to be explicit rather than implicit, and React Hooks seem to me like a clever hack. Having converted multiple class-based React components to functional components using Hooks, I feel like the resulting function is bloated and violates the Single Responsibility Principle (SRP). The hook-related code floats around in the function definition, separating the main section of how the component will be rendered from the function signature.&lt;/p&gt;

&lt;p&gt;Compare this to a class-based React component, where every section is clearly separated into methods named after what they represent in the React lifecycle or the action they perform as event handlers. The &lt;code&gt;useEffect&lt;/code&gt; hook, by contrast, tries to consolidate the mount, update and unmount processes into one. No UI engineer would be confused when implementing lifecycle methods in a class, but they would certainly be stumped in the beginning to see the code within &lt;code&gt;useEffect&lt;/code&gt; being invoked 3 times when they first implement this hook.&lt;/p&gt;

&lt;p&gt;Also, trying to adopt Redux patterns within React seems like moving from being a library to being a framework. React is a very good UI library and gives you the freedom to use whatever works in other areas. Pushing towards the Redux pattern of using reducers and dispatchers is a bad move in my book. I am not sure if that is because the creator of Redux is now part of the React team. This reminds me of how the React team pushed for using mixins in the beginning, even when a lot of folks had been burnt by them in other languages or in JavaScript. The React team has since denounced the use of mixins.&lt;/p&gt;

&lt;p&gt;I hope React will stay an excellent UI library that is the go-to standard for high-performance UIs and stop trying to be a framework. I would love to see more investment in tooling, especially create-react-app, to make it easier to build UIs by standardizing some of the conflicting choices that developers face when they start React projects. This is an aspect I like about the Go programming language, where they have published an article about writing idiomatic Go code to make sure folks follow the same conventions. The tools that Go has take out most of the friction that teams usually have, making most open-source Go code look very much the same.&lt;/p&gt;

&lt;p&gt;I look forward to seeing more improvements in the library that let developers focus on implementing business features as fast as possible and that reduce the friction of writing tests by generating test code for the most common scenarios like clicking a button, shallow rendering, etc.&lt;/p&gt;

</description>
      <category>react</category>
      <category>hooks</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Why Figma deserves my time?</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Wed, 25 Mar 2020 23:29:51 +0000</pubDate>
      <link>https://dev.to/allentv/why-figma-deserves-my-time-39l0</link>
      <guid>https://dev.to/allentv/why-figma-deserves-my-time-39l0</guid>
      <description>&lt;p&gt;I have been dabbling with Adobe XD recently coming from Sketch and found the interface to be very user-friendly. After going through a bunch of tutorials about XD and with plans to build UI prototypes in it, I came across Figma. It was highly recommended by multiple folks and was dubbed as "highly collaborative". With my current workplace also giving out accounts, I thought why not give it a shot? Turns out, Figma is indeed highly collaborative in nature. It is like Google Docs for design. I was genuinely surprised to see multiple folks accessing the document at the same time, highlighted by coloured cursors with their names on it. Supporting Single-Sign-On (SSO) also helps with a faster login process. Having all my documents available through the browser was an excellent feature which I can also use through a desktop shell that is available to download, for those who prefer such an interface.&lt;/p&gt;

&lt;p&gt;The UI preview feature works quite well and the ability to create high fidelity icons is a plus. Being able to see the CSS properties for different elements makes the process of translation to code quite easy. It is also helpful to see the relative distances between elements just by clicking and moving the cursor between elements.&lt;/p&gt;

&lt;p&gt;Being able to share designs with others via a simple hyperlink is very powerful compared to sharing huge file blobs. It makes it so much easier to share updates, rather than having to worry about different versions. For me, it makes the whole collaboration process much simpler and aids faster iteration.&lt;/p&gt;

&lt;p&gt;The only feature I miss in Figma is the ability to add micro-interactions, like a hover effect when the user moves the mouse pointer over an element, or a toggle state to show that an action has happened. Introducing this feature would give designers an extra tool to communicate their ideas to developers, especially around user interaction.&lt;/p&gt;

&lt;p&gt;Another potential feature that I think would make a difference is the ability to generate a design system, containing all the common styles, from the Figma document. This could be a combination of one or more Sass or Less files. Right now it has to be generated manually by developers before it can be used in a web application. If it were generated automatically, based on an easy-to-use convention, making sweeping style changes to an application would be deterministic and easy.&lt;/p&gt;

&lt;p&gt;Being able to extend Figma through an API or plugins would make it easier for designers and developers to implement workflows that align with their organisational structure and way of doing things. After all, any productivity gain is time saved!&lt;/p&gt;

&lt;p&gt;I look forward to my journey with Figma and to creating designs for new apps with it. The team behind Figma has been very receptive to feedback and understands good design. I look forward to updates to the product and to what others in the industry do with it!&lt;/p&gt;

</description>
      <category>design</category>
      <category>ux</category>
      <category>productivity</category>
      <category>figma</category>
    </item>
    <item>
      <title>Using Docker for local development</title>
      <dc:creator>Allen T.V&gt;</dc:creator>
      <pubDate>Sat, 08 Jun 2019 15:27:17 +0000</pubDate>
      <link>https://dev.to/allentv/using-docker-for-local-development-1eb5</link>
      <guid>https://dev.to/allentv/using-docker-for-local-development-1eb5</guid>
      <description>&lt;p&gt;Local development involves setting up and experimenting with a lot of tools before the app is even ready to be developed. Experimenting with the right tools takes time and also risks polluting the global space of your machine, which in turn can cause other software to stop working because a shared library was updated by another installation. A similar situation, called "DLL hell", is quite well-known in the Microsoft world. Containerization is an excellent solution to this problem with very little downside.&lt;/p&gt;

&lt;p&gt;Docker is a well-known player in the container domain and dominates mindshare whenever someone thinks about containers. The rise of DevOps culture has played a massive role, with tools like Kubernetes gaining widespread adoption. There is also huge demand for engineers who understand how Docker works, so investing time to learn Docker is well worthwhile.&lt;/p&gt;

&lt;p&gt;To improve the local development experience, install Docker from &lt;a href="https://www.docker.com/products/docker-desktop"&gt;https://www.docker.com/products/docker-desktop&lt;/a&gt; for your platform. The next step is to create a Dockerfile that defines the custom environment and tools required by your application. This step will take some time, as it requires trial and error to build an optimised image. After the image has been built, you can spawn Docker containers, which are the runtime environments for your application. A container can mount resources from your machine, very similar to mounting an external resource inside a Virtual Machine (VM).&lt;/p&gt;

&lt;p&gt;Fine-tuned options for controlling the hardware resources available to the container, like CPUs and memory, give experienced engineers the extra control they are interested in. Such options make a lot of sense when the app needs to scale and is managed automatically on a cluster by something like Kubernetes. Clearly defined hardware limits also go a long way in managing hardware utilization, and are easy on the wallet, since better utilization translates to lower operating costs.&lt;/p&gt;
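&lt;p&gt;As a sketch, a Dockerfile for a hypothetical Python web app could look like this (the app file, port, and image name are illustrative, not from any particular project):&lt;/p&gt;

```dockerfile
# Dockerfile — hypothetical Python web app (file names and port are illustrative)
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application source
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

&lt;p&gt;You would then build the image with &lt;code&gt;docker build -t myapp .&lt;/code&gt; and run it with something like &lt;code&gt;docker run --cpus=1 --memory=512m -v "$PWD":/app -p 8000:8000 myapp&lt;/code&gt;, which mounts your working directory into the container and caps the CPU and memory it can use.&lt;/p&gt;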

&lt;p&gt;The key advantage that Docker containers provide is isolation of resources and the ability to recover easily from a crash. The container is managed by the OS as a process, and on a crash only that process is killed, without any impact on the rest of the system. This is very handy when migrating legacy software systems to newer versions, where business stakeholders expect both systems to run in parallel until the legacy version is retired. Having isolation between the two versions, and the ability to monitor how they behave side by side, is very useful to the success of a large software migration project.&lt;/p&gt;

&lt;p&gt;If you think this is too much work, it is not: the whole Docker setup takes less than 5 minutes. You can find pre-built images on Docker Hub, a public registry where anyone can host the Docker images they create. Managing Dockerfiles and stringing together multiple containers can become hard to maintain; in such cases, you can use the Docker Compose tool to manage a multi-tier stack. The community around Docker is very supportive, and with local meetups/events happening all around the world, it is a good idea to get started today!&lt;/p&gt;
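&lt;p&gt;As a sketch, a Compose file that runs a hypothetical legacy and new version of an app side by side (service names, image tags, and ports are all illustrative) could look like this:&lt;/p&gt;

```yaml
# docker-compose.yml — hypothetical two-version migration setup
# (service names, image tags, and ports are illustrative)
services:
  app-legacy:
    image: myapp:1.9      # legacy version, kept running until retired
    ports:
      - "8080:8000"
    cpus: 0.5             # per-service resource limits
    mem_limit: 256m
  app-next:
    image: myapp:2.0      # new version, running in parallel for comparison
    ports:
      - "8081:8000"
    cpus: 0.5
    mem_limit: 256m
```

&lt;p&gt;A single &lt;code&gt;docker compose up -d&lt;/code&gt; then brings both versions up, each isolated in its own container, so you can monitor them in parallel during the migration.&lt;/p&gt;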

</description>
      <category>docker</category>
      <category>webdev</category>
      <category>opinion</category>
    </item>
  </channel>
</rss>
