<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shayde Nofziger</title>
    <description>The latest articles on DEV Community by Shayde Nofziger (@shayde).</description>
    <link>https://dev.to/shayde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F11697%2Fac11a8bf-f801-4ee3-a7b5-d193c14b20fd.jpeg</url>
      <title>DEV Community: Shayde Nofziger</title>
      <link>https://dev.to/shayde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shayde"/>
    <language>en</language>
    <item>
      <title>Honeycomb, why are my service's requests so slow?!</title>
      <dc:creator>Shayde Nofziger</dc:creator>
      <pubDate>Sun, 16 May 2021 00:07:42 +0000</pubDate>
      <link>https://dev.to/shayde/honeycomb-in-action-why-are-my-service-s-requests-so-slow-516g</link>
      <guid>https://dev.to/shayde/honeycomb-in-action-why-are-my-service-s-requests-so-slow-516g</guid>
      <description>&lt;p&gt;&lt;a href="https://honeycomb.io" rel="noopener noreferrer"&gt;Honeycomb&lt;/a&gt; can be used to help identify the best "bang for your buck" in terms of time spent optimizing for performance.&lt;/p&gt;

&lt;p&gt;Suppose I have been asked to spend the next 2 sprints researching and contributing code changes to improve service performance. The only information I have been given from customers is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Things run slowly right after I login during certain hours of the day."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Needle....haystack...right?&lt;/p&gt;

&lt;p&gt;Okay, at least I have &lt;em&gt;some&lt;/em&gt; starting points. Honeycomb should still allow me to take this broad symptom of slow requests and investigate further. Trace data includes timestamps and duration fields, which should allow me to get some information about the requests my system sees over time. Our customers seem to notice this most directly after signing in. I know that after signing in, and anytime a user accesses common parts of the platform, their account information is queried. So I think I at least know what microservice to drill into.&lt;/p&gt;

&lt;p&gt;Honeycomb can help me drill into even more granular data with a simple query. I construct a query for all of the events my service emits for each API request. I then group them by the endpoint name and the status code result, and create a visualization of the 95th percentile of request duration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4t6tglojjeegwfur0ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4t6tglojjeegwfur0ry.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This query tells me that the &lt;code&gt;GET .../profile&lt;/code&gt; endpoint responds to 95% of requests in ~910ms or less. This is a heavily used endpoint, and the &lt;code&gt;GET .../connectioninfo&lt;/code&gt; endpoint, which sees similar traffic, responds roughly 100x faster -- so this route is clearly slow relative to other requests to my system. Further querying tells me the average response duration for the &lt;code&gt;/profile&lt;/code&gt; endpoint over the same time window was 473ms, and &lt;code&gt;P50(duration_ms)&lt;/code&gt; -- the median, covering half of all requests -- was 363ms.&lt;/p&gt;
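
&lt;p&gt;Those three figures are just aggregates over the same &lt;code&gt;duration_ms&lt;/code&gt; values Honeycomb computes for you. A quick sketch of what each one means, using Python's standard library and made-up durations:&lt;/p&gt;

```python
import statistics

# Made-up request durations in milliseconds; in Honeycomb these would be the
# duration_ms values of the events matched by the query.
durations_ms = [120, 180, 250, 300, 363, 410, 520, 640, 780, 910]

avg = statistics.fmean(durations_ms)   # the AVG(duration_ms) aggregate
p50 = statistics.median(durations_ms)  # P50: half of requests are at or below this
# quantiles(n=20) returns the 5th, 10th, ..., 95th percentiles; the last cut
# point is P95: 95% of requests completed in this time or less.
p95 = statistics.quantiles(durations_ms, n=20)[-1]
```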

&lt;p&gt;This endpoint seems like it may be a good candidate for further inspection -- it looks like it is used heavily, and that kind of response time could definitely be a source of slowness for end-users.&lt;/p&gt;

&lt;p&gt;Hmm...&lt;br&gt;
How can I learn more?&lt;br&gt;
What specifically about that endpoint is so slow?&lt;br&gt;
Do I need to set up some local performance testing to figure it out?&lt;/p&gt;

&lt;p&gt;Of course not!&lt;/p&gt;

&lt;p&gt;Honeycomb has all of the info I need in the trace data graphs for the requests I am curious about:&lt;/p&gt;

&lt;p&gt;From the query results view, I can drill further into the data by hovering over the &lt;code&gt;name&lt;/code&gt; column's value I am interested in (&lt;code&gt;GET .../profile&lt;/code&gt;), clicking the &lt;code&gt;...&lt;/code&gt; menu that appears, and selecting the option to show only results where &lt;code&gt;name = GET .../profile&lt;/code&gt; and &lt;code&gt;http.status_code = 200&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is where I can put my detective hat on and start doing the interrogating I talked about in &lt;a href="https://dev.to/shayde/everyday-observability-with-honeycomb-4o3h"&gt;my initial post&lt;/a&gt; -- now that I have specific data and a good question to ask ("what is causing the most heavily used endpoint of my service, &lt;code&gt;/profile&lt;/code&gt;, to be so damn slow?!"), I can use Honeycomb to start looking for patterns and the source of the slowness. From my query above, I can see that the pattern is pretty typical during the weekdays, so I narrow my search to a 24-48 hour slice of the work week by highlighting that time range and clicking the "+" button that appears to zoom in my search window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54dpnij84bdj4vm8rk2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54dpnij84bdj4vm8rk2m.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0nce86dibdkccrx0fpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0nce86dibdkccrx0fpi.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Oh okay -- at this level I can see that the data is pretty normal aside from a few outliers during off-peak times. I'm mostly curious about peak usage and typical requests, so I'm going to zoom in even further, to the endpoint's peak usage time over a 1-2 hour window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0gf0v1lzxibwk64hsg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0gf0v1lzxibwk64hsg8.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome -- that's much more manageable and more "typical" of the usage scenario I'm concerned with. Once I have drilled in enough and crafted the query for the request data I am interested in examining, I am going to start poking at the trace data for these requests and see if I can't find some common patterns.&lt;/p&gt;

&lt;p&gt;I can access trace data for requests a few different ways. The &lt;em&gt;Traces&lt;/em&gt; tab in Honeycomb will give me the 10 slowest traces that Honeycomb has for the time range of my query. I can also head on over to the &lt;em&gt;Raw Data&lt;/em&gt; tab and click on the &lt;code&gt;trace.trace_id&lt;/code&gt; column value for any event I am curious about. Clicking into a trace shows me a time-series waterfall visualization of the request:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fer75h7iaxwf62ieql77e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fer75h7iaxwf62ieql77e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The trace above shows a typical request to the &lt;code&gt;/profile&lt;/code&gt; endpoint and helps me understand potential areas for improvement. Keep in mind, I have not even touched the code that runs this service, and I have already identified key areas that may need attention. If I analyze the trace and walk through the codebase alongside it, I can get context about the external requests the original one generated (these are annotated in blue in the screenshot above).&lt;/p&gt;

&lt;p&gt;Armed with the request data and its context in code, I can quickly discern where the slowdown is. It turns out the &lt;code&gt;/profile&lt;/code&gt; endpoint is very popular on our platform -- it can be called dozens of times in quick succession as a user navigates to different areas of our applications. That in and of itself is not a problem. However, every time the &lt;code&gt;/profile&lt;/code&gt; endpoint is called, both the code and the trace show that data that changes infrequently (user first/last name, email, avatar image url, user permissions and rights to products on the platform, etc.) is retrieved from various downstream services, ultimately resulting in multiple database queries.&lt;/p&gt;

&lt;p&gt;When the same data is queried by and for the same user multiple times in quick succession, the result is more resource usage on the server, slower response times, and greater database traffic and spend -- especially in PaaS database models that charge based on request/transaction throughput.&lt;/p&gt;

&lt;p&gt;The highest-trafficked endpoint in our service also happens to be reading the same rarely-changed data from a database for each user, oftentimes in rapid succession (several times in 10 seconds). This is a ripe candidate for a smart in-memory caching mechanism.&lt;/p&gt;
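
&lt;p&gt;As a sketch of what that caching mechanism could look like (hypothetical names -- this is not the service's actual code), a minimal TTL cache serves repeated reads of rarely-changing data from memory and only hits the downstream services when an entry is missing or stale:&lt;/p&gt;

```python
import time

class TTLCache:
    """A minimal in-memory cache whose entries expire ttl_seconds after write.

    Hypothetical sketch: a production version would also bound its size and
    handle explicit invalidation when the underlying data changes.
    """

    def __init__(self, ttl_seconds):
        self.ttl_seconds = ttl_seconds
        self._entries = {}  # maps key to a (value, stored_at) tuple

    def get_or_fetch(self, key, fetch):
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry is not None:
            value, stored_at = entry
            # Serve from memory while the entry is still fresh.
            if self.ttl_seconds > now - stored_at:
                return value
        # Miss or expired: hit the downstream service / database once,
        # then remember the result for subsequent calls.
        value = fetch(key)
        self._entries[key] = (value, now)
        return value
```

&lt;p&gt;A &lt;code&gt;/profile&lt;/code&gt; handler would then call something like &lt;code&gt;cache.get_or_fetch(user_id, load_profile_from_db)&lt;/code&gt;, so repeated calls within the TTL never touch the database; choosing the TTL and handling invalidation are the interesting trade-offs.&lt;/p&gt;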

&lt;p&gt;Armed with the request duration data, I am now able to tell a full story and pinpoint a worthwhile place to invest performance effort in my service, all without doing any sort of local performance testing, or even looking at the logic in the code.&lt;/p&gt;

&lt;p&gt;Let your observability data give you the answers you are looking for. It can help identify the specific microservices, endpoints, and areas of the codebase to look at, so you are ultimately making the best use of your time and effort -- especially in companies and enterprises with hundreds of microservices supporting a large platform.&lt;/p&gt;

&lt;p&gt;In my next post, I'll show how Honeycomb and observability/trace data can be useful tools to help implement smart caching mechanisms to balance service performance and cost. I'll also highlight some other features in Honeycomb, such as Boards and Derived Columns - stay tuned!&lt;/p&gt;

&lt;p&gt;Learn more, straight from the Hive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.honeycomb.io/working-with-your-data/tracing/" rel="noopener noreferrer"&gt;Explore trace data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.honeycomb.io/working-with-your-data/rawdatatable/" rel="noopener noreferrer"&gt;View your raw data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.honeycomb.io/working-with-your-data/heatmaps/" rel="noopener noreferrer"&gt;Creating and using Heatmaps&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>performance</category>
      <category>observability</category>
      <category>honeycomb</category>
      <category>o11y</category>
    </item>
    <item>
      <title>Accelerating Developer Onboarding with Honeycomb</title>
      <dc:creator>Shayde Nofziger</dc:creator>
      <pubDate>Mon, 10 May 2021 16:13:12 +0000</pubDate>
      <link>https://dev.to/shayde/everyday-observability-with-honeycomb-scenario-developer-onboarding-i15</link>
      <guid>https://dev.to/shayde/everyday-observability-with-honeycomb-scenario-developer-onboarding-i15</guid>
      <description>&lt;h3&gt;
  
  
  Scenario: Developer Onboarding
&lt;/h3&gt;

&lt;p&gt;Let's suppose I just inherited a new set of services. I'm new to the team and familiar with the tech stack, but I have never worked directly with the services my team is responsible for.&lt;/p&gt;

&lt;p&gt;I would like to become acquainted with our services, what they do, and how we interact with other services on our platform. I load up Honeycomb, and with a quick query, I can get a list of the top 10 API calls our services have seen in the past 24 hours:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xRqlvSnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oe8j5xndnmmjy9j7tlad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xRqlvSnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oe8j5xndnmmjy9j7tlad.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Huh -- seems like the &lt;code&gt;/session/exists&lt;/code&gt; and &lt;code&gt;/oauth2/token&lt;/code&gt; endpoints are the most-used. Using other information from that event, such as the service name and method name, I can easily pull up our codebase and find the entry point of that controller function. If I pull up a specific trace for one of those API calls, I can follow along in our code as it makes calls to internal and external integrations, and follow the exact path through code that request took.&lt;/p&gt;
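
&lt;p&gt;For reference, the "top 10 API calls in the past 24 hours" query boils down to a &lt;code&gt;COUNT&lt;/code&gt; grouped by the request name. Sketched as a Honeycomb-style query specification (field names follow the shape of Honeycomb's Query Specification docs; treat this as an approximation, not something copied from the UI):&lt;/p&gt;

```python
# Hedged sketch of a Honeycomb-style query spec for the "top 10 calls" view.
top_calls_query = {
    "time_range": 24 * 60 * 60,         # the past 24 hours, in seconds
    "breakdowns": ["name"],             # one result row per endpoint/operation
    "calculations": [{"op": "COUNT"}],  # events matched per group
    "orders": [{"op": "COUNT", "order": "descending"}],
    "limit": 10,                        # keep only the ten busiest
}
```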

&lt;p&gt;This is usually the first thing I do when becoming familiar with a new system. Follow the common requests in code. Interpret what is happening. This will help build familiarity with the current state of the system, and contribute to a deeper understanding of the major paths through the service very quickly.&lt;/p&gt;

&lt;p&gt;Don't stop there! Treat your systems as the living, breathing, ever-changing, increasingly-complex entities they are!&lt;/p&gt;

&lt;p&gt;Spend 30-60 minutes of each morning browsing the service's events.&lt;/p&gt;

&lt;p&gt;Can you identify the slowest average requests? How do releases affect event data such as request duration? How does traffic this week compare to traffic last week? What about differences between releases? What other services on our platform does this one interact with most?&lt;/p&gt;

&lt;p&gt;Becoming intimately familiar with your system's state can help you identify areas for improvement and gives you a baseline to evaluate change against as you push feature releases and fixes to your deployment pipeline.&lt;/p&gt;

&lt;p&gt;Struggling with where to start? &lt;a href="https://docs.honeycomb.io/working-with-your-data/collaborating/history-search/"&gt;Honeycomb offers a neat feature that allows you to browse queries your colleagues have run recently&lt;/a&gt;. Click through a few and tweak them to target a service and operation you are interested in. I have found this feature extremely helpful in getting started with query structuring and learning about the query operators and visualizations in Honeycomb. You can also DM a link to a query directly to the colleague who wrote it and ask for additional clarification / context.&lt;/p&gt;

&lt;p&gt;One of the easiest contributions you can make early on after joining a new team and doing the investigation above is to add trace/span operations and context tags that may be useful when examining traces or interrogating observability data. Ask someone familiar with your observability tooling for an example of adding a trace operation / span tag to your service code, then work from that example to add tags and trace operation context of your own.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;Performing a memory cache operation? If there was an issue in the request, it might be useful to know whether or not an entity was read from the cache - add a &lt;code&gt;cache_hit&lt;/code&gt; tag and set its value to &lt;code&gt;true&lt;/code&gt; or &lt;code&gt;false&lt;/code&gt; accordingly. While we're doing that, we might as well emit other info about the cache we have available to us, such as its current size. Now, every trace operation that retrieves the target entity will include a cache operation event that lets you know whether or not the entity was retrieved from the cache as well as the number of entities stored in the cache at the time of the operation.&lt;/p&gt;
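
&lt;p&gt;A sketch of that instrumentation (the &lt;code&gt;Span&lt;/code&gt; class below is a stand-in for whatever your tracing library provides -- Honeycomb Beelines, for instance, expose methods for adding context fields -- and the function and field names other than &lt;code&gt;cache_hit&lt;/code&gt; are made up for illustration):&lt;/p&gt;

```python
# Sketch of tagging a cache lookup with context fields. Only the idea of the
# cache_hit / cache_size fields comes from the text; everything else here is a
# hypothetical stand-in for your real tracing library and cache.

class Span:
    """Minimal stand-in for a tracing span that collects key/value fields."""

    def __init__(self):
        self.fields = {}

    def add_field(self, name, value):
        self.fields[name] = value


def get_entity(entity_id, cache, span, load_from_db):
    cached = cache.get(entity_id)
    hit = cached is not None
    # Emit cache context on every lookup, hit or miss, so the fields are
    # always present on the trace operation.
    span.add_field("cache_hit", hit)
    span.add_field("cache_size", len(cache))
    if hit:
        return cached
    value = load_from_db(entity_id)
    cache[entity_id] = value
    return value
```

&lt;p&gt;Every lookup now emits &lt;code&gt;cache_hit&lt;/code&gt; and &lt;code&gt;cache_size&lt;/code&gt;, whether it hits or misses, so the fields are always there to group and filter on later.&lt;/p&gt;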

&lt;p&gt;Browsing your events and system traces in Honeycomb can surface answers to questions you may not even have thought to ask. Events for your service coming from 10 different IP addresses? Looks like your service may run on 10 instances in production -- perhaps a distributed/shared cache would be more helpful than a local in-memory one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Arbitrarily-wide events are cheap.&lt;/strong&gt; Throw as much context data in there as you have available to you. It may prove useful to you or others later on.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>o11y</category>
      <category>honeycomb</category>
      <category>devops</category>
    </item>
    <item>
      <title>Everyday Observability with Honeycomb</title>
      <dc:creator>Shayde Nofziger</dc:creator>
      <pubDate>Mon, 10 May 2021 16:13:03 +0000</pubDate>
      <link>https://dev.to/shayde/everyday-observability-with-honeycomb-4o3h</link>
      <guid>https://dev.to/shayde/everyday-observability-with-honeycomb-4o3h</guid>
      <description>&lt;p&gt;Observability is a topic that has gained increased attention, popularity, and focus the past few years - and for good reason. The ability for engineers to easily discover, walk through, and reason about the state of their systems and services is crucial to efficiently and effectively acting upon outages, bugs, and failure states.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.honeycomb.io/"&gt;Honeycomb&lt;/a&gt;, a leader in this space, has developed a powerful observability tool for event ingestion and interrogation.&lt;/p&gt;

&lt;p&gt;There are &lt;a href="https://thenewstack.io/what-is-observability/"&gt;plenty&lt;/a&gt; of blog posts out there &lt;a href="https://charity.wtf/2020/03/03/observability-is-a-many-splendored-thing/"&gt;about Observability&lt;/a&gt;, what it is, what it is not, and plenty of "getting started with Honeycomb" guides. &lt;a href="https://www.honeycomb.io/play/"&gt;Honeycomb even offers live playgrounds of their product that you can demo for free&lt;/a&gt;! If you are unfamiliar with the concept of observability, Honeycomb, or both, I encourage you to seek out and start with those posts and tutorials and return to this series later. This series is primarily aimed at engineers who have a basic familiarity with observability, may have recently joined a team or company using Honeycomb, and want to get some real-world examples of everyday use.&lt;/p&gt;

&lt;p&gt;I am by no means an expert on the use of Honeycomb -- if a feature exists that I am unfamiliar with, or you know something about the tool that could help me or others, please chime in in the comments, or feel free to write up your own post about it and let me know! The intent of this series is to show how I use Honeycomb in my day-to-day work -- to share with others, and to keep learning myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;I am a senior software engineer at Blackbaud, a mid-size enterprise/SaaS company focused on software targeting the Social Good sector. Over the past several months, a team of my colleagues took on the work of creating standard libraries and tooling to emit events from more than 400 microservices to our Honeycomb dataset. While I won't go into the implementation specifics in this blog post, it's important to understand the sheer magnitude of the events we're dealing with. Since event sampling was put in place, we usually see about 250-275 million successful events ingested into Honeycomb every workday.&lt;/p&gt;

&lt;p&gt;My team is primarily responsible for the identity and authentication services that allow our customers (and theirs) to authenticate and interact with our systems. Needless to say, our services are among the most critical of the entire stack -- if authentication is down, so are the rest of our services. As such, our ability to respond to, triage, diagnose, and resolve issues accurately and efficiently is extremely important. Honeycomb is a tool that helps us answer questions effectively, and feel more confident in the status of our services.&lt;/p&gt;




&lt;h2&gt;
  
  
  An Interrogative Mindset
&lt;/h2&gt;

&lt;p&gt;Traditionally, the industry has leaned on logs, metrics, and alerting to diagnose, troubleshoot, and resolve issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We develop dashboards to monitor "key areas" of our systems for issues.&lt;/li&gt;
&lt;li&gt;We rely on symptoms like "high cpu", "error rate", and "connection status" to alert us to problems with our services.&lt;/li&gt;
&lt;li&gt;When problems arise, we sift through logs, scan metric charts, and try to guess at where the issue is.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We had an idea of where the problems &lt;em&gt;might&lt;/em&gt; arise, so we structured our logging, monitoring, and alerting to focus on those known areas.&lt;/p&gt;

&lt;p&gt;The core tenets of Observability require us to shift from a symptom-reactive, guess-and-check mindset to one that is proactive and interrogative. Distributed microservice architectures have brought with them added complexity - a single call to one API service could result in dozens of calls to other services on the back-end, any one of which could fail for any number of reasons.&lt;/p&gt;

&lt;p&gt;We can no longer reasonably expect to predict, monitor, and prevent all failure states of our application. We can put logging, monitoring, and alerting in place to watch for symptoms, but to truly understand root cause and pinpoint complex issues quickly requires the ability to get answers from our telemetry to questions that we haven't even thought of yet.&lt;/p&gt;

&lt;p&gt;Honeycomb is a tool for doing just that -- answering questions about our systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Scenarios - Honeycomb In Action
&lt;/h2&gt;

&lt;p&gt;In this series, I will walk through a few different real-world examples of how Honeycomb can be used that demonstrate the practicality, usefulness, and benefits of adding it to your toolbelt - no matter the stage of your career or familiarity with the service.&lt;/p&gt;

&lt;p&gt;Scenarios will differ in complexity and technicality. I intend this to be a running series of common and interesting real-world use-cases for Honeycomb that I run into. Feel free to offer up feedback and suggestions of your own in the comments!&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenarios Coming Soon
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Performance and Cost Optimization&lt;/li&gt;
&lt;li&gt;Solving for Customer Delight&lt;/li&gt;
&lt;li&gt;Incident Response&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Wish List
&lt;/h2&gt;

&lt;p&gt;This is a living list of features that I would find helpful in Honeycomb that do not yet exist (I think!) as of this post.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ability to re-arrange query operators and visualizations.

&lt;ul&gt;
&lt;li&gt;Right now, in order to re-arrange operators/visualizations, you need to remove them and re-add them. It would be nice to be able to re-arrange/edit these on the fly.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Allow me to restrict Marker visibility to certain conditions, such as Microservice name.

&lt;ul&gt;
&lt;li&gt;This will allow Markers to be useful for larger customers with hundreds of services and developers utilizing the tool and querying across a shared dataset.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>observability</category>
      <category>o11y</category>
      <category>honeycomb</category>
      <category>devops</category>
    </item>
    <item>
      <title>Open the GitHub project page of a repo from Terminal</title>
      <dc:creator>Shayde Nofziger</dc:creator>
      <pubDate>Tue, 18 Apr 2017 15:51:56 +0000</pubDate>
      <link>https://dev.to/shayde/open-the-github-project-page-of-a-repo-from-terminal</link>
      <guid>https://dev.to/shayde/open-the-github-project-page-of-a-repo-from-terminal</guid>
      <description>&lt;p&gt;I find myself frequently needing to go to the homepage of the projects I contribute to on GitHub. I have bookmarked the important ones, and a quick Google search of the project name and "github.com" &lt;em&gt;should&lt;/em&gt; take me to the right place, but that can be time-consuming and requires a non-negligible amount of context switching. I decided to automate this process so as to not interrupt my workflow.&lt;/p&gt;

&lt;p&gt;To do so, I wrote up a quick &lt;strong&gt;bash function&lt;/strong&gt; to perform the automation, placed it in my &lt;code&gt;.bashrc&lt;/code&gt; file, and assigned it to the alias &lt;code&gt;github&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;github&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;GitHub

&lt;span class="k"&gt;function &lt;/span&gt;GitHub&lt;span class="o"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; .git &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; 
        &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: This isnt a git directory"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 
    &lt;span class="k"&gt;fi
    &lt;/span&gt;&lt;span class="nv"&gt;git_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;git config &lt;span class="nt"&gt;--get&lt;/span&gt; remote.origin.url&lt;span class="sb"&gt;`&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$git_url&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; https://github&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ERROR: Remote origin is invalid"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;fi
    &lt;/span&gt;&lt;span class="nv"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;git_url&lt;/span&gt;&lt;span class="p"&gt;%.git&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
    open &lt;span class="nv"&gt;$url&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what that code is doing, in layman's terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we check to see if we are at the root of a git repository by checking for the existence of a &lt;code&gt;.git/&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;Next, we get the &lt;code&gt;remote.origin.url&lt;/code&gt; property from the &lt;code&gt;git config&lt;/code&gt;. This is the URL of the remote git repo with which the project is associated. For GitHub projects, this takes the form: &lt;code&gt;https://github.com/[USERNAME]/[PROJECT_NAME].git&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If the &lt;code&gt;remote.origin.url&lt;/code&gt; is not from GitHub, we can't guarantee our ability to open it, and throw an error.&lt;/li&gt;
&lt;li&gt;Finally, we remove &lt;code&gt;.git&lt;/code&gt; from the URL and use the &lt;code&gt;open&lt;/code&gt; command in macOS to open a browser at that given URL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, any time I'm in the base directory of a project cloned from GitHub, I can open its project page straight from Terminal by running &lt;code&gt;github&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is my first time really exploring the power of incorporating bash functions and aliases into my workflow. I'm now actively looking out for tasks I perform regularly that could be automated, and will try to create bash functions for them as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ShaydeNofziger/development_scripts/blob/master/bash/openGitHubProject.sh" rel="noopener noreferrer"&gt;This code snippet&lt;/a&gt; and another I've created to &lt;a href="https://github.com/ShaydeNofziger/development_scripts/blob/master/bash/getDirectorySize.sh" rel="noopener noreferrer"&gt;get the size of the working directory&lt;/a&gt; are on GitHub, and I plan to publish all the future ones I create as well! Feel free to contribute and share your own!&lt;/p&gt;

&lt;p&gt;Some questions to inspire discussion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What actions do developers perform regularly that could benefit from automation?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have you used bash functions to automate your workflow? If so, feel free to share examples so others may learn! If not, could you see yourself using them in the future?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>bash</category>
      <category>github</category>
      <category>macos</category>
      <category>tips</category>
    </item>
  </channel>
</rss>
