<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tamás Deme 'tomzorz'</title>
    <description>The latest articles on DEV Community by Tamás Deme 'tomzorz' (@tomzorz).</description>
    <link>https://dev.to/tomzorz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F533038%2F85d2e2cc-ea93-4747-80d3-9bbee32df595.jpg</url>
      <title>DEV Community: Tamás Deme 'tomzorz'</title>
      <link>https://dev.to/tomzorz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tomzorz"/>
    <language>en</language>
    <item>
      <title>The era of more personal computing</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Fri, 13 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/tomzorz/the-era-of-more-personal-computing-2fdm</link>
      <guid>https://dev.to/tomzorz/the-era-of-more-personal-computing-2fdm</guid>
      <description>&lt;p&gt;In &lt;em&gt;two months&lt;/em&gt; code generation meeting my bar for quality became cheap and conversational. LLM-based coding agents can now translate intent into working code fast and correctly enough to significantly alter how teams build and iterate.&lt;/p&gt;

&lt;p&gt;There's this curse "may you live in interesting times" which I love (as much as a curse can be loved) - and maybe the whole "when you love something you set it free" saying is true, because it seems it's been set loose on the world. Now there are &lt;em&gt;many&lt;/em&gt; aspects of &lt;em&gt;interesting&lt;/em&gt; I could pick, but today I want to talk about how it's getting almost weirdly easy to "make" software. I can definitely say that in my ~15 years of professional experience (20+ if we count messing around with FrontPage '97 at age 10) there hasn't been a bigger change in how I approach software development than in the past two months. It's been on my mind a lot recently, and I've been trying to compare it to similar events in history - maybe hoping to find an answer there.&lt;/p&gt;

&lt;h2&gt;
  
  
  In search of a precedent &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#in-search-of-a-precedent" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As I spent some time looking at historical events I've found &lt;a href="https://en.wikipedia.org/wiki/Containerization" rel="noopener noreferrer"&gt;quite&lt;/a&gt; a &lt;a href="https://en.wikipedia.org/wiki/Printing_press" rel="noopener noreferrer"&gt;few&lt;/a&gt; examples that could serve us well, eventually settling on the following two.&lt;/p&gt;

&lt;h3&gt;
  
  
  The revolution of the American system of manufacturing &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#the-revolution-of-the-american-system-of-manufacturing" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;What made it safe for more producers to participate?&lt;/em&gt; (&lt;a href="https://en.wikipedia.org/wiki/American_system_of_manufacturing" rel="noopener noreferrer"&gt;On wikipedia&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;In the 19th century, parts became standardized to tolerances using jigs, gauges and machine tools, so assembly and repair no longer required artisanal fitting. This expanded the group of viable producers, as many shops could produce compatible parts, improved reliability, enabled supply chains and made maintenance cheaper.&lt;/p&gt;

&lt;p&gt;To find the parallels in software engineering, we can see these interchangeable parts as small, understandable diffs, and the no-longer-artisanal fitting as strong automated checks: CI in place, tests in place, clear boundaries set up - and a capable team that is aware of the &lt;a href="https://en.wikipedia.org/wiki/There_are_unknown_unknowns" rel="noopener noreferrer"&gt;unknown unknowns&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Manufacturing did not end up replacing the craftsmen; it ended up creating a standard that let more people contribute. Suddenly it's no longer only the mega-corps that can afford their own custom software. And gone are the days of ideas being cheap talk: in a well designed and guarded environment, ideas can become real quicker than ever before.&lt;/p&gt;

&lt;h3&gt;
  
  
  The collapse of the ice trade &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#the-collapse-of-the-ice-trade" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;What happens when the product becomes reliable, cheap and boring?&lt;/em&gt; (&lt;a href="https://en.wikipedia.org/wiki/Ice_trade" rel="noopener noreferrer"&gt;On wikipedia&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The ice trade was a major industry in the 19th century: ice was harvested at large scale during cold months or in permanently cold areas, then stored and transported to warmer climates - primarily the east coast of the United States. Refrigeration systems were invented in the early 20th century, and within a couple of decades the output of ice manufacturing plants outpaced the harvest - causing the entire trade to collapse soon after (despite attempts to claim that artificial ice was less pure, or contaminated).&lt;/p&gt;

&lt;p&gt;Finding our parallels yet again: if the commodity work - the CRUD, migrations, glue and everyday ticket churn - becomes cheap, the value shifts to the people who control the production system. Anyone who owns the tooling, the platforms or the distribution will come out a winner. It's not by accident that Anthropic tells developers "You can make software-as-a-service easier than ever before", but tells everyone else "you don’t need software-as-a-service anymore".&lt;/p&gt;

&lt;p&gt;Either way - the tools are already writing the code, so the question becomes whether we'll adapt or melt into irrelevance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The myth of productivity &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#the-myth-of-productivity" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As countless people have pointed out over the past few weeks: the limit in software engineering output was never the speed of typing. Measuring productivity as "lines of code shipped" has always been wrong - personally, my favorite commits are the ones with a net negative line count. So if there's suddenly &lt;sup&gt;&lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#fn1" id="fnref1" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;&lt;/sup&gt; a tool out there that makes typing "free", why do some teams still feel slow?&lt;/p&gt;

&lt;p&gt;Software engineering projects don't happen in clean rooms. We wish we could work on perfect problems encapsulated in ideal worlds where none of the messiness of "real life" comes into play, but that's almost never the case. And if the marginal cost of generating output approaches zero, the cost of understanding and coherence goes up. I already spend a significant share of my time gluing pre-existing systems together, and the current shift is only making that worse. "Free typing" does not buy you "free shipping". To quote a meme I saw recently:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Are you smart now?&lt;/li&gt;
&lt;li&gt;No, I'm just stupid faster.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;What we most often get is just more surface area for mistakes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is intelligence an emergent behavior of statistics and predictions? &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#is-intelligence-an-emergent-behavior-of-statistics-and-predictions%3F" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Determining whether this time becomes a golden age of builders or a collapse of value comes down to whether we've "invented" intelligence or not. For more than a decade we've been promised magic: "Hey Siri, what's the weather in Albuquerque?" but we got "I added weather in Albuquerque to your shopping list" in return. Alexa, Cortana and more have promised and failed to deliver.&lt;/p&gt;

&lt;p&gt;Today, though, all our magic rectangles contain an army of sycophants - tools that enthusiastically reply "that's absolutely right" no matter what we say. Agreeing when we make a point, disagreeing only when explicitly asked to question us, driven by whatever word is statistically most likely to follow. And when unleashed on an unsuspecting system, they can deliver those dreams of what we were led to believe Siri was going to be with &lt;del&gt;ClawdBot&lt;/del&gt; (sorry, I meant) &lt;del&gt;MoltBot&lt;/del&gt; (sorry again, I actually meant) OpenClaw. Just... you know... don't look under the hood or notice your car keys taped to the outside of the driver's door. Astonishingly quick turnarounds by an enthusiastic-to-ship tool that also often fails to consider the basics of security.&lt;/p&gt;

&lt;p&gt;Part of that security hole is, in my view, the lack of &lt;em&gt;actual&lt;/em&gt; intelligence. I continue to refuse to call these LLMs &lt;em&gt;Artificial Intelligence&lt;/em&gt; - I might be proven wrong but that's a consequence I am happy to accept. Quoting the &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;GPT 5.3 Codex announcement&lt;/a&gt; ...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;GPT‑5.3‑Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Some look at a tool that helped build the next tool and say "acceleration / takeoff" - but that's predicated on it being real intelligence. What I see is closer to the first electric drill making it easier to manufacture the second one. We certainly keep moving the goalposts - these models are competitive with experts on many constrained tasks - but it's still not true &lt;em&gt;artificial&lt;/em&gt; intelligence. Can I define what &lt;em&gt;intelligence&lt;/em&gt; is? No. But I take comfort in the fact that much smarter people than I am don’t have a crisp definition that survives contact with edge cases either.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the human brain were so simple that we could understand it, we would be so simple that we couldn't.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I personally view intelligence as a combination of genuine understanding + reliable agency; neither of which we have achieved yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hoping to get railroaded into success &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#hoping-to-get-railroaded-into-success" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What we require is strict guardrails. Shocking, but I am about to quote my mom:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Son, a railing doesn't only block you from doing things, it is also something you can lean on".&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LLMs allowed to iterate freely will yield something like a movie set of miniatures in a visual effects studio... a kitbashed hodgepodge of solutions pulled in from a thousand sources. But if we can set down good guardrails, or a "paved road", as everyone quotes &lt;a href="https://www.oreilly.com/videos/oscon-2017/9781491976227/9781491976227-video306724/" rel="noopener noreferrer"&gt;the Netflix talk&lt;/a&gt;, we can become incredibly productive and still ship maintainable code. This will come from our ability to provide said paved road, to create the railings both we and our tools are allowed to move within. And they'll have to become significantly stricter and more specific than ever before.&lt;/p&gt;

&lt;p&gt;Companies are &lt;a href="https://hbr.org/2026/01/companies-are-laying-off-workers-because-of-ais-potential-not-its-performance" rel="noopener noreferrer"&gt;drawing conclusions from potentials, not reality&lt;/a&gt;. They've already cashed checks that were only promised to be written; slashed real headcounts to backfill them with virtual ones. Even just observing this causes &lt;a href="https://www.cnbc.com/2026/01/24/ai-artificial-intelligence-worries-therapy.html" rel="noopener noreferrer"&gt;FOBO, or "fear of becoming obsolete"&lt;/a&gt;, in many. And while these are real issues, I think the main reason it's not an easy time to be a junior developer right now is that juniors don't have a sense for the guardrails yet. They have freedom and capability - more than they ever had before - but unconstrained it'll just yield chaos. That &lt;em&gt;potential&lt;/em&gt; can't understand for you, and with each prompt the mental load increases further. The key will be to stop instinctively asking LLMs to solve problems for you, and instead learn from them how to solve those problems yourself.&lt;/p&gt;

&lt;p&gt;The genie is out of the bottle: LLMs are here, major companies are going full steam ahead, and even the free / open versions you can run at home are just months behind in capabilities. Our job now is to learn how to constrain them and wield them with great care so we can all become craftsmen. Assuming we do it well, we'll be ushering humanity into the era of genuinely more personal computing: one where problems specific to you will be solved by tailor made solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  On a final side note, why do we tolerate machine help in engineering more than in art? &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#on-a-final-side-note%2C-why-do-we-tolerate-machine-help-in-engineering-more-than-in-art%3F" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Like many others, I am viscerally against using any of the new "AI" tools for art. Just apparently not for the art of software engineering. Somehow I am not bothered when it comes to code, or at least not &lt;em&gt;as&lt;/em&gt; bothered. The culture of open source thankfully won the industry over in the past decade. But also, I guess, code is closer to the ingredients than to the end result. If I were to use the same paints and brushes as an artist, I wouldn't necessarily be copying the painting - and that's the best metaphor I could come up with.&lt;/p&gt;




&lt;ol&gt;
&lt;li id="fn1"&gt;
&lt;p&gt;I say suddenly, but I am well aware that GPT et al. have been around for a while. However, in my personal experience there's been a drastic change in the capabilities of these tools with the arrival of OpenAI's GPT-5.2-Codex and Anthropic's Claude Opus 4.5. These were the first models that were capable of translating my line of thinking and pseudocode specificity into mostly reliable code, and also notice and fix mistakes when instructed. And I won't even mention the improvements in the various harnesses and tool usage. &lt;a href="https://shoreparty.org/posts/the-era-of-more-personal-computing/#fnref1" rel="noopener noreferrer"&gt;↩︎&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>programming</category>
    </item>
    <item>
      <title>A Technical Conversation about all of our Authentication Headaches</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Thu, 20 Jul 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/a-technical-conversation-about-all-of-our-authentication-headaches-2ao3</link>
      <guid>https://dev.to/contenda/a-technical-conversation-about-all-of-our-authentication-headaches-2ao3</guid>
      <description>&lt;p&gt;As you might’ve noticed, we’ve recently launched our new auth stack, giving an easier and more robust method of accessing Contenda to our valued current and future users. I thought it’d be interesting to talk about the process and the technological challenges we faced, so strap in… this might get a bit technical!&lt;/p&gt;

&lt;p&gt;First of all, some clarification on what we mean by “auth”, as these two closely related topics are often mentioned under the same umbrella. Authentication and authorization are like a bouncer and a VIP list at an exclusive club.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication is the burly bouncer at the door, checking IDs to make sure you are who you say you are.&lt;/li&gt;
&lt;li&gt;Authorization is the VIP list they hold in their hand, determining whether you get access to the general area or the exclusive room upstairs. Just because you’ve passed the bouncer doesn’t mean you get access to the fancy VIP area – you need both the right ID and your name on the special list.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our case, we were confident we could handle any authorization woes ourselves… and actually—letting you in on a little secret here—we have been all along: you’ve always had a workspace, even though we never really surfaced that. Just some thinking ahead on our part, knowing we’d get here soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weapon of choice
&lt;/h2&gt;

&lt;p&gt;However entertaining it is to watch &lt;a href="https://www.youtube.com/watch?v=wCDIYvFmgW8" rel="noopener noreferrer"&gt;Christopher Walken waltz his way across a hotel lobby&lt;/a&gt;, I’m talking about choosing our authentication provider. The “don’t roll your own security” line of wisdom is well known, and we agree. While there are some “obvious” choices, like Okta or Auth0, we didn’t necessarily want to go with them. They are &lt;em&gt;huge&lt;/em&gt;, which means you’re not really special as a client… if something is wrong, you’re often on your own, unless you pay the big $$$ for the enterprise support contracts. (Also, these big players often have business practices that we’d call &lt;em&gt;unfriendly&lt;/em&gt; at best, as in, when you hit an arbitrary number of users a month they make their service suddenly cost an order of &lt;strong&gt;magnitude&lt;/strong&gt; more than before.)&lt;/p&gt;

&lt;p&gt;We looked at roughly 10 different providers overall, and while many were promising we had to cut a bunch simply due to them not supporting some part of our tech stack (React frontend, Python backend). After some deliberation, we ended up choosing FusionAuth, an option with friendlier pricing than most. It was also important for us that they’d handle their own hosting, so that’s one less problem we have to deal with.&lt;/p&gt;


  


&lt;h2&gt;
  
  
  What we had to do
&lt;/h2&gt;

&lt;p&gt;Ever since the beginning, we wanted our users to access Contenda in the way that’s most convenient to them: let our technical users use an API to potentially automate things, but also have a friendly way of working with us on the web for everyone. This means that whatever authentication stack we choose, we need to make sure both of these avenues are supported and secure.&lt;/p&gt;

&lt;p&gt;Sadly, this also means that we are off the beaten path, as this is not a typical scenario for most services. In practice, we couldn’t just use a prepared wrapper for React web apps, we had to implement things ourselves both on the backend and on the frontend.&lt;/p&gt;

&lt;p&gt;To properly support our web app, we implemented the beautifully named “Authorization Code Flow with Proof Key for Code Exchange (PKCE)”, also known as the “pixie” flow. This is currently the safest way to let web apps authenticate securely without exposing any authentication secrets to clients (who could otherwise read them out of the source code or via the browser dev tools). The backend does most of the heavy lifting, and the client is only asked for information that the client itself created (so there isn’t really anything an evil actor could try to steal).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fpkce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fpkce.png" alt="A screencap from my explanation of our auth flow to the fellow engineers at Contenda." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(A screencap from my explanation of our auth flow to the fellow engineers at Contenda.)&lt;/em&gt;&lt;/p&gt;
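&lt;p&gt;As a rough illustration - this is a generic PKCE sketch in Python, not Contenda’s actual implementation - the verifier/challenge pair at the heart of the “pixie” flow can be generated like this:&lt;/p&gt;

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # High-entropy secret that the client keeps to itself.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Only this SHA-256 hash of the verifier is sent when starting the
    # flow, so intercepting the initial request yields nothing reusable.
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

&lt;p&gt;At token exchange time the client reveals the verifier, which the server re-hashes to confirm both requests came from the same client - no pre-shared secret ever lives in the browser.&lt;/p&gt;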

&lt;p&gt;This flow of course doesn’t really work for a typical API, so to make that path happen we decided to use one of FusionAuth’s unique features: Application Authentication Tokens. (Although I can imagine us shifting towards something like the Client Credentials Grant eventually, depending on how various future integrations might shake out.)&lt;/p&gt;

&lt;h2&gt;
  
  
  What we didn’t think we had to do
&lt;/h2&gt;

&lt;p&gt;No major development effort goes without issues, and that was true for this one as well. It’s an age-old adage in the world of (web) development: the only constant is change. Recently, these shifts have been driven in large part by an increased focus on privacy and security, particularly around the use and management of cookies. Which is great - obviously - but also made our lives a bit harder.&lt;/p&gt;

&lt;p&gt;If you happened to have played around with our API recently, you might’ve noticed we’ve changed our API domains from &lt;code&gt;.io&lt;/code&gt; to &lt;code&gt;.co&lt;/code&gt;, now matching our web app. I personally like having the separation between API and app on the domain level, but our hands were essentially forced by what I’m about to describe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fcookie.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fcookie.jpg" alt="A cookie." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cookies have been the cornerstone of the web for years, serving numerous purposes from personalizing user experiences to tracking user activity for analytics. However, they are also critical for managing user authentication… and now you can probably see where the problem was. Recent advancements in browser security measures have introduced new challenges that we had to overcome: having our backend and frontend be on different domains essentially meant we were trying to work with third-party cookies.&lt;/p&gt;

&lt;p&gt;Third-party cookies are set by a website different from the one you’re currently visiting and are commonly used for cross-domain tracking, advertising and cross-domain authentication. Except not anymore, as in an effort to combat privacy concerns related to user tracking and personal data usage, popular browsers have started to block third-party cookies by default already (or very soon). Similarly, the SameSite attribute’s behavior and defaults have been changed too - also to strengthen security, but coincidentally make cross-domain authentication harder.&lt;/p&gt;
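&lt;p&gt;For the curious, here is a minimal Python sketch (cookie name and value are illustrative) of the attributes a cross-site auth cookie needs before a browser will even consider sending it:&lt;/p&gt;

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie for a backend on a different domain than
# the frontend. For the browser to send it cross-site at all, it must be
# SameSite=None and Secure - and even then, third-party cookie blocking
# may simply discard it, which is the failure mode described above.
cookie = SimpleCookie()
cookie["session"] = "opaque-token"
cookie["session"]["samesite"] = "None"
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

# The resulting Set-Cookie header value.
header = cookie["session"].OutputString()
```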

&lt;p&gt;While theoretically these challenges are solvable, our testing revealed that some cases (such as incognito/private windows) were completely broken already, and even our “happy path” was deemed unreliable for reasons still mostly unknown to us. So as I mentioned above, we ended up accepting the new reality, and committing to the hassle-hurdle of moving our API over to a new TLD that hasn’t really been inside our AWS umbrella before.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s still coming
&lt;/h2&gt;

&lt;p&gt;This work is never really done, as the cat and mouse game between malicious actors and security experts never stops. Cross-Site Request Forgery, Cross-Site Scripting and more, all hopefully crossed off the list of potential issues… but who knows what comes next. At minimum we’re committed to keeping up with the best security practices, but our aim is to make our auth flows the best and most painless they can be - e.g., by adding more social sign-in options for your convenience.&lt;/p&gt;




&lt;p&gt;Photo of a cookie by &lt;a href="https://unsplash.com/fr/@vyshnavibisani?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Vyshnavi Bisani&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/z8kriatLFdA?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Guidelines to migrate to our new API version</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Tue, 18 Jul 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/guidelines-to-migrate-to-our-new-api-version-1e7j</link>
      <guid>https://dev.to/contenda/guidelines-to-migrate-to-our-new-api-version-1e7j</guid>
      <description>&lt;p&gt;Our platform is constantly evolving, and as part of our recent &lt;a href="https://dev.to/contenda/auth-into-our-new-platform-bg4"&gt;auth&lt;/a&gt; &amp;amp; &lt;a href="https://dev.to/contenda/pain-free-content-management-with-contenda-dashboards-35i0"&gt;dashboards&lt;/a&gt; release we’ve also upgraded our API version to version 3. Thankfully while the version number change is major, and there &lt;strong&gt;is&lt;/strong&gt; a breaking change, generally there’s not much to adjust.&lt;/p&gt;

&lt;h2&gt;
  
  
  0. New way to get an API Key
&lt;/h2&gt;

&lt;p&gt;Before, the only way to access Contenda was via an API Key - didn’t matter whether you accessed our web app or our API directly.&lt;/p&gt;

&lt;p&gt;Now, all our users can sign up in the web app where you can find the option to generate an API key from the dashboard. These new (version 3) API keys are 43 characters long, so they’re easily distinguishable from the old (version 2) API keys, which are 72 characters long.&lt;/p&gt;
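&lt;p&gt;If you happen to be scripting against both versions during the transition, the length difference is enough to tell keys apart - a tiny illustrative Python helper (not an official part of our API):&lt;/p&gt;

```python
def api_key_version(key: str) -> int:
    # Per this guide: new (v3) keys are 43 characters long,
    # old (v2) keys are 72 characters long.
    if len(key) == 43:
        return 3
    if len(key) == 72:
        return 2
    raise ValueError("unrecognized API key length")
```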


  


&lt;h2&gt;
  
  
  1. API URL Update
&lt;/h2&gt;

&lt;p&gt;In version 2 of our API, you were used to interacting with the API URL prefix &lt;code&gt;api/v2/&lt;/code&gt; and the &lt;code&gt;https://prod.contenda.io&lt;/code&gt; domain. With version 3, we have updated this to be &lt;code&gt;/api/v3/&lt;/code&gt; and &lt;code&gt;https://prod.contenda.co&lt;/code&gt; (notice the TLD change). This new prefix will direct your requests to the appropriate API version, with a few more capabilities and most importantly, using the new authentication methods.&lt;/p&gt;

&lt;p&gt;We’ll still be supporting “version 2” until the end of this month. But it’s important to note that the TLD change is required for both versions. You’ll need to use &lt;code&gt;https://prod.contenda.co&lt;/code&gt; as &lt;code&gt;https://prod.contenda.io&lt;/code&gt; will not work anymore.&lt;/p&gt;

&lt;p&gt;Our documentation for all our endpoints is available here: &lt;a href="https://prod.contenda.co" rel="noopener noreferrer"&gt;https://prod.contenda.co&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Changes in Authentication
&lt;/h2&gt;

&lt;p&gt;The way you authenticate to our API has also changed. In the previous version, you had to use the &lt;code&gt;/api/v2/identity/token&lt;/code&gt; endpoint to obtain an access token.&lt;/p&gt;

&lt;p&gt;To obtain your access token for version 3 of the API, use the &lt;code&gt;/auth/v1/flow/apilogin&lt;/code&gt; endpoint. This authentication endpoint requires you to submit &lt;code&gt;user_email&lt;/code&gt; and &lt;code&gt;api_key&lt;/code&gt; within the JSON body of your request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user_email": "your_email@example.com",
  "api_key": "bla4815162342bla"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see the exact call and expected response, you can check our &lt;a href="https://prod.contenda.co/auth/v1/docs#/User%20Flow/login_with_api_key_flow_apilogin_post" rel="noopener noreferrer"&gt;auth docs&lt;/a&gt;.&lt;/p&gt;
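&lt;p&gt;Put together, a version 3 login call looks roughly like this - a Python sketch using only the standard library; the request is built but not sent here, and the credentials are placeholders:&lt;/p&gt;

```python
import json
import urllib.request

# Build the version 3 login request. The endpoint and JSON field names
# come from this guide; the email and key values are placeholders.
payload = json.dumps({
    "user_email": "your_email@example.com",
    "api_key": "bla4815162342bla",
}).encode()

req = urllib.request.Request(
    "https://prod.contenda.co/auth/v1/flow/apilogin",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would then return the token response
```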

&lt;h2&gt;
  
  
  3. Token Supply Method
&lt;/h2&gt;

&lt;p&gt;When interacting with our old API you had to supply your access token as a query parameter &lt;code&gt;token&lt;/code&gt;. We have updated this in version 3 for enhanced security and ease of use - matching industry standard practices.&lt;/p&gt;

&lt;p&gt;To call the version 3 APIs, you need to supply your token in the &lt;code&gt;Authorization&lt;/code&gt; header as a &lt;code&gt;Bearer&lt;/code&gt; value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fnewapi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fnewapi.png" alt="Comparing the old and the new API call methods" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
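&lt;p&gt;In code, the new style looks roughly like this (a Python sketch; the endpoint path and token value are placeholders):&lt;/p&gt;

```python
import urllib.request

# Version 2 passed the token as a ?token= query parameter; version 3
# expects an Authorization header instead.
access_token = "example-access-token"
req = urllib.request.Request(
    "https://prod.contenda.co/api/v3/example",  # hypothetical endpoint path
    headers={
        "Authorization": f"Bearer {access_token}",
        # Optional but recommended (see section 4): identify your client.
        "User-Agent": "my-integration/1.0",
    },
)
```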

&lt;h2&gt;
  
  
  4. Setting the User-Agent Header
&lt;/h2&gt;

&lt;p&gt;While this was not a requirement in the previous version (and it’s still not in version 3), it’s recommended that you set the &lt;code&gt;User-Agent&lt;/code&gt; header in your requests. Including this information in your requests can help us provide you with better, more personalized support.&lt;/p&gt;

&lt;h2&gt;
  
  
  In conclusion
&lt;/h2&gt;

&lt;p&gt;We understand that any transition can bring challenges, but we believe these changes will ultimately offer a more efficient, secure, and user-friendly experience for everyone using our platform. We appreciate your understanding and cooperation during this transition period, and our support team is always ready to help with any issues or concerns you might have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our current plans are to deprecate the old “version 2” of the API on July 31st, 2023.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Thank you for your continued support and use of our platform! 🚀 We’re looking forward to seeing what amazing things you will build with our new and improved API!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Auth into our new platform!</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Thu, 29 Jun 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/auth-into-our-new-platform-bg4</link>
      <guid>https://dev.to/contenda/auth-into-our-new-platform-bg4</guid>
      <description>&lt;p&gt;If you were up during demon hours last night (for your sake, I hope you weren’t), you might have noticed that we were under maintenance. This small maintenance window has been the culmination of many many hours as we’ve been hammering away behind the scenes… and are finally ready to reveal a transformative update that’s designed to amplify your journey with us. Hold on to your hats, because we’re unlocking a new realm of user experience – proper authentication!&lt;/p&gt;

&lt;h2&gt;
  
  
  So what’s new?
&lt;/h2&gt;

&lt;p&gt;Aside from the industry standard security practices now in place to make sure your account is only accessible by you –and only you–, we are pushing forward in 3 major areas.&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplified sign-ups
&lt;/h3&gt;


&lt;p&gt;To all the API key sign in haters out there, we hear you loud and clear! We acknowledge that our previous API key-based sign-in process could be a bit daunting and cumbersome. We’re thrilled to announce that we’re leaving that behind! And for all you API key lovers, don’t worry! You can still use it to interact directly with our API. However, when it comes to signing in on our frontend, we’re introducing two new options: email and password or Google SSO. So, get ready for a smoother and more convenient sign-in experience that suits your preferences. You can expect more SSO options to come as well, if you have your allegiances elsewhere.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workspaces, a shared journey
&lt;/h3&gt;


&lt;p&gt;Join forces and collaborate with others on your content more seamlessly. You’ll now be able to make your own workspace, join an existing one, and invite others. Once in a workspace, your team can submit, see, and edit each other’s posts!&lt;/p&gt;

&lt;h3&gt;
  
  
  A dashing Dashboard
&lt;/h3&gt;


&lt;p&gt;Want to see what you generated last month without having to dig through your inbox? Now you can just scroll through your dashboard to find it. We’ll be going into more detail on the creation of the dashboard and all the cool things it does next week, so keep an eye out for that blog post!&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it out!
&lt;/h2&gt;

&lt;p&gt;If you don’t have an account with us yet, &lt;a href="https://app.contenda.co/" rel="noopener noreferrer"&gt;open the Contenda App&lt;/a&gt; which should take you straight to the sign up screen! Or if you already have an account with us, you should have gotten an email to set up your new password. Follow the instructions in the email and it should take you to your new dashboard! We’ve migrated everyone into their own workspaces, so if you’d like us to merge yours with a coworker’s, just let us know and we’ll sort it out for you!&lt;/p&gt;

&lt;p&gt;If you have any feedback or run into any issues, please feel free &lt;a href="https://discord.gg/bYda4pQz2v" rel="noopener noreferrer"&gt;to reach out to our team on Discord&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>authentication</category>
      <category>platform</category>
    </item>
    <item>
      <title>Netting the first Contenda SDK</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Thu, 18 May 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/contenda/netting-the-first-contenda-sdk-21ge</link>
      <guid>https://dev.to/contenda/netting-the-first-contenda-sdk-21ge</guid>
      <description>&lt;p&gt;We’re dropping our new .NET Software Development Kit (SDK) like it’s hot, for all you C#, F# and VB.NET code warriors out there! This baby is the fruit of our burning love for making your life with Contenda a walk in the park. Your friendly neighborhood dev advocate just got a turbo boost! And guess what? This .NET SDK is open source, up for grabs on both &lt;a href="https://www.nuget.org/packages/Contenda.Sdk" rel="noopener noreferrer"&gt;NuGet&lt;/a&gt; and &lt;a href="https://github.com/Contenda-Team/contenda-dotnet-sdk/" rel="noopener noreferrer"&gt;Github&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fnetsdk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcontenda.co%2Fblogimages%2Fnetsdk.gif" alt="Demo gif of .NET SDK" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An SDK is similar to a LEGO set, providing all the necessary pieces and instructions to assemble a complex structure - all without you having to build those pieces yourself.&lt;/p&gt;

&lt;p&gt;This release is like the first step on the moon for us - a giant leap towards making our product your new favorite content creation tool. We can’t wait to hear your thoughts, and oh boy, are we stoked to see the genius stuff you’ll cook up with our SDKs! And hold on to your keyboards, ‘cause this ain’t the end of the line! We’re geared up to roll out libraries in even more languages, spreading the love to more tech advocates.&lt;/p&gt;

&lt;p&gt;If you’d like to learn more, please feel free &lt;a href="https://discord.gg/bYda4pQz2v" rel="noopener noreferrer"&gt;to reach out to our team on Discord&lt;/a&gt; or &lt;a href="https://contenda.ck.page/96e4e60222" rel="noopener noreferrer"&gt;sign up for our email list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Prepare to do more with less, ‘cause we’re just getting started!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>sdk</category>
    </item>
    <item>
      <title>Tiny Tools of the Trade - Part I.</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Tue, 26 Apr 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/tomzorz/tiny-tools-of-the-trade-part-i-5d30</link>
      <guid>https://dev.to/tomzorz/tiny-tools-of-the-trade-part-i-5d30</guid>
      <description>&lt;p&gt;Fairly soon after I started my collection of tiny tools I had the idea to write about them, and share the awesomeness that might be flying under the radar. According to the folder metadata, I started actively collecting in February of 2013. I called the folder "um" for "ultra-mini" which sort of stuck over the years, so this is the story of &lt;code&gt;D:\um\&lt;/code&gt; - somewhat grouped by frequency of usage and function. This time in Part I, let's take a look at various media-wrangling tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Grab any video with yt-dlp &lt;a href="https://shoreparty.org/posts/tiny-tools-of-the-trade-part-i/#grab-any-video-with-yt-dlp" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A fork of the no-longer maintained youtube-dl, &lt;a href="https://github.com/yt-dlp/yt-dlp" rel="noopener noreferrer"&gt;yt-dlp&lt;/a&gt; is a simple command line tool that lets you grab essentially any video from any site. Amongst &lt;a href="https://github.com/yt-dlp/yt-dlp#usage-and-options" rel="noopener noreferrer"&gt;many other things&lt;/a&gt;, it supports grabbing entire YouTube channels or playlists, grabbing subtitles, or outputting just audio files if that's what you need.&lt;/p&gt;

&lt;p&gt;Using yt-dlp is as simple as it gets. After making sure you &lt;a href="https://github.com/yt-dlp/yt-dlp#installation" rel="noopener noreferrer"&gt;installed it by following the steps appropriate for your operating system&lt;/a&gt;, type &lt;code&gt;yt-dlp https://www.youtube.com/watch?v=dQw4w9WgXcQ&lt;/code&gt; into the terminal and let it do its thing.&lt;/p&gt;
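&lt;p&gt;As a hedged sketch of those extras - the URL below is the same placeholder as above, and &lt;code&gt;urls.txt&lt;/code&gt; is a hypothetical file you'd create yourself with one video URL per line - these are the flags I reach for most often:&lt;/p&gt;

```shell
# Sketch only: the downloads run solely if yt-dlp is actually installed
if command -v yt-dlp >/dev/null 2>&1; then
  # Audio only: extract the soundtrack and convert it to mp3
  yt-dlp -x --audio-format mp3 "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

  # Grab English subtitles alongside the video
  yt-dlp --write-subs --sub-langs en "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

  # Batch mode: urls.txt is a hypothetical file, one video URL per line
  if [ -e urls.txt ]; then
    while IFS= read -r url; do
      yt-dlp -o "%(title)s.%(ext)s" "$url"
    done < urls.txt
  fi
fi
```

&lt;p&gt;The &lt;code&gt;-o&lt;/code&gt; output template controls the resulting filenames - especially handy when grabbing whole playlists.&lt;/p&gt;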

&lt;h2&gt;
  
  
  Grab any image with RipMe &lt;a href="https://shoreparty.org/posts/tiny-tools-of-the-trade-part-i/#grab-any-image-with-ripme" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Same idea as before, just in a small java UI instead of a command line, and with images instead of videos. Want to grab that cool album of wallpapers from Flickr, or those &lt;em&gt;dank&lt;/em&gt; memes from imgur? &lt;a href="https://github.com/RipMeApp/ripme" rel="noopener noreferrer"&gt;RipMe is your friend&lt;/a&gt;. Just plop a URL into the UI, press the button and let the magic happen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bxbfxl5fb93hu8k3p4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bxbfxl5fb93hu8k3p4x.png" width="424" height="474"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;RipMe UI - focus your attention on the top row, it has all the things you need.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trim, crop, edit and more with ffmpeg &lt;a href="https://shoreparty.org/posts/tiny-tools-of-the-trade-part-i/#trim%2C-crop%2C-edit-and-more-with-ffmpeg" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Underlying a significant amount of products that deal with media nowadays, you'll find &lt;a href="https://ffmpeg.org/" rel="noopener noreferrer"&gt;ffmpeg&lt;/a&gt;. To twist a quote from The Martian a little, &lt;em&gt;ffmpeg is magic and should be worshipped&lt;/em&gt;. While basic conversion is relatively simple, I wouldn't call the rest of it that... maybe quite the opposite. With that said, the capabilities are surprisingly advanced - if you're patient enough writing complex filter "code" you can edit entire videos with transitions and more - just from the terminal.&lt;/p&gt;

&lt;p&gt;Two extra capabilities are also worth mentioning: 1) because ffmpeg is a terminal based tool, it's easy to script the mass-processing of large batches of media; and 2) many actions, like trimming or replacing audio tracks, can be done without re-encoding the entire media file, which makes these tasks &lt;em&gt;significantly&lt;/em&gt; faster compared to re-rendering in a "proper" video editor.&lt;/p&gt;

&lt;p&gt;To make sure I don't scare anyone away, here's a simple example making an audio file out of a video: &lt;code&gt;ffmpeg -i never.mkv never.mp3&lt;/code&gt; - simple, right? For more advanced examples keep an eye out on this blog. I've been collecting my own ffmpeg scripts and templates, and I'm planning to share them in a few follow-up posts here soon™.&lt;/p&gt;
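&lt;p&gt;To sketch the two extra capabilities mentioned above - batch processing and lossless trimming - here's a small, hedged example (the filenames are placeholders):&lt;/p&gt;

```shell
# Sketch only: runs solely if ffmpeg is actually installed
if command -v ffmpeg >/dev/null 2>&1; then
  # 1) Batch processing: convert every .mkv in the folder to .mp3
  for f in *.mkv; do
    [ -e "$f" ] || continue          # skip when the glob matches nothing
    ffmpeg -i "$f" "${f%.mkv}.mp3"   # ${f%.mkv} strips the extension
  done

  # 2) Trim without re-encoding: -c copy keeps the original streams,
  # so cutting out the 1:00-2:00 segment takes seconds, not minutes
  if [ -e never.mkv ]; then
    ffmpeg -ss 00:01:00 -to 00:02:00 -i never.mkv -c copy clip.mkv
  fi
fi
```

&lt;p&gt;One caveat: stream copy (&lt;code&gt;-c copy&lt;/code&gt;) can only cut near keyframes, so the cut points may shift by a fraction of a second - that's the price of skipping the re-encode.&lt;/p&gt;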

</description>
      <category>recommendations</category>
      <category>tools</category>
      <category>ffmpeg</category>
    </item>
    <item>
      <title>Now That's What I Call Visual Studio Extensions (2022 edition)</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Thu, 03 Feb 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/tomzorz/now-thats-what-i-call-visual-studio-extensions-2022-edition-4256</link>
      <guid>https://dev.to/tomzorz/now-thats-what-i-call-visual-studio-extensions-2022-edition-4256</guid>
      <description>&lt;p&gt;Funnily enough I've started this article when I was using Visual Studio 2017... and somehow never ended up finishing it. Nevertheless, finally with the release of Visual Studio 2022, I had an "opportunity" to move my extensions over from 2019 - make a list, check it twice and all that. Not all of the things I used are compatible &lt;em&gt;yet&lt;/em&gt;, but on the other hand I've found a few good new ones.&lt;/p&gt;

&lt;p&gt;You might be wondering "Won't installing all these extensions slow down Visual Studio?", to which the answer is a resounding &lt;strong&gt;yes&lt;/strong&gt;... &lt;strong&gt;BUT&lt;/strong&gt; they'll save you magnitudes more time by increasing your productivity. It's definitely a tradeoff, but I have a strong preference. I barely ever close VS, so for example the 30 seconds lost at launch waiting for things to load are nothing compared to the minutes I often save on refactoring something.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReSharper &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#resharper" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This here "extension" gets its own category. &lt;a href="https://www.jetbrains.com/resharper/" rel="noopener noreferrer"&gt;ReSharper&lt;/a&gt; is JetBrains's mega-extension for Visual Studio, adding way more &lt;a href="https://www.jetbrains.com/resharper/features/" rel="noopener noreferrer"&gt;new features, improvements and straight up replacements&lt;/a&gt; than I could list here. I recommend visiting the features page I just linked and reading about its various capabilities. It's also one of the tools that I credit with making me a better developer - the static analysis it provides made me write better, faster and more secure code. (Although it's worth mentioning that VS has been catching up in this area somewhat since Roslyn arrived.)&lt;/p&gt;

&lt;p&gt;ReSharper itself has its own extensions and text templates - some of which are pretty cool, e.g. &lt;a href="https://plugins.jetbrains.com/plugin/11629-unity-support" rel="noopener noreferrer"&gt;this one for Unity developers&lt;/a&gt; that aligns its suggestions more with the Unity standard practices, or &lt;a href="https://plugins.jetbrains.com/plugin/11621-enhanced-tooltip" rel="noopener noreferrer"&gt;this one that enhances the built-in tooltips&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unlike all the other extensions I'll list, this one needs a &lt;a href="https://www.jetbrains.com/resharper/buy/#personal" rel="noopener noreferrer"&gt;paid subscription&lt;/a&gt; but I think it's a $ value that gets returned to you in saved time and effort within weeks.&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2010 to VS2022&lt;/strong&gt; &lt;a href="https://www.jetbrains.com/resharper/download/#section=offline-installer" rel="noopener noreferrer"&gt;https://www.jetbrains.com/resharper/download/#section=offline-installer&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Productivity &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#productivity" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Productivity Power Tools &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#productivity-power-tools" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Microsoft DevLab's extension bundle is a true gem: it includes ~10 smaller extensions (that you can install separately if you'd like) that add small but handy enhancements to VS. My two favorites are &lt;a href="https://marketplace.visualstudio.com/items?itemName=VisualStudioPlatformTeam.MatchMargin2022" rel="noopener noreferrer"&gt;Match Margin&lt;/a&gt; that highlights other occurrences of the selected text in the editor scrollbar, and &lt;a href="https://marketplace.visualstudio.com/items?itemName=VisualStudioPlatformTeam.SyntacticLineCompression2022" rel="noopener noreferrer"&gt;Shrink Empty Lines&lt;/a&gt; that saves screen space by shrinking empty lines vertically, letting me see more of the actual code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ymknj65tsasbmutfsbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ymknj65tsasbmutfsbb.png" width="800" height="203"&gt;&lt;/a&gt;Notice the purple dots in the scrollbar map&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=VisualStudioPlatformTeam.ProductivityPowerPack2022" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=VisualStudioPlatformTeam.ProductivityPowerPack2022&lt;/a&gt;&lt;br&gt;&lt;br&gt;
💾 &lt;strong&gt;VS2017 to VS2019&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=VisualStudioPlatformTeam.ProductivityPowerPack2017" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=VisualStudioPlatformTeam.ProductivityPowerPack2017&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Output Enhancer &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#output-enhancer" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This extension by Nikolay Balakin adds coloring to the Build and Output windows, making them a lot easier to digest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri33xiov254fd4c3m6hr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri33xiov254fd4c3m6hr.png" width="531" height="99"&gt;&lt;/a&gt;Yellow warnings, green successes&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2012 to VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=NikolayBalakin.Outputenhancer" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=NikolayBalakin.Outputenhancer&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  File Icons &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#file-icons" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Mads Kristensen is a VS giant by day and apparently by night too - while working on VS itself at Microsoft (at least at the time of writing this) &lt;a href="https://marketplace.visualstudio.com/publishers/MadsKristensen" rel="noopener noreferrer"&gt;he has also authored 100+ extensions for it&lt;/a&gt;. The one I'm recommending here adds a bunch of new icons to the Solution Explorer, so various non-code related files are more easily identifiable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsaynss59ukn0y8b2ka9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsaynss59ukn0y8b2ka9.png" width="249" height="64"&gt;&lt;/a&gt;SQLite gets an icon&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2017 to VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=MadsKristensen.FileIcons" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=MadsKristensen.FileIcons&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Understanding &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#code-understanding" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  CodeBlockEndTag &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#codeblockendtag" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;KhaosPrinz's extension might be my favorite tiny enhancement to VS. Whenever the beginning of a block is not visible, it adds a little tag at the block's end reminding you how it began.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqaybw0inqnxf4w700khs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqaybw0inqnxf4w700khs.png" width="800" height="176"&gt;&lt;/a&gt;See the end of lines 226 and 236&lt;/p&gt;

&lt;p&gt;A bit of extra history &lt;em&gt;as a treat&lt;/em&gt;: I first came across this feature in the VSCommands extension, but sadly both the extension and the company behind it disappeared around 2014 - with no upgrades available to new VS versions.&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2015 to VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=KhaosPrinz.CodeBlockEndTag" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=KhaosPrinz.CodeBlockEndTag&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CI CodeLens Info &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#ci-codelens-info" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;CodeLens is a much-loved VS feature that adds a little extra context to classes, interfaces and methods. Thankfully it's also extensible, so now, next to the default source control and reference information, we can get a little more information about the selected element itself. We can thank Luiz Fernando DINATO for authoring this one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd7ln7cgczk56wmy0co3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd7ln7cgczk56wmy0co3.png" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=LuizFernandoDINATO.CICodeLensInfo2022" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=LuizFernandoDINATO.CICodeLensInfo2022&lt;/a&gt;&lt;br&gt;&lt;br&gt;
💾 &lt;strong&gt;VS2019&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=LuizFernandoDINATO.cicodelensinfoextension" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=LuizFernandoDINATO.cicodelensinfoextension&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  microscope &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#microscope" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Robert Hofmann's extension adds a few windows and a CodeLens bit that let us see the generated IL of methods, lambdas, closures and more. As Robert puts it on the extension's page, &lt;em&gt;"It's mostly useful for learning and getting a better understanding of how C# works internally"&lt;/em&gt; - and I completely agree. Trying to be smarter than the compiler is often futile, but at least it's nice to see what it's doing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhegvm92146osmqkp8pfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhegvm92146osmqkp8pfi.png" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=bert.microscope" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=bert.microscope&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This extension is also available for VS2019 but you'll have to manually install it - see the linked VS2022 page above for instructions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#debugging" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ArrayPlotter &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#arrayplotter" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Clue's in the name, as always - Rodney Thomson's extension adds a multitude of ways to visualize C, C++ and C# arrays. Understanding large numerical arrays as endless lines of numbers in the debug tooltips is hard - and this is an excellent way to alleviate that problem.&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=RodneyThomson.ArrayPlotter64" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=RodneyThomson.ArrayPlotter64&lt;/a&gt;&lt;br&gt;&lt;br&gt;
💾 &lt;strong&gt;VS2012 to VS2019&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=RodneyThomson.ArrayPlotter" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=RodneyThomson.ArrayPlotter&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ArrayVisualizer &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#arrayvisualizer" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;More of the same, originally by Amir Liberman, but more recently by Manuel Eisenschink. This time it's less about &lt;em&gt;plotting&lt;/em&gt; and more about &lt;em&gt;visualizing&lt;/em&gt; tables, cubes and more. I've found this extension very useful during the most recent Advent of Code challenge, where we had to deal with various 2D arrays almost every day.&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2015 to VS2022&lt;/strong&gt; &lt;a href="https://github.com/Skyppid/Array-Visualizer/" rel="noopener noreferrer"&gt;https://github.com/Skyppid/Array-Visualizer/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll have to find and pick the proper release for your VS version following the link above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Formatting &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#formatting" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  CodeMaid &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#codemaid" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This superb code-cleanup extension by Steve Cadwallader helps a ton with arranging, formatting and cleaning up code. Highly customizable and a treat to use, especially with ever-growing files. &lt;em&gt;(That we should definitely refactor into multiple files, we just need to fit it into one of these sprints coming up. One of these days.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=SteveCadwallader.CodeMaidVS2022" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=SteveCadwallader.CodeMaidVS2022&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This extension is also available for older VS versions but you'll have to manually install it - see the linked VS2022 page above for instructions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trailing Whitespace Visualizer &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#trailing-whitespace-visualizer" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The second extension in my list from Mads Kristensen, this time - as the title suggests - to highlight trailing whitespace in your code. It's invisible and annoying by default - this helps with getting rid of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u9zw6agke46l69wi3vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u9zw6agke46l69wi3vh.png" width="770" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2017 to VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=MadsKristensen.TrailingWhitespaceVisualizer" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=MadsKristensen.TrailingWhitespaceVisualizer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(The marketplace page doesn't list VS2022 but it installed for me just fine.)&lt;/p&gt;

&lt;h3&gt;
  
  
  XamlStyler &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#xamlstyler" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Probably the most niche extension in the list, this one is only useful for XAML-based UI developers. Xavalon's tremendous addition to VS helps with (optionally auto-) formatting your XAML documents. Highly customizable to fit your team's preferences, it helps a lot with maintaining consistency and readability.&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=TeamXavalon.XAMLStyler2022" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=TeamXavalon.XAMLStyler2022&lt;/a&gt;&lt;br&gt;&lt;br&gt;
💾 &lt;strong&gt;VS2017 to VS2019&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=TeamXavalon.XAMLStyler" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=TeamXavalon.XAMLStyler&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#other" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Font Sizer 2.0 &lt;a href="https://shoreparty.org/posts/now-thats-what-i-call-visual-studio-extensions/#font-sizer-2.0" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The third and final extension from Mads - something that's mostly useful for screen sharing / casts or presentations. The extension provides a quick and easy way to adjust the font size of the UI or code.&lt;/p&gt;

&lt;p&gt;💾 &lt;strong&gt;VS2017 to VS2022&lt;/strong&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=MadsKristensen.FontSizer2" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=MadsKristensen.FontSizer2&lt;/a&gt;&lt;/p&gt;

</description>
      <category>visualstudio</category>
      <category>extensions</category>
      <category>recommendations</category>
    </item>
    <item>
      <title>Shared Engine Spaces in the age of Mixed Reality Operating Systems</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Thu, 18 Mar 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/tomzorz/shared-engine-spaces-in-the-age-of-mixed-reality-operating-systems-2571</link>
      <guid>https://dev.to/tomzorz/shared-engine-spaces-in-the-age-of-mixed-reality-operating-systems-2571</guid>
      <description>&lt;p&gt;Users of traditional "2D" or "pancake" operating system user interfaces have long expected multiple different rendering engine powered applications to work together simultaneously. I can easily run a game using Unreal Engine with e.g. DirectX11 backing it in a window next to an enterprise app using Vulkan, while both are eventually placed on screen by Windows's DWM/Composition engine which is probably using DirectX12. And if I were to launch a Unity3D powered app on top of this, it'd just work as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sh42wk2achfu7rmbmka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sh42wk2achfu7rmbmka.png" width="800" height="450"&gt;&lt;/a&gt;Unreal Engine, WPF, Vulkan, UWP, DX12 playing along&lt;/p&gt;

&lt;p&gt;This integration is obviously not perfect, two issues I'd call out specifically:&lt;/p&gt;

&lt;p&gt;Mixing multiple rendering engines within a single application is heavily dependent on the engines' interoperability and "closeness". You can often work around this and embed a window using rendering engine &lt;code&gt;A&lt;/code&gt; within a window using rendering engine &lt;code&gt;B&lt;/code&gt;, but that usually also means you can't have rendering engine &lt;code&gt;B&lt;/code&gt; draw on top of the area that's being used by engine &lt;code&gt;A&lt;/code&gt;. These issues are dubbed &lt;strong&gt;airspace issues&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9crmuwmqr9frapslymn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9crmuwmqr9frapslymn0.png" width="800" height="626"&gt;&lt;/a&gt;You can place a Vulkan renderer inside a WPF app, but the WPF app can't draw on top of the Vulkan part&lt;/p&gt;

&lt;p&gt;There's also room to improve the balanced &lt;strong&gt;sharing of resources&lt;/strong&gt; available. However fast a computer you build, a game loading will probably make YouTube playback in a different window freeze up for a bit. Based on my observations, Windows seems to prioritize the foreground application - which does make sense in most use cases.&lt;/p&gt;

&lt;p&gt;I chose these two pain points specifically because, in my opinion, they are critical to a fully capable 3D / mixed reality operating system's user interface. Let's take a look at the solutions we have today:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Oculus Home / Windows Mixed Reality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both of the above simply don't solve the issue at all: they guarantee exclusivity to the foreground 3D application the user is interacting with, and don't let developers create shared 3D experiences. The only "exception" (imagine me using air quotes here) is application icons: developers can provide 3D content in a .gltf file to act as a 3D representation of the application in the user's home space.&lt;/p&gt;

&lt;p&gt;Both of the above also try to inject system UI into the foreground 3D applications, with varying levels of success:&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Video: Quest system UI in an app (&lt;a href="https://shoreparty.org/img/mrosengine/quest.mp4" rel="noopener noreferrer"&gt;download&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;


&lt;p&gt;On the HoloLens 2 it's now possible to open and interact with 2D apps while being "inside" a 3D app, but as you can see in both this video and the previous one, the projection looks off and what's really in the foreground or background gets ignored.&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Video: HoloLens 2D app overlay in a 3D app (&lt;a href="https://shoreparty.org/img/mrosengine/winmr.mp4" rel="noopener noreferrer"&gt;download&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Lumin OS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Magic Leap's operating system is currently the only OS that even has an approach to this. Applications built with their own app-building environment can exist in the same shared 3D space, as this example shows, with their browser and 3D gallery open at the same time:&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Video: Lumin OS running multiple 3D apps (&lt;a href="https://shoreparty.org/img/mrosengine/magicleap.mp4" rel="noopener noreferrer"&gt;download&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;


&lt;p&gt;With that said, this only works for their own apps… apps using a 3rd party engine like Unity are still only capable of running in exclusive mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SteamVR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's include this one just for completeness's sake. Multiple 3D experiences aren't allowed here either, BUT I think it's important to mention that their SDK allows great integrations to be made, &lt;a href="https://store.steampowered.com/app/586210/OVRdrop/" rel="noopener noreferrer"&gt;e.g. a Twitch chat viewer that's attached to the backside of the controller&lt;/a&gt;. It doesn't respect perspective/foreground-background either, but it's the only platform where 3rd parties are allowed to do this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to next? &lt;a href="https://shoreparty.org/posts/shared-engine-spaces/#where-to-next%3F" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;While there'll always be applications that demand and deserve an exclusive 3D space, e.g. games, it's completely realistic to imagine...&lt;/p&gt;

&lt;p&gt;...watching a point cloud stream of a sporting event on a coffee table...&lt;/p&gt;

&lt;p&gt;...while being in a 3D Skype call with friends placed on the wall...&lt;/p&gt;

&lt;p&gt;...and having a browser open on the right with 3D charts and models of the players.&lt;/p&gt;

&lt;p&gt;It should be immediately clear to anyone who has a bit of rendering or OS knowledge that this is a &lt;strong&gt;really hard problem to solve&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To achieve a completely seamless experience we have to share "everything with everyone": all rendering engines need to match camera settings, lighting, materials, sounds and more between them. This is complex not only due to the engines' differing architectures, but also because of the performance these experiences require. On desktop it's often acceptable for applications to react with 100-200ms latency, but in MR even a 50ms latency between two applications' interactions will be immediately noticed.&lt;/p&gt;

&lt;p&gt;From now on let's jump into the &lt;strong&gt;theorycrafting zone&lt;/strong&gt;, and try to imagine how these problems are actually going to get solved. I see three possible paths forward... let's take them in order of increasing odds, as judged by yours truly.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. No solution whatsoever &lt;a href="https://shoreparty.org/posts/shared-engine-spaces/#1.-no-solution-whatsoever" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Engine developers stay hostile to each other, platforms and operating systems stay hostile to each other... no interop, no cross-operation - except perhaps within each operating system's own ecosystem.&lt;/p&gt;

&lt;p&gt;I don't think this will be the case. We've seen &lt;em&gt;some&lt;/em&gt; historical examples of this, but I think it's a different time now, and if anything, UX demands will make interop happen.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Engine interoperability gets sorted out &lt;a href="https://shoreparty.org/posts/shared-engine-spaces/#2.-engine-interoperability-gets-sorted-out" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxtymqcirbyiz3riq652.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxtymqcirbyiz3riq652.png" width="500" height="283"&gt;&lt;/a&gt;There are n+1 competing standards&lt;/p&gt;

&lt;p&gt;Imagine a wondrous land where the &lt;a href="https://xkcd.com/927/" rel="noopener noreferrer"&gt;XKCD standards comic&lt;/a&gt; isn't real, and somehow every interested party manages to agree on a common interface that allows them to share:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lights,&lt;/li&gt;
&lt;li&gt;shadows,&lt;/li&gt;
&lt;li&gt;textures / materials,&lt;/li&gt;
&lt;li&gt;camera settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;etc. It's not completely outside the realm of possibility - there have been recent endeavours down this avenue, such as &lt;a href="https://en.wikipedia.org/wiki/Physically_based_rendering" rel="noopener noreferrer"&gt;physically based rendering&lt;/a&gt; or the &lt;a href="https://www.khronos.org/gltf/" rel="noopener noreferrer"&gt;glTF&lt;/a&gt; format.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. A single engine wins &lt;a href="https://shoreparty.org/posts/shared-engine-spaces/#3.-a-single-engine-wins" rel="noopener noreferrer"&gt;#&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This is one of those things where the more I thought about it, the more I went from "nah, that's silly" to "hmm, this might actually be it". And I have a current, strong example as to why.&lt;/p&gt;

&lt;p&gt;Consider the following: what's probably just as hard to create as a good 3D rendering engine, in terms of features, complexity and portability? If you thought "browser engine", you get 100 points.&lt;/p&gt;

&lt;p&gt;If we look back at what happened to browsers over the last 20 years, a clear pattern emerges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firefox is doing its own thing with Gecko,&lt;/li&gt;
&lt;li&gt;IE's Trident got replaced by EdgeHTML in Edge,&lt;/li&gt;
&lt;li&gt;Opera gave up on Presto and went with WebKit/Blink,&lt;/li&gt;
&lt;li&gt;Vivaldi, Brave and others went with Blink from the start,&lt;/li&gt;
&lt;li&gt;most recently, Microsoft gave up on EdgeHTML and moved to Blink as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Aside from an assortment of tiny players, there are two contenders remaining: WebKit/Blink and Gecko (apologies for treating WebKit and Blink as a single group). If you take a look at the &lt;a href="https://gs.statcounter.com/browser-market-share" rel="noopener noreferrer"&gt;browser market share data&lt;/a&gt;, it's clear that Gecko has ~4% market share, and almost everything else is using its competitor. Setting aside the personal feelings this situation raises, I don't think we're far from Gecko throwing in the towel as well. The complexity of the problems faced isn't necessarily impossible to overcome, but the &lt;a href="https://arstechnica.com/information-technology/2020/08/firefox-maker-mozilla-lays-off-250-workers-says-covid-19-lowered-revenue/" rel="noopener noreferrer"&gt;engineering effort available is dwindling&lt;/a&gt; along with the userbase.&lt;/p&gt;

&lt;p&gt;I'd argue that in the past, 3D engines were primarily used in the entertainment industry, which had differing and often niche requirements - but with mixed reality becoming commonplace, this will change. Because of this I feel comfortable applying what we learned about browser engines to 3D engines, and I see a strong possibility for the single-engine-wins outcome. Which engine is it going to be? Not sure. Unreal's source is already publicly available, but Unity has a larger userbase. Those two are clearly strong contenders.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks to &lt;a href="https://twitter.com/vbandi" rel="noopener noreferrer"&gt;András Velvárt&lt;/a&gt; for the HoloLens 2 and Oculus Quest videos.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mixedreality</category>
      <category>operatingsystems</category>
      <category>renderingengines</category>
    </item>
    <item>
      <title>Who needs a Sensibo anyway?</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Sat, 18 Jul 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/tomzorz/who-needs-a-sensibo-anyway-5dnj</link>
      <guid>https://dev.to/tomzorz/who-needs-a-sensibo-anyway-5dnj</guid>
      <description>&lt;p&gt;If you're reading this post you're probably in the target audience for a certain Instagram ad alongside me, advertising Sensibo: a magical IoT gadget that turns your "dumb" air conditioner (AC) into a "smart" one. This is the story of me not buying one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodhp3v4c5hj28ymc4fhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodhp3v4c5hj28ymc4fhp.png" width="719" height="510"&gt;&lt;/a&gt;A Sensibo on a wall&lt;/p&gt;

&lt;h2&gt;
  
  
  The origins #
&lt;/h2&gt;

&lt;p&gt;Let's roll back the wheel of time far... far into the past: the year is 2016, the month is March-ish. Yours truly suddenly has a flashback to the summer of 2015, remembers how incredibly hot and unbearable it was, and promptly orders an AC. The AC installer comes out and offers a few options for a unit, but essentially two choices remain that match my requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AC A: does everything I want, costs $500;&lt;/li&gt;
&lt;li&gt;AC B: does exactly the same as AC A but also has wifi, costs $800.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not intending to pay $300 for a chip worth ~$3 and software that can probably be licensed for $20 a unit, I went with option A.&lt;/p&gt;

&lt;p&gt;Years go by, Sensibo becomes a thing and inserts itself into the back of my mind - but somehow I never purchase it (which may be surprising for someone with 25 IP-addressed devices at home). Then 2020 rolls around, and I finally decide to do it - obviously prefacing my purchase with my usual check of the internets: this is the point where I come across rumors that Sensibo is considering making their service subscription-based. I don't know whether there's any truth to that whatsoever. What I do know, on the other hand, is the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I've heard about other IoT devices going subscription based later in their lifecycle;&lt;/li&gt;
&lt;li&gt;I have a Raspberry Pi Zero W in my drawer;&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;and finally&lt;/em&gt;, I have a masochistic tendency to reverse engineer protocols.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A plan started to form in my mind: I could - in theory - record what my AC's own infrared (IR) remote sends, figure out the encoding and the protocol of the control commands, then recreate said commands from a networked device.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ingredients #
&lt;/h2&gt;

&lt;p&gt;I dove headfirst into my "IoT drawer" (which is right above the fabled "cable drawer" everyone in IT has) and grabbed the aforementioned Raspberry Pi and a little plastic bag (even older than my AC) containing IR parts. These parts were sadly not labeled, and while I and the others I asked could identify the LED and the receiver, we had no clue what the other bits and bobs (diodes, capacitors or resistors) were. No bits and bobs, no IR signals. Bummer.&lt;/p&gt;

&lt;p&gt;Luckily, a cursory search directed me towards an IR hat, the &lt;a href="https://energenie4u.co.uk/catalogue/product/ENER314-IR" rel="noopener noreferrer"&gt;Energenie ENER314-IR&lt;/a&gt; which I managed to find in stock on Amazon. These "hats" are similar to expansion cards in PCs - you just plug them into the pre-existing pins and you're set - no need for any extra tinkering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfxcgcigyby91e0oj7tq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfxcgcigyby91e0oj7tq.png" width="618" height="566"&gt;&lt;/a&gt;Energenie ENER314-IR hat&lt;/p&gt;

&lt;p&gt;There was only one small problem:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbbggqtyextfvwt17ohv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbbggqtyextfvwt17ohv.png" width="600" height="400"&gt;&lt;/a&gt;Raspberry PI Zero W&lt;/p&gt;

&lt;p&gt;The Raspberry Pi Zero W doesn't come with the pins soldered on; I needed to solder them myself. Luckily, it had been a good idea earlier...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;to order a soldering iron with some accessories;&lt;/li&gt;
&lt;li&gt;to buy a 3D printer to print a machine vise that'd hold the PCB;&lt;/li&gt;
&lt;li&gt;and to upgrade my PC last year so that I'd have a leftover CPU fan.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(I totally planned this, all the purchases were leading me to this, yep.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Doing something I never did before #
&lt;/h2&gt;

&lt;p&gt;Even though I'd watched a few tutorials on soldering, I dove into the task with the "enthusiasm" of a cat about to get in the shower. I just had this vision in my head of my Raspberry Pi's PCB melted to the PLA plastic holding it...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fequf9nmdmiegrk4wrs85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fequf9nmdmiegrk4wrs85.png" width="479" height="616"&gt;&lt;/a&gt;Raspberry PI Zero W w/ pins, in a 3D Printed vise, with a CPU fan as ventilation. 100% pure, certified, mil. spec. jank™&lt;/p&gt;

&lt;p&gt;BUT, I made it. Woo! It's definitely &lt;em&gt;not&lt;/em&gt; perfect, but decent enough. And more importantly: it works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xamk04n63uyxfrbi8co.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xamk04n63uyxfrbi8co.png" width="800" height="373"&gt;&lt;/a&gt;The results of my handiwork - as I got to the right side it started to become decent&lt;/p&gt;

&lt;p&gt;I quickly popped the IR hat on and a nicely prepared microSD card in, and I was ready to rock. I'll skip the details of the microSD setup - there are many readily available tutorials for that. The key is to enable SSH and to add the details of the wireless network the OS should connect to after booting.&lt;/p&gt;
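&lt;p&gt;For the curious, the gist of the headless setup is two files on the card's boot partition (this assumes 2020-era Raspberry Pi OS; the mount point, network name and password below are placeholders):&lt;/p&gt;

```shell
# On the boot partition of the freshly flashed microSD card
# (the mount point /media/boot is an assumption - adjust to your system):

# 1. An empty file named "ssh" enables the SSH daemon on first boot.
touch /media/boot/ssh

# 2. wpa_supplicant.conf holds the Wi-Fi details; Raspberry Pi OS picks
#    it up on boot. SSID and password here are placeholders.
printf '%s\n' \
  'country=HU' \
  'ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev' \
  'update_config=1' \
  '' \
  'network={' \
  '    ssid="MyNetwork"' \
  '    psk="MyPassword"' \
  '}' > /media/boot/wpa_supplicant.conf
```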

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm53tymlqmc1d6u6aor6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm53tymlqmc1d6u6aor6y.png" width="800" height="496"&gt;&lt;/a&gt;The complete setup&lt;/p&gt;

&lt;p&gt;Setting up the IR hat to actually work was a bit harder, as I'm guessing the &lt;a href="https://energenie4u.co.uk/res/pdfs/ENER314-IR_User_guide_V3.pdf" rel="noopener noreferrer"&gt;official manual&lt;/a&gt; was a bit out of date. What ended up working for me was to follow &lt;a href="https://github.com/AnaviTechnology/anavi-docs/blob/master/anavi-infrared-phat/anavi-infrared-phat.md#setting-up-lirc" rel="noopener noreferrer"&gt;this guide from Anavi&lt;/a&gt; and combine it with the hardware specifics from the manual. An hour of tinkering later, my terminal window was finally filled with numbers after entering "IR dump" mode (dumping any incoming signals to the console) with &lt;code&gt;mode2 -d /dev/lirc1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad6pb7s1nkjmxa17olul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad6pb7s1nkjmxa17olul.png" width="800" height="454"&gt;&lt;/a&gt;mode2 output showing IR light pulses and spaces inbetween in µs (microsecond, one millionth of a second)&lt;/p&gt;

&lt;h2&gt;
  
  
  Time for the masochistic tendencies #
&lt;/h2&gt;

&lt;p&gt;I mean reverse engineering.&lt;/p&gt;

&lt;p&gt;Step zero is checking whether I need to do anything at all. It took me about 10 minutes to realize I do. My AC (and its rebranded variants) doesn't seem to exist in any of the online pre-made IR remote databases. &lt;em&gt;So it goes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Step one is data gathering. I tried going the official route, using &lt;a href="https://www.lirc.org/html/irrecord.html" rel="noopener noreferrer"&gt;irrecord from the LIRC suite&lt;/a&gt;. Sadly my remote seems to do something decidedly non-standard, as irrecord broke down completely during recording. Cutting my losses after half an hour, I tried going the "dumb" route: what if I just pipe the mode2 output into a text file?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iteiytsb4ohlhl0sz8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iteiytsb4ohlhl0sz8q.png" width="503" height="240"&gt;&lt;/a&gt;Thousands of lines&lt;/p&gt;

&lt;p&gt;It obviously worked, but it was basically useless. There was too much ambient IR noise in my apartment even when I tried covering my sensor from most angles, so my file was full of random 10-100µs signals/pauses; plus to top it all off, there wasn't an easy way to distinguish separate signal blocks in it.&lt;/p&gt;

&lt;p&gt;Eventually I managed to find an application designed for exactly my needs: &lt;a href="http://www.harctoolbox.org/IrScrutinizer.html" rel="noopener noreferrer"&gt;IrScrutinizer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25or7h1dzwrtf7h28w89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25or7h1dzwrtf7h28w89.png" width="800" height="376"&gt;&lt;/a&gt;IrScrutinizer capture settings&lt;/p&gt;

&lt;p&gt;The tricky bit was getting the signals over to it from the Raspberry Pi. I solved this with the most glorious command chain I've ever used: &lt;code&gt;wsl ssh pi@raspberrypi mode2 -d /dev/lirc1&lt;/code&gt;. WSL sits in front because the ssh client built into Windows 10 had trouble with the keyfile-based authentication. Inside WSL we connect via ssh to the Raspberry Pi, which in turn launches mode2, &lt;em&gt;aaaaaaaand&lt;/em&gt; all this is piped back to the JVM running IrScrutinizer. Beautiful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fztp1joh60a99v3elyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fztp1joh60a99v3elyd.png" width="800" height="147"&gt;&lt;/a&gt;IrScrutinizer capture mode&lt;/p&gt;

&lt;p&gt;After getting my capture mode up and running, I started going over all the various features of my AC remote - power, mode, temperature, swing, fan speed etc. - making a note of each change and matching it to the signals. IrScrutinizer helpfully pointed out that this indeed seems like the &lt;a href="https://techdocs.altium.com/display/FPGA/NEC+Infrared+Transmission+Protocol" rel="noopener noreferrer"&gt;NEC protocol&lt;/a&gt;, although as we'll soon find out, it only &lt;em&gt;seemed&lt;/em&gt; like that. After some cleanup, I ended up with the following text file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyqu3nrqao5rli5x4grl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyqu3nrqao5rli5x4grl.png" width="800" height="399"&gt;&lt;/a&gt;Cleaned up IR recording&lt;/p&gt;

&lt;p&gt;Time for step two, making sense of the pulses. The text file already shows that we're dealing with 140 signals per command, and that this is definitely &lt;em&gt;not&lt;/em&gt; the standard NEC protocol, as we have a long pause in the middle and at the end. Thankfully "not standard" doesn't mean "completely different", so I jumped into Visual Studio and whipped up PulseWrangler: a simple C# command-line tool to help me make sense of the timings. In the NEC IR protocol we have 3 different signal pairs to make sense of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;9000µs long pulse followed up by 4500µs pause: this indicates the beginning of a signal block,&lt;/li&gt;
&lt;li&gt;~600µs long pulse followed up by ~600µs pause: this indicates a binary zero signal,&lt;/li&gt;
&lt;li&gt;~600µs long pulse followed up by ~1650µs pause: this indicates a binary one signal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As I said, my remote has a 600µs pulse followed by a ~20000µs pause as a break in the middle, plus the ending has a gigantic pause with even larger variation. Even the values for the standard NEC parts were all over the place, so I made the code really lenient - often allowing ±50% deviation from the specification. While debugging I quickly noticed that some of my signal blocks had 142 signals instead of 140 - I cleaned these up by hand after finding them.&lt;/p&gt;
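&lt;p&gt;The core of such a parser fits in a few lines. Here's a rough Python sketch of the pulse/pause classification described above (my PulseWrangler was C#; the names here are my own, and the timings and lenient tolerance are the ones from this post):&lt;/p&gt;

```python
# Rough sketch of the pulse/pause pair classification described above.
# Names are illustrative; timings follow the post, not the NEC standard.

def close(value, target, tolerance=0.5):
    """True if value is within +/- 50% of target (the lenient check)."""
    return not abs(value - target) > target * tolerance

def classify(pulse, pause):
    """Map one pulse/pause pair (in microseconds) to a symbol."""
    if close(pulse, 9000) and close(pause, 4500):
        return "S"   # start of a signal block
    if close(pulse, 600) and close(pause, 600):
        return "0"   # binary zero
    if close(pulse, 600) and close(pause, 1650):
        return "1"   # binary one
    if close(pulse, 600) and close(pause, 20000):
        return "X"   # the non-standard mid-block separator
    return "E"       # anything else: treat it as the block's end

def decode(timings):
    """timings is a flat list: [pulse, pause, pulse, pause, ...]."""
    pairs = zip(timings[0::2], timings[1::2])
    return "".join(classify(p, g) for p, g in pairs)

decode([9000, 4500, 600, 600, 600, 1650])  # "S01"
```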

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkarww7866pc5ylim7ugf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkarww7866pc5ylim7ugf.png" width="800" height="476"&gt;&lt;/a&gt;PulseWrangler parser, nothing magical just a bunch of ifs&lt;/p&gt;

&lt;p&gt;So after running this parser I ended up with a bunch of lines looking like &lt;code&gt;S00101000...01X100...11000100E&lt;/code&gt;. Much better. I adjusted the output a bit so I'd get a CSV, and opened it up in Excel for analysis.&lt;/p&gt;

&lt;p&gt;Figuring out the protocol isn't as hard as it looks - you just have to follow two rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;make notes of what changed between each signal block,&lt;/li&gt;
&lt;li&gt;only change one thing at a time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it - the whole modus operandi. This way I could quickly identify which bit ranges were responsible for which settings, as those were the only ones changing when I modified said settings. I also found a Chinese blog post later (which doesn't seem to load anymore 😟) for a slightly similar remote (model YB0F2, versus my YX1F), which had &lt;em&gt;some&lt;/em&gt; usable information on a few of the unidentified values and the checksum calculation.&lt;/p&gt;
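&lt;p&gt;In code, the note-taking step amounts to a diff between consecutive decoded captures. A hypothetical Python sketch:&lt;/p&gt;

```python
# Change one setting at a time, capture again, and diff the decoded bit
# strings: the positions that moved belong to that setting. A sketch.

def changed_bits(before, after):
    """Return the 0-based positions where two decoded strings differ."""
    assert len(before) == len(after)
    return [i for i, (a, b) in enumerate(zip(before, after)) if a != b]

# e.g. two (made-up) captures before and after bumping the temperature
# by one degree - only the temperature field's bit changes:
changed_bits("S0010100", "S0011100")  # [4]
```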

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34khk4859lyfilbahwcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34khk4859lyfilbahwcn.png" width="800" height="368"&gt;&lt;/a&gt;Formatted Excel&lt;/p&gt;

&lt;p&gt;Getting closer. I figured out everything I wanted to, except for the power on/off... which later turned out to be an issue created by IrScrutinizer not having the signals in order when I was making my notes. Let's quickly review the table:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;not surprisingly we start with the start signal&lt;/li&gt;
&lt;li&gt;bits 1-3 are for picking the mode: auto, cool, dry, fan or heat&lt;/li&gt;
&lt;li&gt;bits 4 and 23 seem to be power&lt;/li&gt;
&lt;li&gt;bits 5-6 are for fan speed, though on my AC bit 21 indicates fan speed 4&lt;/li&gt;
&lt;li&gt;bits 7, 37 and 41 indicate the air direction swing on/off&lt;/li&gt;
&lt;li&gt;bit 8 is apparently sleep, although I didn't implement this&lt;/li&gt;
&lt;li&gt;bits 9-12 are for the temperature, offset by 16&lt;/li&gt;
&lt;li&gt;bits 13 to 20 are the two timings used by the timer features, didn't implement these either&lt;/li&gt;
&lt;li&gt;in the middle we have the separator signal&lt;/li&gt;
&lt;li&gt;bits 65-68 are the checksum (calculated by the other values to verify this is a correct signal)&lt;/li&gt;
&lt;li&gt;and finally at the end we have the end signal&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An additional "extra hard" look reveals that all the numbers are little endian encoded.&lt;/p&gt;
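&lt;p&gt;To illustrate what little-endian means for these fields, here's a hypothetical Python sketch that packs one field into the table's 1-indexed bit positions (the helper name is my own; the actual tool was C#/C):&lt;/p&gt;

```python
# Illustration of the little-endian packing described above: each field
# is written least-significant-bit first. Helper name is illustrative;
# bit positions are 1-indexed, matching the table in the post.

def put_field(bits, start, width, value):
    """Write value into bits[start-1 .. start-2+width], LSB first."""
    for i in range(width):
        bits[start - 1 + i] = (value >> i) % 2
    return bits

bits = [0] * 68
put_field(bits, 9, 4, 24 - 16)  # temperature 24, offset by 16 -> 8
# bits 9 through 12 now read 0, 0, 0, 1 (8 = 0b1000, stored LSB first)
```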

&lt;p&gt;There are a few other values marked by question marks; these came from that Chinese blog post - I don't think they match my AC, as my fan level 4 conflicts with their humidify feature, which isn't even a thing my AC can do. I kept them in nonetheless, as they were referenced in the checksum equation, which goes as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calculate &lt;code&gt;(mode - 1) + (temperature - 16) + 5 + swing - ((1-power) * 8)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Convert to binary&lt;/li&gt;
&lt;li&gt;Take the four least significant bits and drop the rest&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The equation above, for the similar remote, apparently references two unidentified signals that were always zero in my case, so I dropped them. The power signal, in turn, was really different for me: after recording a bunch more signals and some trial and error, I figured out that when I want to turn the AC off, I need to subtract 8 from the final checksum value.&lt;/p&gt;
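&lt;p&gt;Put together, the checksum steps - including the subtract-8-on-power-off quirk - collapse into a couple of lines. A Python sketch (my implementation was in C; the field values are as described above):&lt;/p&gt;

```python
# The checksum steps above as one function (a sketch; the original
# implementation was in C). The (1 - power) * 8 term is the "subtract 8
# when turning the AC off" quirk, and `% 16` keeps the four least
# significant bits even when the sum goes negative (Python's % always
# yields a value in 0..15 here).

def checksum(mode, temperature, swing, power):
    total = (mode - 1) + (temperature - 16) + 5 + swing - (1 - power) * 8
    return total % 16

checksum(mode=2, temperature=24, swing=1, power=1)  # cool, 24°, swing on: 15
checksum(mode=2, temperature=24, swing=1, power=0)  # same but powering off: 7
```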

&lt;h2&gt;
  
  
  Onto calmer C-s #
&lt;/h2&gt;

&lt;p&gt;With the protocol figured out, it was time to try sending it back to the AC. I thought it'd be a cakewalk from this point, as I was sure there'd be a way to just do the reverse of what mode2 does: I supply raw timings to the app and it outputs them.&lt;/p&gt;

&lt;p&gt;I was wrong.&lt;/p&gt;

&lt;p&gt;LIRC's built-in tool, &lt;a href="https://www.lirc.org/html/irsend.html" rel="noopener noreferrer"&gt;irsend&lt;/a&gt;, is apparently made only for sending signal blocks created by irrecord. No go. It supposedly has a raw mode, but that didn't seem to work when I tried to replay a few recorded signals. Back to searching again.&lt;/p&gt;

&lt;p&gt;I managed to come across a library called &lt;a href="http://abyz.me.uk/rpi/pigpio/" rel="noopener noreferrer"&gt;pigpio&lt;/a&gt;, which promised raw GPIO control with tight timings. Because I'm lazy, I kept searching for someone who had already implemented a "raw irsend" on top of it. My search efforts soon paid off: Brian Schwind's excellent library &lt;a href="https://github.com/bschwind/ir-slinger" rel="noopener noreferrer"&gt;ir-slinger&lt;/a&gt; was exactly what I was looking for. I cloned the repo and its dependencies, and set up a convenient dev environment on my Raspberry Pi. Normally you'd do that on a remote machine by installing VS Code's remote extensions, but they don't work on the ARMv6 CPU in my Raspberry Pi Zero W. Instead I used &lt;a href="https://github.com/billziss-gh/sshfs-win" rel="noopener noreferrer"&gt;sshfs-win&lt;/a&gt; to mount the Raspberry Pi's internal storage as a network drive in Windows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceukgt3aozt8wsyvtzwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceukgt3aozt8wsyvtzwa.png" width="270" height="87"&gt;&lt;/a&gt;sshfs-win in action, pretty cool&lt;/p&gt;

&lt;p&gt;After creating a copy of the "send raw message" sample from ir-slinger and adjusting its settings to match my GPIO pins, I started tinkering. I began by adding a known-good recorded message as a test... and got a segmentation fault. Facing an unknown bug, I fell back on the trusty "binary bug search" method - commenting out parts of the source code to find the offending piece. This yielded an interesting result: my message's ending values caused the error.&lt;/p&gt;

&lt;p&gt;I did a little math on the signal lengths and realized that I had probably captured the time I spent between issuing the separate commands on the remote. I started cutting the values by a lot, and finally found that a 600µs pulse with a ~40000µs pause did the trick: it worked! 🎉 It took two weekends, but I finally changed a setting on my AC from the terminal. With that morale boost and flashbacks to my first programming class in college, I started writing my &lt;em&gt;beautiful&lt;/em&gt; (not) C code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5phn8vr0y7qvt47f8mzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5phn8vr0y7qvt47f8mzs.png" width="800" height="486"&gt;&lt;/a&gt;Basic settings and constants, with a pre-set message&lt;/p&gt;

&lt;p&gt;I decided to leave the known-good message hardcoded in there with plenty of comments, and just modify it based on the incoming command line arguments. At this point I also learned that there's no easy way to create enums from strings in C, unless I resort to preprocessor tricks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1n8j8ubvt3quqnvsa9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1n8j8ubvt3quqnvsa9d.png" width="800" height="393"&gt;&lt;/a&gt;Getting the command line arguments&lt;/p&gt;

&lt;p&gt;The necessary binary representations are created with the &lt;code&gt;toBinary&lt;/code&gt; method, which writes the converted number into the specified array. After said array is filled out, I just need to overwrite the preset message based on its values, while paying attention to the endianness. As both the zero and one values start with the same 600µs pulse, I only need to adjust the second value in each pair - the pause - using the &lt;code&gt;signalZero&lt;/code&gt; or &lt;code&gt;signalOne&lt;/code&gt; constants.&lt;/p&gt;
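&lt;p&gt;As a rough sketch of that approach (in Python rather than the repo's C; the 600µs pulse is from above, but the &lt;code&gt;signalZero&lt;/code&gt;/&lt;code&gt;signalOne&lt;/code&gt; pause lengths here are placeholders, not the real protocol constants):&lt;/p&gt;

```python
# Sketch of the message-patching idea in Python. The 600 µs pulse is
# from the post; the two pause lengths are placeholder values only.

PULSE = 600        # every bit starts with the same 600 µs pulse
PAUSE_ZERO = 500   # hypothetical pause encoding a 0
PAUSE_ONE = 1600   # hypothetical pause encoding a 1

def to_binary(value, bits):
    """Return `value` as a list of `bits` bits, least significant first."""
    out = []
    for _ in range(bits):
        value, bit = divmod(value, 2)
        out.append(bit)
    return out

def patch_message(message, offset, value, bits):
    """Overwrite the pause half of `bits` (pulse, pause) pairs in a flat
    [pulse, pause, pulse, pause, ...] timing list, starting at pair `offset`."""
    for i, bit in enumerate(to_binary(value, bits)):
        message[2 * (offset + i)] = PULSE
        message[2 * (offset + i) + 1] = PAUSE_ONE if bit else PAUSE_ZERO
    return message

# e.g. write temperature 23 as 5 bits starting at pair index 8
msg = patch_message([PULSE, PAUSE_ZERO] * 32, 8, 23, 5)
```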

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyse8gnq2nup33wbvy8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyse8gnq2nup33wbvy8n.png" width="800" height="695"&gt;&lt;/a&gt;Modifying the message&lt;/p&gt;

&lt;p&gt;Now I could supply what parameters I want, and the correct signal gets generated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc70phx8ijzxdanw4tq27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc70phx8ijzxdanw4tq27.png" width="800" height="195"&gt;&lt;/a&gt;Invoking the controls from the command line&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Excellent! Moving right along.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  Snakes on a &lt;del&gt;plane&lt;/del&gt; board
&lt;/h2&gt;

&lt;p&gt;There was only one step remaining: calling my shiny new tool from the network. Following the pattern of doing things I basically never do, I decided to go with Python for this. Helpful folks quickly directed me towards &lt;a href="https://flask.palletsprojects.com/" rel="noopener noreferrer"&gt;flask&lt;/a&gt;, a truly simple-to-use Python web framework.&lt;/p&gt;

&lt;p&gt;One StackOverflow question helped me out with the GET parameters, a second with doing syscalls, and a third with the int/string conversion. Just look at this beauty:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mql340hwrqrwsriodsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mql340hwrqrwsriodsa.png" width="800" height="481"&gt;&lt;/a&gt;The magnificent server code&lt;/p&gt;
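&lt;p&gt;In case the screenshot above is hard to read, here's a minimal sketch of such a Flask server - the endpoint, parameter names and binary path are made-up stand-ins, not the repo's actual values:&lt;/p&gt;

```python
# Hypothetical reconstruction of the tiny Flask server: read the GET
# parameters, shell out to the compiled IR sender, return OK.
# Endpoint, parameter names and the binary path are all assumptions.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/ac")
def ac():
    temperature = int(request.args.get("temperature", 24))
    mode = request.args.get("mode", "cool")
    fan = request.args.get("fan", "auto")
    # the syscall to the C tool built earlier (path is made up)
    subprocess.run(["/home/pi/ir/acsend", mode, str(temperature), fan],
                   check=True)
    return "OK"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

&lt;p&gt;Requesting &lt;code&gt;/ac&lt;/code&gt; with the desired query parameters then fires off the IR signal.&lt;/p&gt;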

&lt;p&gt;Calling it from the browser on my main PC: not much to look at, but the beep from my AC is music to my ears.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4g81ip3cm2tkj6ao0hl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4g81ip3cm2tkj6ao0hl.png" width="707" height="141"&gt;&lt;/a&gt;Network invoke in action&lt;/p&gt;

&lt;p&gt;Only one thing remained: starting the service on every boot. I think I've heard on Twitter that we're supposed to hate systemd or whatever, but everyone on SO said it's what I need for this... so systemd it is. I created a service description file - or "unit", as it's apparently called - in which I find my python3 binary and launch the tiny script with it. A quick reboot test verified that it works!&lt;/p&gt;
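&lt;p&gt;For the curious, a unit file along these lines does the trick - the paths and names here are illustrative, the real one is in the repo:&lt;/p&gt;

```ini
# hypothetical /etc/systemd/system/ac-api.service
[Unit]
Description=AC control Flask API
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/ac/server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;After a &lt;code&gt;systemctl enable ac-api.service&lt;/code&gt; it starts on every boot.&lt;/p&gt;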

&lt;h2&gt;
  Epilogue
&lt;/h2&gt;

&lt;p&gt;There are a few things remaining for me: designing and printing a case, and mounting the Raspberry Pi somewhere near my AC - I'm thinking about making a curtain rod mount.&lt;/p&gt;

&lt;p&gt;To you, dearest reader - as a thank-you for making it through - I leave my GitHub repo: &lt;a href="https://github.com/tomzorz/who-needs-a-sensibo-anyway" rel="noopener noreferrer"&gt;https://github.com/tomzorz/who-needs-a-sensibo-anyway&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In it you can find the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/data/&lt;/code&gt; contains the cleaned-up recordings I made, and the reverse engineering Excel sheet;&lt;/li&gt;
&lt;li&gt;for good measure &lt;code&gt;/docs/&lt;/code&gt; has the manual for the IR hat I used;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/source/PulseWrangler/&lt;/code&gt; has the C# code that helped me translate the signal pulses into binary;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/source/raspberry/&lt;/code&gt; has the pulse generator C code;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/source/raspberry/api/&lt;/code&gt; has the flask python server code;&lt;/li&gt;
&lt;li&gt;and finally &lt;code&gt;/source/raspberry/service/&lt;/code&gt; has the systemd service definition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this post inspires at least a few of you to try something new, and to make something "dumb" a bit "smarter".&lt;/p&gt;




&lt;h2&gt;
  Footnote(s)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;I graciously skipped over the question of the carrier frequency. Apparently most IR remotes modulate their signal at 38 kHz - including my remote. So luckily I could just leave every configuration option at its default.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>iot</category>
      <category>reverseengineering</category>
      <category>opensource</category>
      <category>airconditioning</category>
    </item>
    <item>
      <title>Reverse engineering the Unity Network Discovery protocol</title>
      <dc:creator>Tamás Deme 'tomzorz'</dc:creator>
      <pubDate>Tue, 19 Dec 2017 00:00:00 +0000</pubDate>
      <link>https://dev.to/tomzorz/reverse-engineering-the-unity-network-discovery-protocol-167f</link>
      <guid>https://dev.to/tomzorz/reverse-engineering-the-unity-network-discovery-protocol-167f</guid>
      <description>&lt;p&gt;I’m working on a project where a .net core backend is used as a server to provide data for multiple Unity clients. To ease development and usage in Unity it’d great if we could use the built-in network discovery module, so that’s what I did.&lt;/p&gt;

&lt;h3&gt;
  First steps: how it works in Unity
&lt;/h3&gt;

&lt;p&gt;Clickety clicks: [New Project], [New &amp;gt; Create Empty], [Add Component], [NetworkDiscovery]. Upon hitting the play button we’re presented with a simple GUI allowing us to start broadcasting or listening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2pguberiask90d2gern.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2pguberiask90d2gern.png" width="359" height="81"&gt;&lt;/a&gt;I like to live dangerously, I didn’t even save the scene&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbt56ukm9fqn1qvyx25c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbt56ukm9fqn1qvyx25c.png" width="424" height="414"&gt;&lt;/a&gt;Why can’t I hold all these fields?&lt;/p&gt;

&lt;p&gt;I noticed that the component says “(Script)” at the end, so let’s check that out before breaking out Wireshark. Clicking the little [cog] icon and selecting [edit script] results in nothing, as it’s compiled into UnityEngine.Networking.dll. Luckily a quick search turns up &lt;a href="https://github.com/jameslinden/unity-decompiled/blob/master/UnityEngine.Networking/NetworkDiscovery.cs" rel="noopener noreferrer"&gt;the source&lt;/a&gt;, where we can see that the StartAsServer method calls the NetworkTransport.StartBroadcastDiscovery method. Again a little searching to find &lt;a href="https://github.com/MattRix/UnityDecompiled/blob/master/UnityEngine/UnityEngine.Networking/NetworkTransport.cs" rel="noopener noreferrer"&gt;the NetworkTransport source&lt;/a&gt; &lt;em&gt;aaaand that’s how far down the rabbit hole goes.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[GeneratedByOldBindingsGenerator]
[MethodImpl(MethodImplOptions.InternalCall)]
private static extern bool StartBroadcastDiscoveryWithoutData(int hostId, int broadcastPort, int key, int version, int subversion, int timeout, out byte error);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  Well, Wireshark it is…
&lt;/h3&gt;

&lt;p&gt;Select your main network interface, and enter &lt;code&gt;udp.port == 64764&lt;/code&gt; as a filter to match the port specified in the component. If you now start broadcasting in Unity, you’ll see the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7yjyz3gb8jgyolnhnoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7yjyz3gb8jgyolnhnoy.png" width="700" height="230"&gt;&lt;/a&gt;Packets, one per second as specified above&lt;/p&gt;

&lt;p&gt;Selecting any packet and opening the Data part shows us the interesting stuff:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvbvu4f0fpe8h6g7dedc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvbvu4f0fpe8h6g7dedc.png" width="665" height="248"&gt;&lt;/a&gt;We don’t care about the first 4 things, except taking note of the subnet broadcast address: 192.168.xxx.255&lt;/p&gt;

&lt;h3&gt;
  We need more data
&lt;/h3&gt;

&lt;p&gt;Let’s open up Unity again, and start changing around values to see what happens with the data bytes. After a few variations I ended up with the following text file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfy6aande4brso2j3l3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfy6aande4brso2j3l3g.png" width="800" height="238"&gt;&lt;/a&gt;Key, Version, Subversion, Data (just in case it wasn’t clear)&lt;/p&gt;

&lt;p&gt;Immediately we can see patterns emerging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;all the samples have a bunch of zeroes in the middle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;all the samples start with [0x00 0x00 0x09]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the data field is at the end, as the beginnings look similar enough&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With a little trial and error it’s not hard to figure it all out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We begin with the mentioned [0x00 0x00 0x09] sequence&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Two random bytes are inserted that remain the same for the broadcasting session (so if you change nothing except restarting the broadcast, these will be different)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The key integer appears as 4 bytes, with the endianness reversed from the Windows default&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We see eight 4-byte blocks of zeroes, probably reserved for future fields&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have the version integer in the same reversed-endian 4 bytes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have the subversion integer the same way&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And finally we see the data string, ASCII encoded with a 0x00 byte between every character&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
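&lt;p&gt;To make the layout concrete, here's the same packet assembled in Python (I'm taking "reversed endianness" to mean big-endian here; the session bytes and field values are arbitrary examples):&lt;/p&gt;

```python
# Sketch of the deduced packet layout in Python. "Reversed endianness"
# is taken to mean big-endian; session bytes and values are arbitrary.
import os

def build_discovery_packet(key, version, subversion, data, session=None):
    if session is None:
        session = os.urandom(2)              # 2. two random per-session bytes
    packet = bytes([0x00, 0x00, 0x09])       # 1. fixed header
    packet += session
    packet += key.to_bytes(4, "big")         # 3. key, big-endian
    packet += bytes(32)                      # 4. eight 4-byte zero blocks
    packet += version.to_bytes(4, "big")     # 5. version
    packet += subversion.to_bytes(4, "big")  # 6. subversion
    # 7. ASCII data with a 0x00 byte between every character
    packet += b"\x00".join(bytes([b]) for b in data.encode("ascii"))
    return packet

pkt = build_discovery_packet(key=2222, version=1, subversion=1,
                             data="NetworkManager", session=b"\xab\xcd")
```

&lt;p&gt;Sending this over UDP to the subnet broadcast address on port 64764 is all the discovery "protocol" really is.&lt;/p&gt;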

&lt;p&gt;Replicating this in C# is simple enough: a few byte arrays, a little LINQ and a BitConverter here and there — &lt;a href="https://gist.github.com/tomzorz/4ee9a03af84d2e83056b6a7acedcd16e" rel="noopener noreferrer"&gt;here is the GitHub Gist for a .NET Core console app&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To make sure that we did everything right let’s see it in action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feygge4m1b6d9922m4naa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feygge4m1b6d9922m4naa.png" width="480" height="237"&gt;&lt;/a&gt;\o/&lt;/p&gt;

&lt;p&gt;I hope this served as a little introduction to network protocol reverse engineering, and proves useful for interacting with Unity as well.&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>reverseengineering</category>
      <category>csharp</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
