<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Sugar</title>
    <description>The latest articles on DEV Community by David Sugar (@dyfet).</description>
    <link>https://dev.to/dyfet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1091641%2Faf267478-feba-447e-84b1-c62bc90044c1.jpeg</url>
      <title>DEV Community: David Sugar</title>
      <link>https://dev.to/dyfet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dyfet"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Sat, 11 Oct 2025 06:48:23 +0000</pubDate>
      <link>https://dev.to/dyfet/-626</link>
      <guid>https://dev.to/dyfet/-626</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/aregtech" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F741067%2F50eefc3d-5d6f-4ffb-a2af-93f1a5176f8e.png" alt="aregtech"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/aregtech/bug-free-multithreading-how-areg-sdk-transforms-concurrency-in-c-4m1d" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;🐛Bug-Free Multithreading: How Areg SDK Transforms Concurrency in C++&lt;/h2&gt;
      &lt;h3&gt;Artak Avetyan ・ Oct 8&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#cpp&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#distributedsystems&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#productivity&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>cpp</category>
      <category>programming</category>
      <category>distributedsystems</category>
      <category>productivity</category>
    </item>
    <item>
      <title>One hash ring to rule them all!</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Tue, 30 Sep 2025 01:52:54 +0000</pubDate>
      <link>https://dev.to/dyfet/one-hash-ring-to-rule-them-all-5bgm</link>
      <guid>https://dev.to/dyfet/one-hash-ring-to-rule-them-all-5bgm</guid>
      <description>&lt;p&gt;A consistent hashing ring is a common element in building distributed computing services. A request identifier is hashed and then assigned to a node by locating that hash on a ring of hashed host identifiers. The ring allows nodes to be added and removed dynamically while keeping the distribution of requests balanced among them. Consistent hashing may also be used to shard database tables.&lt;/p&gt;

&lt;p&gt;My implementation of a consistent hash ring first appeared for C++ in the ModernCLI codebase in July. I have since built a "consistent" conforming version in golang, in C# for dotnet, and, why yes, in rust too. There is now even a pure C implementation in minicrypt. As a C++ super-set toolkit, Kakusu naturally has a consistent-hashing implementation that builds on digests from other toolkits. Along the way I have learned more about what I like and what I hate about each language and its ecosystem, too.&lt;/p&gt;

&lt;p&gt;The basic idea is that each language implementation should produce the same results when given the same input data and digest algorithm, so you can write your distributed components in a mixture of languages and have it all work together. To give the hash digest more data to churn on, I am considering introducing an option for salted rings, and I wish to do more extensive and realistic unit testing.&lt;/p&gt;
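The core of the ring can be sketched in a few lines of C++. This is a minimal illustration only: std::hash stands in for the fixed digest a cross-language deployment would actually need, and the names here are illustrative rather than the actual ModernCLI or Kakusu API.

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>

// Minimal consistent hash ring sketch. std::hash stands in for a real
// digest; portable cross-language rings need a fixed algorithm so every
// implementation places nodes identically.
class hash_ring {
public:
    // Each node is inserted several times ("virtual nodes") to smooth
    // the distribution of keys around the ring.
    void insert(const std::string& node, unsigned replicas = 4) {
        for(unsigned i = 0; i < replicas; ++i)
            ring_[hash_(node + "#" + std::to_string(i))] = node;
    }

    void remove(const std::string& node, unsigned replicas = 4) {
        for(unsigned i = 0; i < replicas; ++i)
            ring_.erase(hash_(node + "#" + std::to_string(i)));
    }

    // A request identifier maps to the first node hash at or after its
    // own hash, wrapping around to the start of the ring.
    std::string find(const std::string& key) const {
        if(ring_.empty())
            return {};
        auto it = ring_.lower_bound(hash_(key));
        if(it == ring_.end())
            it = ring_.begin();
        return it->second;
    }

private:
    std::hash<std::string> hash_;
    std::map<std::size_t, std::string> ring_;
};
```

Adding a node only remaps the keys that fall between it and its predecessor on the ring, which is the property that keeps rebalancing cheap.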

</description>
      <category>programming</category>
      <category>cpp</category>
      <category>distributedsystems</category>
      <category>algorithms</category>
    </item>
    <item>
      <title>Adopting Areg</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Sun, 28 Sep 2025 10:56:18 +0000</pubDate>
      <link>https://dev.to/dyfet/adopting-areg-3kl7</link>
      <guid>https://dev.to/dyfet/adopting-areg-3kl7</guid>
      <description>&lt;p&gt;I have been trying to adopt #Areg this weekend. It really is easy to set up and get initially working, even on #AlpineLinux.&lt;/p&gt;

&lt;p&gt;It is very rare that I ever adopt a C++ framework. I am so picky that I usually just roll my own as needed. In fact, I have only really done so once before, and that was #Qt some 15 years ago. One result of that was the Wickr messenger.&lt;/p&gt;

&lt;p&gt;But what to do with Areg? Well, I had been planning a successor to Coventry, which was itself a successor to SIPWitch. Work on Coventry became really hard as I was going blind over the past few years, before I finally and effectively adapted to my situation. But I also have a much clearer idea of what I need in a Coventry server now than I did 5 years ago, too.&lt;/p&gt;

&lt;p&gt;I need timers. Lazy timers for registry expirations that might be 90 seconds to 5 minutes. Millisecond-precision ring timers that will go off every 4 (or 6) seconds during the ring phase for call sequencing on each ringing call.&lt;/p&gt;

&lt;p&gt;I need configuration management, such as for extension properties like call forwarding and speed dial lists.&lt;/p&gt;

&lt;p&gt;I need ipc / rpc to configure and manage the service.&lt;/p&gt;

&lt;p&gt;I need event pipelines and thread pools to collect and dispatch events and keep the SIP stack delay-free.&lt;/p&gt;

&lt;p&gt;I need shared locking for registry, pubsub, and call data structures which may be accessed from multiple event threads and other services.&lt;/p&gt;

&lt;p&gt;I need logging. I need a generic service startup / shutdown.&lt;/p&gt;

&lt;p&gt;Oh, and, as an on-premise switch, it should be able to build and run on POSIX systems, Windows, and maybe even embedded / RTOS.&lt;/p&gt;

&lt;p&gt;That's a shopping list.  Does it seem a familiar one &lt;a class="mentioned-user" href="https://dev.to/aregtech"&gt;@aregtech&lt;/a&gt; ?&lt;/p&gt;

&lt;p&gt;Oh, and I also need a name. But that I already did have in mind for this application. Beswitched!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>cpp</category>
      <category>linux</category>
    </item>
    <item>
      <title>The future of HPC is C++</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Sun, 21 Sep 2025 03:25:48 +0000</pubDate>
      <link>https://dev.to/dyfet/the-future-of-hpc-c-i2h</link>
      <guid>https://dev.to/dyfet/the-future-of-hpc-c-i2h</guid>
      <description>&lt;p&gt;I had been working some on HiTycho lately. I see strong use cases for HPX clustering in geospatial work. For example, one could pass off map tiles in a large map for merging satellite imagery to independent tasks running on separate nodes and machines in an HPC cluster. Processing video feeds by splitting frames, VR simulations, and forms of AI processing may all be valid use cases for this kind of scaling. Naturally I think about carrier-scale telephony, too.&lt;/p&gt;

&lt;p&gt;I have also seen the future, the promised land for true scalable processing, both in horizontal distributed computing and vertical clustering in C++. This is why I am interested in combining HPX with Areg. This to me could make existing things like Temporal feel like legacy code.&lt;/p&gt;

&lt;p&gt;The main features in this weekend's drop of HiTycho for HPC were in-memory buffer streaming and fork injections I borrowed from Busuto. Busuto has also been better aligned with HiTycho. This will make it easier to test and develop locally with Busuto and later migrate applications to HiTycho for true massively parallel scaling. That is my deeper vision for both of these libraries and why I do want to eventually make Areg work with both.&lt;/p&gt;

&lt;p&gt;For further reading:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aregtech/areg-sdk" rel="noopener noreferrer"&gt;https://github.com/aregtech/areg-sdk&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/dyfet/busuto.git" rel="noopener noreferrer"&gt;https://github.com/dyfet/busuto.git&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/dyfet/hitycho.git" rel="noopener noreferrer"&gt;https://github.com/dyfet/hitycho.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a class="mentioned-user" href="https://dev.to/aregtech"&gt;@aregtech&lt;/a&gt; &lt;/p&gt;

</description>
      <category>hpc</category>
      <category>cpp</category>
      <category>distributedsystems</category>
      <category>programming</category>
    </item>
    <item>
      <title>Blind coding</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Wed, 10 Sep 2025 22:30:10 +0000</pubDate>
      <link>https://dev.to/dyfet/blind-coding-70b</link>
      <guid>https://dev.to/dyfet/blind-coding-70b</guid>
      <description>&lt;p&gt;I want to talk about what it is like coding mostly blind. This is especially true for me as my eyesight continues to worsen, my chances for corrective surgery continue to recede, and surgery is now impossibly costly anyway. Of course what constitutes blind, visually impaired, etc., covers a very broad range. Being unable to distinguish n from m may be a very minor inconvenience, but finding I am no longer able to safely cross streets because I cannot see approaching cars, or to navigate unfamiliar spaces, was much more profound.&lt;/p&gt;

&lt;p&gt;First, though I want to write about this separately, forget the very idea that anyone will want you to work for them. Even if you are only modestly impaired, if they figure it out during an interview they will treat you horribly. If they find out later, they are quick to fire you, no matter how effective you are. In the US, hatred of the disabled has only grown recently. Asking for accommodation is at present like saying "I don't want to work here", and they are very aware that federal labor laws are no longer being effectively enforced. Tech companies seem to be some of the worst, too.&lt;/p&gt;

&lt;p&gt;I find I work most effectively now by visualizing code in my mind. This is easy with languages that correctly represent themselves. I can do it with C/C++ and especially go, as imperative languages, and Haskell as a declarative functional language. That is why I keep emphasizing that syntax and correct representation are so very important, whether writing, reading other code, or trying to debug logical errors.&lt;/p&gt;

&lt;p&gt;The biggest issue for me is typing. Having a clear idea of the code in my mind, I now have to transfer that to an editor with very marginal eyesight, where everything I see, even in monochrome, is still blurry. Touch-typing helps, but when you make even simple mistakes, they are so hard to see. Editors that do live checking only create interruptions and popups that may also be blurry and disrupt that workflow. When later correcting code, VSC can actually be more helpful to me, but to get the initial bulk of it down, I have to go from mind to editor with as little interference as possible.&lt;/p&gt;

&lt;p&gt;What I found works for me is actually having AI proof-read and correct my typos for me, or even explain the weird compiler errors I cannot properly see. And with C++, especially, you can get some massive and weird errors. Sometimes it wants to refactor what I write or wants to “suggest changes” that often are actually wrong, but I have found how to prompt it to just give me corrected but otherwise unaltered text. Working this way, between visualizations and proofing, I now code faster and more reliably than I did with more normal sight.&lt;/p&gt;

&lt;p&gt;This desire for simple monochrome editors is why I often choose vim. But I will say Visual Studio Code can also be a godsend for visual impairment once you disable some of the more distracting features. Sometimes having smart typeahead is even useful, too. It knows how to scale its UI, and it has a high contrast mode that actually works reasonably well for me. It even draws those border lines modern UI designers seem to have decided to remove because shades of impossible-to-distinguish greys are all the rage.&lt;/p&gt;

&lt;p&gt;Another great tool that has worked for me is LibreOffice. I am able to effectively draw design diagrams in high contrast mode. However, since it makes everything black and white, it's not always clear how what you are drawing will actually look until you are done and export it to something else, like a PNG. I also wish high contrast mode covered the arrows on lines too; it somehow missed fixing these, and I very much notice that when doing a design document.&lt;/p&gt;

&lt;p&gt;In respect to proper UI design, a platform that works really well is an old X11 desktop. Back then they did proper borders, and black really was flat black, not just a dark gray that makes text even more blurry. Splashes of graphic images like X11 used to do are okay for me, but websites that put images up front and in your face are actually blinding to me, and I am unable to read any text around them now. The X11 XPM format is also text! I can do a desktop UI like that again.&lt;/p&gt;

&lt;p&gt;Windows offers an okay but inconsistent accessibility experience. There are different parts that seem written with different toolkits, some of which know and use themes, high contrast settings, and font size selections, and some (like the Explorer file manager) that do not. But when and where accessibility settings are supported, those applications work very well. This is why I switched from nsis to innosetup for my mingw32 installer builds.&lt;/p&gt;

&lt;p&gt;Some claim macOS is the "gold standard" for computer accessibility. I do not know, because when you're blind the last thing you can afford is anything expensive to begin with, least of all something that must be re-purchased every five years.&lt;/p&gt;

&lt;p&gt;While older X11 desktops, and even GNOME 2, were great for accessibility, and 25 years ago blind users at the American Foundation for the Blind were able to work on computers for the very first time using GNOME 2, modern Linux / Wayland desktops absolutely suck at it now. A few years ago I was dealing with Orca no longer working on Wayland, and with GNOME's "accessibility" mode, whose theme-breaking adwaita coloring library, when high contrast is enabled, replaced dark themes with dark text on pure white that, to me, was effectively just a giant blurry grey rectangle.&lt;/p&gt;

&lt;p&gt;I was told flat out by GNOME developers at the time that accessibility was a use case "we don't care about". This of course is how accessibility became broken in Ubuntu 24.04. Unlike GNOME, KDE does care about accessibility, but they have special challenges. The Qt QML core was never designed properly for applications that could offer accessibility modes unless you rewrite an entire alternate QML UI from scratch for each use case. Instead, KDE has been working accessibility into Kirigami, which sits above QML, and in other places like that. This also requires accessibility to be implemented in each application. But at least they really do care!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>a11y</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Wed, 10 Sep 2025 22:14:21 +0000</pubDate>
      <link>https://dev.to/dyfet/-4jpi</link>
      <guid>https://dev.to/dyfet/-4jpi</guid>
      <description></description>
    </item>
    <item>
      <title>Financial coding</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Wed, 02 Jul 2025 09:58:45 +0000</pubDate>
      <link>https://dev.to/dyfet/financial-coding-255l</link>
      <guid>https://dev.to/dyfet/financial-coding-255l</guid>
      <description>&lt;p&gt;In C++ there are always tradeoffs. Let's consider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using std::functional&amp;lt;void()&amp;gt;;
inline void myfunc(task_t task) {
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;vs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;template&amp;lt;typename Func&amp;gt;
inline void myfunc(Func func) {
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both can be used for generically injecting predicates and lambdas with captures. The first is easy. This makes it simple to move "blocks" of code around and may seem particularly convenient for thread queues and pools. Yes, my Ruby experience shows. There is also a lot going on behind the scenes in std::function, even without wrapping function arguments.&lt;/p&gt;

&lt;p&gt;Since there is no templating in the first example, only one kind of function is produced, and the resulting binary is small. It is also slow. A nanosecond slower, perhaps. That is enough to give financial trading system developers nightmares and cause market panics!&lt;/p&gt;

&lt;p&gt;The second is less "flexible". However, the compiler will deduce the callable type and optimize your call site. Often it can do so in ways people simply cannot, and come up with minimal code for invocation of a given function under the hood. This is why the C++ standard library forms condition-variable waits with predicates this way, as templates. It may also generate unique code for each usage, leading to classic C++ code bloat.&lt;/p&gt;

&lt;p&gt;It is important to know when to think like an embedded developer and when to think like a financial one.&lt;/p&gt;
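Here is a minimal, compilable sketch of both shapes side by side; the run_erased / run_templated names are just for illustration:

```cpp
#include <functional>

using task_t = std::function<void()>;

// Type-erased: a single instantiation serves every callable, keeping
// the binary small, at the cost of an indirect call.
inline void run_erased(task_t task) { task(); }

// Templated: the compiler deduces Func at each call site and can
// inline the call, at the cost of code generated per callable type.
template<typename Func>
inline void run_templated(Func func) { func(); }
```

Both accept a capturing lambda the same way; the difference only shows up in the generated code.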

</description>
      <category>cpp</category>
      <category>programming</category>
    </item>
    <item>
      <title>Intro to Calypso</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Tue, 06 May 2025 02:00:51 +0000</pubDate>
      <link>https://dev.to/dyfet/intro-to-calypso-221c</link>
      <guid>https://dev.to/dyfet/intro-to-calypso-221c</guid>
      <description>&lt;p&gt;Calypso is a post-quantum secure cryptographic system for conveyance and storage of confidential business information. I thought this would be a good time to give a general impression of what that means and what it will look like.&lt;/p&gt;

&lt;p&gt;There are two user touch points. The first is the calypso command line tooling, which has some overall resemblance to things like pgp. This will allow for storage, signing, encrypting documents and email, and similar functionality using Calypso cryptography and secure storage. The second is messaging clients, which allow exchange of messages and e2ee secure real-time media (voice and video), with an option to provision for full regulatory compliance (cyclopse, a special automated audit client, not shown here) where needed for certain industries.&lt;/p&gt;

&lt;p&gt;The backend for Calypso is ogygia, which uses an SQL server. The backend is also effectively federated; any collection of users is hosted together under a private or hosted administrative domain. The touch point for administration of a given domain will be the QOC.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>privacy</category>
    </item>
    <item>
      <title>I decided to add monads, a templated result wrapper, and a async template that wraps functions to channels in my golang services package.</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Wed, 08 Jan 2025 15:27:25 +0000</pubDate>
      <link>https://dev.to/dyfet/i-decided-to-add-monads-a-templated-result-wrapper-and-a-async-template-that-wraps-functions-to-3ofh</link>
      <guid>https://dev.to/dyfet/i-decided-to-add-monads-a-templated-result-wrapper-and-a-async-template-that-wraps-functions-to-3ofh</guid>
      <description></description>
    </item>
    <item>
      <title>Licensing</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Thu, 26 Dec 2024 15:44:38 +0000</pubDate>
      <link>https://dev.to/dyfet/licensing-2caf</link>
      <guid>https://dev.to/dyfet/licensing-2caf</guid>
      <description>&lt;p&gt;Software licensing is both often and rarely talked about. Services like GitHub and GitLab make this question more important, though. So I am going to cover what makes a good software license and why.&lt;/p&gt;

&lt;p&gt;First, per the Berne Convention, everything copyrightable is automatically born with maximum copyright protection, and this is true even for something with no explicit license at all. If you do nothing, you have software that is technically proprietary by default.&lt;/p&gt;

&lt;p&gt;Copyright itself has different meanings in different countries and cultures. Some speak of the absolute right of authors (such as the French droit d'auteur), and the idea that these rights cannot even be transferred. Some propose copyright as a limited bargain between authors and the general public, with the idea that all works will eventually fall into the public domain. All of these ideas are animated by national copyright laws, which have been made more consistent by the Berne Convention.&lt;/p&gt;

&lt;p&gt;The easiest license then is in effect to do nothing. The absolute alternative might seem to be to make something explicitly public domain, but this may not accomplish what one expects. For example, there are various ways corporations can "reclaim" something that is in the public domain and then copyright it for themselves.&lt;/p&gt;

&lt;p&gt;If what you want is to be in what you imagine the public domain to be, then I think your best option, for software, is probably the MIT license. It has all the essential attributes one believes should be true for a public domain work and is thankfully simple to do and maintain.&lt;/p&gt;

&lt;p&gt;There is a family of free and open-source licenses that operate through conveyance, the essential act that copyright law is involved in. Some are very complex and are believed to require some effort to use effectively. All are based on the legal powers of a copyright holder to explicitly disclaim some or all of those powers. All are about transferring some, most, or all of your legal rights as a copyright holder to your users, often, as per copyleft, with the specific reciprocal requirement that those who receive such software must permit the same for others.&lt;/p&gt;

&lt;p&gt;The antithesis of an open-source license is a proprietary license, which often uses contract law in addition to copyright law, and perhaps patent law as well. One problem is that these licenses are custom drafted and may require extensive (expensive) legal expertise to produce correctly. One reason is that while copyright law is already harmonized internationally, contract law is not.&lt;/p&gt;

&lt;p&gt;If you want many of the legal protections a known open-source license may instead disclaim but cannot afford a lawyer to produce a license for you, there are some other creative possibilities. The two I would suggest are in fact forms of the Creative Commons family of licenses. These are not actually open-source licenses, but they do similarly use international copyright law.&lt;/p&gt;

&lt;p&gt;The two are the CC BY-NC and CC BY-NC-ND licenses. The language is tested, and like open-source licenses, it uses the universal aspects of post-Berne Convention copyright law. They are also thankfully short, consistent, and rather easy to maintain.&lt;/p&gt;

&lt;p&gt;The CC BY-NC permits non-commercial sharing and creation of personal derivative works. This seems very ideal for an API / library where someone is building something on top of it. The CC BY-NC-ND also prohibits derivative works and seems perfectly good where you are okay with having others see your code (the whole purpose of GitHub) and yet do not want others making their own commercial products or running their own commercial services with it. And it accomplishes this with no legal fees.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>software</category>
      <category>programming</category>
    </item>
    <item>
      <title>C++ and ModernCLI</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Sun, 01 Dec 2024 22:43:30 +0000</pubDate>
      <link>https://dev.to/dyfet/c-and-moderncli-517f</link>
      <guid>https://dev.to/dyfet/c-and-moderncli-517f</guid>
      <description>&lt;p&gt;For certain kinds of large-scale real-time and resource-tight applications, C++ remains highly desirable to me. The key is in part being able to break down large applications into smaller components, as well as focusing on both readability and debuggability of C++ product code. Clarity can be your friend and often matters when working with large-scale systems.&lt;/p&gt;

&lt;p&gt;While C++ has language features that help break down large projects, it never really adapted these fully into the standard library. There are containers, which are useful for data structures and generic typing through templating, but many things have never been standardized (networking and cryptographic operations, for example) because different platforms went their own ways and nobody seems able to agree.&lt;/p&gt;

&lt;p&gt;To get a model of common code, to have consistent code patterns, practices, and cross-platform functionality, I always have produced my own auxiliary library for C++ that I then re-use internally everywhere. The earliest form of this was what became GNU Common C++, which originated in the 90's. The latest iteration is moderncli, found at &lt;a href="https://gitlab.com/tychosoft/moderncli" rel="noopener noreferrer"&gt;https://gitlab.com/tychosoft/moderncli&lt;/a&gt; .&lt;/p&gt;

&lt;p&gt;For various reasons, I chose to baseline current development around C++17. I also chose to do a library of stand-alone header files only. Header only libraries have the advantage that type inference (for using auto) works much better when the entire code body is present (inline) with the header. It also eliminates lots of issues with dll's, linking of yet another library, etc. By being stand-alone, I can also shave off parts of the library and vendor it directly in other software without difficulty. In fact, moderncli began life simply as headers I had copied into other projects in the past.&lt;/p&gt;

&lt;p&gt;Moderncli also proposes that attaching code blocks through lambdas can be a more powerful and convenient class extension model than classic C++ inheritance or templating. Some of this is influenced by my experience with Ruby, which often favored closures for wide-ranging uses, including basic file processing. Of course there is a generic model for TCP and SSL streaming that is based directly on C++ iostreams, as well as low-level socket, resolver, and network address support based on traditional BSD socket functionality.&lt;/p&gt;

&lt;p&gt;A concept I brought in from Rust is the idea of combining a container with its locking mechanism as a single (templated) unit. This means that acquiring a lock is the only means to access the contained object, guaranteeing that all access and modification of the underlying object is automatically safe and scoped to the life of the associated guard object.&lt;/p&gt;
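A minimal sketch of that idea in C++17 (names here are illustrative, not the actual moderncli API): the guard holds the mutex for its entire lifetime, and the contained object is only reachable through it.

```cpp
#include <mutex>
#include <utility>

// Rust-inspired locked container: the data is private, and the only
// way to reach it is through a guard that keeps the mutex held for
// as long as the guard lives.
template<typename T>
class locked {
public:
    explicit locked(T init = T()) : data_(std::move(init)) {}

    class guard {
    public:
        explicit guard(locked& owner) : lock_(owner.mutex_), data_(owner.data_) {}
        T& operator*() { return data_; }
        T* operator->() { return &data_; }
    private:
        std::lock_guard<std::mutex> lock_;  // held for the guard's lifetime
        T& data_;
    };

    guard access() { return guard(*this); }  // C++17 guaranteed elision

private:
    std::mutex mutex_;
    T data_;
};
```

Scoping the guard in a small block releases the lock at the closing brace, so access and unlock cannot get out of step.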

&lt;p&gt;Other functionality I considered important to standardize for my uses includes daemon service support, logging, and argument parsing. Most of the C++ code I write is some kind of system daemon and, as a practical matter, probably would never be deployed outside of POSIX systems, though I do have portable implementation code.&lt;/p&gt;

&lt;p&gt;I also have function and timer queues. To me a timer queue is a lambda bundled with arguments that may be executed periodically, acting as a kind of "internal" cron. The function queue lets me split a component up by driving a queue of lambda functions, invoked with arguments. Each function executes in order in a single thread context. This makes it possible to eliminate thread locking when manipulating private objects within the component itself. This concept of component execution I have also ported to C# and Go.&lt;/p&gt;
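A stripped-down sketch of the function queue idea (illustrative, not the actual moderncli interface): lambdas run in order on a single worker thread, so the component's private state needs no further locking.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Single-threaded function queue: dispatched lambdas execute in FIFO
// order on one worker thread; the destructor drains remaining work.
class function_queue {
public:
    function_queue() : worker_([this] { run(); }) {}

    ~function_queue() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopped_ = true;
        }
        cond_.notify_one();
        worker_.join();
    }

    void dispatch(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cond_.notify_one();
    }

private:
    void run() {
        for(;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return stopped_ || !tasks_.empty(); });
            if(tasks_.empty())
                return;                      // stopped and fully drained
            auto task = std::move(tasks_.front());
            tasks_.pop();
            lock.unlock();
            task();                          // runs outside the lock
        }
    }

    std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<std::function<void()>> tasks_;
    bool stopped_ = false;
    std::thread worker_;                     // started last, joined in dtor
};
```

Because all dispatched work serializes through one thread, objects touched only from these lambdas need no mutex of their own.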

&lt;p&gt;Presently moderncli is on track for a 1.0 release, perhaps early next year. I believe it is now likely "header complete", even if some of the header functionality for things like X.509 certificates remains sparse. It is hard to determine what the future of C++ may yet become, but that seems a more distant 2.0 problem.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Tyranny of CI testing</title>
      <dc:creator>David Sugar</dc:creator>
      <pubDate>Tue, 29 Oct 2024 17:31:11 +0000</pubDate>
      <link>https://dev.to/dyfet/the-tyranny-of-ci-testing-3njd</link>
      <guid>https://dev.to/dyfet/the-tyranny-of-ci-testing-3njd</guid>
      <description>&lt;p&gt;Everyone seems to think ever more unit tests and Continuous Integration are good things. This is often a fallacy. It is part of the often-mistaken Silicon Valley culture that gave us Scrum and believes you can somehow iterate yourself to quality by doing ever more incomplete work in ever shorter fixed-time sprints, thereby accumulating ever larger piles of garbage, rather than actually solving problems.&lt;/p&gt;

&lt;p&gt;CI is the backbone of this culture. In the CI-driven development cycle, you not only get to figure out and debug build system issues in your local development environment, but also in an entirely remote system that may have very different behaviors and to which you have very limited access. Debugging a CI issue often feels like the blind man trying to understand the elephant. And all you can do is create ever more commits to try and hope to fix your CI build when it does break.&lt;/p&gt;

&lt;p&gt;For this reason, I prefer to do all my production / release work locally on a development workstation. It's far easier to resolve broken builds, especially if the number of people who actually do releases is small to begin with. It also means you have a better idea how to set up local dev environments for everyone else correctly. Anyone else with a correct / complete setup can then do production releases, too. We have all these local resources that are often far faster as well as far more accessible, whether for running lint, running pre-release tests, etc.&lt;/p&gt;

&lt;p&gt;Where I do find CI handy is for accepting code from arbitrary external people submitting public merge requests, as a kind of minimal pre-verification, since they probably don't know how I set up production or what my expectations would be in advance. Strangely, for a long time, this one obvious use case (integrating CI with merge requests) was actually rather poorly supported in the CI cycle and even actively discouraged in the cult of CI.&lt;/p&gt;

&lt;p&gt;Instead, we often have this practice of verifying every single commit. This can be rather useless and a big waste of resources. If people have problems testing code locally before submitting, make sure testing is enforced in their commit process locally, rather than exporting their bugs for CI to work out later.&lt;/p&gt;

&lt;p&gt;Do you run deploy on tags from your CI? The horrors of that elephant again, because tagged production workflows are only exercised during the few releases, and so are rarely used. They usually do get broken by product changes. Then you find yourself hiring devops engineers and other workers you didn't need, because the workflow creates labor needs you wouldn't otherwise have had, and all those CI resources also have to be maintained. Yet it is these useless workflows CI was originally optimized for. This workflow does at least guarantee full employment for all skill levels, though...&lt;/p&gt;

&lt;p&gt;Unit tests are often also part of the CI-driven scrum culture and have their own issues. For old-fashioned 20th-century applications with well-defined linked APIs, they can be very useful to validate and prevent API regressions, and this even remains true for a lot of embedded work today, too. For many real-world things made today, however, such as those that interface over networks, involve interacting components with no purely isolated operation, or have human interfacing, they are often rather useless. Proper integration and release testing is what you most often want instead.&lt;/p&gt;

&lt;p&gt;Many who buy into the SV CI cult also go in for code coverage. The code coverage subcult tends to reject code if coverage falls below a certain project-manager-defined minimum. The problem here is that while this encourages / requires unit tests to be written, they are often written to satisfy management needs. This leads to poor quality and testing practices. This is especially true for systems, as noted, that cannot effectively be unit tested in isolation to begin with.&lt;/p&gt;

&lt;p&gt;There certainly are cases where CI is useful, where unit tests and test-driven programming do work, and these are good things, but there are far more cases where they are blindly followed practices that can at best be useless and at worst cause real harm. The next thing I want to talk about is how AI can help bring back the single-person software shop of the long-lost 70's (and 80's?), but before we can even talk about that we need to regain control over those practices which make that impossible.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>programming</category>
      <category>devops</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
