<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Robert Teminian</title>
    <description>The latest articles on DEV Community by Robert Teminian (@teminian).</description>
    <link>https://dev.to/teminian</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2198675%2F791c816a-c4d4-4c7a-88c3-c4c94424ab34.png</url>
      <title>DEV Community: Robert Teminian</title>
      <link>https://dev.to/teminian</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/teminian"/>
    <language>en</language>
    <item>
      <title>Selecting local AI models for myself and my fellows in the network analysis field</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Tue, 17 Feb 2026 04:20:46 +0000</pubDate>
      <link>https://dev.to/teminian/applying-ai-for-me-and-for-out-tech-support-in-network-analysis-5hhm</link>
      <guid>https://dev.to/teminian/applying-ai-for-me-and-for-out-tech-support-in-network-analysis-5hhm</guid>
      <description>&lt;p&gt;I'm developing a network analyzer in C++, and sometimes I'm off to the customer site to see what's going on there. A lot of network traffic, arguing, and engineering pitfalls happen there.&lt;/p&gt;

&lt;p&gt;And there's the LLM, your best friend whenever you need one. Because of the customers we deal with, most of the time we're expected to work offline. So there's no ChatGPT, Claude, Gemini, or whatever.&lt;/p&gt;

&lt;p&gt;So, to help myself and our tech support, I tried a handful of local LLM models. Here come my limited, unprofessional(I never studied AI seriously), and totally personal opinions and notes from that experience.&lt;/p&gt;

&lt;h1&gt;
  
  
  Preparing the Test Material
&lt;/h1&gt;

&lt;p&gt;First I needed a prompt. Yes, I had to write a recipe for what I'd like my models to do. I extracted some "rankings" from our analyzers and wrote instructions describing what each column means and how to analyze the tables.&lt;/p&gt;

&lt;h1&gt;
  
  
  From Google to "Open Sesame"
&lt;/h1&gt;

&lt;p&gt;First I tried our good old friend, Google's Gemma3. It was versatile, fast, and lightweight(at least for me). Later I moved to the Qwen3 thinking model. Though a LOT slower, the result was far better. After tweaking some prompts I could get similar results from Gemma3, but I eventually abandoned it, since the details were either incorrect or the result changed every time I ran the same prompt.&lt;/p&gt;

&lt;p&gt;During this phase I also tried others, including DeepSeek R1 and LG EXAOne(yes, that LG, the Korean company). DeepSeek R1 was, like... a failure to me: its report didn't make sense. As for LG EXAOne, the result was decent(at least on par with the Qwen3 Instruct model), but commercial use was prohibited. Oops. :P&lt;/p&gt;

&lt;h1&gt;
  
  
  Man it's way tooooooooooo slow - and here comes a new challenger from France - Mistral AI
&lt;/h1&gt;

&lt;p&gt;The Qwen3 thinking model was great, but its results took too long to generate. I learned that I could turn off its thinking via a switch, but that didn't work for me. Later I learned of the Qwen3 instruct model, but that was FAR later... :P&lt;/p&gt;

&lt;p&gt;Then I learned of Mistral AI. They produce decent models. Frankly speaking, when I first tried their Mistral Small models I was quite disappointed, but that was years ago. Why not try again now?&lt;/p&gt;

&lt;p&gt;I downloaded &lt;strong&gt;Ministral 3&lt;/strong&gt; and &lt;strong&gt;Devstral Small 3.2&lt;/strong&gt;. At first both gave me decent results. Ministral 3 did really well at generating reports, and it didn't miss any details from my instructions(quite unlike Gemma3). Devstral Small 3.2 was handy for generating code snippets, though it didn't support FIM(Fill-in-the-Middle), which was of no use to me anyway(Qwen3-Coder couldn't suggest anything for my brand-new project, so I concluded it made no difference).&lt;/p&gt;

&lt;p&gt;One interesting thing was that Devstral Small 3.2 generated more compact and satisfying code than Qwen3Coder-Next. Before Next I tried Qwen3Coder, and it generated hallucinated code every time I tried - an STL-ish function call that doesn't exist in any C++ standard at all. So guess what? I just dropped it.&lt;br&gt;
Later I tried Qwen3Coder-Next. Frankly speaking, its code met all the criteria I gave and didn't show the hallucinations of Qwen3Coder, but the code was like old Chinese literature: full of rhetorical decoration and rich expressions, but hollow at the core. It doesn't matter how far it can go - I don't want to install a luxury car stereo or a nitro boost on my small bike!&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction to Quantization
&lt;/h1&gt;

&lt;p&gt;When I tried the Ministral and Devstral Small models, I started with &lt;strong&gt;Q6_K&lt;/strong&gt; quantizations, but later changed to &lt;strong&gt;Q8_0&lt;/strong&gt; since my laptop could endure the load. The difference was not huge, but it was worth noticing: the Q8_0-generated code was more feature-rich and error-resistant. In my situation complicated error handling wasn't needed, but it was worth having some.&lt;/p&gt;
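&lt;p&gt;To make the jargon concrete, here's a toy sketch(my own illustration, not llama.cpp's actual code - the real Q8_0 layout differs in details) of what an 8-bit block quantization roughly does: every block of 32 weights collapses into one float scale plus 32 signed bytes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cmath&amp;gt;
#include &amp;lt;cstdint&amp;gt;

// Toy Q8_0-style block quantization: 32 floats become one float scale
// plus 32 int8 values, with each weight ~= scale * q.
struct BlockQ8 {
    float scale;
    int8_t q[32];
};

BlockQ8 quantize_block(const float* x) {
    float max_abs = 0.0f;
    for (int i = 0; i &amp;lt; 32; ++i) max_abs = std::max(max_abs, std::fabs(x[i]));
    BlockQ8 b;
    b.scale = max_abs / 127.0f;
    for (int i = 0; i &amp;lt; 32; ++i)
        b.q[i] = (int8_t)std::lround(b.scale &amp;gt; 0.0f ? x[i] / b.scale : 0.0f);
    return b;
}

int main() {
    float x[32];
    for (int i = 0; i &amp;lt; 32; ++i) x[i] = 0.01f * (i - 16);  // fake weights
    BlockQ8 b = quantize_block(x);
    // Every dequantized weight is within half a quantization step of the original.
    for (int i = 0; i &amp;lt; 32; ++i)
        assert(std::fabs(b.scale * b.q[i] - x[i]) &amp;lt;= 0.5f * b.scale + 1e-6f);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The difference between something like Q6_K and Q8_0 is then mostly how many bits, and how clever a scheme, get spent on those per-block values.&lt;/p&gt;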

&lt;h1&gt;
  
  
  official vs. Unsloth vs. bartowski
&lt;/h1&gt;

&lt;p&gt;I also tried &lt;em&gt;Unsloth&lt;/em&gt; and &lt;em&gt;bartowski&lt;/em&gt; builds, hoping they might generate better output or run faster than the official ones. If they're as popular as the official builds, why not try?&lt;br&gt;
After trying them, I concluded that &lt;em&gt;Unsloth&lt;/em&gt; builds are quite opinionated. They provide similar results with fewer bits, but their optimization is so opinionated that sometimes the model doesn't behave as expected. For example, though I specified that the report be written in Korean, &lt;em&gt;Unsloth&lt;/em&gt; builds occasionally emitted Chinese and Japanese words, which &lt;em&gt;bartowski&lt;/em&gt; builds never did.&lt;br&gt;
In another instance, I just said "introduce yourself" and the &lt;em&gt;Unsloth&lt;/em&gt; Ministral replied in French. When I asked for an explanation, it said that " is French". OMG. Though I never learned French myself, at least I know there's no word like "yourself" in French.&lt;/p&gt;

&lt;p&gt;Anyway, if you're working in a Latin-based language I think you can stick to &lt;em&gt;Unsloth&lt;/em&gt; builds at smaller quantizations and get satisfying results; if not, use &lt;em&gt;bartowski&lt;/em&gt;(at least in my experience). To me, &lt;em&gt;bartowski&lt;/em&gt; builds felt like pure conversions of the models to GGUF, without any opinions or tuning, so they reflect the will of the original authors more thoroughly.&lt;/p&gt;

&lt;p&gt;Ah, and one more thing - not only Ministral but also its bigger cousin, Mistral Small 3.2, struggled to stay consistent about which language it was speaking.&lt;/p&gt;

&lt;h1&gt;
  
  
  Quantization part 2: Qwen3-Next
&lt;/h1&gt;

&lt;p&gt;After running Devstral and Ministral for a while, I learned that, quantized right, Qwen3Coder-Next and Qwen3-Next become similar in size to the Mistral Small models. Then why not compare them? Since the Qwen3-Next models have more parameters, maybe they'd show better results?&lt;/p&gt;

&lt;p&gt;So I compared &lt;strong&gt;IQ3_M&lt;/strong&gt; quants from &lt;em&gt;bartowski&lt;/em&gt;(since I think his quants better reflect the original authors' will) against Q8_0 quants of the Mistral models. I concluded that the Qwen3-Next models provide better results when instructed right(i.e., when the prompt is detailed enough), and they were faster at generating results(parsing my question took more time, but answer generation was twice as fast).&lt;/p&gt;

&lt;h1&gt;
  
  
  So, what now?
&lt;/h1&gt;

&lt;p&gt;I'm sticking to Qwen3-Next for report generation and Qwen3Coder-Next for code generation, and, hopefully, FIM(if it can follow the speed of my thoughts :P).&lt;br&gt;
I'd love to show my fellow tech support team the new environment I built to help them with consulting tasks and report writing, but it's the Lunar New Year holiday(설날) in Korea, so I've got to take a rest first.&lt;/p&gt;

&lt;p&gt;Well, I just hope they like it.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>C++ or Rust? I'd stick to my good old C++</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Wed, 15 Jan 2025 06:21:39 +0000</pubDate>
      <link>https://dev.to/teminian/c-or-rust-id-stick-to-my-good-old-c-1g3g</link>
      <guid>https://dev.to/teminian/c-or-rust-id-stick-to-my-good-old-c-1g3g</guid>
      <description>&lt;h1&gt;
  
  
  Disclaimer
&lt;/h1&gt;

&lt;p&gt;This post is not about which programming language is better, or why. It just shares my preference after months of thinking and experience, in the hope that it may help someone with concerns similar to mine.&lt;/p&gt;

&lt;h1&gt;
  
  
  To C++ or to Rust, that is the question
&lt;/h1&gt;

&lt;p&gt;Yes, that is the question many programmers, including myself, face about their future technology adoption and career path, especially if you're a C++ programmer. I use C++ as my daily driver, and I'm well acquainted with "segmentation fault" and "This program has performed an illegal operation and will be closed", both of which mean there are errors in my manual memory management. So Rust caught my eye, as it did many others', claiming "this programming language avoids memory corruption without any additional overhead(I'm looking at you, garbage collector) through carefully designed language grammar". Yep, that sounds like magic, or a charm. I tried the language, loved the design, the ecosystem, and the syntax, and I applied it to a handful of commercial applications I was expected to develop. Forget CMake and other C++ legacy. No worrying about a custom move constructor for a more efficient std::move(). const is the default, so you don't have to worry about changing should-be-immutable values by mistake. Cargo is GOD, borrow checking is FUN, and the grammar is LOVELY!&lt;/p&gt;

&lt;p&gt;But after shipping a few commercial and internal applications on my own, I soon felt that something didn't fit me.&lt;/p&gt;

&lt;h1&gt;
  
  
  DON'T INSIST on me
&lt;/h1&gt;

&lt;p&gt;Rust is certainly mainstream and gets backing from many big tech companies. You know them too: not to mention Mozilla, which served as the incubator, it is now backed by AWS, Microsoft, Google, Facebook, and many others. People love Rust and its traits(pun intended), and I'm sure you'll even fall in love with RUST_BACKTRACE=1(maybe).&lt;br&gt;
However, the language is, in my opinion, stubborn. It insists on a specific way of doing things. The grammar is quite complicated, and there is a lot to consider. Of course I could accommodate myself to the grammar and fit myself to the language - or learn to think like the language. But I found out that, by instinct, I don't like that notion, even though in most situations "the Rust way" is reasonable and acceptable and I agree with it too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp44odxr56rh6262ut936.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp44odxr56rh6262ut936.png" alt="Yep. It's not only Stack Overflow Developer Survey" width="800" height="238"&gt;&lt;/a&gt;&lt;em&gt;&lt;center&gt;It's not only Stack Overflow Developer Survey, you know&lt;/center&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There would be more efficient ways if the restrictions were removed and more freedom were given. For example, what is the most efficient way to save an unsigned 64-bit integer into an SQLite database, which doesn't support unsigned integers at all? If it were PostgreSQL you could consider bigger numeric types(&lt;code&gt;decimal&lt;/code&gt; or &lt;code&gt;numeric&lt;/code&gt;), but it's just our small and cute SQLite, which has neither. In such a situation, at least for me, the most efficient way is to force-cast the unsigned 64-bit integer to a signed one and vice versa. In C++ you can do this easily: &lt;code&gt;(int64_t)my_awesome_value&lt;/code&gt; and &lt;code&gt;(uint64_t)value_from_sqlite&lt;/code&gt;. But Rust?&lt;/p&gt;

&lt;p&gt;Forget it. I don't even want to count how many steps are needed.&lt;/p&gt;
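&lt;p&gt;For the record, here's how short the C++ side is - a minimal sketch with illustrative names(note: since C++20 the unsigned-to-signed conversion is defined as modular; before that it was implementation-defined, though two's-complement platforms behave the same way):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cstdint&amp;gt;

int main() {
    uint64_t my_awesome_value = 18446744073709551615ULL;  // UINT64_MAX
    // Store: reuse the bit pattern as a signed value for SQLite's INTEGER column.
    int64_t for_sqlite = (int64_t)my_awesome_value;       // -1 on two's complement
    // Load: cast back, and the original unsigned value reappears untouched.
    uint64_t restored = (uint64_t)for_sqlite;
    assert(restored == my_awesome_value);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;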

&lt;h1&gt;
  
  
  I love freedom over rules
&lt;/h1&gt;

&lt;p&gt;I love freedom. Still, I agree that you should limit your own behavior to reduce, if you cannot eliminate, possible errors by design. I intentionally design some of my classes and functions to follow certain rules and procedures, so that the application hits a compile-time error if they aren't followed. However, I think that should be the developer's choice rather than the language's constraint. During development with Rust, &lt;code&gt;rustc&lt;/code&gt; complained at me so much that I kept having to do extra work that was never a problem in my good old C++ world. For some people it could be a life saver or an automatic consultant, but to me it sounded like "nagging from your old mom." I wanted to express myself in my own way. Thank you for the kind suggestions in the error messages, but I don't think that's the most efficient way to do the business.&lt;br&gt;
So, though I loved &lt;code&gt;cargo&lt;/code&gt; over &lt;code&gt;CMake&lt;/code&gt; and being free from "undefined behavior", I took the risk of segfaults to enjoy my fun ride with C++ again. With C++ compilers, CMake, Ninja, and Neovim with a few lightweight plugins, I'm free and invincible again!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygnqi7zegm2ltc7z88q0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygnqi7zegm2ltc7z88q0.jpg" alt=" " width="640" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  But I'm still (a bit) Rusty
&lt;/h1&gt;

&lt;p&gt;Though I'm fully back in the world of C++, that doesn't mean I forgot everything I learned from Rust. I use more smart pointers, and I prefer references over pointers when a null pointer is meaningless in the context, to avoid any chance of passing &lt;code&gt;nullptr&lt;/code&gt; by accident(and crashing the application). I also got ideas for handling CMake better by reviewing the basics of &lt;code&gt;cargo&lt;/code&gt;. Yep, Rust really influenced my programming style for the better, like my good old friend Object Pascal did before it. Like C, C++ grants programmers a lot of freedom. You're free to do anything, if you know what you're doing(tm). But with great power comes great responsibility: it can end in a situation where nobody knows why the application crashes. And if it works fine only in debug mode, you just go crazy(argh). Still, I love my freedom to do everything my own way, armed with the good practices and designs I learned from Rust, as I once did with Object Pascal and its &lt;code&gt;TObject&lt;/code&gt;.&lt;/p&gt;
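&lt;p&gt;A tiny sketch of that habit(names invented for illustration): a reference parameter makes "no object" unrepresentable at the call site, and a smart pointer releases the memory by itself.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;memory&amp;gt;
#include &amp;lt;string&amp;gt;

// A pointer parameter invites nullptr and a crash at dereference time,
// so a defensive branch is needed on every call path.
std::size_t length_via_ptr(const std::string* s) {
    return s ? s-&amp;gt;size() : 0;
}

// A reference cannot be bound to nullptr: if the call compiles, the object exists.
std::size_t length_of(const std::string&amp;amp; s) {
    return s.size();
}

int main() {
    // unique_ptr owns the string and deletes it automatically at scope exit.
    auto owned = std::make_unique&amp;lt;std::string&amp;gt;("hello");
    return length_of(*owned) == 5 ? 0 : 1;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;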

&lt;p&gt;I'm not sure which of the two languages will be more mainstream in the future. Maybe Zig(&lt;a href="https://ziglang.org" rel="noopener noreferrer"&gt;https://ziglang.org&lt;/a&gt;) will become a contender someday, but not yet at least; it could be a good one if it at least offered OOP grammar as simple as Python's, with vtables built in. Between C++ and Rust, at least for me, Rust is more like "you MUST do this" while C++ is like "you CAN also do it this way." One is highly opinionated; the other is not opinionated at all(at least to me - your opinions are always welcome). And maybe that's one of the reasons I don't like Qt? :D&lt;br&gt;
Maybe C++ is still a superset of Rust in some way(only "in some way", because some things are unique to Rust itself: a Rust trait can be mimicked with a template class, and a combination of C++ enum and template class can behave like Rust's data-carrying enum, but C++ has nothing equivalent or similar to the borrow checker).&lt;/p&gt;
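&lt;p&gt;To illustrate the "trait can be mimicked" part, here's a sketch with invented names, using a C++20 concept as the closest analogue to a trait bound:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;#include &amp;lt;concepts&amp;gt;
#include &amp;lt;string&amp;gt;

// Rough analogue of a Rust trait: any type providing describe() satisfies
// the bound, checked at compile time much like a T: Describable constraint.
template &amp;lt;typename T&amp;gt;
concept Describable = requires(const T&amp;amp; t) {
    { t.describe() } -&amp;gt; std::convertible_to&amp;lt;std::string&amp;gt;;
};

struct Packet {
    std::string describe() const { return "a packet"; }
};

// Counterpart of fn show&amp;lt;T: Describable&amp;gt;(x: &amp;amp;T) -&amp;gt; String.
std::string show(const Describable auto&amp;amp; x) {
    return "showing " + x.describe();
}

int main() {
    return show(Packet{}) == "showing a packet" ? 0 : 1;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Of course, the analogy only goes so far: a concept checks syntax structurally, while a trait is an explicit, named implementation - and nothing here resembles the borrow checker.&lt;/p&gt;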

&lt;p&gt;So, that's all from me. Any ideas or comments are welcome. Let's share our thoughts.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>rust</category>
    </item>
    <item>
      <title>AI can't read between the lines</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Thu, 24 Oct 2024 01:22:58 +0000</pubDate>
      <link>https://dev.to/teminian/ai-cant-read-between-lines-hc6</link>
      <guid>https://dev.to/teminian/ai-cant-read-between-lines-hc6</guid>
      <description>&lt;p&gt;This article is from my old blog, posted on Aug. 23rd, 2024.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2024/08/ai-cant-read-between-lines.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2024/08/ai-cant-read-between-lines.html&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;One of my acquaintances makes the daily leave schedule for his colleagues. Well, isn't that decided by the individuals themselves? There was a reason. The workplace has to be open 7 days a week, and employees are expected to take two days of leave per week. If the selection were left to the individuals, they would certainly pick specific weekdays(e.g. Saturday or Sunday), so he had to distribute the leave schedule himself.&lt;/p&gt;

&lt;p&gt;But as you can imagine, a schedule can't be well randomized by hand. Even a simple pseudorandom generator on your computer gives a far better result. So he decided to use Microsoft Excel to automate the scheduling, with help from AI. He gave it these conditions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Each employee can take 8 days of leave per month&lt;/li&gt;
&lt;li&gt;The weekday doesn't matter, but the distribution needs some randomization&lt;/li&gt;
&lt;li&gt;Schedules should overlap as little as possible&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Seems fine so far, doesn't it? AI can generate an Excel spreadsheet for this. But when he added the following condition, the AI broke down - its output produced errors.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each employee should be able to take a leave at least once per week&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're good with Excel, you'll quickly see this can't be accomplished in "pure" Excel without the support of VBA macros. To satisfy the conditions above, I need to know when the others are taking leave, but the same applies to each of them. To know my schedule I've got to know yours, but to know yours I've got to know mine, ... Yep. A typical &lt;strong&gt;"circular reference"&lt;/strong&gt; situation. If you add such a condition, the AI will just give up and say whatever it wants. Forget prompt engineering. The best answer would be &lt;strong&gt;"it should be done with VBA macros"&lt;/strong&gt;; it can't make it happen like magic.&lt;/p&gt;

&lt;p&gt;Did you decide to develop a dedicated application instead? You still have a lot to consider. Scheduling leave for a month, you need to avoid overlaps as much as possible, consider each weekly schedule while building the monthly one, and decide how to handle the partial weeks at the start and end of the month. Many points to consider, and many exceptions to handle.&lt;/p&gt;

&lt;p&gt;But all of this holds only while you deal with the problem as &lt;strong&gt;it is&lt;/strong&gt;. Yes. AS IT IS.&lt;/p&gt;

&lt;p&gt;Thinking a bit further, you can simplify and reassemble the problem so it can be solved quite easily. If everyone must take leave at least once a week, then &lt;strong&gt;the leave days must come often enough to cover every week.&lt;/strong&gt; If you divide the month by the number of leave days(in this case 30/8 = 3.75 days) and spread the leave days at that interval, every employee takes a leave at least once a week. Of course the randomization becomes more limited than under the original "at least once a week" condition, but from the employer's perspective, an employee taking leave once a week for 3 weeks and then the remaining 5 days in week 4 would be quite a headache anyway. :P&lt;/p&gt;
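&lt;p&gt;The "spread by 30/8" trick fits in a few lines - a toy sketch(one employee only; offsetting each employee's start day, and the jitter for randomization, are left out):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;vector&amp;gt;

// Spread the leave days evenly: stride = 30 / 8 = 3.75, so consecutive
// leave days are at most 4 apart - at least one falls in every week.
std::vector&amp;lt;int&amp;gt; spread_leaves(int days_in_month, int leaves) {
    std::vector&amp;lt;int&amp;gt; schedule;
    double stride = (double)days_in_month / leaves;
    for (int i = 0; i &amp;lt; leaves; ++i)
        schedule.push_back((int)(i * stride) + 1);  // 1-based day of month
    return schedule;
}

int main() {
    std::vector&amp;lt;int&amp;gt; days = spread_leaves(30, 8);  // 1, 4, 8, 12, 16, 19, 23, 27
    for (std::size_t i = 1; i &amp;lt; days.size(); ++i)
        assert(days[i] - days[i - 1] &amp;lt;= 4);  // never a leave-free week
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Jitter each day within its 3.75-day window and you get some randomization back without losing the weekly guarantee.&lt;/p&gt;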

&lt;p&gt;A Large Language Model(LLM) can parse natural language and understand what he says. But the core of an LLM is statistics and probability: which patterns appear most often when our language is split down to the token level. In other words, given a sentence, it has no problem understanding the sentence as it is; but reading "between the lines", uncovering the hidden premise, and restructuring the problem itself(what we call inference) is beyond what current technology can achieve.&lt;/p&gt;

&lt;p&gt;And yes, this is when human intervention is needed.&lt;/p&gt;

&lt;p&gt;AI can generate source code, and some people say developers won't be needed anymore. Still, only a human can "truly understand" the depth of a problem and reassemble it in a way that makes it easy to solve. And a human is aware of many conditions that are usually never given to the AI. I may be wrong, but I think this is related to prompt engineering.&lt;/p&gt;

&lt;p&gt;So, my dear fellow software developers of the world! Now is not the time.&lt;br&gt;
Until AI can "infer properly", your desk is safe!&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>A 5-day journey with Lite-XL</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Sun, 20 Oct 2024 23:50:05 +0000</pubDate>
      <link>https://dev.to/teminian/a-5-day-journey-with-lite-xl-1bdl</link>
      <guid>https://dev.to/teminian/a-5-day-journey-with-lite-xl-1bdl</guid>
      <description>&lt;p&gt;This is a post from my old blog, dated July 25th, 2024.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2024/07/a-5-day-journey-with-lite-xl.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2024/07/a-5-day-journey-with-lite-xl.html&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;There's an ancient Chinese saying(also famous in Korea), 日新又日新, meaning "renew yourself day by day." With that saying in mind, I occasionally try new development environments and tools, as other good developers do.&lt;/p&gt;

&lt;p&gt;At the time I was a bit skeptical about the "modal editing" scheme represented by Vim - changing modes(Insert, Normal, Visual) to do certain things. Why should I be concerned with the mode at all? If I could do everything without changing modes, wouldn't that save both time and mental resources?&lt;/p&gt;

&lt;p&gt;With that in mind, I searched for a good substitute for my precious Vim, and amid that search I found Lite-XL. Though I'm back on Vim now, the 5-day journey with the Lua-based code editor was impressive, so I'd like to leave a record of the fun I had along the way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Too small footprint
&lt;/h1&gt;

&lt;p&gt;Literally. Like Vim, it has a small footprint. Memory consumption on a fresh run was a bit higher than Vim's, but compared with modern "heavyweight" IDE-like code editors, and considering it's built on Lua, a versatile scripting language, it was impressive.&lt;/p&gt;

&lt;p&gt;However, that also means it ships with barely anything, like vanilla Vim. To be more productive you need to install and configure plugins, e.g. LSP, indent guides, or highlighting other occurrences of the current selection in the document.&lt;/p&gt;

&lt;p&gt;And where Vim has .vimrc, Lite-XL has .config/lite-xl/*. There are a handful of Lua scripts there, and you can add your own configuration and initialization as needed. Configuration is fully manual and in Lua, but no worries even if you're not familiar with the language, like me. Follow the instructions for each plugin and you'll be fine, though I had to be careful not to miss any details while reading them(maybe that's because I'm not a native English speaker? ;) ).&lt;/p&gt;

&lt;h1&gt;
  
  
  Fully responsive, always
&lt;/h1&gt;

&lt;p&gt;Lite-XL was always responsive, with nice scrolling animations. Whether searching across files or working the command palette, it always seemed to shout "I'm lightweight enough to fly!".&lt;/p&gt;

&lt;h1&gt;
  
  
  LSP: some better, some missing
&lt;/h1&gt;

&lt;p&gt;The LSP plugin is satisfactory. The official GitHub repository for the Lite-XL LSP plugin says it is WIP, but most major features are already ready to serve, and the overlay was quite informative and, most of all, non-interrupting. I'm sure you know what I mean if you've seen messages rendered as virtual text by Vim LSP plugins. Even showing overloads was better in Lite-XL.&lt;/p&gt;

&lt;p&gt;However, its symbol search dialogs and commands were a bit confusing, and it missed some small utilities I enjoy(e.g. symbol search with a full symbol list and their types). But that's fine - maybe I was just unfamiliar with the new interface, and I didn't invest much time or effort in changing my workflow for it.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;del&gt;Weapons&lt;/del&gt; hot keys (almost) everywhere
&lt;/h1&gt;

&lt;p&gt;The developers must have worried about their right hands traveling between keyboard and mouse, and they clearly wanted to minimize mouse use.&lt;/p&gt;

&lt;p&gt;Wherever there is a frequently used feature, there's a hot key dedicated to it. I'm sure the Lite-XL developers use Lite-XL themselves and are "speed freaks" too, refusing to allow any time lost on keystrokes.&lt;/p&gt;

&lt;p&gt;The only thing that confused me was window splitting - Lite-XL uses &lt;code&gt;&amp;lt;Alt&amp;gt; + ijkl&lt;/code&gt; while Vim uses &lt;code&gt;Ctrl-W&lt;/code&gt; with its famous &lt;code&gt;hjkl&lt;/code&gt; combination. :P&lt;/p&gt;

&lt;h1&gt;
  
  
  Handling too big files is slow (feat. 4GB.txt)
&lt;/h1&gt;

&lt;p&gt;It's rare, but sometimes I have to deal with log files a few GB in size. Vim 9 handled them really well: it opened such a file in a few seconds, ready to serve.&lt;/p&gt;

&lt;p&gt;When I tried to open the same file with Lite-XL, it took a few minutes, and sometimes the editor lagged. I don't think that's simply a limitation of scripting languages, as Visual Studio Code, written in JavaScript, handled the same file really well - except for memory consumption(lol).&lt;/p&gt;

&lt;p&gt;Anyway, that's not everyone's use case, so you can safely ignore it if you don't have to handle REALLY BIG text files.&lt;/p&gt;

&lt;h1&gt;
  
  
  Don't do &lt;em&gt;git checkout&lt;/em&gt; on editing (huh?)
&lt;/h1&gt;

&lt;p&gt;While I had some files open in Lite-XL, I ran git checkout on my project; some of the open files changed, and the editor crashed(oops). I'm not sure whether I was just unlucky or it was a bug, but it happened.&lt;/p&gt;

&lt;h1&gt;
  
  
  Small, versatile, but a few oops
&lt;/h1&gt;

&lt;p&gt;There's a joke in Korea that while women enthusiastically rate and review restaurants, men write reviews on only two occasions: it's so bad that you announce "don't go there - it's enough that I alone was the scapegoat", or it's so great that you think "man, it's damn great - I want to spread the word so the restaurant survives."&lt;/p&gt;

&lt;p&gt;For Lite-XL, it's the latter. It's damn great, especially if you're thirsty for a lightweight alternative to Visual Studio Code but don't want to face the steep learning curve of Vim(one thing: though I'm a Vim user, I don't think everyone needs to learn Vim for maximum productivity. Rather, I'm against that idea - there are many ways to accomplish your goals. With that in mind, I think VS Code can satisfy and cover quite a lot of use cases and ways of doing things, like choosing between &lt;strong&gt;&lt;em&gt;IntelliSense&lt;/em&gt;&lt;/strong&gt; and &lt;em&gt;&lt;strong&gt;clangd&lt;/strong&gt;&lt;/em&gt; for assisting C++ development).&lt;/p&gt;

&lt;p&gt;I had to return to Vim because I missed a few things(handling GB-size files was critical), but if you don't have to deal with big files, I strongly recommend giving it a try.&lt;/p&gt;

</description>
      <category>litexl</category>
    </item>
    <item>
      <title>See old Object Pascal from new Rust</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Thu, 17 Oct 2024 22:49:51 +0000</pubDate>
      <link>https://dev.to/teminian/see-old-object-pascal-from-new-rust-44pl</link>
      <guid>https://dev.to/teminian/see-old-object-pascal-from-new-rust-44pl</guid>
      <description>&lt;p&gt;This is a post from Apr. 20th, 2024, from my old blog.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2024/04/see-old-object-pascal-from-new-rust.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2024/04/see-old-object-pascal-from-new-rust.html&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Nowadays I'm working on a new application, and since I thought it was a good chance to learn something new, I tried Rust.&lt;/p&gt;

&lt;p&gt;As a newbie(or noob) in this area, I'm enjoying my time fighting rust-analyzer, which is always whining like Grouchy Smurf... (lol) And today I found something interesting.&lt;/p&gt;

&lt;p&gt;In Rust, the following is an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;handle&lt;/span&gt;&lt;span class="nf"&gt;.read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To fix it, change the first line as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nn"&gt;String&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;handle&lt;/span&gt;&lt;span class="nf"&gt;.read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seeing this code reminds me of the good old days of Object Pascal (Delphi). If it's not a primitive type, you've got to declare the variable in the var clause and call the constructor in the implementation, or you'll get a runtime error. Rust "inherited" that structure as it was, except that it catches the missing initialization at compile time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight pascal"&gt;&lt;code&gt;&lt;span class="k"&gt;procedure&lt;/span&gt; &lt;span class="n"&gt;function1&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;anObject&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TAwesomeClass&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;begin&lt;/span&gt;
&lt;span class="n"&gt;anObject&lt;/span&gt;&lt;span class="p"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;TAwesomeClass&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// or the application will crash
&lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pascal may be a minor language these days, but it has influenced many others: Java, Python, JavaScript, and now Rust. They all adopted at least some part of Object Pascal.&lt;/p&gt;

&lt;p&gt;As a loyal follower of the language, I felt wistful seeing this.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>pascal</category>
    </item>
    <item>
      <title>PostgreSQL vs. SQLite: read &amp; write in multithreaded environment</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Tue, 15 Oct 2024 12:32:56 +0000</pubDate>
      <link>https://dev.to/teminian/postgresql-vs-sqlite-read-write-in-multithreaded-environment-1cl2</link>
      <guid>https://dev.to/teminian/postgresql-vs-sqlite-read-write-in-multithreaded-environment-1cl2</guid>
      <description>&lt;p&gt;This post is from my old blog, written at Mar. 7th 2024.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2024/03/postgresql-vs-sqlite-read-write-in.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2024/03/postgresql-vs-sqlite-read-write-in.html&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lk6dz7e4y92ewqng5l7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lk6dz7e4y92ewqng5l7.jpeg" alt="multithreading meme" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The start was humble. I needed to cache some data, and I figured I'd just push it into a database table, add an index, and let the database do the rest. There were only two TEXT fields, and I only needed to search one of them to find a specific row - essentially a key-value store - so I assumed any database engine would be fine.&lt;/p&gt;

&lt;p&gt;And yes. It was a BIG mistake.&lt;/p&gt;

&lt;p&gt;First I tried SQLite, and I found that in a multithreaded environment some records evaporated when writing to the table simultaneously, even with the &lt;code&gt;-DSQLITE_THREADSAFE=2&lt;/code&gt; compile-time option. I pushed the same data under the same conditions, and sometimes I got only 20 records, other times 40, yet other times 26....... What drove me crazier was that SQLite itself reported no I/O errors at all. A good moment to shout "WHAT THE HELL?!" in real time.&lt;/p&gt;

&lt;p&gt;So I switched the engine to PostgreSQL. Our trustworthy elephant friend saved all the records without any loss. I was satisfied with that, but...... even though I had applied a b-tree index to the relevant field, it took 100 milliseconds just to run &lt;code&gt;SELECT field2 WHERE field1='something'&lt;/code&gt;. And no, the table wasn't big: there were only 680 records, with at most 30 characters in field 1 and only 4 characters in field 2. I had configured the engine with some optimizations and it handled much bigger tables fine, so I felt assured about its performance, but I never expected something like this, even in my dreams.&lt;/p&gt;

&lt;p&gt;The elephant is tough, but as a side effect, it's slow.......&lt;/p&gt;

&lt;p&gt;So, one last chance: I used pg_dump to move the data from PostgreSQL back to SQLite, and under the same conditions (same index, same table structure, ......), I turned on &lt;code&gt;.timer&lt;/code&gt; in the SQLite shell and the query took less than 0.001 seconds. Hooray!&lt;/p&gt;

&lt;p&gt;After some more experiments, I concluded that SQLite by itself can't fully prevent data loss even with the multithread-support option enabled; you need external serialization such as std::mutex. My guess is that its fread() calls are not fully serialized in a multithreaded environment, but I have neither the time nor the skills to do a proper inspection. :P&lt;/p&gt;

&lt;p&gt;Anyway, now I use the combination of SQLite + WAL mode + a larger SQLite internal cache + std::mutex. Write performance still looks good, and if needed, I think I could spread the load across more files, balanced by a non-cryptographic hash.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>sqlite</category>
    </item>
    <item>
      <title>Haste makes waste</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Mon, 14 Oct 2024 01:38:54 +0000</pubDate>
      <link>https://dev.to/teminian/haste-makes-waste-1ebp</link>
      <guid>https://dev.to/teminian/haste-makes-waste-1ebp</guid>
      <description>&lt;p&gt;This is a post at Feb.19th 2024 from my old blog.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2024/02/haste-makes-waste.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2024/02/haste-makes-waste.html&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Sometimes I find that old sayings still hold wisdom, and recently I had a chance to reaffirm it. Haste makes waste: I took a detour, and it eventually saved me time.&lt;/p&gt;

&lt;p&gt;The details are as follows: I was assigned to develop a feature to extract some data, which looked like a piece of cake but wasn't. I had to extract both a summary and the "body" from a single piece of raw data, and doing both in one loop would be faster. However, considering the existing data storage process, I thought I had to split the work into two separate threads - or at least, that was my first impression. So I intentionally delayed the implementation, keeping myself busy for three days with things unrelated to it (a task expected to take one day), and I found a far better alternative: use only one loop, and make subsections.&lt;/p&gt;

&lt;p&gt;Personally, I consider the "incubation effect" to be of the utmost importance. It's well established in psychology: when you're stuck on a difficult problem, stepping away to do something unrelated for a while lets you see it more clearly. The actual mechanism is still debated, but that's academia's job; my job is to use it, with my thanks to those psychologists. :D&lt;/p&gt;

&lt;p&gt;It's nothing big, but I could reinforce my behavior with &lt;strong&gt;"I'm not wrong!"&lt;/strong&gt;, so I'm dropping a line here.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Handshake One: Know Your SYN, source of DoS attack, or client IP profiles at least</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Fri, 11 Oct 2024 14:24:06 +0000</pubDate>
      <link>https://dev.to/teminian/handshake-one-know-your-syn-source-of-dos-attack-or-client-ip-profiles-at-least-1a49</link>
      <guid>https://dev.to/teminian/handshake-one-know-your-syn-source-of-dos-attack-or-client-ip-profiles-at-least-1a49</guid>
      <description>&lt;p&gt;This article was posted at Aug. 9th. 2022 on my old blog.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2022/08/handshake-one-know-your-syn-source-of.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2022/08/handshake-one-know-your-syn-source-of.html&lt;/a&gt;&lt;br&gt;
The application is still available to download, and the source code is intact. If you need a binary for another Linux distribution, please let me know.&lt;/p&gt;



&lt;p&gt;It was a hot summer, in the middle of COVID-19 setting new infection records day by day. I watched the managers of a small online community defend their server against a DoS (Denial of Service) attack. They were enthusiastic, but the work was quite inefficient: the only things they could rely on were some web server logs and our good old friend iptables.&lt;/p&gt;

&lt;p&gt;Well, for services too small to invest in security, the only viable defense against those dull, stupid, yet quite effective attacks is iptables or nftables. However, when the attacks actually come, we're usually puzzled and stuck: the logs are too long and complicated to read, so it's hard to distinguish attacker IPs from legitimate user IPs.&lt;/p&gt;

&lt;p&gt;So, I developed a small utility named Handshake One to help server engineers find the sources of DoS attacks as early as possible - or, at the very least, learn the IP profiles of their service. This small utility counts SYN packets from client IPs over the past 60 seconds and generates reports like the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Handshake One TCP SYN counter report
(C)Copyright 2022 Robert Teminian.
This application is provided free of charge, and provided AS IS: though the developer hopes that this would help the user in any way, the software does NOT guarantee anything at all.

Stop by the developer's blog and leave a comment! Visit http://codenested.blogspot.com

====================
At 1660000053
IP  Hits
192.168.1.26    14
Total   14

====================
At 1660000054
IP  Hits
192.168.1.26    11
Total   11

====================
At 1660000055
IP  Hits
192.168.1.26    1
Total   1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Currently, binaries for three operating systems are provided, but you can contact me for builds for other OSes. Click a link below to download the executable binary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://drive.google.com/open?id=12SK-eMwt-hyzf5Ujyyh2dQBMOPXy3F8D&amp;amp;authuser=teminian%40gmail.com&amp;amp;usp=drive_fs" rel="noopener noreferrer"&gt;Windows&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://drive.google.com/open?id=12RwxESzg9d_3nfG9RuhKolf1f8t6b1gT&amp;amp;authuser=teminian%40gmail.com&amp;amp;usp=drive_fs" rel="noopener noreferrer"&gt;Ubuntu 20.04 LTS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://drive.google.com/open?id=12Rfg01WnD4VoiyFNmyiaLen-cRIt6A-2&amp;amp;authuser=teminian%40gmail.com&amp;amp;usp=drive_fs" rel="noopener noreferrer"&gt;CentOS 7&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They're just compressed files; decompress one into any directory you want to use.&lt;/p&gt;

&lt;p&gt;The usage is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: Windows

&lt;ul&gt;
&lt;li&gt;Decompress the ZIP file in any location you want&lt;/li&gt;
&lt;li&gt;On Windows, Handshake One depends on npcap(&lt;a href="https://npcap.com/" rel="noopener noreferrer"&gt;https://npcap.com/&lt;/a&gt;) for capturing packets. Install it separately, or install Wireshark(&lt;a href="https://www.wireshark.org/" rel="noopener noreferrer"&gt;https://www.wireshark.org/&lt;/a&gt;), which installs both the packet analysis tool and npcap.&lt;/li&gt;
&lt;li&gt;After installing npcap, copy Packet.dll and wpcap.dll from C:\Windows\System32\Npcap to the executable's directory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Install: Linux

&lt;ul&gt;
&lt;li&gt;Decompress the TGZ file in any location you want&lt;/li&gt;
&lt;li&gt;On Linux, Handshake One depends on libpcap. It is usually installed alongside tcpdump; if it's not, use your distribution's package manager(apt, yum, ......) to install it&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Configuration

&lt;ul&gt;
&lt;li&gt;The only way to configure Handshake One is via its configuration file, HandshakeOne.json. Currently there are only three keys

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;resultpath&lt;/code&gt;: the directory where the report file is saved. Handshake One automatically generates and updates(overwrites) a file named &lt;strong&gt;"HandshakeOneReport.txt"&lt;/strong&gt; in that directory every 30 seconds.

&lt;ul&gt;
&lt;li&gt;On Linux, I recommend pointing this at a RAM disk(e.g. &lt;code&gt;/tmp&lt;/code&gt;) to reduce the I/O burden on the server&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;sniffer&lt;/code&gt;: the device name to capture packets on. On Linux it's the device name shown by commands like ip link or ifconfig, but on Windows it's a bit tricky, since the name npcap expects is NOT the "human readable name" of the interface but an internal device name like \Device\NPF_{12345678-9ABC-DEF0-1234-567890ABCDEF}. To see the device names and their corresponding human-readable descriptions(e.g. Realtek PCIe GbE Family Controller), run Handshake One with the "show" parameter, e.g. &lt;code&gt;HandshakeOne show&lt;/code&gt;
&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;reportsizelimit&lt;/code&gt;: limits the size of the report file in bytes when the report is updated(actually overwritten). If the report grows beyond the designated size, the application writes out the data for the timestamp currently being written and then stops. For example, if the report at 10:00:00(including data for 09:00:00~09:00:59) is 1.5MB, it will contain everything; but if at 10:01:00(including data for 10:00:00~10:00:59) the full report would be 2.5MB and it hits the set limit(i.e. 2MB) around the data for 10:00:47, it will complete writing the data up to 10:00:47, stop, and refresh the report on the next update&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Run the application

&lt;ul&gt;
&lt;li&gt;In Windows, just run &lt;code&gt;HandshakeOne.exe&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;If you're interested in running Handshake One as a Windows service, I think you can use nssm(&lt;a href="http://nssm.cc/" rel="noopener noreferrer"&gt;http://nssm.cc/&lt;/a&gt;). Though I have no experience with it myself, I've seen very positive reviews in many places.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;In Linux, you have two choices

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;HandshakeOne&lt;/code&gt; directly. Caution: since libpcap needs root privileges, &lt;code&gt;HandshakeOne&lt;/code&gt; must be run via commands like sudo or su.&lt;/li&gt;
&lt;li&gt;You can register Handshake One as a systemd service. Edit HandshakeOne.service as needed(at least ExecStart and WorkingDirectory must be changed to the exact path of the binary) and use the following commands to register it as a systemd service
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp systemd.service /etc/systemd/system
sudo systemctl enable HandshakeOne
sudo systemctl start HandshakeOne
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
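&lt;p&gt;For reference, a complete HandshakeOne.json with the three keys described above might look like this (the values are illustrative, not defaults):&lt;/p&gt;

```json
{
  "resultpath": "/tmp",
  "sniffer": "eth0",
  "reportsizelimit": 2097152
}
```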



&lt;p&gt;So...... That's all, folks! I hope you enjoy the application. If you have any comments, opinions, or questions, please leave a comment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Performance Comparison in multithread environment: robin_hood::unordered_map vs. tbb::concurrent_hash_map</title>
      <dc:creator>Robert Teminian</dc:creator>
      <pubDate>Fri, 11 Oct 2024 14:05:10 +0000</pubDate>
      <link>https://dev.to/teminian/performance-comparison-in-multithread-environment-robinhoodunorderedmap-vs-tbbconcurrenthashmap-1i5g</link>
      <guid>https://dev.to/teminian/performance-comparison-in-multithread-environment-robinhoodunorderedmap-vs-tbbconcurrenthashmap-1i5g</guid>
      <description>&lt;p&gt;This article is from my old blog, posted at Oct. 28th, 2021.&lt;br&gt;
&lt;a href="https://codenested.blogspot.com/2021/10/performance-comparison-in-multithread.html" rel="noopener noreferrer"&gt;https://codenested.blogspot.com/2021/10/performance-comparison-in-multithread.html&lt;/a&gt;&lt;br&gt;
For your reference, as of 2024 robin_hood::unordered_map is archived, and I'm against Qt nowadays(lol).&lt;/p&gt;




&lt;h1&gt;
  
  
  robin_hood::unordered_map
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/martinus/robin-hood-hashing" rel="noopener noreferrer"&gt;robin_hood::unordered_map&lt;/a&gt; is said to be among the fastest drop-in replacements for std::unordered_map. Personally, I found that on both VS2019 and Ubuntu 20.04 it cuts the time spent roughly in half compared to std::unordered_map. However, given its design, in a multithreaded environment you need to serialize access to the hashmap with something like a mutex. Otherwise the angry Lord of Segmentation Fault will smite you(......).&lt;/p&gt;

&lt;h1&gt;
  
  
  tbb::concurrent_hash_map
&lt;/h1&gt;

&lt;p&gt;Well, if you're in a multithreaded environment, you can't avoid this one: Intel's proud creation, tbb::concurrent_hash_map from &lt;a href="https://github.com/oneapi-src/oneTBB" rel="noopener noreferrer"&gt;oneTBB (Threading Building Blocks)&lt;/a&gt;. In a multithreaded environment, the container classes from TBB are quite decent. Intel clearly put serious effort into TBB: comparing the time spent per operation, its performance is about on par with std::unordered_map.&lt;/p&gt;

&lt;h1&gt;
  
  
  Fastest in Single Thread vs. Optimized for Multithread
&lt;/h1&gt;

&lt;p&gt;One shows the best speed in a single-threaded environment, and the other is optimized for multithreading. Then a thought: what if we apply something like std::shared_mutex to robin_hood::unordered_map? Wouldn't it be faster when there are more reads than writes?&lt;/p&gt;

&lt;p&gt;OK, OK. I know it's nerdy, but you know, ....... a lot of programmers are nerds, right? So I applied it to my current project, whose read-write ratio is about 3:1, using std::shared_mutex.&lt;/p&gt;

&lt;p&gt;The winner? A TIE(oops). Both libraries took about the same time.&lt;/p&gt;

&lt;h1&gt;
  
  
  What's Your Choice?
&lt;/h1&gt;

&lt;p&gt;With the usual It Depends(tm) caveat, I concluded that tbb::concurrent_hash_map is better. I think tbb::concurrent_hash_map will show better performance as the number of requests grows, and its own syntax for enforced data locking helps you avoid implementation mistakes.&lt;/p&gt;

&lt;p&gt;For the time being, I'll stay with tbb::concurrent_hash_map.&lt;/p&gt;

&lt;h1&gt;
  
  
  One More Thing: QReadWriteLock vs. std::shared_mutex (C++17)
&lt;/h1&gt;

&lt;p&gt;Oh yeah, another nerdy test. For this one I only had to swap the lock/mutex, so I did that too. And unexpectedly, std::shared_mutex won this time. Though the Qt Company and the community put a lot of effort into optimizing QReadWriteLock(e.g. &lt;a href="https://codereview.qt-project.org/c/qt/qtbase/+/140322/" rel="noopener noreferrer"&gt;https://codereview.qt-project.org/c/qt/qtbase/+/140322/&lt;/a&gt; and &lt;a href="https://woboq.com/blog/qreadwritelock-gets-faster-in-qt57.html" rel="noopener noreferrer"&gt;https://woboq.com/blog/qreadwritelock-gets-faster-in-qt57.html&lt;/a&gt;), after some years there seems to be a better(and standard) alternative.&lt;/p&gt;

&lt;p&gt;Yet anyway, thanks for your hard work, Qt team!&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>hashmap</category>
    </item>
  </channel>
</rss>
