<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hanswillem</title>
    <description>The latest articles on DEV Community by Hanswillem (@hanswillem).</description>
    <link>https://dev.to/hanswillem</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3124662%2Fa807d4f1-0326-47a7-ac19-e3d818abde3c.jpg</url>
      <title>DEV Community: Hanswillem</title>
      <link>https://dev.to/hanswillem</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hanswillem"/>
    <language>en</language>
    <item>
      <title>Getting CrewAI to run locally: the infra prep work always exceeds the actual learning work</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Fri, 08 Aug 2025 14:38:14 +0000</pubDate>
      <link>https://dev.to/hanswillem/getting-crewai-to-run-local-the-prep-work-of-the-infra-always-exceeds-the-actual-learning-work-2hh0</link>
      <guid>https://dev.to/hanswillem/getting-crewai-to-run-local-the-prep-work-of-the-infra-always-exceeds-the-actual-learning-work-2hh0</guid>
      <description>&lt;p&gt;I wanted to play around with agents and decided to go for one of the more popular frameworks, CrewAI.&lt;/p&gt;

&lt;p&gt;Stubborn as I am, I decided to install it locally. It took me an afternoon of bug fixing and library checking to get everything up and working.&lt;/p&gt;

&lt;p&gt;Findings:&lt;br&gt;
CrewAI runs on LiteLLM, which interacts more easily with Ollama than with my preferred LM Studio. So now I am also running Ollama, maxing out my laptop's hard disk by having some models twice (luckily Apple is so cheap with storage and memory :-D)&lt;/p&gt;
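
&lt;p&gt;For reference, a minimal sketch of what pointing CrewAI at a local Ollama model looks like. The model name and base URL below are assumptions for illustration; adjust them to whatever ollama list shows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch: CrewAI talking to a local Ollama model via LiteLLM.
# Model name and base URL are assumptions for illustration.
from crewai import Agent, Task, Crew, LLM

local_llm = LLM(
    model="ollama/llama2",              # LiteLLM-style provider/model string
    base_url="http://127.0.0.1:11434",  # default Ollama endpoint
)

researcher = Agent(
    role="Researcher",
    goal="Answer a simple question",
    backstory="A test agent for the local setup",
    llm=local_llm,
)

task = Task(
    description="Say hello and confirm you are running locally.",
    expected_output="A short greeting",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;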

&lt;p&gt;Went down the debugging rabbit hole; I expected the issue to be in LiteLLM. I got strange errors there, related to a calculation module that didn't know the model name. Switching to one of the standard models only solved part of the problem.&lt;/p&gt;

&lt;p&gt;The biggest challenge was a LiteLLM error: &lt;code&gt;LiteLLM:ERROR: litellm_logging.py RuntimeError: can't register atexit after shutdown&lt;/code&gt;. First I tried to vibe-code and vibe-ask myself out of it, but unfortunately my models are not strong enough for the mighty answer. So I went back to 1999 and used good old Stack Overflow, where I found the answer: &lt;a href="https://stackoverflow.com/questions/65467329/server-in-a-thread-python3-9-0aiohttp-runtimeerror-cant-register-atexit-a" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/65467329/server-in-a-thread-python3-9-0aiohttp-runtimeerror-cant-register-atexit-a&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was not really related to the Python version I was running, nor to LiteLLM itself. Adding the two imports below at the very top of the script fixed it; importing concurrent.futures.thread and concurrent.futures.process early makes their atexit hooks register before any worker threads spin up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Added these two imports
import concurrent.futures.thread
import concurrent.futures.process

# and this is just the standard code from the liteLLM side
from litellm import completion

print("start test")
response = completion(
            model="ollama/llama2",
            messages = [{ "content": "Hello, how are you?","role": "user"}],
            api_base="http://127.0.0.1:11434"
)
print(response.object)
print("done test")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Going back to CrewAI, it worked again. Once more I learned a lot about venvs and Python uninstalls and installs.&lt;/p&gt;

&lt;p&gt;To be continued with the CrewAI learnings.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>PCB - Assembly flashbacks?</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Thu, 17 Jul 2025 11:19:11 +0000</pubDate>
      <link>https://dev.to/hanswillem/pcb-assembly-flashbacks-eil</link>
      <guid>https://dev.to/hanswillem/pcb-assembly-flashbacks-eil</guid>
      <description>&lt;p&gt;I really, really liked my bachelor's with respect to programming. However, there was one course that didn't get me energized: low-level programming. It included writing assembly code and injecting it directly into PCBs.&lt;/p&gt;

&lt;p&gt;Recently I decided to create my own CO2 sensor to steer the air quality at home (the air quality is fine, but it's a fun project to envision). For this I want to build something with an ESP32 and sensors.&lt;/p&gt;

&lt;p&gt;I created the following plan of approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with Arduino, first the basics on a breadboard&lt;/li&gt;
&lt;li&gt;Move to a Raspberry Pi for the software&lt;/li&gt;
&lt;li&gt;Create an endpoint based on the ESP32 and a sensor, probably with some folding involved (3D printing for the case?); a rough sketch of the idea follows below&lt;/li&gt;
&lt;/ul&gt;
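
&lt;p&gt;To make the endpoint idea a bit more concrete, a rough sketch of the Raspberry Pi side, assuming the ESP32 would expose its CO2 reading as JSON over HTTP. The URL and field name are made up for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Rough sketch of the Raspberry Pi polling a hypothetical ESP32 endpoint.
# The URL and the JSON field "co2_ppm" are assumptions for illustration.
import time

import requests

ESP32_URL = "http://192.168.1.50/co2"  # hypothetical ESP32 endpoint

while True:
    reading = requests.get(ESP32_URL, timeout=5).json()
    ppm = reading["co2_ppm"]
    if ppm &gt; 1000:  # a common "ventilate the room" threshold
        print(f"CO2 at {ppm} ppm, time to open a window")
    else:
        print(f"CO2 at {ppm} ppm, all fine")
    time.sleep(60)  # poll once a minute
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;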

&lt;p&gt;Just received the Strex Arduino clone with tooling; let's see what we can play with.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>MCP setup - Huggingface</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Sun, 13 Jul 2025 12:33:43 +0000</pubDate>
      <link>https://dev.to/hanswillem/mcp-setup-huggingface-2gla</link>
      <guid>https://dev.to/hanswillem/mcp-setup-huggingface-2gla</guid>
      <description>&lt;p&gt;There is a tendency to think that every AI company is from either the US or China, but there are some with (semi-)European roots. One of them is Hugging Face.&lt;/p&gt;

&lt;p&gt;They have an MCP course, often recommended as one to play around with. For my first steps with MCP I don't want to run everything locally right away, but rather play around a bit in a more sandboxed environment first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/learn/mcp-course/unit2/introduction" rel="noopener noreferrer"&gt;https://huggingface.co/learn/mcp-course/unit2/introduction&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the fun things is that I am again confronted with my freshly renewed Apple state. Python is not directly available from the command line, so I first had to learn all the new Apple tooling (Homebrew, uv). Linux is sometimes so much easier :)&lt;/p&gt;

&lt;p&gt;After installing Homebrew, it was time to install uv &amp;amp; Python.&lt;br&gt;
My Python skills are a bit rusty; it seems the virtual environment setup is not installed by default, so plain python -m venv .venv didn't work either. In the end python3 -m venv .venv did the trick (thanks to the superb Python RTFM :D)&lt;br&gt;
&lt;a href="https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/" rel="noopener noreferrer"&gt;https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>learning</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Setting up a new home lab for ARP poisoning</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Thu, 12 Jun 2025 13:14:53 +0000</pubDate>
      <link>https://dev.to/hanswillem/setting-up-a-new-home-lab-for-arp-poisoning-3b5k</link>
      <guid>https://dev.to/hanswillem/setting-up-a-new-home-lab-for-arp-poisoning-3b5k</guid>
      <description>&lt;p&gt;As mentioned earlier, I use a Kali setup with WebSploit to work through my CEH labs.&lt;/p&gt;

&lt;p&gt;The setting is Kali running with multiple containers to play around with.&lt;br&gt;
Now I want to try something new, so I made a new home lab to play around with things like ARP poisoning.&lt;/p&gt;

&lt;p&gt;[placeholder]&lt;/p&gt;

&lt;p&gt;Setup was easy in VirtualBox with one Ubuntu VM and one clone, but then the network part started.&lt;/p&gt;

&lt;p&gt;Since I chose clone instead of create new, I was smart enough to check the "generate new MAC addresses" option.&lt;br&gt;
Unfortunately, running ifconfig showed the same IP address on both machines, with different MACs...&lt;/p&gt;

&lt;p&gt;Lesson learned: a virtual NAT network was needed first to get both systems into the same network.&lt;/p&gt;
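
&lt;p&gt;From the command line that comes down to something like the following; the network name, range and VM names are placeholders for my lab values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create a NAT network that both VMs can join
VBoxManage natnetwork add --netname labnet --network "10.0.2.0/24" --enable --dhcp on

# attach the first NIC of each VM to it (VM names are placeholders)
VBoxManage modifyvm "ubuntu-attacker" --nic1 natnetwork --nat-network1 labnet
VBoxManage modifyvm "ubuntu-victim" --nic1 natnetwork --nat-network1 labnet
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;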

&lt;p&gt;[ image virtual nat] &lt;/p&gt;

&lt;p&gt;While some tweaking was needed, the lab succeeded: running ettercap and Wireshark to do ARP poisoning from the attacker box and sniff the victim's traffic :-)&lt;/p&gt;
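
&lt;p&gt;The core of the attack is a single ettercap invocation along these lines; the interface and IPs are placeholders for my lab values, and the exact target syntax can differ per ettercap build:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ARP-poison victim and gateway, then sniff in text mode (-T), quietly (-q)
sudo ettercap -T -q -i enp0s3 -M arp:remote /10.0.2.4// /10.0.2.1//
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;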

&lt;p&gt;[ image of Wireshark side by side to be added ]&lt;br&gt;
The lab I am following:&lt;br&gt;
&lt;a href="http://www.csc.villanova.edu/%7Eenwafor/cps_security/documents/lab1_mitm_update.pdf" rel="noopener noreferrer"&gt;http://www.csc.villanova.edu/~enwafor/cps_security/documents/lab1_mitm_update.pdf&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>WebSploit containers don't like ARM</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Wed, 04 Jun 2025 17:04:21 +0000</pubDate>
      <link>https://dev.to/hanswillem/websploit-containers-dont-like-arm-35fk</link>
      <guid>https://dev.to/hanswillem/websploit-containers-dont-like-arm-35fk</guid>
      <description>&lt;p&gt;As mentioned in an earlier post, I am preparing for my CEH foundation exam by following the CEH course on O'Reilly.&lt;/p&gt;

&lt;p&gt;The previous post explained how I got Kali running, all smooth on VirtualBox with an ARM image. However, I was cheering too soon: WebSploit uses containers built for x86 infra, which refuse to run on my Kali version.&lt;/p&gt;

&lt;p&gt;Some googling showed that more people faced this issue. Some seemed to have tried changing the install procedure (dunno how that would solve the build issue...); in any case, no easy fix seemed to be available. Then I was rethinking: there is virtualization, but there is also emulation. With emulation I could run x86 on the Mac. Enter UTM.&lt;/p&gt;
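
&lt;p&gt;Side note: one way to confirm such a mismatch is asking Docker what an image was built for; the image name below is a stand-in, I didn't note the exact WebSploit ones:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# show the platform an image was built for (image name is a placeholder)
docker image inspect --format '{{.Os}}/{{.Architecture}}' someorg/websploit-lab

# Docker can also emulate x86 via qemu, at a serious speed cost
docker run --platform linux/amd64 someorg/websploit-lab
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;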

&lt;p&gt;[post will be updated later on]&lt;/p&gt;

</description>
    </item>
    <item>
      <title>LLM on macOS: MLX support</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Fri, 30 May 2025 06:56:11 +0000</pubDate>
      <link>https://dev.to/hanswillem/llm-on-macos-xlm-support-4c93</link>
      <guid>https://dev.to/hanswillem/llm-on-macos-xlm-support-4c93</guid>
      <description>&lt;p&gt;One of the reasons to go for a Mac was the unified memory setup. With the RAM shared between CPU and GPU, Apple silicon has a (theoretical) cost and power efficiency edge over an NVIDIA setup. Not talking about the raw power of a 5900 setup, but compared to my earlier lab with a virtualized setup on a Ryzen xxx, I do expect grand improvements.&lt;/p&gt;

&lt;h2&gt;Base case: old lab&lt;/h2&gt;

&lt;p&gt;[placeholder to set numbers]&lt;/p&gt;

&lt;h2&gt;Setting up LM Studio&lt;/h2&gt;

&lt;p&gt;First things first: I had an old setup with Ubuntu running on AMD with a laptop GPU. As a bit of a geek I prefer the terminal over a GUI for hobby projects, so Ollama was a logical starting point. But on the Mac there is of course a more user-friendly (less geeky) setup possible with LM Studio. Since my projects are hobbies, I value playing around over ease of use, so terminal and Ollama it is (possibly I'll add mystify as a GUI later on for fun).&lt;/p&gt;

&lt;p&gt;Unfortunately, at the time of writing Ollama's support for MLX (Apple's CUDA, to oversimplify) was not 100% stable. As I've read that MLX can give a ~30% speed bump, I decided not to geek out and went for LM Studio :-)&lt;/p&gt;
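
&lt;p&gt;Nice bonus: LM Studio exposes an OpenAI-compatible local server, so scripting against it stays geeky enough. A minimal sketch, assuming the server runs on its default port and with an assumed model identifier (copy the real one from the UI):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch against LM Studio's OpenAI-compatible local server.
# Port 1234 is the LM Studio default; the model name is an assumption.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-8b",  # assumed model identifier
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;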

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwovyifn96z4uyukdoq8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwovyifn96z4uyukdoq8s.png" alt="Image description" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;20 tokens/second on DeepSeek Qwen3-8B, that's not bad.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Moving (back) to Mac &amp; setting up CEH Labs</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Thu, 29 May 2025 18:53:45 +0000</pubDate>
      <link>https://dev.to/hanswillem/moving-back-to-mac-setting-up-ceh-labs-9il</link>
      <guid>https://dev.to/hanswillem/moving-back-to-mac-setting-up-ceh-labs-9il</guid>
      <description>&lt;p&gt;After years, I am moving back to a Mac: a MacBook Air M4, as it has unified memory. Let's see what the output is.&lt;/p&gt;

&lt;p&gt;Gut feeling: sooner or later I will move to a separate server with a high-end NVIDIA card.&lt;/p&gt;

&lt;p&gt;The first challenge was getting the labs for my Certified Ethical Hacker training up and running. Via O'Reilly I've started the course by Omar Santos (&lt;a href="https://www.oreilly.com/search/?q=Omar%20Santos&amp;amp;rows=10" rel="noopener noreferrer"&gt;https://www.oreilly.com/search/?q=Omar%20Santos&amp;amp;rows=10&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I forgot that Apple moved from x86 to ARM during the years we got kids, so I couldn't use the standard VirtualBox images. Reading a bit on it, I stumbled upon some horror stories about getting Kali up and running on Apple silicon... however, for me the install went smoothly. I used this blog as a reference: &lt;a href="https://kskroyal.com/kali-linux-virtualbox-apple-silicon/" rel="noopener noreferrer"&gt;https://kskroyal.com/kali-linux-virtualbox-apple-silicon/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I'm happy to have a smooth setup for the labs. Next step: see if WebSploit is able to run &amp;amp; get the coding tools on the main macOS image up and running.&lt;/p&gt;

</description>
      <category>mac</category>
      <category>ai</category>
      <category>ollama</category>
    </item>
    <item>
      <title>Setting up my home lab</title>
      <dc:creator>Hanswillem</dc:creator>
      <pubDate>Mon, 05 May 2025 13:53:57 +0000</pubDate>
      <link>https://dev.to/hanswillem/setting-up-my-home-lab-2fk5</link>
      <guid>https://dev.to/hanswillem/setting-up-my-home-lab-2fk5</guid>
      <description>&lt;p&gt;The Christmas holiday period always brings some opportunity to sneak in time for hobby projects. Either because the rest of the family is still sleeping in during the morning, or by just sitting with the laptop next to the kids while they watch a Christmas movie.&lt;/p&gt;

&lt;p&gt;During the recent year-end period there was a lot of excitement about the DeepSeek model, released open source, so I decided to give running it locally a try.&lt;br&gt;
Unfortunately I no longer have the state-of-the-art hardware I had as a kid (3dfx Voodoo in SLI), nor did I keep running everything on MacBooks and Mac minis as during my student time. Currently all I have running is a not-that-state-of-the-art Windows laptop... so I knew the token processing times wouldn't be great, but that shouldn't spoil the fun.&lt;/p&gt;

&lt;p&gt;First, setting up the environment. From my ethical hacker lab I was already used to Oracle VirtualBox (with Kali), so I continued in that space.&lt;br&gt;
As Kali was a bit too specialized for me, I decided to spin up an Ubuntu VM and followed Pavan's tutorial:&lt;br&gt;
&lt;a href="https://dev.to/pavanbelagatti/run-deepseek-r1-locally-for-free-in-just-3-minutes-1e82"&gt;https://dev.to/pavanbelagatti/run-deepseek-r1-locally-for-free-in-just-3-minutes-1e82&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setting up went fast, although there were, as usual, some extra libraries to upgrade. I also made the rookie mistake of giving the VM insufficient disk space :-D&lt;/p&gt;

&lt;p&gt;First impression: slooooooooooooooooooooooooooooooooooow, but also what fun to use!&lt;br&gt;
I hadn't had real experience with a reasoning model before; I liked it a lot.&lt;/p&gt;

&lt;p&gt;Planned three homelabs as follow-up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using Python scripts to call the local LLM (update May: done, see next blog post and the sketch below)&lt;/li&gt;
&lt;li&gt;Play around with different LLMs to measure performance&lt;/li&gt;
&lt;li&gt;Install Stable Diffusion to create pictures for this blog post (if the hardware allows it...)&lt;/li&gt;
&lt;/ul&gt;
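
&lt;p&gt;As a teaser for the first follow-up: calling the local model from Python comes down to something like this minimal sketch, assuming Ollama's default REST endpoint and an assumed model tag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch: call a local model through Ollama's REST API.
# Model tag and endpoint are assumptions; check "ollama list" for yours.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:8b",  # assumed model tag
        "prompt": "Why is the sky blue?",
        "stream": False,            # one JSON blob instead of a stream
    },
    timeout=300,                    # local generation can be slow
)
print(response.json()["response"])
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;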

</description>
      <category>vibecoding</category>
      <category>python</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
