A self-taught C programmer's call for experienced kernel developers to transform a proof-of-concept into production reality
The Honest Starting Point
I'm HejHdiss, and I need to be upfront: I'm a self-taught C programmer who knows pure C, some standard libraries, and some POSIX C—but I have zero deep kernel development experience.
Yet here I am, calling for help on a Linux kernel module project.
Why? Because I have a vision for something that doesn't exist yet, and I've taken it as far as my knowledge allows. Now I need experts to help make it real.
What I've Built (With AI's Help)
I created NeuroShell LKM (github.com/hejhdiss/lkm-for-ai-resource-info)—a Linux kernel module that exposes detailed hardware information through /sys/kernel/neuroshell/. It detects CPUs, memory, NUMA topology, GPUs (NVIDIA, AMD, Intel), and AI accelerators (TPUs, NPUs, etc.).
Full disclosure: The kernel-level code was largely generated by Claude (Anthropic's AI). I wrote the prompts, validated the functionality, and tested the modules on real hardware. It works. But I know my limits—this is a proof-of-concept, not production-ready code.
What It Does Now
$ cat /sys/kernel/neuroshell/system_summary
=== NeuroShell System Summary ===
CPU:
  Online: 8
  Total: 8
Memory:
  Total: 16384 MB
NUMA:
  Nodes: 1
GPUs:
  Total: 1
  NVIDIA: 1
Accelerators:
  Count: 0
It's basic. It works. But it's nowhere near what's actually needed.
The Real Vision: NeuroShell OS Boot Design
This module isn't just about querying hardware info. It's the foundation for NeuroShell OS—a complete rethinking of how operating systems boot and configure themselves for AI workloads.
I wrote about the full vision here: NeuroShell OS: Rethinking Boot-Time Design for AI-Native Computing
What's Missing (The Part I Can't Build Alone)
The article describes a boot-time system that:
- Discovers AI hardware during early boot (before userspace even starts)
- Dynamically allocates resources based on what hardware is detected
- Integrates with bootloaders to make hardware-aware decisions before the kernel even fully loads
- Optimizes memory topology specifically for tensor operations and AI workloads
- Provides hardware-aware scheduler hooks that understand GPU/TPU/NPU topology
- Handles hotplug events for dynamic hardware changes in data center environments
- Exposes real-time performance metrics for AI framework optimization
This current module? It just reads some PCI devices and exposes sysfs attributes. It's a toy compared to what the vision requires.
Why I Need Kernel Developers
Here's what I honestly cannot do:
1. Deep Kernel Integration
I don't know how to integrate with the bootloader, init systems, or early-boot sequences. I can write C functions, but I don't understand kernel subsystems well enough to hook into the right places at the right time.
2. Performance & Concurrency
The current code has no locking mechanisms. It's not SMP-safe. I know that's bad, but I don't know enough about kernel synchronization primitives to fix it properly.
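To make the gap concrete, here is a minimal sketch of the kind of change I think is needed, assuming a mutex-protected cache of scan results. The names (neuroshell_counts, neuroshell_lock, gpu_count_show) are mine for illustration, not the module's actual symbols.

/*
 * Hypothetical sketch, not the module's current code: a mutex guarding
 * cached counts that a rescan path writes and sysfs readers consume.
 */
#include <linux/kobject.h>
#include <linux/mutex.h>
#include <linux/sysfs.h>

struct neuroshell_counts {
    unsigned int gpus;
    unsigned int accelerators;
};

static struct neuroshell_counts counts;  /* written by rescans, read by sysfs */
static DEFINE_MUTEX(neuroshell_lock);    /* serializes writers and readers */

static ssize_t gpu_count_show(struct kobject *kobj,
                              struct kobj_attribute *attr, char *buf)
{
    unsigned int gpus;

    mutex_lock(&neuroshell_lock);
    gpus = counts.gpus;
    mutex_unlock(&neuroshell_lock);

    return sysfs_emit(buf, "%u\n", gpus);
}

Whether a mutex, a spinlock, or RCU is the right choice here is exactly the kind of question I need help answering.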
3. Security Hardening
There are buffer overflow risks, no input validation, and probably a dozen other security issues I'm not even aware of. I need someone who understands kernel security to audit and fix this.
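One example of what I mean, sketched under my own assumptions rather than taken from the module: building a multi-line attribute with sysfs_emit_at(), which tracks the offset and refuses to write past the one-page sysfs buffer, instead of an unchecked sprintf() into it.

/* Hypothetical sketch: bounded formatting for a sysfs show callback. */
#include <linux/cpumask.h>
#include <linux/kobject.h>
#include <linux/nodemask.h>
#include <linux/sysfs.h>

static ssize_t system_summary_show(struct kobject *kobj,
                                   struct kobj_attribute *attr, char *buf)
{
    int len = 0;

    /* sysfs_emit_at() clamps every write to the sysfs page size, so the
     * summary can grow without overflowing the buffer. */
    len += sysfs_emit_at(buf, len, "=== NeuroShell System Summary ===\n");
    len += sysfs_emit_at(buf, len, "CPU:\n  Online: %u\n  Total: %u\n",
                         num_online_cpus(), num_possible_cpus());
    len += sysfs_emit_at(buf, len, "NUMA:\n  Nodes: %d\n", num_online_nodes());

    return len;
}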
4. Advanced Hardware APIs
I've barely scratched the surface of PCI enumeration. Real hardware introspection needs:
- PCIe topology mapping
- IOMMU configuration awareness
- Cache hierarchy details
- Thermal zone integration
- Power management state tracking
- SR-IOV virtual function detection
I don't even know what half of these APIs are called in the kernel.
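For context, the current detection is roughly at the level of the sketch below (my reconstruction, not the module's exact code): walking the PCI bus and matching on the display-controller base class. Everything on the list above goes well beyond this.

/* Hypothetical sketch of vendor-neutral GPU counting via PCI class codes. */
#include <linux/pci.h>

static unsigned int neuroshell_count_gpus(void)
{
    struct pci_dev *pdev = NULL;
    unsigned int gpus = 0;

    /* for_each_pci_dev() iterates every PCI device and handles refcounts. */
    for_each_pci_dev(pdev) {
        /* pdev->class holds 24 bits: base class, subclass, prog-if. */
        if ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
            gpus++;
    }

    return gpus;
}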
5. Production Best Practices
Kernel coding style, proper error handling, memory management patterns, module lifecycle management—I've read the documentation, but reading and truly understanding are different things.
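As one small example of the patterns I mean, here is a hedged sketch of the goto-based error unwinding that kernel code expects at module init. gpu_count_attr refers back to the show callback sketched earlier, and the names are illustrative.

/* Hypothetical sketch: each failure path releases exactly what was set up. */
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/sysfs.h>

static struct kobject *neuroshell_kobj;
static struct kobj_attribute gpu_count_attr = __ATTR_RO(gpu_count);

static int __init neuroshell_init(void)
{
    int ret;

    neuroshell_kobj = kobject_create_and_add("neuroshell", kernel_kobj);
    if (!neuroshell_kobj)
        return -ENOMEM;

    ret = sysfs_create_file(neuroshell_kobj, &gpu_count_attr.attr);
    if (ret)
        goto err_put_kobj;

    return 0;

err_put_kobj:
    kobject_put(neuroshell_kobj);
    return ret;
}

static void __exit neuroshell_exit(void)
{
    sysfs_remove_file(neuroshell_kobj, &gpu_count_attr.attr);
    kobject_put(neuroshell_kobj);
}

module_init(neuroshell_init);
module_exit(neuroshell_exit);
MODULE_LICENSE("GPL");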
What You'd Be Contributing To
This isn't just about fixing a kernel module. If you help with this, you're contributing to:
A New Class of Operating Systems
Traditional OS boot sequences were designed in the 1970s-1990s when "high-performance computing" meant mainframes and workstations. They weren't designed for:
- Multi-GPU training clusters
- Heterogeneous AI accelerators (GPUs + TPUs + NPUs)
- NUMA-aware tensor memory allocation
- Dynamic resource partitioning for ML workloads
NeuroShell OS reimagines this from the ground up.
Open Source AI Infrastructure
The AI industry is increasingly dominated by proprietary stacks. We need open-source infrastructure that's:
- Vendor-neutral (works with NVIDIA, AMD, Intel, custom accelerators)
- Community-driven
- Transparent and auditable
- Designed for modern AI workloads, not legacy compatibility
A Learning Opportunity
If you're a kernel developer who's interested in AI but hasn't dug into how AI frameworks interact with hardware, this is a chance to explore that intersection. The project sits right at the boundary of systems programming and AI infrastructure.
What Kind of Help I Need
Immediate Needs
- Code Review: Audit the existing module for bugs, security issues, and kernel style violations
- Architecture Guidance: Is this even the right approach? Should this be a kernel module, or something else entirely?
- Locking & Concurrency: Make it SMP-safe and handle concurrent access properly
- Error Handling: Add proper error paths and resource cleanup
Medium-Term Goals
- Advanced Hardware Detection: Implement deeper PCIe topology, IOMMU awareness, cache details
- Hotplug Support: React to dynamic hardware changes (a rough notifier sketch follows this list)
- Performance Optimization: Minimize overhead for frequent queries
- Testing Framework: Set up automated testing with different hardware configurations
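For the hotplug item above, my rough assumption of a starting point is a bus notifier on the PCI bus; neuroshell_schedule_rescan() is a made-up placeholder for whatever deferred rescan the real design ends up with.

/* Hypothetical sketch: get notified when PCI devices come and go. */
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/pci.h>

static int neuroshell_pci_notify(struct notifier_block *nb,
                                 unsigned long action, void *data)
{
    struct device *dev = data;

    switch (action) {
    case BUS_NOTIFY_ADD_DEVICE:
    case BUS_NOTIFY_DEL_DEVICE:
        dev_info(dev, "neuroshell: PCI topology changed, rescanning\n");
        neuroshell_schedule_rescan();  /* placeholder for a deferred rescan */
        break;
    }

    return NOTIFY_OK;
}

static struct notifier_block neuroshell_pci_nb = {
    .notifier_call = neuroshell_pci_notify,
};

/* At module init: bus_register_notifier(&pci_bus_type, &neuroshell_pci_nb);
 * At module exit: bus_unregister_notifier(&pci_bus_type, &neuroshell_pci_nb); */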
Long-Term Vision
- Bootloader Integration: Work with GRUB/systemd-boot to expose hardware info pre-kernel
- Init System Hooks: Integrate with systemd/OpenRC for early hardware configuration
- Scheduler Extensions: Hardware-aware CPU/GPU scheduling hints
- Memory Topology Optimization: NUMA-aware allocation for AI workloads
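On the NUMA item just above, one building block I assume would matter is simply knowing which node each accelerator hangs off. A minimal sketch using dev_to_node() from the driver core, with a made-up function name:

/* Hypothetical sketch: report each GPU's NUMA affinity. */
#include <linux/numa.h>
#include <linux/pci.h>

static void neuroshell_log_gpu_numa(void)
{
    struct pci_dev *pdev = NULL;

    for_each_pci_dev(pdev) {
        if ((pdev->class >> 16) != PCI_BASE_CLASS_DISPLAY)
            continue;

        /* dev_to_node() returns NUMA_NO_NODE when affinity is unknown. */
        dev_info(&pdev->dev, "GPU attached to NUMA node %d\n",
                 dev_to_node(&pdev->dev));
    }
}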
Why You Should Care
1. It's Legitimately Interesting
How many projects let you rethink fundamental OS design for emerging workloads? This isn't just "fix a bug" work—it's greenfield architecture.
2. Real-World Impact
AI infrastructure is a massive, growing field. Better boot-time hardware discovery and configuration could improve performance for researchers, engineers, and companies running AI workloads.
3. It's Honest
I'm not pretending this is polished production code. I'm being upfront about the limitations and asking for real expertise. No ego, no hidden agendas—just a vision and a request for help.
4. You'd Own Your Contributions
This is GPL v3. Your code stays yours. Your expertise gets proper credit. This is collaborative, not exploitative.
What Success Looks Like
Imagine a world where:
- A researcher spinning up a new AI training node doesn't manually configure CUDA, ROCm, and NUMA settings—the OS does it automatically at boot
- Data centers can hotplug GPUs and have the OS instantly recognize and allocate them without manual intervention
- AI frameworks get real-time hardware topology information without parsing /proc/cpuinfo and guessing
- Boot-time hardware discovery is fast, accurate, and vendor-neutral
That's the goal. This kernel module is step one.
How to Get Involved
You Have Complete Freedom
You don't need to contribute directly to my repository if you don't want to. Here are all your options:
- Fork and modify: Fork the repo and make it your own
- Create a new repo: Start fresh with your own implementation based on the concept
- Upload to your own space: Build your version and host it wherever you want
- Do whatever you want with it: It's GPL v3—take it in any direction you see fit
Just add a note about where your version came from. That's it.
I'm not territorial about this. If you can build a better version independently, please do. The goal is to get this concept working well, not to control who builds it.
Start Small
- Clone the repo: github.com/hejhdiss/lkm-for-ai-resource-info
- Review the code: Look at neuroshell_enhanced.c and see what needs fixing
- Open an issue: Point out bugs, security issues, or architectural problems
- Submit a PR: Even small fixes help build momentum
Go Big
- Join the design discussion: Read the NeuroShell OS article and share your thoughts
- Propose architecture changes: If the current approach is wrong, let's figure out the right one
- Implement advanced features: Take ownership of a subsystem (PCIe topology, NUMA, hotplug, etc.)
- Become a co-maintainer: If this resonates with you, help drive the project forward
Or Go Completely Independent
- Fork it: Make your own version with your own design decisions
- Rewrite it: If you think it should be built differently, build it differently
- Create something better: Use this as inspiration for your own superior implementation
The only thing I ask: acknowledge where the idea came from, even if your implementation is completely different.
Spread the Word
If you're not a kernel developer but know someone who is—especially someone interested in AI infrastructure—please share this.
The Ask
I'm asking for your expertise, not your charity. I've built what I could with the knowledge I have. Now I need people who actually know kernel development to take this seriously and help make it real.
If you're a kernel developer who:
- Cares about open-source infrastructure
- Is interested in AI/ML systems
- Wants to work on something novel and impactful
- Appreciates honest collaboration over ego
Please consider contributing.
Even if you can only spare a few hours to review the code and suggest improvements, that's valuable. Even if you just point me to the right kernel APIs or design patterns, that helps.
Final Thoughts
I could've stayed in my lane—stuck to C programs I fully understand, avoided kernel development entirely. But I saw a gap: AI infrastructure needs better boot-time hardware discovery, and nobody's building it.
So I did what I could. I learned enough to prototype the idea. I used AI to fill the knowledge gaps. I tested it on real hardware. It works—barely, but it works.
Now I need people smarter than me to make it work well.
If that's you, I hope you'll join me.
Project: github.com/hejhdiss/lkm-for-ai-resource-info
Vision: NeuroShell OS: Rethinking Boot-Time Design for AI-Native Computing
Author: HejHdiss (self-taught C programmer, kernel newbie, but committed to the vision)
Let's build AI-native infrastructure together.