Hey, I just wanted to introduce an open-source project I've been working on -- SigKit. SigKit is basically a toolbox of building blocks for anyone who wants to play with real-world digitized signals and machine learning without stitching together a dozen custom scripts. Under the hood you get:
- Core types like `Signal`, `Impairment`, and `Modem`, so you think in baseband, not in arrays of floats.
- NumPy operations for things like AWGN, phase/frequency shifts, filtering, and SNR/BER calculators (there's a rough sketch of this right after the list).
- PyTorch `Transforms` that slot right into your `Compose` pipeline, so adding noise or fading to every sample in your data loader is a one-liner (see the second sketch below).
- A PyTorch Lightning training + evaluation pipeline, complete with a pretrained modulation classifier. Training your own custom ML model is as simple as running a script.
- Dataset classes and synthetic signal generators so you never have to hand-craft a CSV of complex IQ samples.
- (WIP) GNURadio blocks wrapping all of the above, for dropping into a live SDR flowgraph.
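
To make the "NumPy operations" bullet concrete, here's roughly the kind of thing those helpers wrap. This is a plain-NumPy sketch of adding AWGN at a target SNR and checking the result, not SigKit's actual function signatures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseband QPSK: map random bit pairs onto the four unit-energy constellation points.
n_symbols = 1024
bits = rng.integers(0, 2, size=(n_symbols, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Impair with AWGN at a target SNR: size the noise power relative to the signal power.
target_snr_db = 10.0
signal_power = np.mean(np.abs(symbols) ** 2)
noise_power = signal_power / (10 ** (target_snr_db / 10))
noise = np.sqrt(noise_power / 2) * (
    rng.standard_normal(n_symbols) + 1j * rng.standard_normal(n_symbols)
)
received = symbols + noise

# Sanity-check the realized SNR of the impaired trace.
measured_snr_db = 10 * np.log10(signal_power / np.mean(np.abs(noise) ** 2))
print(f"measured SNR ~= {measured_snr_db:.1f} dB")
```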
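And here's the shape of the `Compose`-style idea from the PyTorch `Transforms` bullet. The class names below are illustrative stand-ins rather than SigKit's real API; the point is that an impairment is just a callable you chain like any torchvision transform:

```python
import torch

class AddAWGN:
    """Illustrative transform: add complex AWGN at an SNR drawn from a range."""
    def __init__(self, snr_db_range=(0.0, 20.0)):
        self.snr_db_range = snr_db_range

    def __call__(self, iq: torch.Tensor) -> torch.Tensor:
        # iq: complex tensor of baseband samples, shape (..., n_samples)
        snr_db = torch.empty(1).uniform_(*self.snr_db_range).item()
        sig_power = iq.abs().pow(2).mean()
        noise_power = sig_power / (10 ** (snr_db / 10))
        noise = torch.sqrt(noise_power / 2) * torch.complex(
            torch.randn_like(iq.real), torch.randn_like(iq.real)
        )
        return iq + noise

class Compose:
    """Minimal torchvision-style Compose for chaining signal transforms."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

pipeline = Compose([AddAWGN(snr_db_range=(5.0, 15.0))])
iq = torch.view_as_complex(torch.randn(1024, 2))  # stand-in baseband IQ trace
noisy = pipeline(iq)
```

Because each impairment is parameterized (an SNR range here), the data loader sees a differently-impaired copy of each example every epoch, which is a big part of how a classifier ends up generalizing beyond clean simulated data.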
Where it can be used
- Research labs & coursework: Teaching digital-comm concepts? SigKit turns abstract equations into hands-on Jupyter demos—generate, impair, plot, repeat.
- Modulation classification: Training a neural net that actually generalizes over-the-air (instead of “works on simulated data only”).
- SDR prototyping: Need to bounce a signal through realistic channel models before you hit the hardware? Plug in Rayleigh fading, resampling or IQ-imbalance transforms.
- Hackathons & demos: Spin up a quick notebook that shows off “live” impairments and classification at different SNRs—no C++ or gnuradio-block coding required.
- Synthetic data generation: When you need thousands of labeled IQ traces for ML, but you don’t have a tone-generator farm or unlimited SDRs.
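
For that last use case, here's a minimal sketch of what "thousands of labeled IQ traces" can look like as a PyTorch dataset. The class below is an illustrative stand-in, not SigKit's own dataset class; SigKit's dataset classes and generators package this kind of thing for you:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class SyntheticIQDataset(Dataset):
    """Illustrative stand-in: labeled BPSK/QPSK traces impaired with AWGN at random SNRs."""
    CLASSES = ("bpsk", "qpsk")

    def __init__(self, n_examples=1000, n_symbols=128, snr_db_range=(0.0, 20.0), seed=0):
        self.n_examples = n_examples
        self.n_symbols = n_symbols
        self.snr_db_range = snr_db_range
        self.seed = seed

    def __len__(self):
        return self.n_examples

    def __getitem__(self, idx):
        rng = np.random.default_rng(self.seed + idx)  # deterministic per example
        label = int(rng.integers(len(self.CLASSES)))
        if self.CLASSES[label] == "bpsk":
            symbols = (2.0 * rng.integers(0, 2, self.n_symbols) - 1.0) + 0j
        else:  # qpsk
            bits = rng.integers(0, 2, (self.n_symbols, 2))
            symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

        # Both constellations have unit power, so the noise power follows from the SNR alone.
        snr_db = rng.uniform(*self.snr_db_range)
        noise_power = 1.0 / (10 ** (snr_db / 10))
        noise = np.sqrt(noise_power / 2) * (
            rng.standard_normal(self.n_symbols) + 1j * rng.standard_normal(self.n_symbols)
        )
        iq = symbols + noise

        # Stack I and Q as two real channels, the layout most conv nets expect.
        x = torch.from_numpy(np.stack([iq.real, iq.imag]).astype(np.float32))
        return x, label
```

Wrap it in a `DataLoader` and you have an effectively endless supply of labeled traces without touching an SDR.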
In short, if you’ve ever wished for a toolkit that treats signals more like images in PyTorch—letting you compose transforms, datasets, metrics and models in one ecosystem—SigKit has your back.