
Increase Productivity with Neural DSL v0.2.4: Automatic Shape Propagation Explained

Explore how Neural DSL’s automatic shape propagation catches dimension errors pre-runtime, alongside fixes that make deep learning development smoother.


Hey Dev.to folks! 🏊🏽‍♂️

I’ve been pouring my heart into Neural DSL, a domain-specific language (DSL) for crafting, training, and debugging neural networks without the usual headaches.

Our latest drop, v0.2.4 (March 23, 2025), is live, and the killer feature this time is Automatic Shape Propagation.

It’s like a pre-flight check for your tensor shapes, catching mismatches before they crash your runtime. Let’s unpack this, plus some other goodies from the update.


🌟 Automatic Shape Propagation: No More Shape Guessing

Ever debugged a RuntimeError: size mismatch at 2 AM? Me too.

Neural DSL’s ShapePropagator now auto-tracks tensor shapes through every layer, flagging issues before you hit run.

It’s baked into v0.2.4 and makes defining networks like this a breeze:

network MNISTClassifier {
  input: (28, 28, 1)  # Channels-last
  layers:
    Conv2D(filters=32, kernel_size=(3,3))  # Shape: (26, 26, 32)
    MaxPooling2D(pool_size=(2,2))          # Shape: (13, 13, 32)
    Flatten()                              # Shape: (5408)
    Dense(units=128)                       # Shape: (128)
    Output(units=10, activation="softmax") # Shape: (10)
  loss: "sparse_categorical_crossentropy"
}
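
Under the hood this is just per-layer shape arithmetic. Here’s a minimal Python sketch of that bookkeeping, assuming "valid" padding and stride 1 (my own illustration, not Neural’s actual ShapePropagator API):

def conv2d_shape(shape, filters, kernel_size):
    # 'valid' padding, stride 1: each spatial dim shrinks by (kernel - 1)
    h, w, _ = shape
    kh, kw = kernel_size
    return (h - kh + 1, w - kw + 1, filters)

def maxpool2d_shape(shape, pool_size):
    # Non-overlapping pooling floor-divides each spatial dim
    h, w, c = shape
    ph, pw = pool_size
    return (h // ph, w // pw, c)

def flatten_shape(shape):
    # Collapse everything into one dim; this number is also the
    # in_features a following Dense layer must match
    n = 1
    for d in shape:
        n *= d
    return (n,)

shape = (28, 28, 1)                                          # input
shape = conv2d_shape(shape, filters=32, kernel_size=(3, 3))  # (26, 26, 32)
shape = maxpool2d_shape(shape, pool_size=(2, 2))             # (13, 13, 32)
shape = flatten_shape(shape)                                 # (5408,)
print(shape)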

Run neural visualize mnist.neural --format html, and you get an interactive shape flow diagram.

No more manual math or surprise errors: v0.2.4 also fixed the in_features calculation (test test_model_forward_flat_input) so shapes are computed before propagation overwrites them.

It’s a lifesaver for complex architectures.
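
To see what the propagator saves you from, here’s a hypothetical hand-written PyTorch equivalent of the model above; the in_features=5408 on the first Linear layer is the number you’d otherwise compute by hand (my own sketch, not Neural’s generated code):

import torch
import torch.nn as nn

# Hand-written PyTorch equivalent of the MNISTClassifier above.
# in_features=5408 must equal 13 * 13 * 32 from the conv/pool stack.
model = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3),  # (1, 28, 28) -> (32, 26, 26)
    nn.MaxPool2d(kernel_size=2),                               # -> (32, 13, 13)
    nn.Flatten(),                                              # -> (5408,)
    nn.Linear(in_features=5408, out_features=128),
    nn.Linear(in_features=128, out_features=10),
)

x = torch.randn(1, 1, 28, 28)  # channels-first batch, as PyTorch expects
print(model(x).shape)          # torch.Size([1, 10])

Get one number wrong in that Linear layer and you only find out at runtime; the DSL version flags it before you run.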


🤏🏽 Other v0.2.4 Wins

Shape propagation shines, but the release also polishes rough edges:

  • Conv2D Fix (#427): PyTorch now handles channels-first input (None, 1, 28, 28) correctly, and TensorFlow data loader support was added. Vision models just work (see the layout sketch after this list).

  • Training Stability (#428): Swapped None loaders for mocked ones, added precision metrics, and optimized device selection (execution_optimization picks CPU/GPU automatically).

  • Optimizer Tests (#429): New MockDataset and MockDataLoader ensure edge cases don’t slip through (a rough sketch follows below).
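
As a quick illustration of that layout difference (my own example, not Neural’s internal conversion code):

import torch
import torch.nn as nn

# PyTorch convolutions expect channels-first input: (N, C, H, W).
# A TF-style channels-last batch (N, 28, 28, 1) must be permuted first,
# or Conv2d complains about the channel count.
conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)

x_channels_last = torch.randn(4, 28, 28, 1)             # TensorFlow layout
x_channels_first = x_channels_last.permute(0, 3, 1, 2)  # -> (4, 1, 28, 28)

print(conv(x_channels_first).shape)  # torch.Size([4, 32, 26, 26])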

These tackle core pain points (shape mismatches and debugging woes) straight from our Criticality vs. Impact table.
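
For the curious, a mocked loader for those optimizer edge-case tests might look something like this (my own hypothetical sketch; Neural’s actual MockDataset and MockDataLoader live in its test suite and may differ):

import torch
from torch.utils.data import Dataset, DataLoader

class MockDataset(Dataset):
    # Deterministic fake MNIST-like data, so tests never depend on
    # downloads, disk I/O, or randomness.
    def __init__(self, n=8):
        self.x = torch.zeros(n, 1, 28, 28)
        self.y = torch.zeros(n, dtype=torch.long)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(MockDataset(), batch_size=4)
for xb, yb in loader:
    print(xb.shape, yb.shape)  # torch.Size([4, 1, 28, 28]) torch.Size([4])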


🤖 Get Started

Clone it, play with it, break it:

git clone https://github.com/Lemniscate-world/Neural.git
cd Neural
pip install -r requirements.txt
neural run examples/mnist.neural --backend pytorch

 👾 Join Us

Bugs linger (e.g., TensorFlow loader validation), but that’s where you come in.

Star us on GitHub, hit up Discord.

Your feedback drives this.

Comment below or ping me on Twitter @NLang4438.

Let’s make deep learning less painful together!

Full Changelog: v0.2.4
