DEV Community

Osman
Building a 2D Lighting System in PyGame: What Broke, What Worked, and What I Learned

I didn’t start this project aiming to build a perfect lighting engine.
I was simply curious about how real-time lighting and shadows actually work in practice.

*(Image: first_ver)*

To keep things understandable, I intentionally began with a very raw setup: a purely black-and-white scene. No textures, no colors, no visual polish — just solid silhouettes and a single light source. Working this way removed distractions and forced me to focus entirely on geometry and behavior. If something was wrong, it showed up immediately.

This minimal setup also made it easier to reason about cause and effect. When a shadow broke, I knew it was because of geometry or math — not shaders, blending modes, or visual noise hiding the problem.


References That Shaped This Project

While building this system, I also explored existing community experiments and references around 2D lighting in Pygame, including the official Pygame Wiki’s Shadow Effects examples and this YouTube video. Although these implementations take different approaches, reviewing them helped validate core ideas around ray casting, visibility, and the practical challenges of precision and performance in real-time lighting.

This approach was also influenced by studying existing visibility-based lighting experiments, especially projects like Sight & Light, which frame lighting as a geometry and visibility problem rather than a visual effect. Seeing these ideas explained visually helped reinforce the decision to keep my own system as minimal and transparent as possible.


When “It Looks Right” Isn’t Enough

When I switched from abstract shapes to real image silhouettes, the results almost looked correct. Rays were casting, shadows were forming, and shapes reacted to light in a believable way. But the errors were impossible to ignore: tiny gaps, jagged edges, and incorrect intersections stood out instantly.

*(Image: bugs_ver)*

That’s when it became clear that visual correctness can be deceptive. In graphics programming, something can look acceptable while being mathematically wrong underneath, and those issues always surface once the system is pushed a little further.

One of my early mistakes was assuming that casting rays directly toward segment endpoints was sufficient:

```python
import math

angles = []
for seg in segments:
    for p in (seg["a"], seg["b"]):
        a = math.atan2(p["y"] - ly, p["x"] - lx)
        angles.append(a)
```

The fix was to add a small ±epsilon offset around every angle:

```python
angles.extend([a - 0.00001, a + 0.00001])
```

That tiny offset prevented rays from slipping between edges or collapsing due to floating point precision. A microscopic change in math made a very visible difference on screen.
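Putting the two snippets together, the full angle-collection step might look like the minimal sketch below. The function name `collect_ray_angles` and the `eps` default are my own; `segments`, `lx`, and `ly` follow the dict-based format used above.

```python
import math

def collect_ray_angles(segments, lx, ly, eps=1e-5):
    """Collect candidate ray angles from light (lx, ly): one ray
    aimed at each segment endpoint, plus two epsilon-offset rays
    that graze just past the corner instead of slipping between
    edges due to floating-point precision."""
    angles = []
    for seg in segments:
        for p in (seg["a"], seg["b"]):
            a = math.atan2(p["y"] - ly, p["x"] - lx)
            angles.extend([a - eps, a, a + eps])
    return angles

# One segment contributes 2 endpoints x 3 angles = 6 candidates.
angles = collect_ray_angles(
    [{"a": {"x": 10, "y": 0}, "b": {"x": 10, "y": 10}}], 0, 0
)
```

Sorting these angles before casting is what later produces a clean visibility polygon, since the hit points come out in winding order.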


Geometry Doesn’t Tolerate Approximations

Working with rays and line segments forced precision. Parallel edges, floating-point inaccuracies, and division-by-zero situations weren’t rare edge cases; they showed up constantly. Even extremely small angle differences could noticeably affect shadow edges.

Keeping everything monochrome helped expose these problems. Without color or shading to soften the result, the relationship between geometry and light became very explicit. Every mathematical mistake translated directly into a visible artifact.

One of the clearest examples of this problem appeared in the ray–segment intersection logic:

```python
cross = r_dx * s_dy - r_dy * s_dx
if abs(cross) < 1e-8:
    return None  # treated as parallel
```

In theory, this check filters out parallel lines. In practice, floating-point imprecision meant that almost-parallel rays were also discarded. This caused entire light rays to vanish, producing visible gaps in shadows that were hard to diagnose at first.
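For context, a complete ray–segment intersection in this style might look like the sketch below. The variable names mirror the snippet above, and the `1e-8` cutoff is the same one that caused the near-parallel dropouts; the function itself is my reconstruction, not the project's exact code.

```python
def ray_segment_intersect(r_px, r_py, r_dx, r_dy, seg):
    """Intersect a ray (origin r_p, direction r_d) with a segment.
    Returns (x, y, t) where t is the ray parameter, so the caller
    can keep the closest hit; returns None on a miss."""
    s_px, s_py = seg["a"]["x"], seg["a"]["y"]
    s_dx = seg["b"]["x"] - s_px
    s_dy = seg["b"]["y"] - s_py

    cross = s_dx * r_dy - s_dy * r_dx
    if abs(cross) < 1e-8:
        return None  # parallel, or nearly so

    # T2 in [0, 1] means the hit lies within the segment.
    T2 = (r_dx * (s_py - r_py) + r_dy * (r_px - s_px)) / cross
    if r_dx != 0:  # divide by whichever ray component is nonzero
        T1 = (s_px + s_dx * T2 - r_px) / r_dx
    else:
        T1 = (s_py + s_dy * T2 - r_py) / r_dy

    if T1 < 0 or not (0 <= T2 <= 1):
        return None  # behind the ray origin, or off the segment
    return (r_px + r_dx * T1, r_py + r_dy * T1, T1)
```

Loosening the cutoff, or handling the near-parallel case explicitly instead of discarding it, is what stops whole rays from vanishing.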


Another fragile point was here:

```python
T1 = (s_px + s_dx * T2 - r_px) / r_dx
```

When r_dx became very small, the result could explode numerically or fall into a division-by-zero path. What looked like a minor numerical instability ended up becoming a very obvious lighting glitch on screen.
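One way to sidestep that division entirely (a sketch of the idea, not the project's final code) is to derive T1 along whichever ray component is larger in magnitude; for any valid direction at least one of them is safely nonzero:

```python
def compute_T1(r_px, r_py, r_dx, r_dy, s_px, s_py, s_dx, s_dy, T2):
    """Derive the ray parameter T1 from T2, dividing by the larger
    ray component so a near-zero r_dx can never blow the result up
    or fall into a division-by-zero path."""
    if abs(r_dx) >= abs(r_dy):
        return (s_px + s_dx * T2 - r_px) / r_dx
    return (s_py + s_dy * T2 - r_py) / r_dy
```

Both branches compute the same point along the ray; picking the better-conditioned one just keeps the arithmetic stable.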


A subtler but equally painful issue appeared when I realized I hadn’t defined any boundary for the world itself.

Early versions of the system had no screen borders acting as obstacles. As a result, rays that didn’t hit any geometry would continue infinitely, producing glitchy shadow shapes and unpredictable lighting artifacts near the edges of the screen.

The fix wasn’t visual; it was architectural. I treated the screen itself as geometry and added border segments so every ray always had something to intersect with.

```python
def screen_border_segments(self):
    """
    Creates segments for the screen borders
    so light does not escape outside the screen.
    """
    w, h = self.width, self.height
    return [
        {"a": {"x": 0, "y": 0}, "b": {"x": w, "y": 0}},
        {"a": {"x": w, "y": 0}, "b": {"x": w, "y": h}},
        {"a": {"x": w, "y": h}, "b": {"x": 0, "y": h}},
        {"a": {"x": 0, "y": h}, "b": {"x": 0, "y": 0}},
    ]
```

Once the borders became part of the segment system, shadow calculations stabilized immediately. The glitches disappeared not because the math changed, but because the world was finally fully defined.
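The guarantee rests on the borders forming a closed loop. A standalone sanity check of that property (the free-function form below is my own restating of the method above):

```python
def border_segments(w, h):
    """The same four screen-border segments as above, written as
    a free function so the closure property can be checked."""
    return [
        {"a": {"x": 0, "y": 0}, "b": {"x": w, "y": 0}},
        {"a": {"x": w, "y": 0}, "b": {"x": w, "y": h}},
        {"a": {"x": w, "y": h}, "b": {"x": 0, "y": h}},
        {"a": {"x": 0, "y": h}, "b": {"x": 0, "y": 0}},
    ]

segs = border_segments(800, 600)
# Each border ends exactly where the next begins, so the loop is
# closed and no ray cast from inside the screen can escape.
for i, seg in enumerate(segs):
    assert seg["b"] == segs[(i + 1) % len(segs)]["a"]
```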


Performance Becomes a Problem Before You Expect It

Early on, performance felt irrelevant. With a small number of rays and simple silhouettes, everything ran smoothly. But as accuracy increased (more rays, more segments, tighter outlines), the cost became obvious very quickly.

This forced me to think more carefully about how often calculations should run, how much precision was actually necessary, and where simplifications could be made without breaking the illusion. Building on a minimal visual base made these trade-offs easier to evaluate without guessing.
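One concrete version of "how often should calculations run" is simply skipping the recomputation when nothing visibility-relevant has changed between frames. This is a hypothetical sketch (the `LightCache` class and `scene_version` counter are my own names, not the engine's actual code):

```python
class LightCache:
    """Recompute the visibility polygon only when the light
    position or the scene geometry actually changes."""
    def __init__(self, compute_fn):
        self.compute_fn = compute_fn  # the expensive ray-casting pass
        self.key = None
        self.polygon = None
        self.recomputes = 0

    def get(self, light_pos, scene_version):
        key = (light_pos, scene_version)
        if key != self.key:  # something changed: pay the full cost
            self.key = key
            self.polygon = self.compute_fn(light_pos)
            self.recomputes += 1
        return self.polygon  # otherwise reuse last frame's result

cache = LightCache(lambda pos: [pos])  # stand-in for real ray casting
cache.get((100, 100), 0)
cache.get((100, 100), 0)  # light unchanged: cached polygon reused
cache.get((120, 100), 0)  # light moved: recompute
```

For a static scene with a mostly-still light, this alone can eliminate the majority of frames' ray-casting cost.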


Code Structure Makes Debugging Possible

At some point, I stopped touching visuals entirely and focused only on architecture. I reorganized the code so the flow from input → geometry → lighting → rendering was easier to follow. The output barely changed, but debugging suddenly became manageable.

In systems where visuals emerge from math, structure isn’t just about clean code; it’s what allows you to reason about complex behavior without getting lost.
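As an illustration of that flow (all names here are hypothetical stand-ins, not the engine's real API), the frame loop reduces to four clearly separated stages, each consuming only the previous stage's output:

```python
def run_frame(state):
    """One frame of the input -> geometry -> lighting -> rendering
    flow. Because each stage only sees the previous stage's output,
    a bug can be pinned down to exactly one stage."""
    light = read_input(state)               # input: where is the light?
    segments = build_geometry(state)        # geometry: silhouettes + borders
    polygon = cast_light(light, segments)   # lighting: visibility polygon
    return render(polygon)                  # rendering: fill the lit region

# Minimal stand-ins so the pipeline runs end to end:
def read_input(state): return state["light"]
def build_geometry(state): return state["segments"]
def cast_light(light, segments): return [light] + [s["a"] for s in segments]
def render(polygon): return len(polygon)

frame = run_frame({"light": (0, 0),
                   "segments": [{"a": {"x": 1, "y": 1}, "b": {"x": 2, "y": 2}}]})
```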


Visual Debugging Is Its Own Skill

Debugging this project felt very different from typical logic bugs. Print statements rarely helped, and stepping through code didn’t always explain what I was seeing on screen. Instead, I relied heavily on visual debugging: drawing rays, highlighting intersections, and watching how silhouettes reacted in real time.

*(Image: final_ver)*

Starting from a high-contrast black-and-white base made this incredibly effective. If something was wrong, there was nowhere for it to hide.
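A debug overlay in this spirit can be as simple as collecting every cast ray as a start/end pair. This is a sketch (the `debug_rays` helper is hypothetical); in the real loop each pair would go straight to `pygame.draw.line`, and each marker to `pygame.draw.circle`:

```python
def debug_rays(light, hits):
    """Turn the light position and each ray's (x, y, t) hit into
    (start, end) line pairs for drawing the rays, plus the bare
    hit points for drawing small intersection markers."""
    lines = [(light, (h[0], h[1])) for h in hits]
    markers = [(h[0], h[1]) for h in hits]
    return lines, markers

lines, markers = debug_rays((0, 0), [(5.0, 0.0, 5.0), (0.0, 3.0, 3.0)])
```

Rendering these on the high-contrast scene makes a dropped or misdirected ray visible the instant it happens.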


A New Respect for “Simple” Features

Dynamic lighting often looks like a small feature in games. After building and integrating a real-time lighting system into my own game engine, I now understand why established engines invest so much effort into optimizing it — and why many games choose to fake lighting instead of simulating it accurately.

What appears simple on screen is usually supported by a surprising amount of geometry, math, and careful compromise. Seeing this firsthand while extending my engine made it clear that lighting is less about visual flair and more about managing complexity without breaking performance.


Closing Thoughts

This lighting system didn’t stay as an isolated experiment.

After stabilizing the math and architecture, I integrated it directly into my own game engine. That step forced me to think beyond the demo - how the system interacts with input, rendering order, performance constraints, and future features.

Seeing the lighting work inside a larger engine context changed how I evaluated it. Bugs that were tolerable in isolation became unacceptable, and architectural decisions suddenly mattered much more than visual tricks.

It reminded me that one of the most effective ways to learn complex systems is to strip them down to their core, study how they break, and rebuild understanding layer by layer.

While building this system, I also drew inspiration from existing visibility-based lighting experiments, especially Sight & Light by Marcus Møller. These resources didn’t provide direct solutions, but they strongly influenced how I approached the problem conceptually and helped shape my understanding of light as a geometry and visibility challenge rather than just a visual effect.

And if you’re curious how this experiment evolved inside a game engine, you can take a look at the final integrated version here:

GitHub: Osman-Kahraman / PyGameEngine

This project helps you create new games with helpful tools built on PyGame.

PyGameEngine

PyGameEngine is a custom game engine built in Python, focusing on modularity, reusability, and clean architectural design (and, yes, I want to suffer in PyGame). It provides a structured foundation for developing 2D games while keeping core engine logic separate from gameplay rules.

This repository also serves as a long-term reference for game engine design decisions and performance considerations in Python.


Features

  • Modular Architecture – Engine components such as Animation, Camera, Light, Physics, and UI are easy to extend and reuse
  • Core Game Mechanics – Physics, event handling, and core engine functions
  • Structured Gameplay Logic – Clear separation between engine and game logic
  • Built with PyGame – Uses PyGame for rendering, input, and timing

Requirements

| Library | Version |
|---------|---------|
| pygame  | 2.6.1   |
| Pillow  | 10.4.0  |
| PyQt5   | 5.15.11 |

Installation

Clone the repository and navigate into the project directory:

```shell
git clone https://github.com/Osman-Kahraman/PyGameEngine.git
cd PyGameEngine
pip install -r requirements.txt
```

Project Structure


PyGameEngine/
├── game_1/                     #
