Infrastructure Before Features: Building the Right Things First

In my first post on this project, I'd just finished the foundational layer: authentication, passkeys, 2FA, sessions, billing and account management. I said I was about to start on the actual product features.

Three weeks and 47 commits later, I've built more than I expected, but not quite what I predicted. I thought I'd be knee-deep in feeding logs by now. Instead, I ended up building out a chunk of the supporting infrastructure that the core tracking features will depend on: a contacts system, a full data import/export engine, a support ticket system with email integration, a polymorphic notes system, an admin panel, user preferences with timezone support, and comprehensive in-app documentation. Here's where things stand.

The data model

Before building any UI, I laid down the domain models for the core product. The Animal model is the centrepiece. To give you an idea of its shape, here are its casts (some omitted for brevity):

final class Animal extends Model implements HasNotes
{
    protected $casts = [
        // ...
        'nickname'               => 'string',
        'common_name'            => 'string',
        'genus'                  => 'string',
        'species'                => 'string',
        'sex'                    => AnimalSexEnum::class,
        'life_stage'             => AnimalLifeStageEnum::class,
        'status'                 => AnimalStatusEnum::class,
        'born_at'                => 'immutable_datetime',
        'acquired_at'            => 'immutable_datetime',
        'acquisition_source'     => 'string',
        'acquisition_contact_id' => 'string',
        'enclosure_id'           => 'string',
        'location'               => 'string',
        'is_colony'              => 'boolean',
        'population_estimate'    => 'integer',
        'feeding_interval_days'  => 'integer',
        'is_premolt'             => 'boolean',
        // ...
    ];
}

Note: yes, I cast all model properties, even strings, to ensure they are stored and handled consistently regardless of the underlying database type.

The fields reflect what hobbyist keepers actually need. There are four ways to identify an animal: nickname, common_name, genus and species. All four are nullable because different keepers work differently. Some people name every tarantula. Some only care about the Latin binomial. Some have "that sling from the expo" and won't know the species until after the next molt. The model doesn't force a convention. You use whichever combination makes sense for each animal.

sex, life_stage and status are backed by enums and are always set. born_at and acquired_at are optional dates because you often won't know exactly when an animal was born, but you'll usually remember roughly when you acquired it. acquisition_source is a free-text field for context like "Bristol Reptile Expo 2026", and acquisition_contact_id links to an optional contact record for the person or business you got it from. enclosure_id links to where the animal currently lives, and location is a free-text fallback for animals that aren't assigned to a tracked enclosure.

The is_colony flag with population_estimate handles isopod and springtail colonies where you're not tracking individuals. You don't count individual springtails. You just know you've got a tub of them and the population is roughly "lots". feeding_interval_days defaults to 7 and is_premolt flags animals that should be skipped at feeding time, because offering a cricket to a tarantula in premolt is a waste of a cricket and a stress for the spider.

The acquisition contact and enclosure relationships drove the build order for the whole app, which I'll get to shortly.

Life stages

The life stage enum covers the full range of invertebrate development:

enum AnimalLifeStageEnum: int
{
    case EGG      = 1;  // Universal starting point
    case NAUPLIUS = 2;  // Earliest larval form (crustaceans)
    case LARVA    = 3;  // Early mobile stage (insects)
    case PUPA     = 4;  // Transformation stage (follows larva)
    case SLING    = 5;  // Early post-hatch (arachnids)
    case NYMPH    = 6;  // Immature form (incomplete metamorphosis)
    case JUVENILE = 7;  // Growing stage
    case SUBADULT = 8;  // Nearly mature
    case ADULT    = 9;  // Fully mature
}

This is deliberately broad. Not every animal will pass through every stage, but covering nauplii for crustaceans, slings for tarantulas, nymphs for mantids and pupae for beetles means one enum handles the lot without needing per-taxon logic. A tarantula goes from egg to sling to juvenile to subadult to adult. A beetle goes from egg to larva to pupa to adult. The UI will guide users through the stages that make sense for their animal rather than showing all nine options every time.

Events

Animal events track the key moments in an animal's life:

enum AnimalEventTypeEnum: int
{
    case MOLTED            = 1;
    case STATUS_CHANGE     = 2;
    case LIFE_STAGE_CHANGE = 3;
    case SEX_CHANGE        = 4;
}

Each event records a type, optional notes, and an occurred_at timestamp. A molt event is the most common. You check on your tarantula and find a crumpled exoskeleton in the corner of the enclosure. You want to log that it happened, roughly when it happened (you might not have noticed for a day or two), and maybe note that it successfully went from 5th instar to 6th.

The animal status covers the outcomes we'd rather not think about but need to track: Active, Deceased, Sold, Rehomed, Escaped, Loaned. "Escaped" might seem dramatic, but anyone who keeps invertebrates knows that a determined tarantula or centipede will find a gap you didn't know existed.

Housing

Animals need somewhere to live. The housing system uses a rack-and-enclosure model:

final class Rack extends Model implements HasNotes
{
    // name, reference, rows, columns
    public function enclosures(): HasMany
    {
        return $this->hasMany(Enclosure::class);
    }
}

final class Enclosure extends Model implements HasNotes
{
    // name, reference, rack_id, row, column
    public function rack(): BelongsTo
    {
        return $this->belongsTo(Rack::class);
    }
}

A rack has a grid of rows and columns. Think of those IKEA Kallax shelving units that every keeper ends up buying: each cubby holds an enclosure. An enclosure can sit in a rack at a specific grid position, with a unique constraint on rack_id + row + column to prevent two enclosures occupying the same slot. Enclosures can also exist independently for standalone terrariums or display enclosures that aren't racked.

The migration indexes are designed around the queries I know I'll be running: user_id + status for "show me all my active animals", user_id + life_stage for filtering by maturity, and genus + species for taxonomic grouping.
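
As a rough sketch of what those constraints and indexes look like in migration form (column names follow the casts above; the ULID keys are my assumption, since the IDs are cast as strings):

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('enclosures', function (Blueprint $table): void {
    $table->ulid('id')->primary();
    $table->foreignUlid('rack_id')->nullable()->constrained();
    $table->unsignedInteger('row')->nullable();
    $table->unsignedInteger('column')->nullable();

    // One enclosure per slot: two enclosures can't share a cubby.
    $table->unique(['rack_id', 'row', 'column']);
});

Schema::table('animals', function (Blueprint $table): void {
    // Composite indexes matching the known query patterns.
    $table->index(['user_id', 'status']);     // "all my active animals"
    $table->index(['user_id', 'life_stage']); // filter by maturity
    $table->index(['genus', 'species']);      // taxonomic grouping
});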

The models, migrations, factories, observers and tests are all in place for both animals and housing. The housing UI is what I'm building next, followed by animals.

Contacts

The first full CRUD feature I built was contacts. Not animals. Not housing. Contacts. Here's why: when you add an animal, you often want to record where you got it. "Bought from Dave at the Bristol expo" is useful information. It helps you trace lineage, remember who had good stock, or know who to contact if something turns out to be a different species than advertised. So contacts needed to exist first because the animal model has a foreign key pointing at them.

Contact types cover the usual sources: Breeder, Vendor, Trader, Expo, Friend, and Other. Each has helper text explaining the distinction. A breeder produces their own animals. A vendor resells. A trader is someone you swapped with. An expo is an event where you picked something up. These categories are useful for remembering the context of an acquisition months or years later.

The contacts index page is where I invested time building a reusable table system that every listing page in the app now shares. Sortable columns, pagination, type and status filters, full-text search and bulk actions. The table infrastructure uses three Livewire concerns that compose into any listing component:

class Index extends Component
{
    use HasBulkActions;
    use HasNotifications;
    use HasPagination;

    public function mount(): void
    {
        $this->setSortBy('name');

        $this->setFilters([
            'types'    => ['field' => 'type', 'data' => []],
            'statuses' => ['field' => 'is_active', 'data' => []],
        ]);

        $this->setBulkDeleteModel(Contact::class);
    }
}

HasPagination is the core engine. It requires the component to define a tableData() method returning a query builder, then layers on search, filtering and sorting through a chain of tap() calls. Search runs a LIKE query across all allowed fields with OR conditions. Filtering supports both array-based (whereIn for multi-select dropdowns) and scalar constraints. Sorting validates against a whitelist of allowed fields so users can't sort by columns they shouldn't access, and toggles direction when you click the same column twice. On any filter or search change, it resets to page one and clears bulk selections.
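
To make that concrete, here's a rough sketch of how a concern like this can assemble the query. It's my reconstruction from the description, not the actual trait; filtering and the page/selection resets are omitted, and the abstract method names beyond tableData() are assumptions:

use Illuminate\Database\Eloquent\Builder;
use Livewire\WithPagination;

trait HasPagination
{
    use WithPagination;

    public string $search = '';
    public string $sortBy = '';
    public string $sortDirection = 'asc';

    /** The component supplies the base query. */
    abstract public function tableData(): Builder;

    /** @return array<int, string> fields search may touch */
    abstract protected function searchableFields(): array;

    /** @return array<int, string> whitelist of sortable columns */
    abstract protected function sortableFields(): array;

    protected function query(): Builder
    {
        return $this->tableData()
            ->tap(fn (Builder $query) => $this->applySearch($query))
            ->tap(fn (Builder $query) => $this->applySort($query));
    }

    private function applySearch(Builder $query): void
    {
        if ($this->search === '') {
            return;
        }

        // One grouped WHERE with OR'd LIKE clauses across allowed fields.
        $query->where(function (Builder $inner): void {
            foreach ($this->searchableFields() as $field) {
                $inner->orWhere($field, 'like', "%{$this->search}%");
            }
        });
    }

    private function applySort(Builder $query): void
    {
        // Whitelisting stops users sorting by columns they shouldn't access.
        if (in_array($this->sortBy, $this->sortableFields(), true)) {
            $query->orderBy($this->sortBy, $this->sortDirection);
        }
    }
}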

This pattern paid off immediately. Every table-based page since, from data transfer listings to support tickets to notes, uses the same three concerns and gets all that behaviour for free.

All the business logic lives in action classes: CreateContactAction, UpdateContactAction, DeleteContactAction. The Livewire components stay thin, just handling UI state and delegating to actions. Every action takes callable $success and callable $failure parameters rather than returning values or throwing exceptions. This keeps control flow in the Livewire component where the notification logic lives, and makes the actions easy to test in isolation. This pattern runs through the whole app.
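
A minimal sketch of the callback-based action pattern; the notification helper names on the component side are hypothetical:

use Throwable;

final class DeleteContactAction
{
    public function handle(Contact $contact, callable $success, callable $failure): void
    {
        try {
            $contact->delete();

            $success($contact);
        } catch (Throwable $exception) {
            report($exception);

            $failure($exception);
        }
    }
}

// In the Livewire component, control flow and notifications stay local:
$this->deleteContact->handle(
    $contact,
    success: fn () => $this->notifySuccess(trans('contacts.deleted')),
    failure: fn () => $this->notifyError(trans('contacts.delete_failed')),
);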

Data portability

This is the feature I'm most pleased with, and probably the most architecturally complex part of the system so far. The number one complaint I found when researching existing apps was data loss. People with collections of 50+ animals losing everything after an app update. So data portability isn't an afterthought here. It's a first-class feature built into the core of the application.

The principle is simple: your data is yours. You should be able to export everything at any time in a format that's actually useful, and you should be able to import data from a CSV without needing a computer science degree to format it correctly.

Exports

Exports support two formats:

  • Readable: human-friendly with translated column headers and formatted dates. Good for printing, sharing, or keeping a local backup that you can actually read.
  • Importable: machine-friendly with raw column names and values. This format can be imported straight back into Exoden, or into another system if you ever want to leave.

The architecture uses an abstract BaseExporter built on Maatwebsite Excel that branches on format:

abstract class BaseExporter implements Exporter, FromQuery, WithCustomChunkSize, WithHeadings, WithMapping
{
    public function headings(): array
    {
        return $this->export->format === TransferFormatEnum::READABLE
            ? $this->headingsAsReadable()
            : $this->headingsAsImportable();
    }

    public function map(mixed $row): array
    {
        return $this->export->format === TransferFormatEnum::READABLE
            ? $this->mapAsReadable($row)
            : $this->mapAsImportable($row);
    }
}

Each concrete exporter implements four methods: mapAsReadable() and mapAsImportable() for the row data, plus headingsAsReadable() and headingsAsImportable() for column headers. The readable format uses localized labels and formatted dates. The importable format uses raw column names and enum values. Same exporter class, two completely different CSV outputs.
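
A sketch of what a concrete exporter might look like under that contract (the field list and the enum label() helper are assumptions):

final class ContactExport extends BaseExporter
{
    public function headingsAsReadable(): array
    {
        return [trans('contacts.name'), trans('contacts.type'), trans('contacts.created')];
    }

    public function headingsAsImportable(): array
    {
        return ['name', 'type', 'created_at'];
    }

    public function mapAsReadable(mixed $row): array
    {
        // Localized labels and human-friendly dates.
        return [$row->name, $row->type->label(), $row->created_at->format('jS M Y')];
    }

    public function mapAsImportable(mixed $row): array
    {
        // Raw enum values and ISO dates, round-trippable through an import.
        return [$row->name, $row->type->value, $row->created_at->toIso8601String()];
    }
}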

Export headers are localized using the user's preferred locale at export time, so if the app later supports French, a French user's readable export will have French column headers.

Imports

Imports were the trickier half to get right. The goal was to make it forgiving. People will have messy CSVs. They'll have empty rows, typos in enum values, missing optional fields. The system needs to handle that gracefully rather than failing on the first bad row.

The BaseImporter reads 1,000 rows at a time and inserts in batches of 500:

abstract class BaseImporter implements
    Importer, SkipsOnError,
    SkipsOnFailure, ToModel,
    WithBatchInserts, WithChunkReading,
    WithHeadingRow, WithValidation
{
    public function chunkSize(): int { return 1000; }
    public function batchSize(): int { return 500; }
}

The key decisions here are SkipsOnError and SkipsOnFailure. Validation failures and insertion errors are collected but don't halt the import. If 95 out of 100 rows are valid, those 95 get imported and the 5 failures are stored as structured JSON with row numbers, column names and error messages so you can fix your CSV and try again. No one wants to fix one typo and re-import an entire file.
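
The collection side is roughly this shape, using Maatwebsite's SkipsOnFailure and SkipsOnError hooks (how the failures are persisted afterwards is my assumption):

use Maatwebsite\Excel\Validators\Failure;
use Throwable;

public function onFailure(Failure ...$failures): void
{
    foreach ($failures as $failure) {
        // Stored as structured JSON so the user can fix the CSV and retry.
        $this->failures[] = [
            'row'       => $failure->row(),       // spreadsheet row number
            'attribute' => $failure->attribute(), // offending column
            'errors'    => $failure->errors(),    // validation messages
        ];
    }
}

public function onError(Throwable $error): void
{
    $this->errors[] = $error->getMessage();
}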

The import wizard walks users through three steps: select what you're importing (contacts, notes, etc.), upload your CSV and let the system validate the headers against expected columns, then confirm and queue the job. The validation step is important because it catches structural problems (wrong columns, empty files) before any processing begins, so you get immediate feedback rather than waiting for a queued job to fail.

There's also a template download feature that cleverly reuses the exporter infrastructure. It creates a temporary export with the IMPORTABLE format and just grabs the headers. Users get a CSV with the correct column names already filled in. No guessing.

Convention-based resolution

One piece of the architecture I'm particularly happy with is how new importable and exportable types are added. The HasDataTransfer trait on a model uses PHP reflection to automatically resolve exporter and importer classes by naming convention. If the model lives at App\Models\Contacts\Contact, it looks for App\Services\DataTransfer\Exports\Contacts\ContactExport and App\Services\DataTransfer\Imports\Contacts\ContactImport. No registration, no configuration, no service provider bindings. Follow the convention and it works.
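
The mechanics are roughly this (a simplified sketch; the real trait uses reflection, but plain string substitution shows the convention just as well):

use Illuminate\Support\Str;

trait HasDataTransfer
{
    public static function exporterClass(): string
    {
        // App\Models\Contacts\Contact
        //   -> App\Services\DataTransfer\Exports\Contacts\ContactExport
        $relative = Str::after(static::class, 'App\\Models\\');

        return 'App\\Services\\DataTransfer\\Exports\\'.$relative.'Export';
    }

    public static function importerClass(): string
    {
        $relative = Str::after(static::class, 'App\\Models\\');

        return 'App\\Services\\DataTransfer\\Imports\\'.$relative.'Import';
    }
}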

Queue processing with real-time feedback

Both imports and exports run as queued jobs. This matters because a user with hundreds of contacts shouldn't have to stare at a loading spinner. You trigger the export, navigate to another page, and get a toast notification when it's ready.

The import job follows a clean pipeline pattern:

$this
    ->importStarted()
    ->importProcessing()
    ->addImportFailures()
    ->addImportErrors()
    ->importComplete()
    ->deleteImportFile();

Each stage fires a broadcast event over Laravel Reverb on a private user channel. A GlobalNotifications Livewire component sits in the app layout and listens for these events across all pages:

public function getListeners(): array
{
    $userId = currentUser()->id;

    return [
        "echo-private:user.{$userId},.export.started"   => 'handleTransferNotification',
        "echo-private:user.{$userId},.export.completed" => 'handleTransferNotification',
        "echo-private:user.{$userId},.export.failed"    => 'handleTransferNotification',
        "echo-private:user.{$userId},.import.started"   => 'handleTransferNotification',
        "echo-private:user.{$userId},.import.completed" => 'handleTransferNotification',
        "echo-private:user.{$userId},.import.failed"    => 'handleTransferNotification',
    ];
}

All six event types (three for imports, three for exports) implement ShouldBroadcastNow rather than being queued, so the notification hits the browser instantly via WebSocket rather than waiting for a queue worker to pick it up. The notification service resolves the actual import or export record to build a human-readable message like "Contacts export completed" with a fallback if the record has since been deleted.
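
One of those events might look like this; the names are my assumptions, but the important details are ShouldBroadcastNow and broadcastAs() matching the dotted listener keys above:

use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcastNow;
use Illuminate\Foundation\Events\Dispatchable;

final class ExportCompleted implements ShouldBroadcastNow
{
    use Dispatchable;
    use InteractsWithSockets;

    public function __construct(
        public readonly string $userId,
        public readonly string $exportId,
    ) {}

    public function broadcastOn(): PrivateChannel
    {
        // Matches the "echo-private:user.{id}" listener registrations.
        return new PrivateChannel("user.{$this->userId}");
    }

    public function broadcastAs(): string
    {
        // The leading dot in the listener key means this raw name is used.
        return 'export.completed';
    }
}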

Import files are automatically cleaned up after processing, including the user's directory if it's now empty. There's also an artisan command to sweep orphaned export files that might accumulate if something goes wrong.

Currently contacts and notes support import/export. The system is designed to be extended, and as I build out animals and housing, those will get import/export support too. Adding a new exportable/importable type means implementing the Exportable and Importable interfaces on the model and writing the exporter/importer class. The queuing, real-time notifications, wizard UI, template downloads, and file cleanup all just work.

Users can also bulk-export selected rows directly from any table. Unlike the queued full export, bulk exports run synchronously and stream immediately as a download. It reuses the same exporter class but filters the query to only the selected row IDs.

Support tickets

I hadn't planned on building a support ticket system this early, but it quickly became obvious that I'd need one. When users encounter issues, I don't want them sending emails to a personal address that I might lose track of. I want a proper thread with context, status tracking, and a paper trail.

Users can create tickets from within the app, categorise them (General, Billing, Technical, Account, Feature Request, Bug Report), set a priority (Low, Normal, High, Urgent), and attach files. Messages are threaded under the ticket with timestamps and clear indicators of who said what.

The status flows through Open, Awaiting Reply, In Progress, Resolved and Closed. When a user replies to a ticket that was previously closed, it automatically re-opens. When an admin replies, the status moves to Awaiting Reply. These transitions happen automatically through the action classes rather than requiring manual status updates.

Email-based replies

The interesting piece is the email integration. Every ticket gets a unique reply_token, a 64-character random string generated at creation. When an admin sends a reply, the notification email includes a reply-to address in the format support+{token}@domain.com. Users can hit reply in their email client and their response gets threaded straight into the ticket without needing to log back in.

The InboundMailService handles the messy work of parsing inbound email:

final class InboundMailService
{
    public function getTicket(array $payload): ?Ticket
    {
        $replyToken = $this->extractReplyToken($payload);

        // first(), not firstOrFail(): the nullable return type tolerates
        // emails whose token doesn't match any ticket.
        return Ticket::query()
            ->where('reply_token', $replyToken)
            ->first();
    }

    private function extractBody(array $payload): string
    {
        $body = $payload['text_body'] ?? $payload['plain'] ?? '';

        $filtered = Str::of($body)
            ->explode("\n")
            ->reject(fn (string $line) => Str::startsWith(mb_trim($line), '>'))
            ->takeUntil(fn (string $line): bool => $this->isQuotedHeader($line))
            ->implode("\n");

        return Str::of($filtered)->trim()->toString();
    }
}

It extracts the reply token from the to address using a regex, then strips quoted content (lines starting with >) and reply headers (On ... wrote: or --- Original Message --- delimiters) from the email body. What you're left with is just the user's new text, clean and ready to store as a message.
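
Token extraction itself is a one-liner regex against the recipient address; a sketch, assuming a 64-character alphanumeric token:

private function extractReplyToken(array $payload): ?string
{
    $to = $payload['to'] ?? '';

    // support+{token}@domain.com -> {token}
    if (preg_match('/support\+([A-Za-z0-9]{64})@/', $to, $matches) === 1) {
        return $matches[1];
    }

    return null;
}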

The ticket detail page uses Laravel Echo to listen on the user's private channel for real-time updates. If an admin replies while a user is viewing the ticket, the new message appears immediately without a page refresh. Same for status changes and ticket closures.

The notification system sends email alerts in both directions: users are notified when an admin replies or closes their ticket, and admins are notified when users create tickets, reply, or close them. The action that handles adding a reply is a good example of the compositional approach I use throughout the app. It injects five other actions via its constructor: one to create the message, one to store attachments, one to re-open the ticket if it was closed, one to dispatch the real-time event, and one to send the email notification. Each action does one thing, and they compose together cleanly.

File attachments are stored in a nested path structure ({ticket_id}/{message_id}/{uuid}.{extension}) so the actual stored filename is a UUID, preventing collisions and directory traversal issues, while the original filename is preserved in the database for display and downloads.
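
In sketch form, assuming Laravel's UploadedFile and a standard attachments relation:

use Illuminate\Support\Str;

$storedName = Str::uuid().'.'.$upload->getClientOriginalExtension();

// {ticket_id}/{message_id}/{uuid}.{extension}: the UUID filename
// prevents collisions and directory traversal via user-supplied names.
$path = $upload->storeAs("{$ticket->id}/{$message->id}", $storedName);

// The original filename is kept in the database for display and downloads.
$message->attachments()->create([
    'path'          => $path,
    'original_name' => $upload->getClientOriginalName(),
]);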

Admin panel

The ticket system needed somewhere to live on the admin side. Rather than bolting admin features into the existing user interface with role checks everywhere, I built a separate admin panel with its own authentication guard, login page, layout and dashboard.

The admin dashboard shows key metrics at a glance: total users, open tickets, and tickets awaiting reply. From there, admins can drill into user management (viewing individual users and their associated tickets) and ticket management (full CRUD, with dedicated components for updating categories, priorities and statuses, and for replying to tickets).

It's a separate Admin model with its own users_admins table, authenticated via an EnsureIsAdmin middleware that checks the admin guard. This separation means the admin panel can evolve independently of the user-facing app. There's an artisan command to create admin users since there's no public registration for admins.

Polymorphic notes

Early on I realised that almost everything in the app would benefit from free-text notes. You want to note that a contact was helpful and had good stock. You want to note that an enclosure needs a vent hole drilled. You want to note that an animal refused food three times in a row. Rather than building a separate notes table for each entity, I built one polymorphic notes system that attaches to anything.

The system uses Laravel's polymorphic relationships with a HasNotes contract:

interface HasNotes
{
    /** @return MorphMany<Note, covariant Model> */
    public function notes(): MorphMany;
}

Any model implementing this interface gets full note support: create, read, update, delete, plus import and export via CSV. The NoteableTypeEnum handles the mapping between string type identifiers and model classes, with methods for resolving in both directions, getting labels, and loading the actual model instance with ownership verification.
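
Implementing the contract on a model is a one-liner with Laravel's morphMany; a sketch, assuming the standard 'noteable' morph name:

final class Enclosure extends Model implements HasNotes
{
    /** @return MorphMany<Note, covariant Model> */
    public function notes(): MorphMany
    {
        return $this->morphMany(Note::class, 'noteable');
    }
}

// Attaching a note then reads the same for every noteable entity:
$enclosure->notes()->create([
    'body'        => 'Needs a vent hole drilled in the lid.',
    'observed_at' => now(),
]);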

Notes have an optional observed_at date, which is a small but important distinction. When you spot something noteworthy, you might not log it until later. The observed_at date records when the observation actually happened, as opposed to when you got around to writing it down. If I notice mites on a tarantula on Tuesday but don't log it until Thursday, I want the record to reflect Tuesday.

The notes index is a reusable Livewire component that embeds into any entity's detail page. It has the same sorting, pagination and bulk actions as the contacts table, reusing the same concerns.

Note imports and the N+1 problem

The note importer deserved special attention because of the polymorphic relationship. Each row in a note CSV has a noteable_type and noteable_id. To validate that each noteable_id actually exists and belongs to the importing user, you'd naively run a database query per row. For a CSV with 500 notes, that's 500 queries during validation alone.

Instead, the note importer uses a BeforeImport hook to pre-load all valid noteable IDs before any rows are processed. It reads the entire spreadsheet as a raw array, groups rows by noteable_type, extracts the unique IDs per type, and runs one whereIn query per type. The results are stored as a hash map. During row validation, it's a simple O(1) lookup instead of a database query. For a CSV with 200 contact notes and 300 animal notes, that's 2 queries instead of 500.
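
The pre-loading step looks roughly like this (the hook plumbing is omitted, and the enum's model resolver name is my assumption):

/** @var array<string, bool> hash map keyed by "type:id" */
private array $validNoteableIds = [];

private function preloadNoteableIds(array $rows): void
{
    // Group raw rows by polymorphic type, then run one query per type.
    foreach (collect($rows)->groupBy('noteable_type') as $type => $group) {
        $model = NoteableTypeEnum::from($type)->modelClass();

        $ids = $model::query()
            ->whereBelongsTo(currentUser()) // ownership check
            ->whereIn('id', $group->pluck('noteable_id')->unique())
            ->pluck('id');

        foreach ($ids as $id) {
            $this->validNoteableIds["{$type}:{$id}"] = true;
        }
    }
}

private function noteableExists(string $type, string $id): bool
{
    // O(1) lookup during row validation instead of a query per row.
    return isset($this->validNoteableIds["{$type}:{$id}"]);
}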

User preferences and timezone support

This was one of those features where I had to make a decision early that would be painful to change later. All dates in the database are stored as UTC. This is non-negotiable: it's the only sane approach for a web app with users across multiple timezones. But displaying UTC to a user in Melbourne or New York is useless, so every date needs to be converted to the user's preferred timezone before it hits the screen.

The user model stores a timezone field as an integer-backed enum. The TimezoneEnum has 419 cases covering every timezone PHP recognises, from UTC through to Pacific/Wallis. Each case knows its PHP timezone string, its continent, and its translated label.

The preferences page uses a two-step timezone selection: pick your continent first, then pick from the filtered list of timezones within that continent. This avoids presenting 419 options in a single dropdown, which would be unusable. The dropdowns are searchable, and changes save automatically without a submit button using Livewire's wire:model.live.
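
A trimmed sketch of how the enum can support that two-step flow (the case names shown and the forContinent() helper are illustrative):

use Illuminate\Support\Str;

enum TimezoneEnum: int
{
    case UTC              = 1;
    case EUROPE_LONDON    = 2;
    case AMERICA_NEW_YORK = 3;
    // ... 416 more cases

    public function phpName(): string
    {
        return match ($this) {
            self::UTC              => 'UTC',
            self::EUROPE_LONDON    => 'Europe/London',
            self::AMERICA_NEW_YORK => 'America/New_York',
        };
    }

    public function continent(): string
    {
        // Everything before the slash; "UTC" falls through unchanged.
        return Str::before($this->phpName(), '/');
    }

    /** @return array<int, self> */
    public static function forContinent(string $continent): array
    {
        return array_values(array_filter(
            self::cases(),
            fn (self $timezone): bool => $timezone->continent() === $continent,
        ));
    }
}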

On the display side, a SetUserTimezoneMiddleware runs on every web request and sets a runtime config value to the authenticated user's timezone string. Then two mechanisms handle the actual conversion: a localDateTime() helper function for use in PHP, and a custom @datetime Blade directive for use in templates:

@datetime($animal->acquired_at, 'jS M Y')

Both do the same thing: take a UTC date, shift it to the user's timezone, and format it. The Blade directive compiles down to a setTimezone() call with a sensible fallback chain: user's timezone, app timezone, UTC.
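
Together, the middleware and the helper are roughly this shape; the runtime config key and the enum's phpName() accessor are my assumptions:

use Carbon\CarbonInterface;
use Closure;
use Illuminate\Http\Request;

final class SetUserTimezoneMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        // Stash the user's timezone string in runtime config for the helpers.
        if ($user = $request->user()) {
            config(['app.user_timezone' => $user->timezone->phpName()]);
        }

        return $next($request);
    }
}

function localDateTime(CarbonInterface $date, string $format = 'jS M Y H:i'): string
{
    // Fallback chain: user's timezone, app timezone, UTC.
    $timezone = config('app.user_timezone')
        ?? config('app.timezone')
        ?? 'UTC';

    return $date->copy()->setTimezone($timezone)->format($format);
}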

Language preference is also stored here, defaulting to English. The infrastructure for multi-language support is in place: all strings go through trans(), and there are 22 language files. Adding a new language is a translation task, not a code change.

Geo-blocking

A small but practical addition. The GeoBlockMiddleware uses MaxMind's GeoIP database to block requests from configured countries:

class GeoBlockMiddleware
{
    public function handle(Request $request, Closure $next): Response
    {
        if ($this->isBlocked())
        {
            abort(403);
        }

        return $next($request);
    }

    private function isBlocked(): bool
    {
        $country = IPGeolocation::getCountryCode();
        $blocked = config('ip-geolocation.blocked');

        return $country !== null && in_array($country, $blocked, true);
    }
}

It gracefully handles lookup failures by allowing the request through rather than blocking legitimate users because of a GeoIP hiccup. Local development bypasses it entirely. There's an artisan command to update the MaxMind database.

Two-factor authentication

I mentioned in the first post that 2FA was custom-built. Since then I've finished the recovery code system, which rounds out the feature.

Recovery codes are generated as 8 codes, each 8 characters long, using an ambiguity-free character set: 23456789ABCDEFGHJKLMNPQRSTUVWXYZ. No zero/O confusion, no one/I/L confusion. This matters because people write these down on paper.

The recovery flow is strict by design. When you use a recovery code to bypass 2FA, it is single-use. All 8 codes are immediately regenerated after a successful recovery, a notification is sent to alert you that a recovery code was used (in case it wasn't you), and you're prompted to save your new codes. This prevents someone who obtained one recovery code from using it repeatedly.
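
Generation over that alphabet is straightforward; a sketch, with random_int for cryptographic randomness:

private const ALPHABET = '23456789ABCDEFGHJKLMNPQRSTUVWXYZ';

/** @return array<int, string> */
private function generateRecoveryCodes(): array
{
    // 8 codes, 8 characters each.
    return array_map(fn (): string => $this->randomCode(8), range(1, 8));
}

private function randomCode(int $length): string
{
    $code = '';

    for ($i = 0; $i < $length; $i++) {
        // random_int is cryptographically secure, unlike rand().
        $code .= self::ALPHABET[random_int(0, strlen(self::ALPHABET) - 1)];
    }

    return $code;
}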

The QR code for authenticator setup is a custom SVG with circular modules instead of squares and a rainbow gradient fill. A small vanity, but it looks better than a generic black and white grid.

Offline detection

An OfflineNotification component uses Livewire's wire:offline directives to show a banner when the browser loses connectivity. Simple, but important for a tool people will use on their phones while doing rounds in a bug room where WiFi might not reach every shelf.

User guide

I'm continuing to write the documentation as features land rather than leaving it until the end. The in-app guide now covers:

  • Authentication and account management (expanded with 2FA recovery codes)
  • Billing and subscriptions
  • Contacts (overview, CRUD, form details, types, importing)
  • Exports (requesting, formats, statuses, downloading, troubleshooting)
  • Imports (the wizard, requirements, error handling, troubleshooting)
  • Support tickets (creating, replying, closing, notifications)
  • Notes (availability, CRUD, importing with downloadable CSV templates)

Each guide page is a Livewire component with a sidebar navigation system and reusable documentation components for headings, sections, code snippets, table references and filter demonstrations. The sidebar builds from a service class that caches the structure, and each page supports deep-linking to specific sections.

The approach of documenting as I go has already proven its worth. Twice now, writing the documentation for a feature forced me to reconsider the UX. If the steps are awkward to explain, the feature is probably awkward to use.

Localization

Every string in the application goes through trans(). There are now 22 language files covering every domain. The app currently only ships in English, but the architecture is ready for other languages without any refactoring. Export headers are also localized, using the user's preferred locale at export time.

Testing

As of writing the app has:

  • 313 test files
  • 2159 unit tests
  • 5688 assertions

All passing with 100% code coverage. I'm still writing tests alongside the code, not after.

The test suite includes:

  • Architecture tests: enforcing declare(strict_types=1) across all files, strict equality (===/!==) everywhere, and a stress test ensuring responses come back in under 100ms (a sketch of the first two follows this list).
  • Livewire component tests: every component has full test coverage for rendering, authorization, validation, and user interactions.
  • Unit tests: actions, concerns, DTOs, enums, events, exceptions, helpers, middleware, jobs, listeners, models, notifications, observers, policies, providers, and services.
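
For the first two, the Pest arch expectations look roughly like this (a sketch; the response-time stress test is a custom test, so it's omitted):

arch('all files declare strict types')
    ->expect('App')
    ->toUseStrictTypes();

arch('strict equality everywhere')
    ->expect('App')
    ->toUseStrictEquality();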

The tooling remains the same as before: Pest V4, PHPStan (level 6), Rector, Pint and Peck.

What's next

I spent three weeks building out the infrastructure that the core features depend on. Contacts exist so animals can reference acquisition sources. The data transfer engine exists so nothing is ever locked in. The notes system exists so any entity can have observations attached. The table infrastructure exists so every listing page works consistently.

The immediate next step is housing: the full Livewire UI for managing racks and enclosures. The data model is ready, but enclosures need their own CRUD, rack grid views, and the ability to assign enclosures to rack positions. Housing needs to be built before animals because the animal form will have an enclosure picker. You need somewhere to put them before you can add them.

After housing, the focus shifts to what the whole app exists for: the animal management UI. Adding animals, viewing your collection, logging feedings, recording molts. The data model is ready. The table patterns are proven. The notes system is waiting to attach. It's mostly assembly at this point, connecting the infrastructure I've been building to the domain models that have been sitting in the database patiently.

Then the dashboard. Right now it's a placeholder that says "Welcome". It needs to show who's due a feed, who's in premolt, recent activity, and collection stats at a glance.

I'm still targeting May/June 2026 for launch.

Following along

I'll keep posting updates as things progress. If you keep invertebrates and want to try Exoden when it launches, or if you have thoughts on what a tracker should do, I'd like to hear from you.
