Last Monday, CIR was running on my laptop. By Friday, it was deployed to DigitalOcean and generating cryptographic attestations remotely. Here’s what that journey looked like.
Why does production matter?
A local demonstration is an essential first step for validating core logic, but it is ultimately a controlled experiment that ignores the complexities of the real world. A “works on my machine” assurance suggests a solution that is fragile, untested against external variables, and unproven outside a pristine development environment. In B2B infrastructure, where stability is paramount, a local demo simply cannot provide the evidence that a system is robust enough to handle the friction of actual usage.
A full production deployment, in contrast, demonstrates that the software is portable, reproducible, and ready for enterprise integration. It signals to potential clients and stakeholders that the engineering has matured beyond a prototype and can withstand the rigors of a live ecosystem. A deployed solution shifts the conversation from theoretical functionality to tangible reliability, the concrete assurance B2B buyers need before trusting your infrastructure with their critical operations.
The plan
The initial strategy targeted Microsoft Azure’s confidential computing capabilities, specifically a deployment on AMD SEV-SNP hardware. To fund this, an application was submitted to the Microsoft Founders Hub, which offers $1,000 in Azure credits and would have provided a robust environment for testing secure enclaves at scale. This was the ideal path for validating the infrastructure on enterprise-grade hardware without incurring immediate overhead costs.
However, the approval process for the Founders Hub introduced a waiting period that threatened to stall momentum. In an attempt to bridge the gap and start testing immediately, a secondary attempt was made using the Azure for Students program, which provides a $100 credit. Unfortunately, this path hit a dead end due to strict age and identity verification hurdles, effectively locking out access to the necessary Azure resources and forcing a re-evaluation of the deployment strategy.
To avoid further delays, the focus pivoted to DigitalOcean, utilizing the $200 credit available through the GitHub Student Pack. This decision proved to be the breakthrough needed; the credits were successfully activated on Friday morning, clearing the way for immediate infrastructure provisioning. With the barriers removed, the deployment process moved rapidly, transitioning from a funded account in the morning to a fully deployed live instance by Friday night.
The deployment process
The deployment process began with the provisioning of the cloud infrastructure on DigitalOcean. I spun up a fresh Droplet in the Frankfurt region, selecting the latest Ubuntu 24.04 LTS image to ensure a modern and secure foundation. The instance was configured with 2GB of RAM, a specification chosen to balance cost-efficiency with enough memory to handle the compilation overhead without hitting swap space.
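For reference, the same Droplet can be provisioned from the command line with DigitalOcean’s doctl CLI. This is a sketch rather than a record of what was actually run; the Droplet name and SSH key ID are placeholders, and the web console achieves the same result:

```shell
# Authenticate once with a DigitalOcean API token
doctl auth init

# 2GB Ubuntu 24.04 Droplet in Frankfurt (fra1); the name "cir-poc"
# and the SSH key ID are placeholders
doctl compute droplet create cir-poc \
  --region fra1 \
  --image ubuntu-24-04-x64 \
  --size s-1vcpu-2gb \
  --ssh-keys <your-key-id>
```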
With the server active, the focus shifted to migrating the codebase from the local environment to the cloud. I used SCP (Secure Copy Protocol) to transfer the project files, a secure, direct handoff of the source code. This avoided cloning via git or managing credentials on the server, keeping the initial setup clean and focused solely on the artifacts needed for the build.
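The transfer itself is a single command (the IP address and paths below are placeholders, not the actual server):

```shell
# Copy the project directory to the Droplet over SSH.
# 203.0.113.10 is a documentation placeholder, not the real IP.
scp -r ./cir root@203.0.113.10:/root/cir
```

If the local target/ directory has grown large, rsync -av --exclude target/ keeps the transfer lean by skipping build artifacts.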
Once the code was successfully transferred, I initialized the build environment by installing the Rust toolchain directly on the remote instance, then triggered compilation with the release profile so the resulting binary was fully optimized for performance. Despite the modest hardware resources, the build completed efficiently, in approximately 5 to 10 minutes.
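The server-side setup boils down to two steps, assuming the code landed in /root/cir (the path is a placeholder):

```shell
# Install the Rust toolchain non-interactively via the official installer
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. "$HOME/.cargo/env"

# Compile with the optimized release profile
cd /root/cir
cargo build --release
```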
Finally, I moved to the validation phase by executing the compiled CIR binary. I monitored the standard output and logs to verify that the application initialized correctly and processed data as intended. The output was cross-referenced with local results to confirm that the behavior was identical, successfully marking the transition from a local prototype to a functioning remote deployment.
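Validation can be as simple as capturing the output and comparing the result hash against a local run (the binary name cir is an assumption based on the project name):

```shell
# Run the release binary and keep a copy of its output
./target/release/cir | tee remote_run.log

# Pull out the result hash for comparison with the local run
grep "Result Hash" remote_run.log
```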
The results
The deployment of the CIR Proof of Concept (PoC) to the remote infrastructure yielded immediate performance gains, with the secure inference engine executing the constant-time calculation on a 200x200 matrix in just 16 milliseconds. This represents a significant improvement over the local development environment, leveraging the superior CPU architecture of the cloud instance to reduce latency. Crucially, the test confirmed that the strict constant-time execution properties required for security were preserved during the migration, validating that the system remains resistant to timing attacks even when running on shared, remote hardware.
Following the calculation, the system generated a cryptographic fingerprint of the computation, outputting a result hash that verifies its integrity. The process concluded with the automatic generation and export of the attestation_report.json file. This successful export demonstrates that the remote instance is fully functional, capable not only of performing secure calculations but also of producing the cryptographic artifacts needed to prove the validity and privacy of the execution to external verifiers.
--- CIR PoC: Secure Inference Engine (200x200) ---
[Step 1] Running Constant-Time Calculation...
> Done in 16ms
[Step 2] Generating Cryptographic Fingerprint...
> Result Hash: 704dc3569d50486d5b01f77aac85e961320ed4bf33cd611d555cc513b5cdc96a
[Step 3] Exporting Attestation Report...
> SAVED: 'attestation_report.json' created successfully.
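The exported report can also be sanity-checked directly on the server (assuming jq is installed; the fields inside the report are whatever CIR emits):

```shell
# Confirm the report is valid JSON and pretty-print it
jq . attestation_report.json

# Fingerprint the report itself so it can be compared out-of-band
sha256sum attestation_report.json
```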
What this proves
A critical advantage demonstrated by this deployment is the inherent portability of the architecture. Because the codebase compiles and runs seamlessly across different environments, transitioning effortlessly from local development machines to standard cloud instances, it establishes that the solution is not brittle or vendor-locked. This flexibility ensures that the secure inference engine can be adopted and scaled across diverse infrastructure setups without requiring significant refactoring or specialized handling.
This successful software-level validation paves the way for the ultimate goal: hardware-enforced security. The system is now fully primed for integration with Trusted Execution Environments (TEEs), specifically targeting the upcoming migration to Azure’s AMD SEV-SNP nodes. By validating the application logic first, we have de-risked the transition to these hardware-isolated enclaves, ensuring that when the Azure deployment occurs, the focus can remain on hardening the security boundaries rather than debugging the application itself.
Week 1–4: Built it. Week 5: Shipped it. Week 6: Pitching it to enterprise platforms. Building in public at 19. Follow for updates.