So it's been almost a year since I published the article on Stacks. I even presented on Stacks on two occasions with HashiCorp engineers. The promise of delivering that multi-region/account/environment experience in the native Terraform language was exciting. So with HashiConf coming up in 4 weeks, where is it now? What changes have I seen in recent days?
Before we jump into the details, let's pull up the definition of Stacks.
Terraform Stacks lets you group and manage your Terraform configurations as repeatable units of infrastructure. Each stack defines what to deploy, where to deploy it, and how to keep it in sync—making it easier to manage complex environments, reuse modules, and automate deployments.
Spoiler alert: the note below seems to be a subtle way of saying Stacks is going GA soon, around HashiConf. Yaaay!!!
Starting September 25, 2025, Stacks usage will count toward your HCP Terraform resource usage limits
Terraform stacks subcommand
The Terraform binary recently released support for stacks, which was previously available via the custom tfstacks CLI. This did give the impression that Stacks is possibly coming to the community edition, but the usage details indicate the opposite. Unlike other CLI utilities or subcommands, terraform stacks uses a -usage option rather than -help. You could very well type terraform stacks on its own, which will give you the same output in your terminal.
terraform stacks -usage
Usage: terraform stacks [global options] <command> [args]
The available commands for execution are listed below.
Primary Commands:
init Prepare the configuration directory for further commands
providers-lock Write out dependency locks for the configured providers
validate Check whether the configuration is valid
version Show the current Stacks Plugin version
fmt Reformat your Terraform Stacks configuration to a canonical format
Sub-commands:
Global options (use these before the subcommand, if any):
-chdir=DIR Switch to a different working directory before executing the
given subcommand.
-plugin-cache-dir=DIR Override the default directory where the stack plugin binary is cached.
-stacks-version An alias for the "version" subcommand.
-no-color=BOOL Disable color output. Must be explicitly set to true or false.
Usage help:
-usage Show this usage output or the usage for a specified subcommand.
Use after the command: terraform stacks <command> -usage
Let's see how this translates to the tfstacks-cli options.
- tfstacks init -> terraform stacks init
- tfstacks validate -> terraform stacks validate
- tfstacks providers lock -> terraform stacks providers-lock
- tfstacks fmt -> terraform stacks fmt
I am not really sure what the stacks version means per se. I was initially thinking there was a correlation between this and terraform version, but the output makes me think it has to do with the Stacks plugin version.
terraform stacks -stacks-version
Terraform Stacks Plugin version: v1.0.0
on darwin_arm64
The one missing option relates to plan, which the tfstacks CLI exposed as:
tfstacks plan -organization=REQUIRED_ORG_NAME -stack=REQUIRED_STACK_ID -deployment=REQUIRED_DEPLOYMENT [-hostname=hostname]
HCP Terraform
Interestingly, there seems to be a manual upload option coming to HCP Terraform which could be a replacement for the terraform plan workflow we are familiar with. Though it seems like the user interface is ahead of the command implementation on the Stacks API: the linked documentation redirects you back to the same page in the Stacks setup (for now).
HCP Terraform limits the number of deployments to a maximum of 20.
* One of the use cases I felt Stacks would be great for was multi-account deployments for platform-specific infrastructure. So if a deployment maps to an account, this limit might be a hurdle in some cases (a sketch of that pattern is below).
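For illustration, here is a rough sketch of a per-account deployments file; the deployment names, input names, and account IDs are made up, and the component configuration would need matching variable blocks:

```hcl
# deployments.tfdeploy.hcl -- illustrative only; names and IDs are placeholders
deployment "dev" {
  inputs = {
    aws_account_id = "111111111111"
    region         = "eu-west-1"
  }
}

deployment "prod" {
  inputs = {
    aws_account_id = "222222222222"
    region         = "eu-west-1"
  }
}

# With the 20-deployment cap, a one-account-per-deployment model
# tops out at 20 accounts in a single Stack.
```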
Terraform support
I might have overlooked this or never really found a use for it, as I wasn't using Stacks as much as I would have wanted to. tfe_stack seems to support setting up a Stack in HCP Terraform on Stacks-enabled projects or organizations using Terraform itself.
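As a rough sketch (the attribute names here are an assumption based on the provider's usual VCS patterns; check the tfe provider documentation for the exact schema), it could look something like this:

```hcl
# Hypothetical sketch only -- verify attribute names against the tfe provider docs
resource "tfe_project" "platform" {
  organization = "my-org"    # assumed organization name
  name         = "platform"
}

resource "tfe_stack" "networking" {
  name        = "networking"
  description = "Networking Stack managed through the tfe provider"
  project_id  = tfe_project.platform.id

  vcs_repo {
    identifier     = "my-org/networking-stack"    # assumed repository
    branch         = "main"
    oauth_token_id = var.vcs_oauth_token_id       # assumed variable
  }
}
```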
Linked stacks
- Reference : Pass data between Stacks
The idea is similar to what we had with run triggers on HCP Terraform workspaces. What if you had infrastructure stacks which depended on each other? Linked Stacks give you an option to do exactly that.
Declare an upstream_input block in your Stack's deployment configuration to read values from another Stack's publish_output block. Adding an upstream_input block creates a dependency on the upstream Stack.
The example they have is an application Stack depending on networking infrastructure which is provisioned using a different Stack. Rather than just using a data source to reference existing resources, any change to the networking Stack retriggers the downstream Stack, in case that update somehow affects the infrastructure downstream. This helps in the situation where a redeployment trigger is what gets the updated values of the networking infra into the application Stack.
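A minimal sketch of the two sides, based on the publish_output and upstream_input blocks from the linked documentation; the block names, the output, and the organization/project path are placeholders:

```hcl
# Upstream (networking) Stack -- deployments.tfdeploy.hcl
publish_output "vpc_id" {
  description = "VPC ID exposed to downstream Stacks"
  value       = deployment.network.vpc_id
}

# Downstream (application) Stack -- deployments.tfdeploy.hcl
upstream_input "networking" {
  type   = "stack"
  source = "app.terraform.io/my-org/my-project/networking"  # placeholder path
}

deployment "application" {
  inputs = {
    vpc_id = upstream_input.networking.vpc_id
  }
}
```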
Output reference in Stacks UI:
Downstream link:
We will dive into this a little bit more in another post. If you are looking to use linked Stacks in the meantime, keep these in mind:
- The output definition in your upstream Stack needs the type defined. The type definition is not something I am used to adding on an output (see the sketch after this list).
- An empty value in the upstream Stack, which coincided with a missing definition on my end, kept failing on the downstream Stack with a missing metadata error and no details in the diagnostics. Updating the type on the outputs fixed the issue, but it didn't retrigger my downstream Stack.
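For reference, a sketch of an output with an explicit type, assuming the Stack-level output block is what needs it; component.network and vpc_id are placeholder names:

```hcl
# outputs.tfcomponent.hcl in the upstream Stack -- placeholder names
output "vpc_id" {
  type  = string                    # Stacks outputs take an explicit type,
  value = component.network.vpc_id  # unlike outputs in a plain module
}
```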
Changes in the file extensions
Though the documentation still references the tfstack.hcl extension for anything outside of the deployment configurations, the terraform stacks command says otherwise. I think it is a matter of time before this makes it into the documentation. Does it make much difference to the user experience? Probably not.
When you run terraform stacks validate on a Stacks configuration similar to the one below, you get a warning indicating that the file names do not match what it expects.
tree .
.
├── components.tfstack.hcl
├── deployments.tfdeploy.hcl
├── infrastructure
│ └── main.tf
├── modules
│ └── random
│ └── main.tf
├── providers.tfstack.hcl
└── variables.tfstack.hcl
Deprecation message:
│ Warning: Deprecated filename usage
│
│ This configuration is using the deprecated .tfstack.hcl or .tfstack.json file extensions. This will not be supported in a future version of Terraform, please update your files to use the latest .tfcomponent.hcl or .tfcomponent.json file extensions.
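Following the warning, the same layout with the new extensions would look like this; the deployment file keeps its .tfdeploy.hcl extension, since the deprecation only covers the .tfstack.hcl files:

```
.
├── components.tfcomponent.hcl
├── deployments.tfdeploy.hcl
├── infrastructure
│   └── main.tf
├── modules
│   └── random
│       └── main.tf
├── providers.tfcomponent.hcl
└── variables.tfcomponent.hcl
```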
Considering this is still in beta, I expect we will see more changes to these commands and the APIs.
Open questions
- Will there be a migrate or convert operation from existing workspace-based deployments to Stacks-based deployments?
- Will we start getting the same ecosystem built around Stacks as we have with workspaces?
- Policy as Code: for folks using tools outside of HCP Terraform, that could continue to work. But if an organization is invested in Sentinel, how would that work going forward?
- Day 2 operations: health checks and notifications on workflow operations.
- Deployments to regulated or private environments using Cloud agents or the like.
- Language server support for the Stacks blocks: component, deployment, and named provider blocks.