In general, you need to think hard about the structure of your Terraform code. I propose you do these three things:
- Use modules: keep the generic resource code separate from your solution code. Later on you can even put the modules in separate git repos and use versioning to prevent changes from breaking other solutions.
- Use a solutions/services directory: this is where you wire all the modules together into a solution or a service. Basically, your application.
- Use an environment directory: this is where you define the exact parameters of your deployment per environment (dev, test, production). There are differences, and you should keep them separated at all times.
This could look like this:
- modules
  - ec2
  - vpc
  - rds
  - apigateway
- solution
  - api
  - application_a
- environment
  - dev
  - test
  - prod
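For illustration, a solution could wire the modules together like this (the module paths, names, and variables here are assumptions, not taken from the answer):

```hcl
# solution/api/main.tf
# Hypothetical wiring of the generic modules into the "api" solution.

module "vpc" {
  source     = "../../modules/vpc"
  cidr_block = var.cidr_block
}

module "api" {
  source      = "../../modules/apigateway"
  name        = "${var.environment}-api"
  vpc_id      = module.vpc.vpc_id
  environment = var.environment
}
```

Each environment directory then only supplies the variable values (and its own state backend) for this solution.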
See here for some good guidelines:
github.com/ozbillwang/terraform-be...
You can check out my public repo for an example:
rpstreef / openapi-tf-example
An example of how you can use OpenAPI with AWS API Gateway. It also includes integrations with AWS Lambda, AWS Cognito, AWS SNS, and CloudWatch Logs.
OpenAPI with AWS API Gateway, Lambda, Cognito, SNS and CloudWatch logs
This repo deploys only the infrastructure via Terraform. The source code in this repo is deployed automatically via AWS CodePipeline; by default, it's configured to deploy on every push to the master branch.
Check the companion article series 'OpenAPI' on dev.to.
Thank you for answering the questions, it was a really great help in clearing up my thinking!
Could I also get your opinion on this?
Which approach is better for creating a bunch of S3 buckets for multiple environments: writing a module that creates them all at once, or using a simple module and applying it multiple times?
Do they have the same functionality across these environments (development, test, etc.)? Then you would have one S3 module, and in your application directory you create the single bucket. It will be named slightly differently per environment (include the environment name: dev, test, prod).
Then in your environment directory you supply the correct naming for each environment, and you deploy each of them separately, with their respective tfstate file per environment. That is probably the easiest way to do it; also make sure you store the Terraform state in the environment it belongs to.
To target each of them, you can create profiles in your ~/.aws/config file and reference them in your Terraform remote state configuration.
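As a sketch of that setup (all names below are hypothetical, not from this thread), the single module plus per-environment wiring could look like:

```hcl
# modules/s3/main.tf -- one generic bucket module
variable "environment" { type = string }
variable "name" { type = string }

resource "aws_s3_bucket" "this" {
  # include the environment name so the bucket is unique per environment
  bucket = "${var.name}-${var.environment}"
}

# environment/dev/main.tf -- dev gets its own state backend and profile
terraform {
  backend "s3" {
    bucket  = "my-tfstate-dev" # state stored where it belongs: in dev
    key     = "application/terraform.tfstate"
    region  = "eu-west-1"
    profile = "dev" # a profile from ~/.aws/config
  }
}

module "assets_bucket" {
  source      = "../../modules/s3"
  name        = "myapp-assets"
  environment = "dev"
}
```

The test and prod directories would repeat the same module call with their own names, backends, and profiles.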
I hope that helps
Well, there are at least five buckets which will repeat in dev, stage, and prod! But as long as they contain the environment name in their names, they should change per environment.
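If those five buckets are identical apart from their names, a single for_each over a name list keeps it to one resource block per environment (the bucket purposes below are made up for illustration):

```hcl
variable "environment" { type = string }

locals {
  # hypothetical purposes; substitute your actual five buckets
  bucket_names = ["logs", "assets", "uploads", "backups", "artifacts"]
}

resource "aws_s3_bucket" "this" {
  for_each = toset(local.bucket_names)
  # the environment name in the bucket name makes it unique per environment
  bucket   = "myapp-${each.key}-${var.environment}"
}
```

Applying the same code from each environment directory, with its own variables and tfstate, then yields the five buckets in dev, stage, and prod.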