Create a VM on Vultr using Terraform on GitLab
-
Goal: Create a Vultr VM using Terraform and get an idea of what would be required to manage this setup using just GitLab. (Next: do the same with Salt Cloud and compare.)
- Create a git repo on GitLab to store the Terraform config
- Get Docker container for Terraform
I created my own image based on the official Terraform image, but with the Vultr plugin added. It would be possible to use the stock image and download the plugin on each run, but that seemed unnecessary.
Dockerfile
FROM alpine:3.8 AS downloader
RUN apk --no-cache add wget
RUN wget --no-check-certificate https://github.com/squat/terraform-provider-vultr/releases/download/v0.1.9/terraform-provider-vultr_v0.1.9_linux_amd64.tar.gz
RUN tar xvzf terraform-provider-vultr_v0.1.9_linux_amd64.tar.gz

FROM hashicorp/terraform:0.11.11
WORKDIR /vultr/
COPY --from=downloader /terraform-provider-vultr_v0.1.9 ./.terraform/plugins/linux_amd64/
RUN terraform init
I built the image locally (I don't think GitLab.com has a way of building images using the shared runners) and pushed it up to my GitLab Docker Registry.
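The build-and-push step looks something like this (the registry path is an example; substitute your own GitLab namespace and image name):

```shell
# Build the custom Terraform+Vultr image from the Dockerfile above
docker build -t registry.gitlab.com/username/terraform-vultr .

# Authenticate against the GitLab registry, then push the image
docker login registry.gitlab.com
docker push registry.gitlab.com/username/terraform-vultr
```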
- Add my Terraform config file to the repo
vultr.tf
provider "vultr" {}

data "vultr_region" "toronto" {
  filter {
    name   = "name"
    values = ["Toronto"]
  }
}

data "vultr_os" "centos" {
  filter {
    name   = "name"
    values = ["CentOS 7 x64"]
  }
}

data "vultr_plan" "starter" {
  filter {
    name   = "price_per_month"
    values = ["5.00"]
  }

  filter {
    name   = "ram"
    values = ["1024"]
  }
}

resource "vultr_instance" "example" {
  name      = "example"
  region_id = "${data.vultr_region.toronto.id}"
  plan_id   = "${data.vultr_plan.starter.id}"
  os_id     = "${data.vultr_os.centos.id}"
  hostname  = "example"
}
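Before wiring this into CI, the config can be sanity-checked locally (assuming the Vultr plugin is installed and an API key is available; the key value here is a placeholder):

```shell
# Export the API key the Vultr provider reads from the environment
export VULTR_API_KEY="your-api-key-here"

# Initialize the working directory and preview the changes without applying
terraform init
terraform plan
```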
- Setup GitLab CI/CD
Terraform has to store information about the state of a setup; without it, it can't work out what changes to make when the config changes. This state file might end up with sensitive information in it, so instead of storing it in the same GitLab repo, I created a separate repo to keep it apart from the config, and created an SSH key for that repo, which my main repo stores as an environment variable. The Vultr API key is also stored as an environment variable within the repo.
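Setting up those variables involves generating a deploy key for the state repo and recording GitLab's host key so the runner can clone non-interactively. A sketch (file names are my own choice here; the private key and known_hosts contents become the STATE_KEY and GITLAB_FINGER CI variables):

```shell
# Generate an ed25519 keypair with no passphrase for the state repo;
# the .pub file gets added as a deploy key on the state repo
ssh-keygen -t ed25519 -N "" -f state_repo_key -C "terraform-state-deploy"

# Capture gitlab.com's host key for the runner's known_hosts
# (|| true so this sketch still completes without network access)
ssh-keyscan gitlab.com > gitlab_known_hosts 2>/dev/null || true
```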
.gitlab-ci.yml
shared_prod:
  image:
    name: registry.gitlab.com/username/terraform-vultr
    entrypoint: [""]
  tags:
    - docker
  stage: deploy
  script:
    - mkdir ~/.ssh
    - touch ~/.ssh/known_hosts
    - echo "$GITLAB_FINGER" >> ~/.ssh/known_hosts
    - touch ~/.ssh/id_ed25519
    - echo "$STATE_KEY" >> ~/.ssh/id_ed25519
    - chmod 400 ~/.ssh/id_ed25519
    - git clone [email protected]:username/terraformstate.git /state
    - cp -R /state/. /vultr
    - export VULTR_API_KEY=$VULTR_API_KEY
    - cp -R /vultr/. ./
    - terraform init
    - terraform apply -auto-approve -input=false
  after_script:
    - cp ./terraform.tfstate /vultr/
    - cd /vultr
    - git config --global user.email "[email protected]"
    - git config --global user.name "Pipeline"
    - git commit terraform.tfstate -m "Update state"
    - git push
  environment:
    name: shared_production
  only:
    - prod
When run, the pipeline commits the terraform.tfstate file to the state repo when everything is finished. Ideally there should be some kind of locking mechanism in place so that it's not possible to run two terraform jobs at the same time.
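One built-in option for that (assuming a GitLab version recent enough to support it; resource_group was added in GitLab 12.7) is to put the job in a resource group, which serializes runs:

```yaml
shared_prod:
  # Only one job in this resource group runs at a time, so two
  # pipelines can't both run terraform against the same state
  resource_group: shared_production
```

This only serializes CI jobs, though; it doesn't protect against someone running terraform from their own machine, the way a real backend with state locking would.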
Also, I created a prod branch in my main repo and run the config from it, rather than from master.