Create a VM on Vultr using Terraform on GitLab

  • Goal: Create a Vultr VM using Terraform, and get an idea of what would be required to manage this setup using just GitLab. (Next I'll do the same with Salt Cloud and compare.)

    1. Create a git repo on GitLab to store the Terraform config
    2. Get a Docker image for Terraform
      I created my own based on the Terraform image, but with the Vultr plugin added. It would be possible to use the stock image and download the plugin each time, but I figured that would be unnecessary.


    FROM alpine:3.8 AS downloader
    RUN apk --no-cache add wget
    # Fetch and unpack the Vultr provider plugin release tarball
    RUN wget --no-check-certificate
    RUN tar xvzf terraform-provider-vultr_v0.1.9_linux_amd64.tar.gz

    FROM hashicorp/terraform:0.11.11
    WORKDIR /vultr/
    # Put the plugin binary where terraform init looks for local plugins
    COPY --from=downloader /terraform-provider-vultr_v0.1.9 ./.terraform/plugins/linux_amd64/
    RUN terraform init

    I built the image locally (I don't think GitLab has a way of building images using the shared runners) and pushed it up to my GitLab Docker Registry.
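    The local build-and-push step looks roughly like this; the registry path and tag are assumptions, so substitute your own GitLab project path:

    ```shell
    # Assumed image path/tag; replace with your own GitLab project path.
    docker login registry.gitlab.com
    docker build -t registry.gitlab.com/username/terraform-vultr:0.1.9 .
    docker push registry.gitlab.com/username/terraform-vultr:0.1.9
    ```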

    3. Add my Terraform config file to the repo
    provider "vultr" {}

    data "vultr_region" "toronto" {
      filter {
        name   = "name"
        values = ["Toronto"]
      }
    }

    data "vultr_os" "centos" {
      filter {
        name   = "name"
        values = ["CentOS 7 x64"]
      }
    }

    data "vultr_plan" "starter" {
      filter {
        name   = "price_per_month"
        values = ["5.00"]
      }

      filter {
        name   = "ram"
        values = ["1024"]
      }
    }

    resource "vultr_instance" "example" {
      name      = "example"
      region_id = "${data.vultr_region.toronto.id}"
      plan_id   = "${data.vultr_plan.starter.id}"
      os_id     = "${data.vultr_os.centos.id}"
      hostname  = "example"
    }
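    Note that the empty `provider "vultr" {}` block works because the plugin picks the API key up from the environment rather than from the config. A local dry run would look something like this (assuming the plugin reads a `VULTR_API_KEY` environment variable, which is how the pipeline below supplies it):

    ```shell
    # Hypothetical local dry run; the API key stays out of the config and repo.
    export VULTR_API_KEY=...   # your API key, kept out of version control
    terraform init
    terraform plan
    ```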
    4. Set up GitLab CI/CD
      Terraform has to store information about the state of a setup; otherwise it can't work out the proper changes when the config changes. This state file might end up with sensitive information in it, so instead of storing it in the same GitLab repo, I created a separate repo for it to keep it apart from the config, and created an SSH key for that repo that my main repo stores as an environment variable. The Vultr API key is also stored as an environment variable within the repo.
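    Generating that key is a one-liner (the file name here is arbitrary); the public half goes on the state repo as a deploy key with write access, and the private half becomes the `STATE_KEY` variable in the main repo:

    ```shell
    # Generate an ed25519 keypair with no passphrase (file name is arbitrary).
    ssh-keygen -t ed25519 -N "" -f ./state_deploy_key -C "pipeline-state-key"
    # The .pub file is the deploy key; the private file's contents go into STATE_KEY.
    cat ./state_deploy_key.pub
    ```
    
    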
    deploy:
      image:
        name: registry.gitlab.com/username/terraform-vultr   # your registry image
        entrypoint: [""]
      tags:
        - docker
      stage: deploy
      script:
        - mkdir ~/.ssh
        - touch ~/.ssh/known_hosts
        - echo "$GITLAB_FINGER" >> ~/.ssh/known_hosts
        - touch ~/.ssh/id_ed25519
        - echo "$STATE_KEY" >> ~/.ssh/id_ed25519
        - chmod 400 ~/.ssh/id_ed25519
        - git clone git@gitlab.com:username/terraformstate.git /state
        - cp -R /state/. /vultr
        - cp -R /vultr/. ./
        - terraform init
        - terraform apply -auto-approve -input=false
        - cp ./terraform.tfstate /vultr/
        - cd /vultr
        - git config --global user.email "[email protected]"
        - git config --global user.name "Pipeline"
        - git commit terraform.tfstate -m "Update state"
        - git push
      environment:
        name: shared_production
      only:
        - prod

    When run, it commits the terraform.tfstate file to the state repo when everything is finished. Ideally there should be some kind of locking mechanism in place so that two pipelines can't run terraform at the same time.
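    One option for that (on newer GitLab versions than I was using here) is the `resource_group` keyword, which makes GitLab queue jobs in the same group instead of running them concurrently:

    ```yaml
    deploy:
      resource_group: terraform   # only one job in this group runs at a time
    ```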
    Also, I created a prod branch in my main repo for the pipeline to run from, rather than master.
