
Journey into Terraform - Doom Co-Op Server


Update 6/16/2020: got to some of the items in the retrospective; updated content herein.

Introduction

Recently, I’ve taken to Terraforming all the things. What better way to learn Terraform than committing to writing Terraform code for existing workloads and the endless lab environments that I build over and over? This includes red/blue team environments for testing, this blog, and now a Doom Co-Op server powered by Zandronum.

Is it a little impractical or excessive to use Terraform to spin up, configure, and deploy a DigitalOcean instance for a Doom server? Probably, but since it’s a short-lived instance that I only spin up during a weekly session with a buddy of mine, I figured it fit the bill and was a good way to get comfortable with Terraform.

What is Terraform?

Infrastructure as code, Dev[Sec]Ops, Agile, and things like that. Let me get the snippet from Google…

Terraform is an open-source infrastructure as code software tool created by HashiCorp. It enables users to define and provision a datacenter infrastructure using a high-level configuration language known as Hashicorp Configuration Language, or optionally JSON. One of the key use cases for Terraform is to allow developers to build and change infrastructure efficiently and programmatically.

For example, rather than clicking around a cloud provider’s web interface to manually provision and configure resources and services, developers can describe those resources and steps in code, and Terraform will do some RESTful magic against the provider’s API to set it all up for you.

Terraform relies on what are called providers: plugins that act as an interface between the infrastructure code and the API of the target platform (e.g., AWS, DigitalOcean), allowing Terraform to speak the same language and actually do the things you’re asking. You’re also not limited to provisioning virtual machines in the cloud with Terraform; there are providers for a lot of different things, right down to Cisco ASAs.
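
For a flavor of what that code looks like, here’s a minimal, generic sketch of a provider block plus a single resource. The names and values here are purely illustrative; the actual configuration for the Doom server comes later in this post.

# Purely illustrative sketch: configure the DigitalOcean provider with an
# API token and declare one Droplet. Values are placeholders.
variable "do_token" {}

provider "digitalocean" {
    token = var.do_token
}

resource "digitalocean_droplet" "example" {
    name   = "example-droplet"
    image  = "ubuntu-18-04-x64"
    region = "nyc1"
    size   = "s-1vcpu-1gb"
}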

There are tons of resources and tutorials around Terraform and what you can do with it. Check out their documentation to get started!

Zandronum… what’s that?

The short of it: Zandronum is a Doom server forked from Skulltag. It runs on macOS, Windows, and Linux, and lets you host different gameplay types, such as Cooperative, CTF, and Deathmatch.

I settled on Zandronum because it seems to be one of the more actively maintained Doom server platforms with active members and resources.

Get on with it already!

The Terraform code is set up to provision resources in DigitalOcean, but isn’t limited to it; AWS support is on the TODO list.

I have three Terraform files; DigitalOcean steps and Cloudflare steps are broken into two different files because of load order.

  • variables.tf - holds variable definitions that I can pass to Terraform via command line or environment variables (like API keys)
  • doom_00.tf - DigitalOcean steps
  • doom_01.tf - Cloudflare steps

My repository also contains additional files, such as a manifest of URLs to WAD files, Zandronum configuration files, and scripts to automate the server-side steps. All of these are copied to the server via Terraform (with the exception of the .tf and README.md files!)

|____terraform
| |____doom_01.tf
| |____doom_00.tf
| |____variables.tf
|____zandronum
| |____configs
| | |____brutaldoom64-coop.cfg
| | |____doom2-coop.cfg
|____wads
| |____README.md
| |____wadlist
|____README.md
|____scripts
| |____server.sh
| |____bootstrap.sh

I’ll walk through the pseudocode of what the Terraform is doing to provision and configure the Doom server.

  • Provision and launch an Ubuntu Droplet in DigitalOcean
  • Install OS updates and Zandronum
  • Create directory structure to hold Zandronum configurations and Doom WADs
  • Copy Doom WADs and Zandronum configuration files to the server
  • Start the Zandronum server using the configuration of choice (e.g., Cooperative, Deathmatch, etc.)
  • Return the public IP address of the server
  • Create DNS A record in Cloudflare for doom.mgior.com pointing to IP of server

The first step is to make a few definitions in the variables.tf file. I want to define a few variables for Terraform to use when provisioning from the doom_00.tf and doom_01.tf files, some of which I can pass values for via the command line, such as API keys (inb4 .bash_history). These values include:

  • DigitalOcean Access Token
  • Cloudflare Zone ID, API Access Key, etc.
  • SSH public key ID to add from DigitalOcean during provisioning

These are defined in the variables.tf file as seen below:

# DigitalOcean Token
variable "do_token" {
    type = string
}

# DigitalOcean SSH Key ID
variable "do_ssh_id" {
    type = string
}

# Cloudflare API Key
variable "cf_api_key" {
    type = string
}

# Cloudflare Account ID Key
variable "cf_account_id" {
    type = string
}

# Cloudflare Zone ID
variable "cf_zone_id" {
    type = string
}

# Cloudflare Account Email
variable "cf_email" {
    type = string
}

# DigitalOcean Droplet Region
variable "region" {
    default = "nyc1"
}

# DigitalOcean Droplet Size
variable "size" {
    default = "s-1vcpu-2gb"
}
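
As an aside, these values don’t have to be passed on the command line at all (which also keeps secrets out of .bash_history). Terraform will also read variable values from a terraform.tfvars file or from TF_VAR_-prefixed environment variables. A minimal terraform.tfvars sketch with placeholder values might look like this; since it holds secrets, it should be kept out of the repository:

# terraform.tfvars - placeholder values for illustration only; keep this
# file out of version control since it contains secrets.
do_token      = "XYZ"
do_ssh_id     = "MyKeyIdName"
cf_email      = "user@example.com"
cf_api_key    = "XYZ"
cf_account_id = "XYZ"
cf_zone_id    = "XYZ"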

I also found an excellent and frequently updated list of Terraform DigitalOcean variables on GitHub, which can either be included in full and called as needed, or referenced to pull out just what you need.

Then we get into the meat of things with doom_00.tf, which contains all the steps for the DigitalOcean side.

provider "digitalocean" {
    token = var.do_token
}

data "digitalocean_ssh_key" "doom-server" {
    name = var.do_ssh_id
}

resource "digitalocean_droplet" "doom-server" {
    ssh_keys                = [ data.digitalocean_ssh_key.doom-server.id ]
    image                   = "ubuntu-18-04-x64"
    region                  = var.region
    size                    = var.size
    private_networking      = false
    backups                 = false
    ipv6                    = false
    name                    = "MyAwesomeDoomServer"

    provisioner "file" {
        source      = "../scripts/bootstrap.sh"
        destination = "/tmp/bootstrap.sh"
    }

    provisioner "file" {
        source      = "../scripts/server.sh"
        destination = "/tmp/server.sh"
    }

    provisioner "file" {
        source      = "../wads/wadlist"
        destination = "/tmp/wadlist"
    }

    provisioner "remote-exec" {
        inline = [
            "chmod +x /tmp/bootstrap.sh",
            "/bin/sh /tmp/bootstrap.sh"
        ]
    }

    provisioner "file" {
        source      = "../zandronum/configs/"
        destination = "/opt/zandronum/configs/"
    }

    provisioner "remote-exec" {
        inline = [
            "chmod +x /tmp/server.sh",
            "/bin/sh /tmp/server.sh"
        ]
    }

    connection {
        host        = self.ipv4_address
        type        = "ssh"
        private_key = file("~/.ssh/id_rsa")
        user        = "root"
        timeout     = "2m"
    }
}
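
One piece not shown above is where the public_ip value printed after a terraform apply (shown later in this post) comes from. Presumably it’s an output block alongside the Droplet resource; a minimal sketch would look like this (the actual output definition in the repository may differ):

# Sketch of an output exposing the Droplet's public IPv4 address in the
# terraform apply output.
output "public_ip" {
    value = digitalocean_droplet.doom-server.ipv4_address
}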

And to point doom.mgior.com to the IP address of the newly created DigitalOcean Droplet, in comes doom_01.tf to make that happen on the Cloudflare side of things.

provider "cloudflare" {
    version = "~> 2.7"
    email   = var.cf_email
    api_key   = var.cf_api_key
    account_id = var.cf_account_id
}

resource "cloudflare_record" "doom" {
    zone_id    = var.cf_zone_id
    name       = "doom"
    value      = digitalocean_droplet.doom-server.ipv4_address
    type       = "A"
    ttl        = 3600
}

The steps during provisioning, such as creating directories, updating the OS, downloading WADs from a DigitalOcean Space, and installing Zandronum, are handled by the bootstrap.sh script that gets copied to the server and executed by Terraform.

#!/bin/sh

# create directories to store assets and logs
mkdir /opt/zandronum/
mkdir /opt/zandronum/logs
mkdir /opt/zandronum/wads
mkdir /opt/zandronum/configs

# download each WAD listed in the manifest
for WAD in $(cat /tmp/wadlist); do
	cd /opt/zandronum/wads/ && curl "$WAD" -O
done

# add the drdteam repository that packages Zandronum
wget -O - http://debian.drdteam.org/drdteam.gpg | apt-key add -
apt-add-repository 'deb http://debian.drdteam.org/ stable multiverse'

apt-get -y update

apt-get install -y zandronum doomseeker-zandronum

It’s important to note that Terraform grabs the SSH public key ID that I have stored in DigitalOcean and adds the corresponding public key to the server’s authorized_keys file, so that Terraform can automagically transfer files from my computer (the scripts, Zandronum configurations, and list of WAD files) to the server in DigitalOcean. This also allows me to SSH directly into the server from my computer in the event I need to interact with the Zandronum server.

The server.sh script that was copied to the server is then executed as the last step. It simply launches zandronum-server inside screen with the relevant parameters, such as which game configuration file to use, which port to listen on, and which WADs to load, based on which line is left uncommented:

#!/bin/sh

# define Zandronum asset directories
WADS="/opt/zandronum/wads"
CONFIGS="/opt/zandronum/configs"

# COOP - DOOM2 + BrutalDoom, MapsOfChaos
screen -dmS zandronum bash -c "zandronum-server -host -port 10677 -iwad $WADS/DOOM2.WAD -file $WADS/brutalv21.pk3 $WADS/mapsofchaos.wad +exec $CONFIGS/doom2-coop.cfg"

# COOP - BrutalDoom64
#screen -dmS zandronum bash -c "zandronum-server -host -port 10677 -iwad $WADS/DOOM2.WAD -file $WADS/... +exec $CONFIGS/brutaldoom64-coop.cfg"

I can test the Terraform code without actually creating any infrastructure, but first I need to initialize Terraform by issuing the terraform init command. During initialization, Terraform reviews the .tf files, determines which provider plugins are required, and downloads them.

terraform init
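
As an optional extra (not something the repository necessarily does), you can pin the Terraform and provider versions so that terraform init always fetches compatible plugins. A rough sketch in the Terraform 0.12-era syntax used in this post, with illustrative version constraints; note that Terraform 0.13 and later also expect a source address for each provider:

# Illustrative version pinning (Terraform 0.12-style); constraints are
# examples only.
terraform {
    required_version = ">= 0.12"

    required_providers {
        digitalocean = "~> 1.22"
        cloudflare   = "~> 2.7"
    }
}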

Now, issuing a terraform plan command has Terraform check that the syntax of the code is correct and then show the resources that will be added, changed, or destroyed, without actually making any changes:

terraform plan \
-var='do_token=XYZ' \
-var='do_ssh_id=MyKeyIdName' \
-var='cf_email=XYZ' \
-var='cf_api_key=XYZ' \
-var='cf_account_id=XYZ' \
-var='cf_zone_id=XYZ'

Terraform will perform the following actions:

  # digitalocean_droplet.doom-server will be created
  + resource "digitalocean_droplet" "doom-server" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "MyAwesomeDoomServer"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = false
      + region               = "nyc1"
      + resize_disk          = true
      + size                 = "s-1vcpu-2gb"
      + ssh_keys             = [
          + "XXXXXXXX",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Given everything looks OK in the plan output, I can deploy everything by issuing the terraform apply command:

terraform apply \
-var='do_token=XYZ' \
-var='do_ssh_id=MyKeyIdName' \
-var='cf_email=XYZ' \
-var='cf_api_key=XYZ' \
-var='cf_account_id=XYZ' \
-var='cf_zone_id=XYZ'
...
...
...
digitalocean_droplet.doom-server: Provisioning with 'remote-exec'...
digitalocean_droplet.doom-server (remote-exec): Connecting to remote host via SSH...
digitalocean_droplet.doom-server (remote-exec):   Host: X.X.X.X
digitalocean_droplet.doom-server (remote-exec):   User: root
digitalocean_droplet.doom-server (remote-exec):   Password: false
digitalocean_droplet.doom-server (remote-exec):   Private key: true
digitalocean_droplet.doom-server (remote-exec):   Certificate: false
digitalocean_droplet.doom-server (remote-exec):   SSH Agent: true
digitalocean_droplet.doom-server (remote-exec):   Checking Host Key: false
digitalocean_droplet.doom-server (remote-exec): Connected!
digitalocean_droplet.doom-server: Creation complete after 5m45s [id=XXXXXXXX]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

public_ip = X.X.X.X

After about six minutes, it has finished!


Once the day is done and there’s no more Doom to play, I can tear down the environment with the terraform destroy command.

terraform destroy \
-var='do_token=XYZ' \
-var='do_ssh_id=MyKeyIdName' \
-var='cf_email=XYZ' \
-var='cf_api_key=XYZ' \
-var='cf_account_id=XYZ' \
-var='cf_zone_id=XYZ'
...

Retrospective

This was an exercise that I did while I was learning more about Terraform and looking to Terraform all the things I typically do over and over. Since I don’t need a server running 24x7x365 as I only play with my buddy once a week, it’s cool to be able to spin up the environment for us to play, tear it down once done, and do it all over again.

Some things I would like to improve on:

  • Port it to work with AWS for fun and profit
  • Add variables for gameplay types to pass via the command line during terraform apply rather than commenting/uncommenting lines in the shell script that starts the Zandronum server (a rough sketch of this follows the list)
  • Add a step to create/update a DNS A record to add the IP address of the server so I can connect to something like doom.mgior.com rather than having to share a new IP address weekly
  • Create a DigitalOcean Space to house all Doom WADs and have Terraform pull from it rather than copying from the host computer
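
For the second item above, a rough, hypothetical sketch of what that could look like: a game-mode variable declared in variables.tf and threaded through to the startup script, so the configuration is chosen at apply time instead of by editing server.sh. Nothing like this exists in the repository yet; the variable name and script argument are made up for illustration.

# Hypothetical: choose the gameplay configuration at apply time, e.g.
#   terraform apply -var='game_mode=brutaldoom64-coop' ...
variable "game_mode" {
    default = "doom2-coop"
}

# ...and inside the digitalocean_droplet resource, pass it to server.sh:
#   provisioner "remote-exec" {
#       inline = [
#           "chmod +x /tmp/server.sh",
#           "/bin/sh /tmp/server.sh ${var.game_mode}"
#       ]
#   }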