Deploy HashiCups
In this tutorial, you will deploy the initial containerized version of the HashiCups application.
HashiCups is a demo application that showcases product operations in our tutorials. It has several components that have already been converted to microservices that run in Docker containers. If you want to follow these steps with your own application, the minimum prerequisite is a Docker container that runs the monolithic application with Docker Compose.
The six components that make up the application are database, payments-api, product-api, public-api, frontend, and nginx.
- database stores the application data, including products and orders. It does not have any upstream dependencies.
- payments-api processes payments and does not have any upstream dependencies.
- product-api serves product information and depends on the database service.
- public-api serves as an interface for both the product-api and payments-api services. It is dependent on both product-api and payments-api.
- frontend renders and serves the application's web pages. It does not have any upstream dependencies.
- nginx serves as the public entry point to the application. It is dependent on both frontend and public-api.
Containerized application services
Each of the services runs as a Docker container. HashiCups runs with Docker Compose, which is a good starting point for migrating an application to Nomad.
A Nomad job specification file, also known as a jobspec, has many of the same configuration attributes as a Docker Compose file. Many of these attributes share names as well, which simplifies converting a Docker Compose file to a Nomad jobspec.
The file snippets and chart below demonstrate how attributes map between Docker Compose and Nomad jobspec files for HashiCups.
hashicups.yaml
services:
  ...
  public-api:
    image: 'hashicorpdemoapp/public-api:v0.0.7'
    environment:
      - BIND_ADDRESS=:8081
      - PRODUCT_API_URI=http://product-api:9090
      - PAYMENT_API_URI=http://payments:8080
    links:
      - 'product-api:product-api'
      - 'payments:payments'
    ports:
      - '8081:8081'
| Docker Compose | Nomad jobspec | Definition |
| --- | --- | --- |
| public-api | task "public-api" | Defines the name of the service. |
| image | config > image | Defines the image location and version. |
| environment | env | Defines environment variables for the container. |
| links | upstreams | Defines service dependencies. |
| ports | config > ports | Defines the ports for the container. |
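As a rough sketch, the public-api service from the Compose file above could translate to a Nomad task like the following. This is a simplified illustration with hardcoded values; the actual jobspec used in this tutorial appears later in this section, and the links-to-upstreams mapping relies on Consul service mesh, which a later tutorial introduces.

task "public-api" {
  driver = "docker"

  config {
    image = "hashicorpdemoapp/public-api:v0.0.7"
    ports = ["public-api"]
  }

  env {
    BIND_ADDRESS    = ":8081"
    PRODUCT_API_URI = "http://product-api:9090"
    PAYMENT_API_URI = "http://payments:8080"
  }
}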
Infrastructure overview
At the beginning of this tutorial, you have a Nomad and Consul cluster with three server nodes, three private client nodes, and one publicly accessible client node. Each node runs a Consul agent and a Nomad agent.
The initial version of HashiCups represents a conversion from a Docker Compose file to a Nomad jobspec. It has the following attributes:
- All services run on the same node: the Nomad public client node
- Services are configured to use the Nomad client IP address or localhost
- No service health monitoring
- No scaling of services
- No secure connection (HTTPS)
Prerequisites
This tutorial uses the infrastructure set up in the previous tutorial of this collection, Set up the cluster. Complete that tutorial to set up the infrastructure if you have not done so.
Deploy HashiCups
To deploy HashiCups, you will review the jobspec that describes it, submit the job to Nomad, and then verify that the job ran successfully.
Review the jobspec
In your terminal, change to the jobs directory.
$ cd ../shared/jobs
Open the 01.hashicups.nomad.hcl jobspec file to view the contents.
The first section defines variables for the region and datacenter, Docker image versions, database configurations, and service port numbers.
/shared/jobs/01.hashicups.nomad.hcl
variable "datacenters" {
description = "A list of datacenters in the region which are eligible for task placement."
type = list(string)
default = ["*"]
}
# ...
variable "db_port" {
description = "Postgres Database Port"
default = 5432
}
# ...
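Because these are HCL2 variables, you can override any of the defaults when you submit the job with the -var flag. For example, this illustrative override changes the database port; the tutorial itself uses the defaults.

$ nomad job run -var="db_port=5433" 01.hashicups.nomad.hcl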
The top-level job "hashicups" block has a nested group "hashicups" block. The group includes each service definition as a separate task.
These blocks help organize a jobspec. A job can contain one or more groups and each group can contain one or more tasks. Nomad schedules tasks that are in the same group on the same client node.
/shared/jobs/01.hashicups.nomad.hcl
job "hashicups" {
# ...
group "hashicups" {
# ...
task "db" { # ... }
task "product-api" { # ... }
task "payments-api" { # ... }
task "public-api" { # ... }
task "frontend" { # ... }
task "nginx" { # ... }
}
}
The constraint block instructs Nomad to schedule the job on a node that has a meta attribute nodeRole set to ingress. Nodes with this ingress value are publicly accessible, which is necessary for the nginx service. The constraint block can be placed at the job, group, or task level.
/shared/jobs/01.hashicups.nomad.hcl
job "hashicups" {
# ...
# Constrain everything to a public client so nginx
# is accessible on port 80
constraint {
attribute = "${meta.nodeRole}"
operator = "="
value = "ingress"
}
# ...
}
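You can confirm which client node carries this meta attribute by searching the verbose node status output. The node ID below is a placeholder; substitute an ID from your own cluster.

$ nomad node status -verbose <node-id> | grep -i nodeRole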
Services use the client node IP address (NOMAD_IP_<label>) or the full address with port (NOMAD_ADDR_<label>) together with the label of the service to get a resolvable address. Nomad provides these runtime environment variables to each task when the scheduler places them on the node. For example, with a port labeled db on a node whose IP address is 10.0.1.12, NOMAD_IP_db would resolve to 10.0.1.12 and NOMAD_ADDR_db to 10.0.1.12:5432 (illustrative values).
/shared/jobs/01.hashicups.nomad.hcl
job "hashicups" {
# ...
group "hashicups" {
# ...
task "product-api" {
driver = "docker"
meta {
service = "product-api"
}
config {
image = "hashicorpdemoapp/product-api:${var.product_api_version}"
ports = ["product-api"]
}
env {
DB_CONNECTION = "host=${NOMAD_IP_db} port=${var.db_port} user=${var.postgres_user} password=${var.postgres_password} dbname=${var.postgres_db} sslmode=disable"
BIND_ADDRESS = ":${var.product_api_port}"
}
}
# ...
task "public-api" {
driver = "docker"
meta {
service = "public-api"
}
config {
image = "hashicorpdemoapp/public-api:${var.public_api_version}"
ports = ["public-api"]
}
env {
BIND_ADDRESS = ":${var.public_api_port}"
PRODUCT_API_URI = "http://${NOMAD_ADDR_product-api}"
PAYMENT_API_URI = "http://${NOMAD_ADDR_payments-api}"
}
}
# ...
}
}
Run the job
Submit the job to Nomad.
$ nomad job run 01.hashicups.nomad.hcl
==> 2024-11-04T12:46:13-05:00: Monitoring evaluation "533adf2b"
2024-11-04T12:46:13-05:00: Evaluation triggered by job "hashicups"
2024-11-04T12:46:13-05:00: Evaluation within deployment: "7e9334a3"
2024-11-04T12:46:13-05:00: Allocation "463be7ae" created: node "b12113ef", group "hashicups"
2024-11-04T12:46:13-05:00: Evaluation status changed: "pending" -> "complete"
==> 2024-11-04T12:46:13-05:00: Evaluation "533adf2b" finished with status "complete"
==> 2024-11-04T12:46:13-05:00: Monitoring deployment "7e9334a3"
✓ Deployment "7e9334a3" successful
2024-11-04T12:46:46-05:00
ID = 7e9334a3
Job ID = hashicups
Job Version = 0
Status = successful
Description = Deployment completed successfully
Deployed
Task Group Desired Placed Healthy Unhealthy Progress Deadline
hashicups 1 1 1 0 2024-11-04T17:56:44Z
Verify deployment
After the job is deployed, you can verify that it is running.
Use the nomad job allocs command to retrieve information about the allocations of the hashicups job.
$ nomad job allocs hashicups
ID Node ID Task Group Version Desired Status Created Modified
31dac61a 30b5f033 hashicups 0 run running 1m20s ago 49s ago
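You can also inspect the job as a whole, which summarizes its allocations and the latest deployment. The output is omitted here.

$ nomad job status hashicups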
View the application by navigating to the public IP address of the nginx service. The following compound command finds the node on which the hashicups allocation is running (nomad job allocs), uses the ID of that node to retrieve its public IP address (nomad node status), and then prepends the HTTP protocol to the address.
$ nomad node status -verbose \
$(nomad job allocs hashicups | grep -i running | awk '{print $2}') | \
grep -i public-ipv4 | awk -F "=" '{print $2}' | xargs | \
awk '{print "http://"$1}'
Output from the above command.
http://3.15.17.40
Copy the IP address and open it in your browser to view the HashiCups application. You do not need to specify a port because nginx is running on port 80.
You can click coffees and add them to a cart to test the application.
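You can also check the endpoint from your terminal with curl, substituting your own IP address for the example address here. A healthy deployment should return an HTTP 200 OK response.

$ curl -I http://3.15.17.40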
Cleanup
Before proceeding with the next tutorial, clean up the job in Nomad using the nomad job stop command.
$ nomad job stop -purge hashicups
==> 2024-11-12T21:02:41+01:00: Monitoring evaluation "81d04e2f"
2024-11-12T21:02:41+01:00: Evaluation triggered by job "hashicups"
2024-11-12T21:02:41+01:00: Evaluation status changed: "pending" -> "complete"
==> 2024-11-12T21:02:41+01:00: Evaluation "81d04e2f" finished with status "complete"
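To confirm the cleanup, check the job status. Because the -purge flag removed the job from Nomad's state entirely, Nomad should report that no job with that ID was found.

$ nomad job status hashicups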
Next steps
In this tutorial, you deployed the initial containerized version of the HashiCups application.
In the next tutorial, you will integrate Consul service discovery into the HashiCups application.