This article was originally published as a guest post on the official Kubestack blog.
Introduction
This walkthrough assumes you have already followed parts one, two, and three of the official Kubestack tutorial and at least have a local development cluster running via `kbst local apply`.
In this walkthrough we will explore using the Kubestack Catalog to install the Prometheus Operator and collect metrics from an example Go application.
Major Disclaimer: In the interest of time and reduced technical complexity, this walkthrough contains a very strong anti-pattern.
Best practice dictates that infrastructure and application manifests be stored in separate repositories so they can be worked on and deployed independently.
In this walkthrough, both the Go application and the Kubestack infrastructure manifests we create will live in the same repository for simplicity's sake while deploying to the local development environment (see the Conclusion for an explanation of how this would look following best practices).
1 - Configure Local Development Environment
Before we install the Prometheus Operator, we need a few additional tools in our local development environment to make it easier to verify that our configuration is working.
1.1 - Install Go Locally
Follow the instructions at go.dev/doc/install to install Go in your local development environment. We need it to build our example application as well as to install another tool below.
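Once installed, a quick sanity check that Go is on your PATH (the version reported will vary with whatever release you installed):

```sh
go version
```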
1.2 - Install and Configure kubectl
`kubectl` will be used for two purposes: first, to verify which resources are deployed in our k8s cluster; second, to forward ports so we can access resources inside the k8s cluster from our local development environment.
```sh
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```
At this point, if `kubectl` is successfully installed, you should see output similar to the following:

```
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
```
If you're having issues installing `kubectl`, please refer to the official documentation.
1.3 - Install and Configure kind
`kind` (Kubernetes in Docker) will be used to export the cluster configuration file that `kubectl` needs to access the cluster.
```sh
GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1
kind version
kind get clusters
kind export kubeconfig --name <CLUSTER_NAME>
kubectl get namespaces
```
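Note that on Go 1.17 `go get` prints a deprecation warning for installing binaries, and the ability was removed entirely in Go 1.18. If the first command above fails for you, `go install` is the modern equivalent:

```sh
go install sigs.k8s.io/kind@v0.11.1
```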
At this point, if `kind` has been installed correctly and the kubectl config has been exported successfully, you should see something similar to the following:
```
NAME                 STATUS   AGE
default              Active   6h27m
ingress-nginx        Active   6h26m
kube-node-lease      Active   6h27m
kube-public          Active   6h27m
kube-system          Active   6h27m
local-path-storage   Active   6h27m
```
If you're having issues installing `kind`, please refer to the official documentation. Otherwise, congrats! You're ready to move on to the next part of the tutorial.
NOTE: `kind` is only needed for local development environments, since that is how Kubestack deploys your environment locally. If you want to follow the rest of the tutorial using your cloud environment instead, you will need to download the kubectl config file from that cluster and import it locally so `kubectl` can access that cluster instead. Here are some resources about exporting that configuration file: EKS, AKS, GKE.
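For reference, each of the major cloud CLIs can merge credentials for a remote cluster into your local kubeconfig. A minimal sketch, with placeholder names you would fill in yourself:

```sh
# AWS EKS
aws eks update-kubeconfig --region <region> --name <cluster-name>

# Azure AKS
az aks get-credentials --resource-group <resource-group> --name <cluster-name>

# Google GKE
gcloud container clusters get-credentials <cluster-name> --zone <zone>
```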
2 - Deploy an Example Go Application
Now we're going to create an example application to emit some metrics for us to collect with Prometheus.
2.1 - Create a Go Application
Create a new GitHub repository to host your example Go application.
Create a new Go module in the repository with `go mod init github.com/<account>/<repo>`.
Create a main.go file in the repository with the following content:
```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// recordMetrics increments the counter every two seconds in a background
// goroutine so there is always fresh data for Prometheus to scrape.
func recordMetrics() {
	go func() {
		for {
			opsProcessed.Inc()
			time.Sleep(2 * time.Second)
		}
	}()
}

var (
	opsProcessed = promauto.NewCounter(prometheus.CounterOpts{
		Name: "app_go_prom_processed_ops_total",
		Help: "The total number of processed events",
	})
)

func main() {
	println("Starting app-go-prom on port :2112")
	recordMetrics()
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":2112", nil)
}
```
With main.go created, run `go mod tidy` to create go.mod and go.sum (these files should be committed to the repo).
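Before containerizing anything, it's worth a quick local smoke test (assuming port 2112 is free on your machine):

```sh
go run main.go
# in a second terminal:
curl localhost:2112/metrics | grep app_go_prom
```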
2.2 - Create a Dockerfile that Runs the Go Application
Create a Dockerfile in the repository with the following content:
```dockerfile
FROM golang:1.17-alpine

WORKDIR /app

# Copy the module files first so the dependency download layer is cached
# independently of source changes.
COPY go.mod ./
COPY go.sum ./
RUN go mod download

COPY *.go ./
RUN go build -o /app-go-prom

EXPOSE 2112
CMD [ "/app-go-prom" ]
```
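If you want to verify the image builds and serves metrics before wiring up CI, a quick local test (the image tag app-go-prom:dev is arbitrary):

```sh
docker build -t app-go-prom:dev .
docker run --rm -p 2112:2112 app-go-prom:dev
# in a second terminal:
curl localhost:2112/metrics | grep app_go_prom
```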
2.3 - Create a GitHub Action Pipeline to Build and Publish Docker Image
Create a .github/workflows/docker-publish.yml file in the repository with the following content:
```yaml
name: Docker Publish

on:
  push:
    branches: [ main ]
    tags: [ 'v*' ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  # github.repository as <account>/<repo>
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # https://github.com/docker/login-action
      - name: Log into registry ${{ env.REGISTRY }}
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # https://github.com/docker/metadata-action
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      # https://github.com/docker/build-push-action
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```
This will build and publish a Docker image to the GitHub registry associated with the repository you created. This image can then be used inside your k8s cluster.
To trigger a build of an image tagged `latest` (rather than with the current branch name), you will need to push a git tag, similar to the following:
```sh
git checkout main
git pull
git tag v0.1.1
git push origin v0.1.1
```
Once the pipeline completes, you should be able to verify it worked by pulling the image with `docker pull ghcr.io/<account>/<repo>:latest`.
2.4 - Create an Application Manifest for Kubestack to Deploy in the Cluster
This requires two files to be created in your Kubestack IaC repository.

First, eks_zero_applications.tf:
module "application_custom_manifests" {
providers = {
kustomization = kustomization.eks_zero
}
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration = {
apps = {
resources = [
"${path.root}/manifests/applications/app-go-prom.yaml"
]
common_labels = {
"env" = terraform.workspace
}
}
ops = {}
loc = {}
}
}
Second, manifests/applications/app-go-prom.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-go-prom
  namespace: default
  labels:
    app: app-go-prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-go-prom
  template:
    metadata:
      labels:
        app: app-go-prom
    spec:
      containers:
        - image: ghcr.io/<account>/<repo>:latest
          name: app-go-prom
          ports:
            - containerPort: 2112
---
apiVersion: v1
kind: Service
metadata:
  name: app-go-prom-svc
  namespace: default
  labels:
    app: app-go-prom
spec:
  selector:
    app: app-go-prom
  ports:
    - name: metrics
      port: 2112
      targetPort: 2112
      protocol: TCP
  type: LoadBalancer
```
With these files created, you may need to Ctrl+C out of `kbst local apply` and run it again if the watcher doesn't pick up changes in custom manifest files. (In the local kind cluster, the LoadBalancer service will stay pending, which is fine here since we will access it with `kubectl port-forward`.)
2.5 - Verify that the Go Application is Running and Emitting Metrics
Note about this section: If you ever destroy and re-apply the local cluster, you will need to run the `kind export kubeconfig --name <CLUSTER_NAME>` command from above again to get a fresh kubectl config in order for `kubectl` to work. The cluster name will probably be the same, but if you need to find it you can always run `kind get clusters` to figure it out.
Once `kbst` has finished applying the changes, let's verify that the pod is running and that it is emitting metrics as we expect:

```sh
kubectl get pods
```
You should see output similar to the following if everything is working correctly:

```
NAME                           READY   STATUS    RESTARTS   AGE
app-go-prom-6f9576879d-hvdr9   1/1     Running   0          32h
```
If STATUS is not `Running`, there is an error. You can use `kubectl logs <go-app-pod-name>` to check the pod logs and fix any errors.
Once your Go application pod is running, let's forward the port from the associated service to our localhost and check for metrics:
```sh
kubectl get services
```
With the service name in hand, run:

```sh
kubectl port-forward service/app-go-prom-svc 2112
# in a second terminal:
curl localhost:2112/metrics
```
You should see a long list of metrics at this point, including the one we created in our Go application:

```
# HELP app_go_prom_processed_ops_total The total number of processed events
# TYPE app_go_prom_processed_ops_total counter
app_go_prom_processed_ops_total 25058
...
```
If you run the curl command a few more times you should see our metric increasing steadily as we expect.
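If you'd rather not re-run curl by hand, `watch` (if available on your system) makes the counter's progress easy to see:

```sh
watch -n 2 "curl -s localhost:2112/metrics | grep app_go_prom_processed_ops_total"
```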
You can stop the port forward and move on to the next section now.
3 - Install Prometheus Operator via the Kubestack Catalog
Now we are going to install the Prometheus Operator following the instructions in the Kubestack Catalog.
This consists of 3 steps:
- Adding the Prometheus Operator module to the cluster
- Configuring read-only access policies to monitoring targets
- Specifying which target services to monitor
3.1 - Adding the Prometheus Operator module to the cluster
Create an eks_zero_services.tf file in the root of the repo with the following content:
module "eks_zero_prometheus" {
providers = {
kustomization = kustomization.eks_zero
}
source = "kbst.xyz/catalog/prometheus/kustomization"
version = "0.51.1-kbst.0"
configuration = {
apps = {
additional_resources = [
"${path.root}/manifests/services/prometheus-default-instance.yaml",
"${path.root}/manifests/services/prometheus-service-monitors.yaml"
]
}
ops = {}
loc = {}
}
}
This file adds the `eks_zero_prometheus` module using the `eks_zero` kustomization provider. If you have customized the name of your provider, make sure to update that here as well.
3.2 - Create a Default Instance with permissions to monitor targets
Now we will create the first file referenced in `additional_resources` above.
Create manifests/services/prometheus-default-instance.yaml with the following contents:
```yaml
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: default-instance
  namespace: default
  labels:
    prometheus: default-instance
spec:
  serviceAccountName: prometheus-default-instance
  serviceMonitorSelector:
    matchLabels:
      prometheus-instance: default-instance
  resources:
    requests:
      memory: 2Gi
---
# rbac.authorization.k8s.io/v1beta1 was removed in Kubernetes 1.22; use v1.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-default-instance
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-instance
subjects:
  - kind: ServiceAccount
    name: prometheus-default-instance
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-default-instance
  namespace: default
```
This file contains 3 key components:
- The Prometheus default-instance
- The RoleBinding permissions
- The ServiceAccount
The default-instance is our Prometheus server that collects the metrics and serves the Prometheus UI. The other components are used to grant the needed permissions to read the metrics we'll specify in the next section.
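Once applied, one quick way to confirm all three objects exist (section 3.4 below does a fuller verification; this assumes the Prometheus CRDs have already been installed by the operator module):

```sh
kubectl get prometheus,serviceaccount,rolebinding -n default
```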
3.3 - Specify which targets to monitor
Finally, we will create a ServiceMonitor that ties everything together.
Create the manifests/services/prometheus-service-monitors.yaml file with the following contents:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-go-prom-monitor
  namespace: default
  labels:
    prometheus-instance: default-instance
spec:
  selector:
    matchLabels:
      app: app-go-prom
  endpoints:
    - port: metrics
```
There are 3 important pieces here that connect everything (you can cross-check them from the command line after the apply step below):
- `metadata.labels` here needs to exactly match the `spec.serviceMonitorSelector.matchLabels` from the default-instance
- `spec.selector.matchLabels` here needs to exactly match the `metadata.labels` of the Service from the Go application manifest (a ServiceMonitor selects Services, not Deployments)
- `spec.endpoints.port` needs to match the `spec.ports.name` of the Service from the Go application manifest
Once all those pieces are in place, you can once again run `kbst local apply` to pick up the new manifests.
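After the apply completes, you can cross-check the label matching described above from the command line; a quick sketch:

```sh
# Labels the ServiceMonitor is selecting on:
kubectl get servicemonitor app-go-prom-monitor -o jsonpath='{.spec.selector.matchLabels}'; echo

# Labels actually present on the Service:
kubectl get service app-go-prom-svc --show-labels
```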
3.4 - Verify all Prometheus components are Running
Note about this section: If you ever destroy and re-apply the local cluster, you will need to run the `kind export kubeconfig --name <CLUSTER_NAME>` command from above again to get a fresh kubectl config in order for `kubectl` to work. The cluster name will probably be the same, but if you need to find it you can always run `kind get clusters` to figure it out.
Similar to how we verified our Go application, once `kbst` has finished applying the changes let's check that the Prometheus Operator and supporting components have been successfully deployed:

```sh
kubectl get all --namespace operator-prometheus
```
You should see output similar to the following:

```
NAME                                       READY   STATUS    RESTARTS   AGE
pod/prometheus-operator-775545dc6b-qffng   1/1     Running   0          40h

NAME                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/prometheus-operator   ClusterIP   None         <none>        8080/TCP   40h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-operator   1/1     1            1           40h

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-operator-775545dc6b   1         1         1       40h
```
Now let's check that the default-instance pod is running:

```sh
kubectl get pods
```

You should see there is now a `prometheus-default-instance-0`:
```
NAME                            READY   STATUS    RESTARTS   AGE
app-go-prom-6f9576879d-hvdr9    1/1     Running   0          33h
prometheus-default-instance-0   2/2     Running   0          33h
```
If STATUS is not `Running`, there is an error. You can use `kubectl logs prometheus-default-instance-0` to check the pod logs and fix any errors.
Lastly, let's verify that our ServiceMonitor was created:

```sh
kubectl get ServiceMonitors
```
You should see something similar to:

```
NAME                  AGE
app-go-prom-monitor   33h
```
If you don't have any errors, proceed to the final section, where we'll verify that Prometheus is correctly collecting our metrics.
4 - View the Metrics in the Prometheus UI
Now that we've got everything deployed and running, let's take one final step to verify that everything is working.
Like we did before with our Go application, we now need to forward the Prometheus port to access the UI:

```sh
kubectl port-forward prometheus-default-instance-0 9090
```
Now, from your local development environment, open a web browser and navigate to localhost:9090.
If everything has been successful to this point you should be greeted with the Prometheus dashboard.
Enter the metric name we created in our Go application, `app_go_prom_processed_ops_total`, into the search box and click "Execute".
You will see the metric metadata and count displayed below the search box, similar to the following:

```
app_go_prom_processed_ops_total{container="app-go-prom", endpoint="metrics", instance="10.244.1.4:2112", job="app-go-prom-svc", namespace="default", pod="app-go-prom-6f9576879d-hvdr9", service="app-go-prom-svc"} 474
```
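As an aside, anything you can query in the UI is also available over Prometheus' HTTP API while the port-forward is running. For example, the per-second rate of our counter over the last five minutes (the query parameter is standard PromQL):

```sh
curl 'localhost:9090/api/v1/query' --data-urlencode 'query=rate(app_go_prom_processed_ops_total[5m])'
```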
Conclusion
Congratulations, you have successfully deployed the Prometheus Operator, created an example service emitting metrics, and configured everything to collect those metrics. That is the backbone you'll need for visibility into the metrics of your new cluster.
From here you could extend your metrics infrastructure by:
- adding additional applications / services and their associated ServiceMonitors,
- adding a Grafana deployment to create dashboards of your metrics,
- configuring an external instance of Prometheus to collect all your metrics,
- etc.
If you'd like, the next step would be to go beyond metrics and browse the Kubestack Catalog to install additional helpful services into your cluster.
Initial Disclaimer Explained:
As mentioned in the beginning, we introduced a very strong anti-pattern in this walkthrough by placing our application and infrastructure manifests in the same repository.
This should NEVER be done when deploying to your real cloud infrastructure. Instead, for micro-service architectures such as this, each application would have its own code repo (in whatever language is appropriate). There would also be one additional "deployment repository" containing the Kubernetes manifests of all the applications. A service such as ArgoCD or Flux would then be configured to monitor the deployment repository and deploy changes to the Kubernetes cluster as needed when the applications are updated.
The Prometheus Operator should be deployed as part of the Kubestack infrastructure. The Prometheus instance and ServiceMonitor (both explained in more detail above) should be deployed alongside each application. The only exception would be if you plan to have a single instance monitor all your services; in that case, it can be deployed as part of the Kubestack infrastructure.
For more information you can refer to the official documentation regarding infrastructure environments.