Prerequisites
Docker Desktop
kubectl
Kind (for local cluster)
AWS CLI + eksctl (for production)
direnv (for environment management)
Phase 1: Local Development Setup
1. Create Local Kind Cluster
kind create cluster --name rwml-34fa #(this is the name you need to choose)
kubectl config use-context kind-rwml-34fa
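Note that Kind registers the cluster under a kubectl context named kind-<cluster-name>, which is why the second command references kind-rwml-34fa. A tiny sketch of that naming rule (the helper function is illustrative, not part of Kind):

```shell
# Kind prefixes the cluster name with "kind-" when it writes the kubectl context.
kind_context() { echo "kind-$1"; }

kind_context rwml-34fa   # prints: kind-rwml-34fa
```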
2. Build and Push to Local Registry
You will need to have your Docker images built first. I built my image with Docker and reference it in the scripts below.
#!/bin/bash specifies that this script should be executed using the Bash shell.

The script's purpose is to:
- Build a Docker image from a specified Dockerfile.
- Push the image to a registry (for prod).
- Load the image into a local Kubernetes cluster (for dev).

The script expects two arguments:
- image_name: the name of the Docker image (e.g., myapp).
- env: the environment (dev or prod).

[ -z "$image_name" ] checks whether $image_name is empty. If either argument is missing, the script prints the correct usage and exits with status 1 (error).

Case 1: env = "dev" (development build)
- Builds a local Docker image tagged as image_name:dev.
  - -t tags the image.
  - -f specifies the Dockerfile path (docker/${image_name}.Dockerfile).
- Loads the image into the local Kubernetes cluster using kind (Kubernetes in Docker): kind load docker-image makes the image available to the cluster named rwml-34fa.

Case 2: env = "prod" (production build)
- Builds a multi-platform (linux/amd64) image using docker buildx (for cross-platform compatibility).
- Tags the image with a version (0.1.5-beta) plus a timestamp (BUILD_DATE).
- Pushes to the GitHub Container Registry (ghcr.io).
- Adds Open Containers Initiative (OCI) labels for metadata:
  - revision: Git commit hash.
  - created: build timestamp.
  - url, title, description, source: GitHub repo details.
- Pushes the image to the registry (--push).
#!/bin/bash
# Builds a docker image for the given dockerfile and pushes it to the docker registry
# given by the env variable
image_name=$1
env=$2

# Just checking that the user has provided the correct number of arguments
if [ -z "$image_name" ]; then
  echo "Usage: $0 <image_name> <env>"
  exit 1
fi

if [ -z "$env" ]; then
  echo "Usage: $0 <image_name> <env>"
  exit 1
fi

# Check that env is either "dev" or "prod"
if [ "$env" != "dev" ] && [ "$env" != "prod" ]; then
  echo "env must be either dev or prod"
  exit 1
fi

if [ "$env" = "dev" ]; then
  echo "Building image ${image_name} for dev"
  docker build -t ${image_name}:dev -f docker/${image_name}.Dockerfile .
  kind load docker-image ${image_name}:dev --name rwml-34fa
else
  echo "Building image ${image_name} for prod"
  BUILD_DATE=$(date +%s)
  docker buildx build --push \
    --platform linux/amd64 \
    -t ghcr.io/silsgah/${image_name}:0.1.5-beta.${BUILD_DATE} \
    --label org.opencontainers.image.revision=$(git rev-parse HEAD) \
    --label org.opencontainers.image.created=$(date -u +%Y-%m-%dT%H:%M:%SZ) \
    --label org.opencontainers.image.url="ghcr.io/silsgah/${image_name}.Dockerfile" \
    --label org.opencontainers.image.title="${image_name}" \
    --label org.opencontainers.image.description="${image_name} Dockerfile" \
    --label org.opencontainers.image.licenses="" \
    --label org.opencontainers.image.source="ghcr.io/silsgah" \
    -f docker/${image_name}.Dockerfile .
fi
# Build image
docker build -t trades:dev -f docker/trades.Dockerfile .
# Load into Kind
kind load docker-image trades:dev --name rwml-34fa
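The prod branch of the script assembles its image tag from a fixed version string plus a Unix-epoch timestamp. A minimal sketch of that tag construction, using the registry path from the script above ("trades" is just an example service name):

```shell
#!/bin/bash
# Sketch: how the prod image tag in the build script is assembled.
image_name="trades"
version="0.1.5-beta"
BUILD_DATE=$(date +%s)   # Unix-epoch timestamp, as in the script
tag="ghcr.io/silsgah/${image_name}:${version}.${BUILD_DATE}"
echo "$tag"
```

Because the timestamp changes on every build, each prod push gets a unique, sortable tag.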
3. Deploy to Local Cluster
To deploy to either a local or production Kubernetes cluster, first verify which context you're currently working in. You can check this with the following kubectl command:
kubectl config get-contexts
Once your context is confirmed, you can apply your configuration for deployment. The script I've set up deploys to either the local or production cluster depending on the env variable you pass.
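To make that env-to-context mapping explicit, here is a minimal sketch. The context names follow this guide (the kind- prefix for the local cluster, the EKS ARN for prod); the helper function itself is not part of the original scripts, and the <ACCOUNT_ID> placeholder is kept as-is:

```shell
#!/bin/bash
# Sketch: map the env argument to the kubectl context used in this guide.
select_context() {
  case "$1" in
    dev)  echo "kind-rwml-34fa" ;;
    prod) echo "arn:aws:eks:us-east-1:<ACCOUNT_ID>:cluster/prod" ;;
    *)    echo "env must be dev or prod" >&2; return 1 ;;
  esac
}

# Example: kubectl --context="$(select_context dev)" apply -f deployments/dev/
```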
Phase 2: Production AWS EKS Setup
1. Configure AWS EKS Cluster
In our case, we chose to deploy to AWS EKS. To achieve this, you need to configure your local environment to communicate with your EKS cluster on AWS.
You can now provision your cluster in the cloud with the following command (you need the AWS CLI and eksctl installed). Provisioning takes some time, around 15-20 minutes:
eksctl create cluster \
--name prod \
--region us-east-1 \
--node-type t3.large \
--nodes 3 \
--managed
2. Set Up Production Namespace
You'll need to create a dedicated namespace for your production environment, which should then be referenced in your deployment .yaml files. Create the namespace, then set the Kubernetes context to use it:

kubectl create namespace rwml
kubectl config set-context --current --namespace=rwml

For streamlined environment context management (especially when switching between local, staging, or production), tools like direnv can be very helpful. direnv automatically loads environment variables from a .envrc file based on your working directory, simplifying context management across projects.
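For the local side, a hypothetical deployment/dev/.envrc might look like this. The values are assumptions based on the names used in this guide, not a file from the original project:

```shell
# Hypothetical deployment/dev/.envrc (values assumed from this guide's naming)
export KUBE_CONTEXT="kind-rwml-34fa"
export KUBE_NAMESPACE="rwml"
```

With direnv, these variables load automatically whenever you cd into the directory.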
3. Configure AWS Authentication
The command aws eks --region us-east-1 update-kubeconfig --name prod performs a critical function for working with Amazon EKS (Elastic Kubernetes Service). Here's what it does and why it's important:

Connects your local kubectl to AWS EKS
- Updates your ~/.kube/config file with:
  - the cluster API endpoint
  - certificate authority data
  - authentication credentials

Creates a new context
- Adds a context entry for the cluster to your kubeconfig.

Sets up authenticated access
- Uses your AWS IAM credentials to generate temporary Kubernetes access tokens.
aws eks --region us-east-1 update-kubeconfig --name prod
You can confirm that kubectl is communicating with your prod environment using the commands illustrated in the image below.
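As a scripted alternative to eyeballing the output, here is a minimal sketch of a guard that checks whether a context string points at the prod EKS cluster. The ARN pattern matches the format aws eks update-kubeconfig writes; the helper name is my own, not part of the original scripts:

```shell
#!/bin/bash
# is_prod_context: succeeds only for the prod EKS context ARN (illustrative helper).
is_prod_context() {
  case "$1" in
    arn:aws:eks:us-east-1:*:cluster/prod) return 0 ;;
    *) return 1 ;;
  esac
}

# Example usage before a production deploy:
# is_prod_context "$(kubectl config current-context)" || { echo "wrong context"; exit 1; }
```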
Phase 3: CI/CD Pipeline Setup
1. Environment Configuration
Depending on where you have your .envrc set up (here, deployment/prod/.envrc), export the following so the variables are available for use:
export KUBE_CONTEXT="arn:aws:eks:us-east-1:<ACCOUNT_ID>:cluster/prod"
export KUBE_NAMESPACE="rwml"
export DOCKER_REGISTRY="ghcr.io/your-org"
2. Deployment Script
#!/bin/bash
service=$1
env=$2
# Load environment
cd "deployment/${env}" && direnv allow . && cd -
# Build and push
docker build -t $DOCKER_REGISTRY/$service:prod -f docker/$service.Dockerfile .
docker push $DOCKER_REGISTRY/$service:prod
# Deploy
kubectl --context=$KUBE_CONTEXT apply -f deployments/$env/$service.yaml
#!/bin/bash

This tells the system to run the script using the Bash shell.

service=$1
env=$2

These lines take the first and second command-line arguments and assign them to variables:
- service: the name of the microservice or app you're deploying.
- env: the environment you're targeting (e.g., local, prod, staging).

cd "deployment/${env}" && direnv allow . && cd -

- Changes directory into the environment-specific deployment folder.
- Runs direnv allow . to load the environment variables defined in that folder.
- Then returns (cd -) back to the original directory.

docker build -t $DOCKER_REGISTRY/$service:prod -f docker/$service.Dockerfile .

- Builds a Docker image for the specified service using the provided Dockerfile.
- Tags the image with prod and the target Docker registry.

docker push $DOCKER_REGISTRY/$service:prod

- Pushes the built Docker image to the remote Docker registry.

kubectl --context=$KUBE_CONTEXT apply -f deployments/$env/$service.yaml

- Uses kubectl to deploy the service to a Kubernetes cluster.
- Applies the YAML configuration file specific to the service and environment.
- The cluster is chosen based on the KUBE_CONTEXT environment variable.
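Note that, unlike the build script, this deployment script does not validate its arguments. A guard in the same style as the build script could be added; this is a sketch, not part of the original script:

```shell
#!/bin/bash
# Sketch: argument guard matching the build script's style (illustrative only).
require_args() {
  if [ -z "$1" ] || [ -z "$2" ]; then
    echo "Usage: $0 <service> <env>" >&2
    return 1
  fi
}

# At the top of the deploy script:
# require_args "$service" "$env" || exit 1
```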
Phase 4: Deployment Workflow
With all of this in place, it becomes easy to push to the local or production environment with a single command.
Local Development
make deploy service=trades env=dev
Production
# Switch context
kubectl config use-context arn:aws:eks:us-east-1:<ACCOUNT_ID>:cluster/prod
# Deploy
make deploy service=trades env=prod
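The make deploy command implies a Makefile target, which the post does not show. A minimal target consistent with the deployment script above might look like this (the script path is an assumption):

```make
# Hypothetical Makefile target; the deploy script path is assumed.
deploy:
	./scripts/deploy.sh $(service) $(env)
```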
The workflow discussed in this write-up can be summarized in the diagram below.
Conclusion
Building an efficient Kubernetes deployment pipeline is crucial for scaling modern applications reliably and securely.
In this guide, we've established a comprehensive pipeline that takes your application from local development all the way to production on AWS using Kubernetes. The workflow begins with environment setup and context management, enabling seamless switching between local and cloud clusters. It progresses into containerization and culminates in a CI/CD pipeline that handles automated building, pushing, and deploying of services.
Through the use of tools like direnv for environment isolation, Docker for image builds, and Kubernetes for scalable deployments, we've created a robust system that supports both iterative development and reliable production rollouts. The deployment script and .envrc configurations streamline the process and reduce manual overhead, making it easier to manage multiple services and environments.
This solid foundation ensures consistency, reliability, and scalability of deployments. In the next phase, we will integrate monitoring tools like Grafana, powered by real-time data from RisingWave, to provide visibility into system performance and operational health.
What’s Next? Real-Time ML Deployment with MLflow
Want to see this pipeline in action with machine learning models?
In my upcoming deep dive, we’ll extend this Kubernetes infrastructure to:
Track ML experiments with MLflow’s Model Registry
Auto-deploy PyTorch/TensorFlow models via GitOps
Monitor drift using Prometheus metrics
A/B test with Istio traffic splitting
👉 Subscribe to get the complete MLflow+EKS guide delivered to your inbox.
References
1. Official Kubernetes Documentation
Production-Grade Container Orchestration
Covers core concepts used in the pipeline
2. AWS EKS Best Practices
Amazon EKS Deployment Guide
Validates our production cluster setup approach.
EKS IAM Authentication
Documents the update-kubeconfig command we used.
3. CI/CD Pipeline Design
Kubernetes CI/CD Patterns
Google's patterns matching our implementation.
GitOps with Kubernetes
For readers wanting to extend the pipeline
4. Core Kubernetes Concepts
Kubernetes Production Best Practices (Official Docs)
Validates our namespace isolation and resource limits strategy.
Key Takeaway: "Always define memory limits to prevent pod evictions."
kubectl Cheat Sheet
All commands used in the deployment scripts.