
Deployment: Zero Trust Identity for Workloads

Overview

This guide covers deploying Lane7 Blueprints into local Kubernetes environments. While the steps below generally apply to all blueprints, specific configurations will vary with the complexity of the blueprint (e.g., Bi-Pod vs. Quad-Pod vs. Multi-Cluster). Both single-cluster and multi-cluster deployments are covered.

!!! warning "Zero Trust Architecture Note"
    Hopr Blueprints do NOT use PKI identity certificates or mutual TLS (mTLS).
    Standard cloud automation for PKI/mTLS relies on static trust anchors and centralized Certificate Authorities, which do not meet strict **Zero Trust** principles. Hopr instead uses **Automated Moving Target Defense (AMTD)** with high-frequency credential rotation to verify identity at every session, not just at connection time.

1. Single Cluster Local Environment Setup

We recommend K3d for its speed, low footprint, and Docker-in-Docker capabilities, which align perfectly with the multi-pod simulation required for these blueprints. Minikube is also supported but may require different drivers for ingress/load balancing.

Prerequisites

  • Docker Desktop (or Engine) running.
  • kubectl installed.
  • K3d (Recommended) or Minikube installed.

Setting up K3d

If you haven't installed K3d yet:

wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

2. Cluster Configuration & Port Mapping (CRITICAL)

The most common failure point when deploying Kubernetes services is a mismatch between the cluster ports and the node ports.

To avoid it, Blueprints for single-cluster deployments (E-W traffic) are preconfigured with port mappings. When you create your local cluster, you must expose the ports that the Blueprint's manifests are configured to use: if the blueprint expects traffic on port 30011, your cluster must map localhost port 30011 to the cluster's load balancer.

Single-Cluster Example (Bi-Pod)

For simple blueprints, standard HTTP ports are often sufficient.

# Creates a cluster exposing ports 8080 and 8443
k3d cluster create hopr-cluster -p "8080:80@loadbalancer" -p "8443:443@loadbalancer" --agents 2

Blueprint Example (NodePorts)

For larger blueprints where pods require multiple specific NodePorts, you must map them explicitly during creation.

# Example mapping for a blueprint requiring ports 30011 and 30012
k3d cluster create hopr-complex -p "30011:30011@loadbalancer" -p "30012:30012@loadbalancer"
!!! danger "Port Mismatch"
    If the service.yaml in your blueprint specifies nodePort: 30011 but your K3d cluster was not created with the matching -p flag, external traffic will fail to reach the pod.
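One way to catch a mismatch before it bites is to grep the blueprint manifests for their declared nodePort values and generate the matching `-p` flags. This is a sketch: the `demo-blueprint/` directory and its service.yaml are placeholders standing in for your blueprint's files.

```shell
# Sketch: list every nodePort declared in a blueprint's manifests and
# print the k3d -p flag each one needs. Directory name is a placeholder.
mkdir -p demo-blueprint
cat > demo-blueprint/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30011
EOF

grep -rho 'nodePort: *[0-9]*' demo-blueprint \
  | awk '{ printf "-p \"%s:%s@loadbalancer\"\n", $2, $2 }'
# -p "30011:30011@loadbalancer"
```

Run this against the real blueprint directory before `k3d cluster create` and paste the printed flags into the command.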

3. Deployment Architecture

The deployment architecture defines how the workloads are distributed. Blueprints are generally categorized into two types:

Single-Cluster (Standard)

  • Use Case: Bi-Pod, Quad-Pod, and most standard application scenarios involving E-W traffic.
  • Topology: All pods (Initiators and Responders) reside in the same Kubernetes cluster, each in its own Namespace.
  • Networking: Relies on standard intra-cluster DNS (service.namespace.svc.cluster.local).
  • Complexity: Low. Pre-configuration abstracts the complexity for early-career DevOps.
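As a quick sanity check, the fully qualified name a pod uses to reach a peer Service can be composed mechanically. The service and namespace names below are the ones used as examples later in this guide:

```shell
# Compose the intra-cluster FQDN for a Service (names are examples)
service="web-app-pod-2-ingress"
namespace="web-app-2"
echo "${service}.${namespace}.svc.cluster.local"
# web-app-pod-2-ingress.web-app-2.svc.cluster.local
```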

Multi-Cluster (Advanced)

  • Use Case: Hybrid Cloud, Multi-Region, or "Edge-to-Cloud" scenarios.
  • Topology: Workloads are split between completely separate clusters (e.g., North-South).
  • Networking Requirements:
      • L3 Connectivity: The clusters must be able to route IP packets to each other. In a local K3d environment, this requires creating a shared Docker network before creating the clusters.
      • Ingress/Egress: The WoSP sidecar proxies traffic securely across the boundary, but the underlying "pipe" (Ingress Controller or NodePort) must be open.

# Multi-Cluster Setup Example for K3d
docker network create hopr-net
k3d cluster create cluster-north --network hopr-net ...
k3d cluster create cluster-south --network hopr-net ...

4. Single-Cluster Deployment Steps

Step 1: Import Images (Local Only)

Your blueprint comes with a customizable Python app. If you built your custom app image locally (as described in the Getting Started guide), import it into the cluster so the nodes can see it.

k3d image import local/hopr-demo-app:latest -c hopr-cluster

Step 2: Apply Manifests

Deploy the entire bundle. This typically includes the Namespace, Secrets, and Deployment manifests.

kubectl apply -f pod-1/ -f pod-2/
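If you want to catch manifest errors before anything is created, `kubectl` can validate the bundle client-side first. A sketch, using the same directory layout as above:

```shell
# Validate the bundle locally without touching the cluster
kubectl apply -f pod-1/ -f pod-2/ --dry-run=client -o name
```

If this prints the expected resource names without errors, run the real `kubectl apply`.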

Step 3: Verification

Wait for all pods to enter the Running state, then verify their status:

kubectl get pods -A

Success Criteria:

  1. Status: Running
  2. Ready: 3/3 (This confirms the web-app, xtra-wasm (WoSP), and web-retriever (WoSP dynamic elements retriever) containers are all up).
  3. Logs: Check the Initiator pod web-app container logs to confirm the "Baton" or traffic is flowing.

kubectl logs -f -l app=pod-1 -c web-app
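Rather than polling `kubectl get pods` by hand, you can block until the pods report Ready before checking logs. A sketch; the label selector matches the example above, and the timeout is an assumption:

```shell
# Block until every pod-1 container reports Ready (up to 2 minutes)
kubectl wait --for=condition=Ready pod -l app=pod-1 --timeout=120s
```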

Troubleshooting

| Issue | Cause | Fix |
| --- | --- | --- |
| Pod stuck at 2/3 Ready | xtra-wasm container cannot authenticate to the Hopr repo. | Check that the Secrets were applied correctly. |
| Connection Refused | Cluster port mapping mismatch. | Re-create the K3d cluster with -p flags matching the Blueprint. |
| "Baton" not passing | DNS/Namespace issue. | Ensure you did not change the namespace in the YAML files. |

Multi-cluster Deployments

Key Differences from Single-Cluster Blueprints:

| Aspect | Single-Cluster | Multi-Cluster |
| --- | --- | --- |
| Clusters Required | 1 (e.g., local K3d) | 2 (e.g., AWS EKS + GCP GKE) |
| Service Type | NodePort | LoadBalancer |
| DNS Resolution | .svc.cluster.local | External IP/DNS |
| Envoy Cluster Type | STRICT_DNS | LOGICAL_DNS |
| Image Registry | Local (imagePullPolicy: Never) | Remote (GitLab/DockerHub) |
| Network Configuration | Static (in YAML) | Dynamic (script discovers IPs) |
| Deployment Process | kubectl apply | Automated script |
| Application Code | ✅ Identical | ✅ Identical |

Technology Stack

  • Kubernetes: Any managed service (EKS, GKE, AKS) or self-hosted
  • Application: Python 3.10 with aiohttp (unchanged from single-cluster)
  • Sidecar: Hopr WoSP (Envoy with xtra-wasm-filter and web-retriever)
  • Container Registry: GitLab Container Registry (or Docker Hub)
  • Automation: Bash deployment script

Prerequisites

1. Two Kubernetes Clusters

You can use any combination of:

  • AWS EKS (Elastic Kubernetes Service)
  • GCP GKE (Google Kubernetes Engine)
  • Azure AKS (Azure Kubernetes Service)
  • Self-hosted (kubeadm, Rancher, etc.)

Minimum Requirements per Cluster:

  • 1 node with 4 CPU / 8GB RAM
  • LoadBalancer support (cloud provider or MetalLB)
  • Ability to create LoadBalancer services
  • Outbound internet access for image pulls

2. kubectl Installed and Configured

# Verify kubectl installation
kubectl version --client

# Expected output:
# Client Version: v1.27.0 or higher

Configure kubectl contexts:

After creating your clusters, ensure you have two contexts configured:

# List available contexts
kubectl config get-contexts

# Expected output should show both clusters:
# CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
# *         cluster-1   cluster-1   user-1      
#           cluster-2   cluster-2   user-2      

# Test connectivity to both
kubectl --context cluster-1 cluster-info
kubectl --context cluster-2 cluster-info

Important: The deployment script assumes contexts named cluster-1 and cluster-2. If your contexts have different names, you'll need to edit the script variables:

# In deploy-multicloud-bipod.sh, modify these lines:
CLUSTER_1_CONTEXT="your-cluster-1-name"
CLUSTER_2_CONTEXT="your-cluster-2-name"
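Alternatively, instead of editing the script, you can rename your kubeconfig contexts to the names the script expects. The names on the left are whatever your cloud CLI generated (shown here as placeholders):

```shell
# Rename existing contexts to the names the deployment script assumes
kubectl config rename-context my-eks-context cluster-1
kubectl config rename-context my-gke-context cluster-2
```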

Key Differences: Single-Cluster vs. Multi-Cluster

For Junior DevOps: What You Need to Know

1. Application Code: NO CHANGES

# The app's Python code is IDENTICAL in single-cluster and multi-cluster deployments
url = f"http://localhost:{egress_port}/"  # Always localhost!

The application never needs to know about LoadBalancers, external IPs, or cross-cloud routing.

2. Service Type: NodePort → LoadBalancer

The nodePort specification in a single-cluster blueprint becomes a LoadBalancer specification in a multi-cluster blueprint.

Why this matters:

  • NodePort: Exposes the service on a port of each node (30000-32767)
      • Only works within the cluster network
      • Requires manual port forwarding for external access
      • Static configuration
  • LoadBalancer: The cloud provider creates an external load balancer
      • Automatically gets a public IP/DNS
      • Accessible from anywhere (firewalls permitting)
      • Dynamic IP assignment

Cost Impact:

  • NodePort: Free
  • LoadBalancer: ~$16-20/month per LoadBalancer (varies by cloud)
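Concretely, the change is a single field in the Service manifest. A sketch of the two variants; the service name, ports, and targetPort are illustrative, not taken from a specific blueprint:

```yaml
# Single-cluster blueprint: NodePort, pinned to a port k3d must map
apiVersion: v1
kind: Service
metadata:
  name: web-app-pod-2-ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30011
---
# Multi-cluster blueprint: LoadBalancer; the cloud assigns the external address
apiVersion: v1
kind: Service
metadata:
  name: web-app-pod-2-ingress
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
```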

3. DNS Resolution: Cluster-Internal → External

Single-Cluster:

address: web-app-pod-2-ingress.web-app-2.svc.cluster.local
# Kubernetes resolves this within the cluster

Multi-Cluster:

address: 35.123.45.67  # or a1b2.elb.amazonaws.com
# External DNS/IP that works from anywhere

Why type changes from STRICT_DNS to LOGICAL_DNS:

  • STRICT_DNS: Optimized for Kubernetes cluster DNS (looks for SRV records)
  • LOGICAL_DNS: Standard DNS resolution for external addresses
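In the WoSP's Envoy configuration this shows up in the cluster's discovery type and endpoint address. A sketch of the relevant fields only; the cluster name is illustrative:

```yaml
# Single-cluster: resolve a Kubernetes Service name via cluster DNS
clusters:
  - name: pod2_egress
    type: STRICT_DNS
    load_assignment:
      cluster_name: pod2_egress
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: web-app-pod-2-ingress.web-app-2.svc.cluster.local
                    port_value: 80

# Multi-cluster: switch the type and point at the external address, e.g.
#     type: LOGICAL_DNS
#     ...
#                    address: a1b2.elb.amazonaws.com
```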

4. Deployment: Manual → Automated

Single-Cluster: blueprint application networks are deployed manually from the command line.

# Simple manual steps
docker build -t app:latest .
k3d image import app:latest
kubectl apply -f pod-1/ -f pod-2/

Multi-Cluster: uses a script to reduce complexity and errors.

# Automated with script (handles complexity)
docker push registry.gitlab.com/.../app:v1.0.0
./deploy-multicloud-bipod.sh  # One command!
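Under the hood, the "script discovers IPs" step typically amounts to reading each LoadBalancer Service's external address once the cloud has assigned it. A sketch of that step; the service and namespace names are assumptions:

```shell
# Read the external address of a LoadBalancer Service on cluster-1
# (on AWS ELB, substitute .hostname for .ip in the jsonpath)
LB_1=$(kubectl --context cluster-1 -n web-app-2 get svc web-app-pod-2-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "cluster-1 LoadBalancer: ${LB_1}"
```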

Your blueprint includes detailed multi-cluster deployment instructions in its IMPLIMENTATION_GUIDE.md.