Deploying Ziti-Tunnel as a sidecar proxy to a containerized app - NetFoundry Cloud for Kubernetes

Introduction

Kubernetes, while revolutionizing application deployment and management, introduces complexities in securing access to underlying workloads. Traditional network security perimeters struggle to adapt to the dynamic nature of containerized environments, leaving applications vulnerable to threats.

NetFoundry enables Cloud Native Applications to enforce granular access controls ensuring that only authorized users/microservices can interact with specific applications/microservices in a cluster.

Getting started

What you need to get started:

  1. A NetFoundry Cloud account - Go through the steps to create a free trial account if you don't have one.
  2. A network in your account with at least one public router. The articles below will guide you through this process.

Firewall policy requirements to provide outbound-only access to the NetFoundry network.

Intercepting and hosting services for a containerized workload using ziti-tunnel as a sidecar proxy:

This document provides a step-by-step guide for deploying a Ziti-tunnel container as a sidecar in the same pod alongside the containerized application in a NetFoundry Cloud network. We'll also show how to secure network communication from the app hosted in GKE to an API hosted in AWS, and from a device to the containerized app hosted in Google Kubernetes Engine (GKE), via the ziti-tunnel.

 

[Figure: sidecar.png — Ziti-tunnel sidecar deployment topology]

Our Lab Setup

AWS

The 'PetStore' API is deployed in AWS, Singapore. The NetFoundry edge router is spun up in the same VPC as that of the 'PetStore' API in AWS. The 'PetStore' API is reachable from the customer edge router at AWS via the interface VPC endpoint API URI.

GCP - GKE

A Pod that runs a non-Ziti NginX Web server and ziti-tunnel as a sidecar proxy is deployed in the GKE cluster 'lab-cluster' within the default namespace.

 

1. Service Config for AWS 'PetStore' API

The NetFoundry service is configured with an intercept address and the host address set as the interface VPC endpoint API URI. The identity used is that of the customer edge router, which is provisioned in the same VPC as the interface VPC endpoint and the 'PetStore' API.

2. Deploy a NetFoundry Ziti-tunnel container as a sidecar in GKE Cluster

A. Create Kubernetes Secrets

1. Enroll the JWT obtained from the console to generate the identity JSON:

python -m openziti enroll --jwt identityname.jwt --identity identityname.json
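If you don't have the Python SDK handy, the same enrollment can be done with the ziti-edge-tunnel CLI instead (a sketch; assumes ziti-edge-tunnel is installed and identityname.jwt is the token downloaded from the console):

```shell
# Enroll the identity token with the ziti-edge-tunnel CLI instead of the
# Python SDK; writes the enrolled identity JSON to the given path.
ziti-edge-tunnel enroll --jwt identityname.jwt --identity identityname.json
```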

2. Upload the Ziti identity JSON file:

    • Go to GCP Console > Security > Secret Manager.
    • Create a new secret:
      • Name: sidecar-client-identity (or any preferred name; the lab uses this one).
      • Secret value: upload the sidecar-client.json file.
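Alternatively, the Secret Manager secret can be created from Cloud Shell instead of the console (a sketch; the secret name matches the lab's sidecar-client-identity and assumes sidecar-client.json is in the current directory):

```shell
# Create the Secret Manager secret directly from the enrolled identity file.
gcloud secrets create sidecar-client-identity \
  --replication-policy=automatic \
  --data-file=sidecar-client.json
```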
sheik_ahamed@cloudshell:~ (sa-project-170814)$ gcloud secrets list
NAME: sidecar-client-identity
CREATED: 2024-11-26T11:46:43
REPLICATION_POLICY: automatic
LOCATIONS: -

B. Add the Secret to Kubernetes

Retrieve the secret version data from Google Secret Manager and store it as a Kubernetes secret.

This will create a Kubernetes secret named sidecar-client-identity from the contents of your sidecar-client-identity Secret Manager secret.

sheik_ahamed@cloudshell:~ (sa-project-170814)$ gcloud secrets versions access latest --secret=sidecar-client-identity > sidecar-client.json
sheik_ahamed@cloudshell:~ (sa-project-170814)$ kubectl create secret generic sidecar-client-identity --from-file=sidecar-client.json
secret/sidecar-client-identity created
sheik_ahamed@cloudshell:~ (sa-project-170814)$

C. Deploy the Pod

For our lab, we have deployed a Pod that runs a non-Ziti demo client application and ziti-tunnel as a sidecar proxy. The client application deployed is the NginX Web server.

You need to update the deployment manifest with the CoreDNS CLUSTER-IP address before you deploy. The ziti-tunnel sidecar provides an override nameserver for the pod, so the CoreDNS nameserver must be injected as a fallback resolver for non-Ziti names, because ziti-tunnel itself will not answer queries for non-Ziti DNS names.

sheik_ahamed@cloudshell:~ (sa-project-170814)$ kubectl --namespace kube-system get services kube-dns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 34.118.224.10 <none> 53/UDP,53/TCP 2d20h
sheik_ahamed@cloudshell:~ (sa-project-170814)$

Save the following to a file named /tmp/sidecar-demo.yaml

You'll notice that the ziti-tunnel sidecar container has a few requirements:

  1. The basename (sans suffix) of the identity that is assumed by ziti-tunnel must be passed into the container with the ZITI_IDENTITY_BASENAME environment variable.
  2. The secret that we created above for the enrolled identity must be mounted into the container at "/netfoundry".
# /tmp/sidecar-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ziti-tunnel-sidecar-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ziti-tunnel-sidecar-demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ziti-tunnel-sidecar-demo
    spec:
      containers:
      - image: nginx:latest
        name: nginx
      - image: openziti/ziti-tunnel
        name: ziti-tunnel
        args: ["tproxy"]
        env:
        - name: ZITI_IDENTITY_BASENAME
          value: sidecar-client # the filename in the volume is sidecar-client.json
        volumeMounts:
        - name: sidecar-client-identity
          mountPath: /netfoundry
          readOnly: true
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      dnsPolicy: None
      dnsConfig:
        nameservers:
        - 127.0.0.1 # used by ziti-tunnel during startup to verify it owns the pod's DNS
        - 34.118.224.10 # replace with your CoreDNS cluster service address
      restartPolicy: Always
      volumes:
      - name: sidecar-client-identity
        secret:
          secretName: sidecar-client-identity
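Rather than editing the manifest by hand, the CoreDNS ClusterIP can be looked up and substituted automatically (a sketch; assumes the manifest was saved to /tmp/sidecar-demo.yaml with the lab's placeholder address):

```shell
# Look up the kube-dns ClusterIP and swap it in for the placeholder address.
DNS_IP=$(kubectl --namespace kube-system get service kube-dns \
  -o jsonpath='{.spec.clusterIP}')
sed -i "s/34\.118\.224\.10/${DNS_IP}/" /tmp/sidecar-demo.yaml
```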

Once the manifest YAML is saved, we can deploy the pod with kubectl:

kubectl apply -f /tmp/sidecar-demo.yaml

D. Verify the Deployment

Ensure that the pod is running:

sheik_ahamed@cloudshell:~ (sa-project-170814)$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
ziti-tunnel-sidecar-demo-85f65db57c-6hd24 2/2 Running 0 5d11h
sheik_ahamed@cloudshell:~ (sa-project-170814)$

Review the logs from the ziti-tunnel container to verify that the Ziti tunneler is using the JSON identity:

sheik_ahamed@cloudshell:~ (sa-project-170814)$ kubectl logs ziti-tunnel-sidecar-demo-85f65db57c-6hd24 -c ziti-tunnel
INFO: setting NF_REG_NAME to ${ZITI_IDENTITY_BASENAME} (sidecar-client)
DEBUG: waiting 1s for /netfoundry/sidecar-client.json (or token) to appear
INFO: found identity file /netfoundry/sidecar-client.json
DEBUG: evaluating positionals: tproxy
INFO: running "ziti tunnel tproxy --identity /netfoundry/sidecar-client.json "
{"file":"github.com/openziti/ziti/tunnel/intercept/tproxy/tproxy_linux.go:94","func":"github.com/openziti/ziti/tunnel/intercept/tproxy.New","level":"info","msg":"udpIdleTimeout is less than 5s, using default value of 5m0s","time":"2024-11-26T16:35:52.789Z"}

3. Service Config for GCP NginX web server

The NetFoundry service is configured with an intercept address of your choice (IP or hostname) and the internal service IP of the containerized 'NginX Web server' as the host address. The identity is that of the ziti-tunnel deployed as a sidecar proxy in the same pod as the 'NginX Web server'.

4. Create your Service Policy

A. Create your service policy to allow the ziti-tunnel deployed as a sidecar proxy in the same pod as the 'NginX Web server' to access the 'PetStore' API in AWS over the highly secure NetFoundry cloud network.

B. Create your service policy to allow access to your containerized non-ziti 'NginX Web server' over the highly secure NetFoundry cloud network.

Accessing PetStore API from the NginX web server in the GKE Cluster with the side-car Ziti-tunnel over the highly secure NetFoundry Cloud network
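This can be verified from inside the pod. A sketch, assuming a hypothetical intercept address petstore.ziti for the PetStore service and that curl is available in the nginx container:

```shell
# Call the intercepted PetStore address from the app container; the request
# is transparently captured by the ziti-tunnel sidecar and carried over the
# NetFoundry fabric to AWS.
kubectl exec deploy/ziti-tunnel-sidecar-demo -c nginx -- \
  curl -s http://petstore.ziti/
```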

Accessing NginX web server hosted with the side-car Ziti-tunnel in the GKE Cluster from the ZDE client over the highly secure NetFoundry Cloud network
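From an enrolled ZDE client with the matching service policy, the web server can be reached at the intercept address configured in step 3 (a sketch; nginx.ziti is a hypothetical intercept hostname):

```shell
# Fetch the NginX default page over the NetFoundry network; resolution of the
# intercept hostname is handled by the ZDE client's DNS.
curl -s http://nginx.ziti/ | head -n 5
```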
