Intercepting Services in Routers using Proxy Tunnel Mode - NetFoundry for Kubernetes / Docker

Introduction

As organizations increasingly adopt containers for apps and workloads, managing and securing communication to and from containerized apps has become critical. Unlike apps hosted on virtualized or physical infrastructure, containerized applications depend on pod- and node-level networking.

By leveraging the Ziti Edge Router in Proxy Tunnel Mode, organizations can deploy the Edge Router to intercept traffic from the containers to reach destinations over the NetFoundry cloud. The container workloads can be across different nodes and namespaces within a cluster.

In this article, we will explore the necessary configurations and best practices for deploying the Ziti Edge Router in Proxy Tunnel Mode so that applications or workloads in GKE (as an example) can reach a NetFoundry service (apps or workloads) in any public or private cloud, at a branch, or within GKE itself.

[Diagram: GKE Edge Router in Proxy Tunnel Mode (GKE ER - Proxy.png)]

Our Lab Setup

AWS:

The 'hello world' web app is deployed in AWS, Singapore. The NetFoundry edge router is spun up in the same VPC as that of the 'hello world' web app in AWS. The 'hello world' web app is reachable from the customer edge router at AWS.

GCP:

The NetFoundry edge router is deployed in 'proxy tunnel mode' as a container within the GKE cluster 'lab-cluster' within the 'ziti-router1' namespace in GCP.

A test app is deployed as a container in the same GKE Cluster 'lab-cluster', where the customer edge router is deployed, but within the default namespace.

The test app is deployed on a different node than the customer edge router.

| Deployment  | Cluster     | Node(s)                                | Namespace    |
| ----------- | ----------- | -------------------------------------- | ------------ |
| zitirouter1 | lab-cluster | gke-lab-cluster-lab-pool-fb097429-jmbd | ziti-router1 |
| test app    | lab-cluster | gke-lab-cluster-lab-pool-fb097429-2v7l | default      |
|             |             | gke-lab-cluster-lab-pool-fb097429-cbdc |              |
|             |             | gke-lab-cluster-lab-pool-fb097429-9rle |              |

1. Service Config for AWS 'hello world' web app:

The NetFoundry service is configured with an intercept address of your choice (IP or host name) and the instance IP of the 'hello-world' web app as the host address. The identity is that of the customer edge router provisioned in the same VPC as the 'hello-world' web app.
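If you manage the network with the OpenZiti CLI rather than the NetFoundry console, the same service can be sketched roughly as below. The intercept address hello.ziti, the host IP 10.0.1.5, and the port numbers are placeholders; substitute the values from your own environment.

```shell
# intercept.v1 tells the intercepting router which address/port to capture
# (placeholder address and port; adjust to your setup).
ziti edge create config helloworld-intercept intercept.v1 \
  '{"protocols":["tcp"],"addresses":["hello.ziti"],"portRanges":[{"low":80,"high":80}]}'

# host.v1 tells the hosting edge router in AWS where to forward the traffic
# (placeholder instance IP and port of the 'hello world' web app).
ziti edge create config helloworld-host host.v1 \
  '{"protocol":"tcp","address":"10.0.1.5","port":8080}'

# Bind both configs to the Ziti service.
ziti edge create service helloworld --configs helloworld-intercept,helloworld-host
```

These commands require a logged-in `ziti` CLI session against your controller; in a NetFoundry-managed network the console workflow above accomplishes the same thing.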

 

2. Deploy a NetFoundry Customer edge router in GKE Cluster in Proxy Tunnel Mode:

The NetFoundry edge router is the WAN gateway in the cluster that helps you to reach the applications/microservices over a private and secure zero trust overlay. The edge router is deployed as a container within a Kubernetes cluster.

Create and Register Customer Edge Router

  • From your Network Dashboard page, navigate to Edge Routers.
  • Under the Edge Routers tab, click on the + sign at the upper right to add an edge router.
  • Give your edge router a name.
  • Give your edge router a router attribute (optional). Router attributes are tags applied to a router. The same tag can be applied to other edge routers to form a collection of Customer-hosted Edge Routers. This attribute can be used for provisioning APPWANs.
  • Select Customer Hosted as your hosting type.
  • Hit Create to complete the process.
  • A new customer-hosted edge router is created along with a registration key. This registration key is required to register the edge router to the network.
  • Download the JWT registration token.

The command below uses Helm to install the 'ziti-router' chart from the 'openziti' chart repository. The JWT registration token downloaded from the console is used for edge router registration.

helm install zitirouter1 openziti/ziti-router \
  --namespace ziti-router1 --create-namespace \
  --set-file enrollmentJwt=/home/kube-er/zitirouter1.jwt \
  --set linkListeners.transport.service.enabled=false \
  --set edge.advertisedHost=zitirouter1-edge.ziti-router.svc \
  --set ctrl.endpoint="<Controller-DNS>:443" \
  --values /home/kube-er/router.yml
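After the Helm install completes, you can confirm that the router pod enrolled and came up. The namespace matches the install command above; the exact pod and deployment names depend on your release name, so the label selector below (the standard Helm chart label) is used as an assumption.

```shell
# List the router pod in the ziti-router1 namespace; STATUS should be Running.
kubectl get pods --namespace ziti-router1

# Check the router logs for successful enrollment and controller connection
# (selector assumes the chart's standard app.kubernetes.io/name label).
kubectl logs --namespace ziti-router1 -l app.kubernetes.io/name=ziti-router
```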

Configure the Ziti router in "proxy tunnel mode":

This step configures Proxy Tunnel Mode on the Ziti Edge Router, enabling your applications in the GKE cluster to communicate with Ziti services hosted in AWS.

The edge router runs as a Kubernetes service, acting as a proxy for the Ziti services. For each Ziti service it manages, the edge router creates a corresponding Kubernetes service.

router.yml

tunnel:
  mode: proxy
  proxyServices:
    # this will be bound on the "default" proxy Kubernetes service, see below
    - zitiService: helloworld
      containerPort: 8080
      advertisedPort: 80
    # this will be bound on an additionally configured proxy Kubernetes service, see below
    # - zitiService: my-other-service.svc
    #   containerPort: 10022
    #   advertisedPort: 10022
  proxyDefaultK8sService:
    enabled: true
    type: ClusterIP

In proxy tunnel mode, the zitiService value refers to the Ziti service you want to access via the edge router. In our case that service, helloworld, is hosted in AWS and is accessed from the test app within the GKE cluster through the GKE edge router.

The edge router is deployed in the GKE Cluster named 'lab-cluster' within the 'ziti-router1' namespace.

3. Create your Service Policy:

Create your service policy to allow your edge router deployed in the GKE Cluster to access 'hello-world' web app in AWS over the highly secure NetFoundry cloud network.
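If you administer policies with the OpenZiti CLI instead of the NetFoundry console, a minimal sketch looks like the following. The identity names zitirouter1 and aws-edge-router are assumptions standing in for your GKE and AWS router identities.

```shell
# Dial policy: allow the GKE edge router's identity to dial the helloworld service.
ziti edge create service-policy helloworld-dial Dial \
  --identity-roles '@zitirouter1' --service-roles '@helloworld'

# Bind policy: allow the AWS customer edge router to host (bind) the service.
ziti edge create service-policy helloworld-bind Bind \
  --identity-roles '@aws-edge-router' --service-roles '@helloworld'
```

Using '#' role attributes instead of '@' names lets one policy cover a whole collection of routers, mirroring the router-attribute tagging described earlier.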

Accessing AWS 'helloworld' web app from the test app in the GKE Cluster over the NetFoundry Cloud:

To display the list of services running in the ziti-router1 namespace along with their respective cluster IPs, types, and ports:
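For example (the exact service names shown depend on your Helm release name):

```shell
# Show services in the router namespace; the "default" proxy service created
# for the helloworld Ziti service appears here with its ClusterIP and port 80.
kubectl get services --namespace ziti-router1
```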

To use curl to make a request to a service from within a Kubernetes cluster:

curl http://<service-name>.<namespace>.svc

curl http://zitirouter1-proxy-default.ziti-router1.svc

The AWS 'helloworld' service is accessed over the highly secure NetFoundry cloud network from the test app, which is deployed on a different node and in a different namespace than the edge router in the same cluster.
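To run the same check from inside the test app pod in the default namespace, something like the following can be used; the pod name test-app-pod is a placeholder for the actual name returned by `kubectl get pods`.

```shell
# Execute curl inside the test app pod (replace test-app-pod with your pod's name);
# the request traverses the router's proxy service and the NetFoundry overlay to AWS.
kubectl exec -it test-app-pod --namespace default -- \
  curl http://zitirouter1-proxy-default.ziti-router1.svc
```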

 
