Introduction:
Organizations often run workloads across multiple clusters. The driving factors include isolation between workloads, resource management, compliance, data management, and cost control by dedicating clusters to a particular project or customer. The NetFoundry Cloud solution provides options such as tunnellers and routers for containers on GKE. Our other article covers the steps to spin up a NetFoundry edge router in a GKE cluster and access apps within the same cluster.
In this article, we'll focus on using NetFoundry Cloud and GKE's inter-cluster networking to establish secure, private, zero trust connections to apps in any cluster from a NetFoundry router in one of the clusters. We'll go through the steps to reach the app in cluster "labcluster-3" (a private cluster) via the NetFoundry router provisioned in a different cluster named "labcluster".
Architecture:
Provisioning Steps:
Step 1. Create a new VPC that is different from the existing VPC running the edge router
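If you prefer the gcloud CLI over the console, the sketch below creates the VPC and a subnet. The VPC name cs-lab-vpc matches the one used later in this article, while the subnet name, region, and CIDR ranges are placeholder assumptions you should adapt (and make sure they do not overlap with the existing VPC):
# Custom-mode VPC for the private cluster
gcloud compute networks create cs-lab-vpc --subnet-mode=custom
# Subnet with a primary range (nodes, internal LB) and secondary ranges (pods, services)
gcloud compute networks subnets create cs-lab-subnet \
    --network=cs-lab-vpc \
    --region=asia-south2 \
    --range=192.168.20.0/24 \
    --secondary-range=pods=10.60.0.0/16,services=10.61.0.0/20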
Step 2. Create the new private cluster on this VPC.
The cluster is private and is provisioned in the new VPC, cs-lab-vpc. Make sure that there is no IP overlap between the two VPCs of the respective clusters (existing and new) in either the primary or secondary subnet ranges. The primary subnet range is used for the nodes, internal load balancers, etc., and the secondary ranges (the service subnet and the pod subnet) are internal ranges of the cluster.
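For reference, the private cluster can be created on this VPC with gcloud along these lines. The subnet and secondary-range names match the sketch in Step 1, while the region and the control plane CIDR are placeholder assumptions:
gcloud container clusters create labcluster-3 \
    --region=asia-south2 \
    --network=cs-lab-vpc \
    --subnetwork=cs-lab-subnet \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods \
    --services-secondary-range-name=services \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.32/28 \
    --enable-master-authorized-networks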
Step 3. Create a VPC peering between the two VPCs. The peering has to be created in both directions when you want to host and access services from the private cluster, labcluster-3. Select import and export custom routes while creating the peering.
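A gcloud sketch of the two peerings is shown below. Here lab-vpc is an assumed name for the existing VPC hosting "labcluster"; substitute your actual VPC name:
# Existing VPC -> new VPC
gcloud compute networks peerings create lab-to-cs-lab \
    --network=lab-vpc \
    --peer-network=cs-lab-vpc \
    --import-custom-routes \
    --export-custom-routes
# New VPC -> existing VPC
gcloud compute networks peerings create cs-lab-to-lab \
    --network=cs-lab-vpc \
    --peer-network=lab-vpc \
    --import-custom-routes \
    --export-custom-routes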
Step 4. Create a test app in the private cluster, labcluster-3. The image is available at netfoundry/fireworks:latest
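For example, with kubectl pointed at labcluster-3, a deployment can be created from that image. The name fireworksapp is chosen so that the app: fireworksapp selector used in the load balancer manifests below matches it:
kubectl create deployment fireworksapp --image=netfoundry/fireworks:latest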
Step 5. There are two ways to expose the service via a load balancer.
Option 1 - via the Google Cloud console and the load balancer YAML config
Expose the deployment's pods using a load balancer service.
GKE provisions the load balancer as external by default. Change it to an internal load balancer by editing the YAML of the load balancer service and adding the following annotation under metadata (see the sample YAML of the load balancer service below).
networking.gke.io/load-balancer-type: Internal
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"0":"k8s2-hxyxraoo-default-fireworksapp-service-1ttettd6"},"zones":["asia-south2-c"]}'
    networking.gke.io/load-balancer-type: Internal
192.168.20.23 is the private IP of the load balancer service for the app.
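If you would rather not edit the YAML by hand, the same annotation can be applied with kubectl. This is only a sketch: the service name fireworksapp-service is inferred from the NEG name in the sample above, so adjust it to your own, and if the controller does not reconcile the existing load balancer you may need to recreate the service:
kubectl annotate service fireworksapp-service \
    networking.gke.io/load-balancer-type=Internal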
Option 2: Via the GKE CLI
Refer to the article on how to create an internal load balancer service manifest via the GKE CLI. The steps to create and apply the service manifest are:
vim lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-lb
  namespace: default
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: fireworksapp
  ports:
  - name: tcp-port
    protocol: TCP
    port: 80
    targetPort: 80
kubectl apply -f lb.yaml
*** Make sure that the public IP of the machine running the CLI is allowed in the cluster's control plane authorized networks. You can get the IP using:
curl -s checkip.dyndns.org | sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
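If you need to add that IP from the CLI, a gcloud sketch is shown below. The IP 203.0.113.10/32 is a placeholder, and note that --master-authorized-networks replaces the whole list, so include any CIDRs that are already authorized:
gcloud container clusters update labcluster-3 \
    --region=asia-south2 \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.10/32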
Verify that the LB is created
$ kubectl get services internal-lb
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
internal-lb LoadBalancer 172.17.204.94 192.168.20.24 80:30539/TCP 22m
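Optionally, before configuring NetFoundry, you can verify that the internal LB is reachable across the peering by running a temporary curl pod with kubectl pointed at the router cluster ("labcluster"). The IP 192.168.20.24 is the EXTERNAL-IP shown above:
kubectl run lb-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sI http://192.168.20.24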
Step 6. Configure the service in the NetFoundry console with an intercept address of your choice (IP or hostname) and a host address that is the IP of the LB service.
The steps in this article are applicable even if you use a tunneller instead of a router in "labcluster". The service config would then select the tunneller endpoint instead of the router endpoint.
Step 7. Create an identity for your device and configure your service policy to allow source identities to access the fireworkskube service.
Step 8. Enjoy the fireworks