Authenticate to GKE without kubeconfig
Recently I built a small service that periodically connects to all of our Kubernetes clusters in a given Google Cloud Platform project and reports on what is running in each one. As part of building this service, I had to solve a pretty basic problem: how do I connect to GKE cross-cluster (or even cross-project)? A service running in Kubernetes can talk to its own cluster pretty trivially, but going cross-cluster means thinking about authentication. In this article, I’ll share some Go code to allow GKE-hosted services to connect with external clusters using Google Service Account permissions.
At the outset, I figured that what I wanted to do must be possible, since this is exactly how `gcloud container clusters` works. Running a command like:
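```sh
# Cluster name, zone, and project are placeholders.
gcloud container clusters get-credentials my-cluster \
    --zone us-central1-a --project my-project
```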
creates or updates a local `~/.kube/config` file. This config file is how `kubectl` actually connects to GKE clusters using GCP credentials. I just needed to mimic the same pieces in a Go server. Should be pretty easy, right?
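For reference, the entry that gcloud writes for a GKE cluster looks roughly like this (abridged, with illustrative names; the exact user stanza varies by gcloud version):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: gke_my-project_us-central1-a_my-cluster
  cluster:
    server: https://<cluster endpoint>
    certificate-authority-data: <base64-encoded CA cert>
users:
- name: gke_my-project_us-central1-a_my-cluster
  user:
    auth-provider:
      name: gcp   # tells kubectl to fetch an access token from GCP credentials
contexts:
- name: gke_my-project_us-central1-a_my-cluster
  context:
    cluster: gke_my-project_us-central1-a_my-cluster
    user: gke_my-project_us-central1-a_my-cluster
current-context: gke_my-project_us-central1-a_my-cluster
```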
If you just want the answer, feel free to jump down to the end and copy-paste my final solution! But if you’re interested in how I got there, read on.
Searching the internet was my first task, but unfortunately I couldn’t find exactly what I needed
The first answer I found on the subject suggested, as one possible solution, bundling the `gcloud` binary with my service, and having my service literally run `gcloud container clusters` at runtime. I found that idea problematic:

- Bundling `gcloud` would add significant complication to the service image build; I'd be going off the rails from the streamlined build process we already have in place for Go services.
- Invoking `gcloud` is ultimately a stateful operation in the container; I wouldn't be able to write encapsulated code and forget about it, or analyze multiple clusters in parallel without worrying about mutable config state.
The second answer I found suggested producing a valid `~/.kube/config` file in advance, bundling it with my service, and using that to connect to the various kube clusters. That didn't work for me either; I wanted my service to be fully dynamic, not something I'd have to rebuild and redeploy whenever we added or removed a cluster. But the article did give me an idea...
Digging deeper into the library code
If all I needed was a valid `~/.kube/config`, then ultimately I just needed to see how the information in `~/.kube/config` was used in the Go library code, and try to reproduce the same behavior in memory. With that in mind, I went diving into the code in `k8s.io/client-go/tools/clientcmd` and `k8s.io/client-go/tools/clientcmd/api`.
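To make the plan concrete, here is a minimal sketch of what "reproducing `~/.kube/config` in memory" can look like with those packages. The cluster endpoint and CA certificate are assumed to come from elsewhere (in practice, the GKE API), the token comes from Application Default Credentials, and `buildInMemoryConfig` is a hypothetical helper name, not the final solution:

```go
package gkeauth

import (
	"context"

	"golang.org/x/oauth2/google"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// buildInMemoryConfig assembles the same pieces gcloud writes to
// ~/.kube/config, but entirely in memory. endpoint and caCert are assumed
// inputs (e.g. from the GKE API); the token comes from Application Default
// Credentials, i.e. the Google Service Account the workload runs as.
func buildInMemoryConfig(ctx context.Context, endpoint string, caCert []byte) (*rest.Config, error) {
	// Fetch an OAuth2 access token for the service account.
	ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		return nil, err
	}
	tok, err := ts.Token()
	if err != nil {
		return nil, err
	}

	// Populate the same cluster/user/context structure a kubeconfig file holds.
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["cluster"] = &clientcmdapi.Cluster{
		Server:                   "https://" + endpoint,
		CertificateAuthorityData: caCert,
	}
	cfg.AuthInfos["user"] = &clientcmdapi.AuthInfo{Token: tok.AccessToken}
	cfg.Contexts["ctx"] = &clientcmdapi.Context{Cluster: "cluster", AuthInfo: "user"}
	cfg.CurrentContext = "ctx"

	// Turn the in-memory kubeconfig into a rest.Config, exactly as clientcmd
	// would for a file on disk.
	return clientcmd.NewNonInteractiveClientConfig(*cfg, cfg.CurrentContext, &clientcmd.ConfigOverrides{}, nil).ClientConfig()
}
```

One caveat with this sketch: the access token it embeds is short-lived, so a real implementation has to refresh it rather than cache a single `rest.Config` forever.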