Doppler and GKE - Wrong Request URL?

I'm trying to integrate Doppler, Bitbucket Pipelines, and Google Kubernetes Engine. I generated a Doppler service token for a specific config in my project. I followed the documentation for using the Doppler CLI in a Dockerfile and have the following in my .yaml file:

containers:
    - name: my-container
      image: my-image-url
      envFrom:
          - secretRef:
              name: doppler-token

I verified with kubectl get secrets that the Kubernetes secret exists under the name referenced above.

My build pipelines succeed, but the workload in the cluster crashes shortly after with this error log: "Doppler Error: Get "https://api.doppler.com/v3/configs/config/secrets/download?config=my-config&format=json&include_dynamic_secrets=true&project=my-project": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"

I had to set extra ENV variables in my .yaml so that the config and project query parameters would have values. Prior to adding those ENVs, the request was being sent with empty parameters: "Get "https://api.doppler.com/v3/configs/config/secrets/download?config=&format=json&include_dynamic_secrets=true&project=""
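
For reference, the extra variables I added under the container spec look roughly like this (DOPPLER_PROJECT and DOPPLER_CONFIG are the environment variables the Doppler CLI reads for those parameters):

env:
    - name: DOPPLER_PROJECT
      value: my-project
    - name: DOPPLER_CONFIG
      value: my-config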

I thought setting those ENV variables would fix my connection issue, but it did not, even though you can see the correct values are now making it into the query parameters.

I tried using the --enable-dns-resolver flag, but that had no effect. I also updated the cluster's DNS provider to Cloud DNS, which also made no difference. I'm not sure what else to try.

Hi @rrmangum!

Welcome to the Doppler Community!

Yeah, if the missing parameters were the cause, you'd get a more specific error. A timeout is almost always related to DNS in some way, although in your case it sounds like you've checked that already. Are you able to exec into the container and try hitting the URL with curl, using the token that should be getting populated (and also double-check that it actually is present in the environment)?
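
Something along these lines should do it (the pod name is a placeholder, and I'm assuming the token is exposed in the environment as DOPPLER_TOKEN from your doppler-token secret):

# Open a shell in the running container
kubectl exec -it <your-pod> -c my-container -- sh

# Inside the container: confirm the token variable is actually set
env | grep DOPPLER

# Hit the API directly with that token
curl -v -H "Authorization: Bearer $DOPPLER_TOKEN" \
  "https://api.doppler.com/v3/configs/config/secrets/download?project=my-project&config=my-config&format=json"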

As an aside, we typically recommend using our Kubernetes operator in k8s environments. That way, you don't need to bake our CLI into your images at all. It also provides a layer of redundancy that protects you if we ever have an outage, since all your services read from k8s secrets and the operator just syncs them from Doppler.
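
To give you an idea, once the operator is installed, a DopplerSecret resource looks roughly like this (the names and namespaces here are just examples; doppler-token-secret would hold your service token):

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: doppler-secret
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: doppler-managed-secret
    namespace: default

The operator then keeps doppler-managed-secret in sync with your Doppler config, and your deployments consume it like any other k8s secret.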

Regards,
-Joel

Thanks for the reply and the welcome @watsonian! After a lot of pain, I discovered I had failed to enable Cloud NAT for the GKE cluster. Once I worked through that, everything worked like a charm. I also migrated to the Kubernetes operator; thanks for the tip!

@rrmangum Cheers! Glad you were able to find the cause of the problem!

@watsonian If I don’t have to install the Doppler CLI in my images anymore, will the CMD line for my Dockerfile still work?

CMD ["doppler", "run", "--", "/api"]

Or is there a different way to define the image's starting command?

@rrmangum Nope, you would need to remove the doppler portion. Assuming you're using the k8s operator and have set your deployment up to load the secrets from the k8s secret the operator creates into the container environment, you can just start your application as you normally would, without Doppler:

CMD ["/api"]

Assuming that's the correct execution command for your application, it will start up with all of the secrets already present in the environment (because k8s injected them there).
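
In that setup, your deployment references the operator-managed secret with the same envFrom pattern from your original snippet, for example (the secret name here matches the managedSecret example above):

containers:
    - name: my-container
      image: my-image-url
      envFrom:
          - secretRef:
              name: doppler-managed-secret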
