I am trying to provide the DOPPLER_TOKEN to my service running on Google’s Cloud Run. I am using Docker and Cloud Build. While I can get this to work locally with docker-compose, I cannot get the DOPPLER_TOKEN injected into my Cloud Run service.
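For reference, this is roughly how the token reaches the container locally. A trimmed-down sketch of my compose file (the service name and paths are illustrative, and DOPPLER_TOKEN is exported in my shell):
services:
  api:
    build:
      context: ./src/python/api
      target: development-server
    environment:
      # docker-compose passes the token through from my shell environment
      - DOPPLER_TOKEN=${DOPPLER_TOKEN}
    ports:
      - "8000:8000"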
On GCP, my current attempt tries to build the token into the image at build time (and that isn’t working). I am also pretty sure that is not the preferred way, since it exposes the token to anyone with access to the image. So I would appreciate any advice here.
Code below:
Dockerfile:
################################################################################
# Python requirements generation stage
# This stage uses poetry to generate a requirements.txt file
# to be used in later stages.
FROM python:3.9 as generate-requirements
# Move to /tmp, where we will generate requirements.txt
WORKDIR /tmp
# Install poetry
RUN pip install poetry
# Install other dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    libffi-dev \
    g++ \
    postgresql \
    python3-psycopg2 \
    libpq-dev
# Copy the pyproject.toml and poetry.lock files to the /tmp directory.
# Because it uses ./poetry.lock* (ending with a *), it won't crash if that file
# is not available yet.
COPY ./pyproject.toml ./poetry.lock* /tmp/
# Generate the requirements.txt file
RUN poetry export -f requirements.txt --output requirements.txt --without-hashes
################################################################################
# Install python requirements stage
# This stage installs the python requirements generated in the previous stage.
# It inherits from the base python:3.9 and only pulls in the requirements.txt
# file from the generate-requirements stage.
FROM python:3.9 as install-requirements
# Move to /code directory
WORKDIR /code
# Copy the requirements.txt file over to the /code directory
COPY --from=generate-requirements /tmp/requirements.txt .
# Install the requirements via pip
RUN pip install --no-cache-dir --upgrade -r requirements.txt
################################################################################
# Development server stage
# This stage runs the development server in the context of Doppler,
# which we use to inject secrets and environment variables that are centrally
# managed.
FROM install-requirements as development-server
# Install Doppler CLI
RUN apt-get update \
    && apt-get install -y apt-transport-https ca-certificates curl gnupg \
    && curl -sLf --retry 3 --tlsv1.2 --proto "=https" 'https://packages.doppler.com/public/cli/gpg.DE2A7741A397C129.key' \
       | apt-key add - \
    && echo "deb https://packages.doppler.com/public/cli/deb/debian any-version main" \
       | tee /etc/apt/sources.list.d/doppler-cli.list \
    && apt-get update \
    && apt-get -y install doppler
WORKDIR /code
COPY . ./
# Upgrade the DB via Alembic, and run the server
CMD ["sh", "-c", "doppler run -- alembic upgrade head && uvicorn app.api.server:app --host 0.0.0.0 --port 8000"]
Here is my cloud build file:
# This cloud build workflow builds the development API service for running on
# GCP.
steps:
# Build the API docker image.
#
# Because we use a multi-stage Dockerfile, we use the --target argument to
# build the development-server stage.
- name: gcr.io/cloud-builders/docker
  id: build_dev_server
  args:
    [
      "build",
      "--target",
      "development-server",
      "--no-cache",
      "-t",
      "${_GCR_HOSTNAME}/${PROJECT_ID}/${REPO_NAME}/${_SERVICE_NAME}:${COMMIT_SHA}",
      "--build-arg",
      "DOPPLER_TOKEN=$$DOPPLER_TOKEN",
      "src/python/api",
      "-f",
      "src/python/api/Dockerfile"
    ]
  secretEnv: ['DOPPLER_TOKEN']
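  # Note: I am not sure the $$DOPPLER_TOKEN reference above ever expands, since
  # the docker builder does not run through a shell. My reading of the Cloud
  # Build docs is that secretEnv values are only usable from a shell entrypoint,
  # roughly like the following (an untested sketch, and it still bakes the token
  # into the image, which I would rather avoid):
  #
  # - name: gcr.io/cloud-builders/docker
  #   id: build_dev_server
  #   entrypoint: bash
  #   secretEnv: ['DOPPLER_TOKEN']
  #   args:
  #     - -c
  #     - >-
  #       docker build --target development-server --no-cache
  #       -t ${_GCR_HOSTNAME}/${PROJECT_ID}/${REPO_NAME}/${_SERVICE_NAME}:${COMMIT_SHA}
  #       --build-arg DOPPLER_TOKEN=$$DOPPLER_TOKEN
  #       -f src/python/api/Dockerfile src/python/api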
# Push the image we built to Google Container Registry
- name: gcr.io/cloud-builders/docker
  id: push_image
  args:
    [
      "push",
      "${_GCR_HOSTNAME}/${PROJECT_ID}/${REPO_NAME}/${_SERVICE_NAME}:${COMMIT_SHA}",
    ]
# As a sanity check, run the unit tests on the container we just built.
- name: gcr.io/cloud-builders/docker
  id: run_unit_tests
  args:
    [
      "run",
      "${_GCR_HOSTNAME}/${PROJECT_ID}/${REPO_NAME}/${_SERVICE_NAME}:${COMMIT_SHA}",
      "pytest",
      "tests/unit_tests",
    ]
  secretEnv: ['DOPPLER_TOKEN']
# Run the container via Cloud Run
- name: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
  id: deploy
  args:
    - run
    - services
    - update
    - $_SERVICE_NAME
    - '--platform=managed'
    - '--image=${_GCR_HOSTNAME}/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    - '--region=$_DEPLOY_REGION'
    - '--quiet'
  entrypoint: gcloud
# Cleanup tags against closed pull requests
#- id: "clean up old deployments for closed pr"
# name: "gcr.io/${PROJECT_ID}/deployment-previews"
# args:
# - "cleanup"
# - "--project-id"
# - "${PROJECT_ID}"
# - "--region"
# - "${_DEPLOY_REGION}"
# - "--service"
# - "${_SERVICE_NAME}"
# - "--repo-name"
# - "${_GITHUB_OWNER}/${REPO_NAME}"
images:
- '${_GCR_HOSTNAME}/${PROJECT_ID}/${REPO_NAME}/${_SERVICE_NAME}:${COMMIT_SHA}'
options:
  substitutionOption: ALLOW_LOOSE
# See https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values
substitutions:
  _DEPLOY_REGION: us-central1
  _GCR_HOSTNAME: us.gcr.io
  _PLATFORM: managed
  _SERVICE_NAME: service_name
  _GITHUB_OWNER: $(push.repository.owner.name)
# Supply the DOPPLER_TOKEN, which is stored in the secrets manager.
# Doppler is used to inject all other environment variables.
# See https://docs.doppler.com/docs/enclave-gcp-cloud-build
availableSecrets:
  secretManager:
    - versionName: "projects/${PROJECT_ID}/secrets/DOPPLER_TOKEN/versions/latest"
      env: DOPPLER_TOKEN
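The alternative I keep circling back to, and where I would especially appreciate advice, is to skip the build arg entirely and have Cloud Run supply the token at runtime by referencing the same Secret Manager secret in the deploy step. My understanding is that something along these lines should work, assuming the Cloud Run runtime service account can access the secret (a sketch, not tested end to end):
gcloud run services update $_SERVICE_NAME \
  --platform=managed \
  --region=$_DEPLOY_REGION \
  --update-secrets=DOPPLER_TOKEN=DOPPLER_TOKEN:latest
Is that the preferred pattern for Doppler with Cloud Run, or is there a better way to get the DOPPLER_TOKEN to the service?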