- 21 Oct 2024
- #1
I have a Kubernetes cluster with 1 master and 2 workers. Each node has its own IP address. Let's call them:
- master-0
- worker-0
- worker-1
The pod network policy and the communication with all my nodes are configured correctly; everything works fine. I mention this infrastructure only to make my case more concrete.
Using helm, I created a chart that deploys a basic nginx. It is a Docker image that I build in my private GitLab registry.
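To give an idea of the shape, the chart's values.yaml points at the private registry image roughly like this (a minimal sketch; the key names image.repository, image.tag and imagePullSecret are assumptions, not necessarily my exact chart):

# Sketch of the application chart's values.yaml (key names are assumptions)
image:
  repository: registry.gitlab.com/path/to/repo/project/image
  tag: TAG_NUMBER          # replaced by the CI job with the real tag
# Name of the docker-registry secret used to pull from the private registry
imagePullSecret: gitlab-registry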
With GitLab CI, I created a job that uses two functions:
# Init helm client on k8s cluster for using helm with gitlab runner
function init_helm() {
  docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  mkdir -p /etc/deploy
  echo ${kube_config} | base64 -d > ${KUBECONFIG}
  kubectl config use-context ${K8S_CURRENT_CONTEXT}
  helm init --client-only
  helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
  helm repo update
}

# Deploy latest tagged image on k8s cluster
function deploy_k8s_cluster() {
  echo "Create and apply secret for docker gitlab runner access to gitlab private registry ..."
  kubectl create secret -n "$KUBERNETES_NAMESPACE_OVERWRITE" \
    docker-registry gitlab-registry \
    --docker-server="https://registry.gitlab.com/v2/" \
    --docker-username="${CI_DEPLOY_USER:-$CI_REGISTRY_USER}" \
    --docker-password="${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}" \
    --docker-email="$GITLAB_USER_EMAIL" \
    -o yaml --dry-run | kubectl replace -n "$KUBERNETES_NAMESPACE_OVERWRITE" --force -f -

  echo "Build helm dependancies in $CHART_TEMPLATE"
  cd $CHART_TEMPLATE/
  helm dep build

  export DEPLOYS="$(helm ls | grep $PROJECT_NAME | wc -l)"
  if [[ ${DEPLOYS} -eq 0 ]]; then
    echo "Creating the new chart ..."
    helm install --name ${PROJECT_NAME} --namespace=${KUBERNETES_NAMESPACE_OVERWRITE} . -f values.yaml
  else
    echo "Updating the chart ..."
    helm upgrade ${PROJECT_NAME} --namespace=${KUBERNETES_NAMESPACE_OVERWRITE} . -f values.yaml
  fi
}
The first function lets the GitLab runner log in with docker and initialize helm and kubectl. The second one deploys my image on the cluster.
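For reference, the kubectl create secret docker-registry call in deploy_k8s_cluster produces a secret of roughly this shape (a sketch; the data value is a placeholder, not real credentials):

# Approximate shape of the generated pull secret (placeholder payload)
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-registry
  namespace: <KUBERNETES_NAMESPACE_OVERWRITE>
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64 of {"auths":{"https://registry.gitlab.com/v2/":{"username":"...","password":"...","email":"...","auth":"..."}}}>

Kubernetes only uses such a secret for image pulls when a pod (or its service account) actually references it via imagePullSecrets.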
The whole process works well, i.e. my jobs pass in GitLab CI and no errors occur, except for the deployment of the pod.
Indeed, I get this error:
Failed to pull image "registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.gitlab.com/v2/path/to/repo/project/image/manifests/image:TAG_NUMBER: denied: access forbidden
To be more specific, I use the gitlab-runner helm chart, and this is the chart configuration:
## GitLab Runner Image
##
## By default it's using gitlab/gitlab-runner:alpine-v{VERSION}
## where {VERSION} is taken from Chart.yaml from appVersion field
##
## ref: https://hub.docker.com/r/gitlab/gitlab-runner/tags/
##
# image: gitlab/gitlab-runner:alpine-v11.6.0
## Specify a imagePullPolicy
## 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
imagePullPolicy: IfNotPresent
## The GitLab Server URL (with protocol) that want to register the runner against
## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-register
##
gitlabUrl: https://gitlab.com/
## The Registration Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance.
## ref: https://docs.gitlab.com/ce/ci/runners/README.html#creating-and-registering-a-runner
##
runnerRegistrationToken: "<token>"
## The Runner Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance. It is token of already registered runner.
## ref: (we don't yet have docs for that, but we want to use existing token)
##
# runnerToken: ""
#
## Unregister all runners before termination
##
## Updating the runner's chart version or configuration will cause the runner container
## to be terminated and created again. This may cause your Gitlab instance to reference
## non-existant runners. Un-registering the runner before termination mitigates this issue.
## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-unregister
##
unregisterRunners: true
## Set the certsSecretName in order to pass custom certficates for GitLab Runner to use
## Provide resource name for a Kubernetes Secret Object in the same namespace,
## this is used to populate the /etc/gitlab-runner/certs directory
## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates
##
# certsSecretName:
## Configure the maximum number of concurrent jobs
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
concurrent: 10
## Defines in seconds how often to check GitLab for a new builds
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
checkInterval: 30
## Configure GitLab Runner's logging level. Available values are: debug, info, warn, error, fatal, panic
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
# logLevel:
## For RBAC support:
rbac:
  create: true
  ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs
  ## cluster-wide or only within namespace
  clusterWideAccess: true
  ## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create)
  ##
  serviceAccountName: default
## Configure integrated Prometheus metrics exporter
## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
metrics:
  enabled: true
## Configuration for the Pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: ubuntu:16.04
  ## Specify one or more imagePullSecrets
  ##
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  imagePullSecrets: ["namespace-1", "namespace-2", "default"]
  ## Specify the image pull policy: never, if-not-present, always. The cluster default will be used if not set.
  ##
  # imagePullPolicy: ""
  ## Specify whether the runner should be locked to a specific project: true, false. Defaults to true.
  ##
  # locked: true
  ## Specify the tags associated with the runner. Comma-separated list of tags.
  ##
  ## ref: https://docs.gitlab.com/ce/ci/runners/#using-tags
  ##
  tags: "my-tag-1, my-tag-2"
  ## Run all containers with the privileged flag enabled
  ## This will allow the docker:dind image to run if you need to run Docker
  ## commands. Please read the docs before turning this on:
  ## ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-dind
  ##
  privileged: true
  ## The name of the secret containing runner-token and runner-registration-token
  # secret: gitlab-runner
  ## Namespace to run Kubernetes jobs in (defaults to the same namespace of this release)
  ##
  # namespace:
  # Regular expression to validate the contents of the namespace overwrite environment variable (documented following).
  # When empty, it disables the namespace overwrite feature
  namespace_overwrite_allowed: overrided-namespace-*
  ## Distributed runners caching
  ## ref: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/autoscale.md#distributed-runners-caching
  ##
  ## If you want to use s3 based distributing caching:
  ## First of all you need to uncomment General settings and S3 settings sections.
  ##
  ## Create a secret 's3access' containing 'accesskey' & 'secretkey'
  ## ref: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
  ##
  ## $ kubectl create secret generic s3access \
  ##     --from-literal=accesskey="YourAccessKey" \
  ##     --from-literal=secretkey="YourSecretKey"
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  ##
  ## If you want to use gcs based distributing caching:
  ## First of all you need to uncomment General settings and GCS settings sections.
  ##
  ## Access using credentials file:
  ## Create a secret 'google-application-credentials' containing your application credentials file.
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
  ## You could configure
  ## $ kubectl create secret generic google-application-credentials \
  ##     --from-file=gcs-applicaton-credentials-file=./path-to-your-google-application-credentials-file.json
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  ##
  ## Access using access-id and private-key:
  ## Create a secret 'gcsaccess' containing 'gcs-access-id' & 'gcs-private-key'.
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
  ## You could configure
  ## $ kubectl create secret generic gcsaccess \
  ##     --from-literal=gcs-access-id="YourAccessID" \
  ##     --from-literal=gcs-private-key="YourPrivateKey"
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  cache: {}
    ## General settings
    # cacheType: s3
    # cachePath: "cache"
    # cacheShared: true
    ## S3 settings
    # s3ServerAddress: s3.amazonaws.com
    # s3BucketName:
    # s3BucketLocation:
    # s3CacheInsecure: false
    # secretName: s3access
    ## GCS settings
    # gcsBucketName:
    ## Use this line for access using access-id and private-key
    # secretName: gcsaccess
    ## Use this line for access using google-application-credentials file
    # secretName: google-application-credential
  ## Build Container specific configuration
  ##
  builds:
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi
  ## Service Container specific configuration
  ##
  services:
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi
  ## Helper Container specific configuration
  ##
  helpers:
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi
    image: gitlab/gitlab-runner-helper:x86_64-latest
  ## Service Account to be used for runners
  ##
  # serviceAccountName:
  ## If Gitlab is not reachable through $CI_SERVER_URL
  ##
  # cloneUrl:
  ## Specify node labels for CI job pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  nodeSelector: {}
    # gitlab: true
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # limits:
  #   memory: 256Mi
  #   cpu: 200m
  requests:
    memory: 128Mi
    cpu: 100m
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
  # Example: The gitlab runner manager should not run on spot instances so you can assign
  # them to the regular worker nodes only.
  # node-role.kubernetes.io/worker: "true"
## List of node taints to tolerate (requires Kubernetes >= 1.6)
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
  # Example: Regular worker nodes may have a taint, thus you need to tolerate the taint
  # when you assign the gitlab runner manager with nodeSelector or affinity to the nodes.
  # - key: "node-role.kubernetes.io/worker"
  #   operator: "Exists"
## Configure environment variables that will be present when the registration command runs
## This provides further control over the registration process and the config.toml file
## ref: `gitlab-runner register --help`
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html
##
envVars:
  - name: RUNNER_EXECUTOR
    value: kubernetes
As you can see, I create a secret in my CI job, and no errors occur there either. In my chart, I declare this same secret (by its name) in the values.yaml file, which allows deployment.yaml to use it.
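Concretely, the pod template in deployment.yaml references the secret along these lines (a sketch with assumed names and values keys, not my exact template):

# deployment.yaml (sketch): the pull secret must exist in the pod's namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-nginx
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-nginx
    spec:
      imagePullSecrets:
        - name: {{ .Values.imagePullSecret }}   # e.g. gitlab-registry, the secret created in the CI job
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 80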
Notes:
If the problem were the registry URL, e.g. my image repository registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER, I would get an error like "my image does not exist" or something similar. To be more precise, note that the TAG_NUMBER variable is set in my jobs and I retrieve the correct value. So I don't think my problem is related to the image repository URL.
I checked my secret with a simple pod deployment. With these credentials everything works well and the pod is fine (I used the
kubectl create -f <filename>.yaml
command). I just read this post about kubernetes and gitlab: https://docs.gitlab.com/ee/topics/autodevops/#private-project-support. Maybe it is related to my issue.
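A minimal manifest for that kind of check looks roughly like this (illustrative names, not my actual file):

# test-pod.yaml (sketch): one-off pod pulling the same private image with the same secret
apiVersion: v1
kind: Pod
metadata:
  name: registry-pull-test
spec:
  imagePullSecrets:
    - name: gitlab-registry
  containers:
    - name: test
      image: registry.gitlab.com/path/to/repo/project/image:TAG_NUMBER   # TAG_NUMBER replaced with a real tag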
#docker #kubernetes #gitlab #helm #gitlab-ci-runner