GitLab Geo
GitLab Geo provides the ability to have read-only, geographically distributed application deployments.
While external database services can be used, these documents currently focus on
using Omnibus GitLab to provide PostgreSQL, in order to remain as
platform agnostic as possible and to make use of the automation included within `gitlab-ctl`.
Requirements
In order to use GitLab Geo with the GitLab Helm chart, the following requirements must be met:
- The use of an external PostgreSQL service, as the PostgreSQL included with the chart is not exposed to outside networks, nor does it currently have the WAL support required for replication.
- The supplied database must:
- Support replication.
- The primary database must be reachable by the primary application deployment, and by all secondary database nodes (for replication).
- Secondary databases only need to be reachable by the secondary application deployments.
- Support SSL between primary and secondary.
- The primary application instance must be reachable via HTTPS by all secondary instances. Secondary application instances must be accessible to the primary instance via HTTPS.
Overview
This guide will use 2 Omnibus GitLab instances, configuring only the PostgreSQL services needed, and 2 deployments of the GitLab Helm chart. It is intended to be the minimal required configuration. This documentation does not currently include SSL from application to database, support for other database providers, or promoting a secondary instance to primary.
The outline below should be followed in order:
- Setup Omnibus instances
- Setup Kubernetes clusters
- Collect information
- Configure Primary database
- Deploy chart as Geo Primary
- Set the Geo primary instance
- Configure Secondary database
- Copy secrets from primary cluster to secondary cluster
- Deploy chart as Geo Secondary
- Add Secondary Geo instance via Primary
- Confirm Operational Status
Setup Omnibus instances
For this process, two instances are required. One will be the Primary, the other the Secondary. You may use any provider of machine infrastructure, on-premise or from a cloud provider.
Bear in mind that communication is required:
- Between the two database instances for replication.
- Between each database instance and its respective Kubernetes deployment:
  - Primary will need to expose TCP port `5432`.
  - Secondary will need to expose TCP ports `5432` and `5431`.
Install an operating system supported by Omnibus GitLab, and then
install the Omnibus GitLab package onto it. Do not provide the
`EXTERNAL_URL` environment variable when installing, as we’ll provide a minimal
configuration file before reconfiguring the package.
Once you have installed the operating system, and the GitLab package, configuration can be created for the services that will be used. Before we do that, information must be collected.
Setup Kubernetes clusters
For this process, two Kubernetes clusters should be used. These can be from any provider, on-premise or from a cloud provider.
Bear in mind that communication is required:
- To the respective database instances:
  - Primary outbound to TCP `5432`.
  - Secondary outbound to TCP `5432` and `5431`.
- Between both Kubernetes Ingress via HTTPS.
Each cluster that is provisioned should have:
- Enough resources to support a base-line installation of these charts.
- Access to persistent storage:
- MinIO not required if using external object storage
- Gitaly not required if using external Gitaly
- Redis not required if using external Redis
Collect information
To continue with the configuration, the following information needs to be collected from the various sources. Collect these, and make notes for use through the rest of this documentation.
- Primary database:
- IP address
- hostname (optional)
- Secondary database:
- IP address
- hostname (optional)
- Primary cluster:
- IP addresses of nodes
- Secondary cluster:
- IP addresses of nodes
- Database passwords (must pre-decide the password(s)):
  - `gitlab` (used in `postgresql['sql_user_password']`, `global.psql.password`)
  - `gitlab_geo` (used in `geo_postgresql['sql_user_password']`, `global.geo.psql.password`)
  - `gitlab_replicator` (needed for replication)
- Your GitLab license file
The `gitlab` and `gitlab_geo` database user passwords will need to exist in two
forms: bare password, and PostgreSQL hashed password. To obtain the hashed form,
run the following commands on one of the Omnibus instances. Each will ask
you to enter and confirm the password before outputting an appropriate hash
value for you to make note of.
```shell
gitlab-ctl pg-password-md5 gitlab
gitlab-ctl pg-password-md5 gitlab_geo
```
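For reference, these hashes follow PostgreSQL's `md5` password format: the literal prefix `md5` followed by the MD5 digest of the password concatenated with the username. A minimal sketch of the same computation (the `pg_md5_hash` helper name is ours, not part of Omnibus GitLab):

```shell
# PostgreSQL md5 auth hashes are "md5" + md5(password || username).
# This is a sketch for illustration; prefer the interactive
# `gitlab-ctl pg-password-md5` prompt, which keeps the bare password
# out of your shell history and process list.
pg_md5_hash() {
  printf 'md5%s' "$(printf '%s%s' "$1" "$2" | md5sum | cut -d' ' -f1)"
}

pg_md5_hash 'examplepassword' 'gitlab'
```

The result is 35 characters: the `md5` prefix plus 32 hex digits, suitable for `postgresql['sql_user_password']`.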
Configure Primary database
This section will be performed on the Primary Omnibus GitLab instance.
To configure the Primary database instance’s Omnibus GitLab, we’ll work from this example configuration.
```ruby
### Geo Primary
external_url 'http://gitlab-primary.example.com'
roles ['geo_primary_role']

# The unique identifier for the Geo node.
gitlab_rails['geo_node_name'] = 'gitlab-primary.example.com'
gitlab_rails['auto_migrate'] = false

## turn off everything but the DB
sidekiq['enable'] = false
unicorn['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
geo_logcursor['enable'] = false
grafana['enable'] = false
gitaly['enable'] = false
redis['enable'] = false
prometheus_monitoring['enable'] = false

## Configure the DB for network
postgresql['enable'] = true
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'gitlab_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - primary application deployment
# - secondary database instance(s)
postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']
```
We need to replace several items:

- `external_url` must be updated to reflect the host name of our Primary instance.
- `gitlab_rails['geo_node_name']` must be replaced with a unique name for your node.
- `gitlab_user_password_hash` must be replaced with the hashed form of the `gitlab` password.
- `postgresql['md5_auth_cidr_addresses']` can be updated to be a list of explicit IP addresses, or address blocks in CIDR notation.
The `md5_auth_cidr_addresses` should be in the form of
`['127.0.0.1/24', '10.41.0.0/16']`. It is important to include `127.0.0.1` in
this list, as the automation within Omnibus GitLab will connect using this. The
addresses in this list should include the IP address (not hostname) of your
Secondary database, and all nodes of your primary Kubernetes cluster. This can
be left as `['0.0.0.0/0']`, however it is not best practice.
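If you are unsure whether a given node's address is actually covered by your CIDR list, the containment check can be scripted. A minimal sketch, IPv4 only; the `ip_in_cidr` and `ip_to_int` helper names are ours, not part of Omnibus GitLab:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Succeed (exit 0) when IPv4 address $1 falls inside CIDR block $2.
ip_in_cidr() {
  net=${2%/*}; bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

ip_in_cidr 10.41.5.3 10.41.0.0/16 && echo "covered"   # prints "covered"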
Once the configuration above is prepared:
1. Place the content into `/etc/gitlab/gitlab.rb`.
1. Run `gitlab-ctl reconfigure`. If you experience any issues in regards to the service not listening on TCP, try directly restarting it with `gitlab-ctl restart postgresql`.
1. Run `gitlab-ctl set-replication-password` in order to set the password for the `gitlab_replicator` user.
1. Retrieve the Primary database server’s public certificate, and store this output; it will be needed for the Secondary database to be able to replicate:

   ```shell
   cat ~gitlab-psql/data/server.crt
   ```
Deploy chart as Geo Primary
This section will be performed on the Primary Kubernetes cluster.
In order to deploy this chart as a Geo Primary, we’ll start from this example configuration.
```yaml
### Geo Primary
global:
  # See docs.gitlab.com/charts/charts/globals
  # Configure host & domain
  hosts:
    domain: example.com
  # configure DB connection
  psql:
    host: geo-1.db.example.com
    port: 5432
    password:
      secret: geo
      key: postgresql-password
  # configure geo (primary)
  geo:
    nodeName: primary.example.com
    enabled: true
    role: primary
# External DB, disable
postgresql:
  install: false
```
1. We’ll need to create a secret containing the database password, for the chart to consume. Replace `PASSWORD` below with the password for the `gitlab` database user:

   ```shell
   kubectl create secret generic geo --from-literal=postgresql-password=PASSWORD
   ```

1. Update the configuration to reflect the correct values for:
   - `global.hosts.domain`
   - `global.psql.host`

   Also configure any additional settings you may require.

1. Deploy the chart using this configuration:

   ```shell
   helm upgrade --install gitlab-geo gitlab/gitlab -f primary.yaml
   ```

   Note: With Helm v2, one may need to specify the namespace that the release was deployed to with the `--namespace <namespace>` option.

1. Wait for the deployment to complete, and the application to come online. Once the application is reachable, log in.

1. Log in to GitLab, and upload your GitLab license file by navigating to `/admin/license`. This step is required for Geo to function.
Set the Geo Primary instance
Now that the chart has been deployed, and a license uploaded, we can configure
this as the Primary instance. We will do this via the task-runner Pod.
1. Find the `task-runner` Pod:

   ```shell
   kubectl get pods -lapp=task-runner --namespace gitlab
   ```

1. Run `gitlab-rake geo:set_primary_node` with `kubectl exec`:

   ```shell
   kubectl exec -ti gitlab-geo-task-runner-XXX -- gitlab-rake geo:set_primary_node
   ```

1. Check the status of Geo configuration:

   ```shell
   kubectl exec -ti gitlab-geo-task-runner-XXX -- gitlab-rake gitlab:geo:check
   ```

   You should see output similar to below:

   ```plaintext
   WARNING: This version of GitLab depends on gitlab-shell 10.2.0, but you're running Unknown. Please update gitlab-shell.
   Checking Geo ...

   GitLab Geo is available ... yes
   GitLab Geo is enabled ... yes
   GitLab Geo secondary database is correctly configured ... not a secondary node
   Database replication enabled? ... not a secondary node
   Database replication working? ... not a secondary node
   GitLab Geo HTTP(S) connectivity ... not a secondary node
   HTTP/HTTPS repository cloning is enabled ... yes
   Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
   Git user has default SSH configuration? ... yes
   OpenSSH configured to use AuthorizedKeysCommand ... no
     Reason:
     Cannot find OpenSSH configuration file at: /assets/sshd_config
     Try fixing it:
     If you are not using our official docker containers,
     make sure you have OpenSSH server installed and configured correctly on this system
     For more information see:
     doc/administration/operations/fast_ssh_key_lookup.md
   GitLab configured to disable writing to authorized_keys file ... yes
   GitLab configured to store new projects in hashed storage? ... yes
   All projects are in hashed storage? ... yes

   Checking Geo ... Finished
   ```

   - Don’t worry about `Exception: getaddrinfo: Servname not supported for ai_socktype`, as Kubernetes containers will not have access to the host clock. This is OK.
   - `OpenSSH configured to use AuthorizedKeysCommand ... no` is expected. This Rake task is checking for a local SSH server, which is actually present in the `gitlab-shell` chart, deployed elsewhere, and already configured appropriately.
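If you want to scan a saved copy of this report mechanically, for example in a smoke-test script, the failing checks can be grepped out. A minimal sketch; the `geo_check_ok` helper is ours, not part of GitLab, and it treats the known-benign `AuthorizedKeysCommand` result as a pass:

```shell
# Succeed (exit 0) when a saved `gitlab:geo:check` report contains no
# failing "... no" lines, ignoring the expected AuthorizedKeysCommand
# result. Any genuine failures are printed for inspection.
geo_check_ok() {
  grep '\.\.\. no$' "$1" | grep -v 'AuthorizedKeysCommand' && return 1
  return 0
}

# Demo against a saved report:
printf 'GitLab Geo is enabled ... yes\nOpenSSH configured to use AuthorizedKeysCommand ... no\n' > report.txt
geo_check_ok report.txt && echo "all checks passed"   # prints "all checks passed"
```

In practice, you would first redirect the output of the `kubectl exec ... gitlab:geo:check` command above into the report file.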
Configure Secondary database
This section will be performed on the Secondary Omnibus GitLab instance.
To configure the Secondary database instance’s Omnibus GitLab, we’ll work from this example configuration.
```ruby
### Geo Secondary
external_url 'http://gitlab-secondary.example.com'
roles ['geo_secondary_role']

# The unique identifier for the Geo node.
gitlab_rails['geo_node_name'] = 'gitlab-secondary.example.com'
gitlab_rails['auto_migrate'] = false
geo_secondary['auto_migrate'] = false

## turn off everything but the DB
sidekiq['enable'] = false
unicorn['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
geo_logcursor['enable'] = false
grafana['enable'] = false
gitaly['enable'] = false
redis['enable'] = false
prometheus_monitoring['enable'] = false

## Configure the DBs for network
postgresql['enable'] = true
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'gitlab_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - secondary application deployment
# - secondary database instance(s)
postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']

geo_postgresql['listen_address'] = '0.0.0.0'
geo_postgresql['sql_user_password'] = 'gitlab_geo_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - secondary application deployment
# - secondary database instance(s)
geo_postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']

gitlab_rails['db_password'] = 'gitlab_user_password'
```
We need to replace several items:

- `external_url` must be updated to reflect the host name of our Secondary instance.
- `gitlab_rails['geo_node_name']` must be replaced with a unique name for your node.
- `gitlab_user_password_hash` must be replaced with the hashed form of the `gitlab` password.
- `postgresql['md5_auth_cidr_addresses']` should be updated to be a list of explicit IP addresses, or address blocks in CIDR notation.
- `gitlab_geo_user_password_hash` must be replaced with the hashed form of the `gitlab_geo` password.
- `geo_postgresql['md5_auth_cidr_addresses']` should be updated to be a list of explicit IP addresses, or address blocks in CIDR notation.
- `gitlab_user_password` must be updated; it is used here to allow Omnibus GitLab to automate the PostgreSQL configuration.
The `md5_auth_cidr_addresses` should be in the form of
`['127.0.0.1/24', '10.41.0.0/16']`. It is important to include `127.0.0.1` in
this list, as the automation within Omnibus GitLab will connect using this. The
addresses in this list should include the IP addresses of all nodes of your
Secondary Kubernetes cluster. This can be left as `['0.0.0.0/0']`, however
it is not best practice.
Once the configuration above is prepared:
1. Check TCP connectivity to the primary node’s PostgreSQL server:

   ```shell
   openssl s_client -connect <primary_node_ip>:5432 </dev/null
   ```

   The output should show the following:

   ```plaintext
   CONNECTED(00000003)
   write:errno=0
   ```

   Note: If this step fails, you may be using the wrong IP address, or a firewall may be preventing access to the server. Check the IP address, paying close attention to the difference between public and private addresses, and ensure that, if a firewall is present, the secondary node is permitted to connect to the primary node on port 5432.

1. Place the content into `/etc/gitlab/gitlab.rb`.

1. Run `gitlab-ctl reconfigure`. If you experience any issues in regards to the service not listening on TCP, try directly restarting it with `gitlab-ctl restart postgresql`.

1. Place the Primary database’s certificate content from above into `primary.crt`.

1. Set up PostgreSQL TLS verification on the secondary node by installing the `primary.crt` file:

   ```shell
   install \
     -D \
     -o gitlab-psql \
     -g gitlab-psql \
     -m 0400 \
     -T primary.crt ~gitlab-psql/.postgresql/root.crt
   ```

   PostgreSQL will now only recognize that exact certificate when verifying TLS connections. The certificate can only be replicated by someone with access to the private key, which is only present on the primary node.

1. Test that the `gitlab-psql` user can connect to the primary node’s database (the default Omnibus database name is `gitlabhq_production`):

   ```shell
   sudo \
     -u gitlab-psql /opt/gitlab/embedded/bin/psql \
     --list \
     -U gitlab_replicator \
     -d "dbname=gitlabhq_production sslmode=verify-ca" \
     -W \
     -h <primary_node_ip>
   ```

   When prompted, enter the password collected earlier for the `gitlab_replicator` user. If all worked correctly, you should see the list of the primary node’s databases.

   A failure to connect here indicates that the TLS configuration is incorrect. Ensure that the contents of `~gitlab-psql/data/server.crt` on the primary node match the contents of `~gitlab-psql/.postgresql/root.crt` on the secondary node.

1. Replicate the databases. Replace `PRIMARY_DATABASE_HOST` with the IP or hostname of your Primary database instance:

   ```shell
   gitlab-ctl replicate-geo-database --slot-name=geo_2 --host=PRIMARY_DATABASE_HOST
   ```

1. After replication has finished, we must reconfigure the Omnibus GitLab one last time to ensure `pg_hba.conf` is correct for the secondary:

   ```shell
   gitlab-ctl reconfigure
   ```
Copy secrets from the primary cluster to the secondary cluster
We now need to copy a few secrets from the Primary Kubernetes deployment to the Secondary Kubernetes deployment.
- `gitlab-geo-gitlab-shell-host-keys`
- `gitlab-geo-rails-secret`

1. Change your `kubectl` context to that of your Primary.

1. Collect these secrets from the Primary deployment:

   ```shell
   kubectl get -n gitlab -o yaml secret gitlab-geo-gitlab-shell-host-keys > ssh-host-keys.yaml
   kubectl get -n gitlab -o yaml secret gitlab-geo-rails-secret > rails-secrets.yaml
   ```

1. Change your `kubectl` context to that of your Secondary.

1. Apply these secrets:

   ```shell
   kubectl apply -f ssh-host-keys.yaml
   kubectl apply -f rails-secrets.yaml
   ```
We’ll now need to create a secret containing the database passwords. Replace the passwords below with the appropriate values:

```shell
kubectl create secret generic geo \
  --from-literal=postgresql-password=gitlab_user_password \
  --from-literal=geo-postgresql-password=gitlab_geo_user_password
```
Deploy chart as Geo Secondary
This section will be performed on the Secondary Kubernetes cluster.
In order to deploy this chart as a Geo Secondary, we’ll start from this example configuration.
```yaml
## Geo Secondary
global:
  # See docs.gitlab.com/charts/charts/globals
  # Configure host & domain
  hosts:
    hostSuffix: secondary
    domain: example.com
  # configure DB connection
  psql:
    host: geo-2.db.example.com
    port: 5432
    password:
      secret: geo
      key: postgresql-password
  # configure geo (secondary)
  geo:
    enabled: true
    role: secondary
    nodeName: secondary.example.com
    psql:
      host: geo-2.db.example.com
      port: 5431
      password:
        secret: geo
        key: geo-postgresql-password
# External DB, disable
postgresql:
  install: false
```
1. Update the configuration to reflect the correct values for:
   - `global.hosts.domain`
   - `global.psql.host`
   - `global.geo.psql.host`

   Also configure any additional settings you may require.

1. Deploy the chart using this configuration:

   ```shell
   helm upgrade --install gitlab-geo gitlab/gitlab -f secondary.yaml
   ```

   Note: With Helm v2, one may need to specify the namespace that the release was deployed to with the `--namespace <namespace>` option.

1. Wait for the deployment to complete, and the application to come online.
Add Secondary Geo instance via Primary
Now that both databases are configured and applications are deployed, we must tell the Primary that the Secondary exists:
1. Visit the primary instance’s **Admin Area > Geo** (`/admin/geo/nodes`) in your browser.
1. Click the **New node** button.
1. Add the secondary instance. Use the full URL for the **URL**. Do NOT check the **This is a primary node** checkbox.
1. Fill in **Name** with the `global.geo.nodeName`. These values must always match exactly, character for character.
1. Optionally, choose which groups or storage shards should be replicated by the secondary instance. Leave blank to replicate all.
1. Click the **Add node** button.
Once added to the admin panel, the secondary instance will automatically start replicating missing data from the primary instance in a process known as backfill. Meanwhile, the primary instance will start to notify each secondary instance of any changes, so that the secondary instance can act on those notifications immediately.
Confirm Operational Status
The final step is to verify the Geo replication status on the secondary instance once fully
configured, via the task-runner Pod.
1. Find the `task-runner` Pod:

   ```shell
   kubectl get pods -lapp=task-runner --namespace gitlab
   ```

1. Attach to the Pod with `kubectl exec`:

   ```shell
   kubectl exec -ti gitlab-geo-task-runner-XXX -- bash -l
   ```

1. Check the status of Geo configuration:

   ```shell
   gitlab-rake gitlab:geo:check
   ```

   You should see output similar to below:

   ```plaintext
   WARNING: This version of GitLab depends on gitlab-shell 10.2.0, but you're running Unknown. Please update gitlab-shell.
   Checking Geo ...

   GitLab Geo is available ... yes
   GitLab Geo is enabled ... yes
   GitLab Geo secondary database is correctly configured ... yes
   Database replication enabled? ... yes
   Database replication working? ... yes
   GitLab Geo HTTP(S) connectivity ...
   * Can connect to the primary node ... yes
   HTTP/HTTPS repository cloning is enabled ... yes
   Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
   Git user has default SSH configuration? ... yes
   OpenSSH configured to use AuthorizedKeysCommand ... no
     Reason:
     Cannot find OpenSSH configuration file at: /assets/sshd_config
     Try fixing it:
     If you are not using our official docker containers,
     make sure you have OpenSSH server installed and configured correctly on this system
     For more information see:
     doc/administration/operations/fast_ssh_key_lookup.md
   GitLab configured to disable writing to authorized_keys file ... yes
   GitLab configured to store new projects in hashed storage? ... yes
   All projects are in hashed storage? ... yes

   Checking Geo ... Finished
   ```

   - Don’t worry about `Exception: getaddrinfo: Servname not supported for ai_socktype`, as Kubernetes containers will not have access to the host clock. This is OK.
   - `OpenSSH configured to use AuthorizedKeysCommand ... no` is expected. This Rake task is checking for a local SSH server, which is actually present in the `gitlab-shell` chart, deployed elsewhere, and already configured appropriately.