Unverified Commit 66abd260 authored by Marin Jankovski

Merge commit 'cb604491' into 12-2-auto-deploy-20190818

parents a30e9d14 cb604491
...@@ -284,13 +284,16 @@ Introduced in GitLab 11.3. This file lives in `/var/log/gitlab/gitlab-rails/impo
Omnibus GitLab packages or in `/home/git/gitlab/log/importer.log` for
installations from source.
## `auth.log`
Introduced in GitLab 12.0. This file lives in `/var/log/gitlab/gitlab-rails/auth.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/auth.log` for
installations from source.
This log records:
- Information whenever [Rack Attack] registers an abusive request.
- Requests over the [Rate Limit] on raw endpoints.
NOTE: **Note:**
From [%12.1](https://gitlab.com/gitlab-org/gitlab-ce/issues/62756), user id and username are available on this log.
...@@ -334,3 +337,4 @@ installations from source.
[repocheck]: repository_checks.md
[Rack Attack]: ../security/rack_attack.md
[Rate Limit]: ../user/admin_area/settings/rate_limits_on_raw_endpoints.md
...@@ -77,11 +77,12 @@ authentication is not provided. For
those cases where it is not required, this will be mentioned in the documentation
for each individual endpoint. For example, the [`/projects/:id` endpoint](projects.md).
There are four ways to authenticate with the GitLab API:
1. [OAuth2 tokens](#oauth2-tokens)
1. [Personal access tokens](#personal-access-tokens)
1. [Session cookie](#session-cookie)
1. [GitLab CI job token](#gitlab-ci-job-token-premium) **(PREMIUM)**
For admins who want to authenticate with the API as a specific user, or who want to build applications or scripts that do so, two options are available:
...@@ -151,6 +152,14 @@ The primary user of this authentication method is the web frontend of GitLab its
which can use the API as the authenticated user to get a list of their projects,
for example, without needing to explicitly pass an access token.
### GitLab CI job token **(PREMIUM)**
With a few API endpoints you can use a [GitLab CI job token](../user/project/new_ci_build_permissions_model.md#job-token)
to authenticate with the API:
* [Get job artifacts](jobs.md#get-job-artifacts)
* [Pipeline triggers](pipeline_triggers.md)
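For example, a job could download another job's artifacts with the job token instead of a personal access token. Below is a minimal Ruby sketch, assuming it runs inside a CI job (so `CI_JOB_TOKEN` is set in the environment) and using placeholder values for the host, project ID, ref, and job name:
```ruby
# Minimal sketch: download a job's artifacts archive using the CI job token.
# Host, project ID (42), ref ('master'), and job name ('build') are placeholders.
require 'net/http'
require 'uri'

uri = URI('https://gitlab.example.com/api/v4/projects/42/jobs/artifacts/master/download?job=build')

request = Net::HTTP::Get.new(uri)
request['JOB-TOKEN'] = ENV['CI_JOB_TOKEN'] # job token instead of a personal access token

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

# Save the artifacts archive if the request succeeded.
File.binwrite('artifacts.zip', response.body) if response.is_a?(Net::HTTPSuccess)
```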
### Impersonation tokens
> [Introduced][ce-9099] in GitLab 9.0. Needs admin permissions.
......
doc/ci/quick_start/img/build_log.png (binary image updated: 34.4 KB → 135 KB)
...@@ -88,7 +88,7 @@ visit the project you want to make the Runner work for in GitLab:
## Registering a group Runner
Creating a group Runner requires Owner permissions for the group. To create a
group Runner visit the group you want to make the Runner work for in GitLab:
1. Go to **Settings > CI/CD** to obtain the token
...@@ -124,9 +124,9 @@ To lock/unlock a Runner:
## Assigning a Runner to another project
If you are an Owner of a project to which a specific Runner is assigned, and the
Runner is not [locked only to that project](#locking-a-specific-runner-from-being-enabled-for-other-projects),
you can also enable the Runner on any other project where you have Owner permissions.
To enable/disable a Runner in your project:
...@@ -250,7 +250,7 @@ When you [register a Runner][register], its default behavior is to **only pick**
[tagged jobs](../yaml/README.md#tags).
NOTE: **Note:**
Owner [permissions](../../user/permissions.md) are required to change the
Runner settings.
To make a Runner pick untagged jobs:
......
...@@ -20,7 +20,7 @@ A typical install of GitLab will be on GNU/Linux. It uses Nginx or Apache as a w
We also support deploying GitLab on Kubernetes using our [gitlab Helm chart](https://docs.gitlab.com/charts/).
The GitLab web app uses PostgreSQL for persistent database information (e.g. users, permissions, issues, other meta data). GitLab stores the bare git repositories it serves in `/home/git/repositories` by default. It also keeps default branch and hook information with the bare repository.
When serving repositories over HTTP/HTTPS GitLab utilizes the GitLab API to resolve authorization and access as well as serving git objects.
...@@ -511,7 +511,15 @@ To summarize here's the [directory structure of the `git` user home directory](.
ps aux | grep '^git'
```
GitLab has several components to operate. It requires a persistent database
(PostgreSQL) and redis database, and uses Apache httpd or Nginx to proxypass
Unicorn. All these components should run as different system users to GitLab
(e.g., `postgres`, `redis` and `www-data`, instead of `git`).
As the `git` user it starts Sidekiq and Unicorn (a simple ruby HTTP server
running on port `8080` by default). Under the GitLab user there are normally 4
processes: `unicorn_rails master` (1 process), `unicorn_rails worker`
(2 processes), `sidekiq` (1 process).
### Repository access
...@@ -554,12 +562,9 @@ $ /etc/init.d/nginx
Usage: nginx {start|stop|restart|reload|force-reload|status|configtest}
```
Persistent database
```
$ /etc/init.d/postgresql
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
```
...@@ -597,11 +602,6 @@ PostgreSQL
- `/var/log/postgresql/*`
### GitLab specific config files
GitLab has configuration files located in `/home/git/gitlab/config/*`. Commonly referenced config files include:
......
# Hash Indexes
PostgreSQL supports hash indexes besides the regular btree
indexes. Hash indexes however are to be avoided at all costs. While they may
_sometimes_ provide better performance the cost of rehashing can be very high.
More importantly: at least until PostgreSQL 10.0 hash indexes are not
......
...@@ -9,7 +9,7 @@ bundle exec rake setup
```
The `setup` task is an alias for `gitlab:setup`.
This task calls `db:reset` to create the database, and `db:seed_fu` to seed the database.
Note: `db:setup` calls `db:seed` but this does nothing.
### Seeding issues for all or a given project
......
...@@ -2,7 +2,7 @@
Storing SHA1 hashes as strings is not very space efficient. A SHA1 as a string
requires at least 40 bytes, an additional byte to store the encoding, and
perhaps more space depending on the internals of PostgreSQL.
On the other hand, if one were to store a SHA1 as binary one would only need 20
bytes for the actual SHA1, and 1 or 4 bytes of additional space (again depending
......
...@@ -15,14 +15,11 @@ FROM issues
WHERE title LIKE 'WIP:%';
```
On PostgreSQL the `LIKE` statement is case-sensitive. To perform a case-insensitive
`LIKE` you have to use `ILIKE` instead.
To handle this automatically you should use `LIKE` queries using Arel instead
of raw SQL fragments, as Arel automatically uses `ILIKE` on PostgreSQL.
```ruby
Issue.where('title LIKE ?', 'WIP:%')
...@@ -45,7 +42,7 @@ table = Issue.arel_table
Issue.where(table[:title].matches('WIP:%').or(table[:foo].matches('WIP:%')))
```
On PostgreSQL, this produces:
```sql
SELECT *
...@@ -53,18 +50,10 @@ FROM issues
WHERE (title ILIKE 'WIP:%' OR foo ILIKE 'WIP:%')
```
## LIKE & Indexes
PostgreSQL won't use any indexes when using `LIKE` / `ILIKE` with a wildcard at
the start. For example, this will not use any indexes:
```sql
SELECT *
...@@ -75,9 +64,8 @@ WHERE title ILIKE '%WIP:%';
Because the value for `ILIKE` starts with a wildcard the database is not able to
use an index as it doesn't know where to start scanning the indexes.
Luckily, PostgreSQL _does_ provide a solution: trigram GIN indexes. These
indexes can be created as follows:
```sql
CREATE INDEX [CONCURRENTLY] index_name_here
......
...@@ -15,16 +15,6 @@ manifest themselves within our code. When designing our tests, take time to revi
our test design. We can find some helpful heuristics documented in the Handbook in the
[Test Design](https://about.gitlab.com/handbook/engineering/quality/guidelines/test-engineering/test-design/) section.
## Test speed
GitLab has a massive test suite that, without [parallelization], can take hours
......
...@@ -39,7 +39,6 @@ slowest test files and try to improve them.
## CI setup
- Rails logging to `log/test.log` is disabled by default in CI [for
performance reasons][logging]. To override this setting, provide the
`RAILS_ENABLE_TEST_LOG` environment variable.
......
...@@ -35,8 +35,8 @@ Once a test is in quarantine, there are 3 choices:
Quarantined tests are run on the CI in dedicated jobs that are allowed to fail:
- `rspec-pg-quarantine` (CE & EE)
- `rspec-pg-quarantine-ee` (EE only)
## Automatic retries and flaky tests detection
......
# Verifying Database Capabilities
Sometimes certain bits of code may only work on a certain database
version. While we try to avoid such code as much as possible sometimes it is
necessary to add database (version) specific behaviour.
To facilitate this we have the following methods that you can use:
- `Gitlab::Database.postgresql?`: returns `true` if PostgreSQL is being used.
  You can normally just assume this is the case.
- `Gitlab::Database.version`: returns the PostgreSQL version number as a string
  in the format `X.Y.Z`.
This allows you to write code such as:
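For instance, a minimal sketch of such a check; `run_fast_query` and `run_fallback_query` are hypothetical placeholders:
```ruby
# Hypothetical illustration of branching on the database version.
if Gitlab::Database.postgresql? && Gitlab::Database.version.to_f >= 9.6
  run_fast_query      # placeholder: code path that relies on a newer feature
else
  run_fallback_query  # placeholder: fallback for older PostgreSQL versions
end
```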
......
...@@ -7,9 +7,8 @@ downtime.
## Adding Columns
You can safely add a new column to an existing table as long as it does **not**
have a default value. For example, this query would not require downtime:
```sql
ALTER TABLE projects ADD COLUMN random_value int;
...@@ -27,11 +26,6 @@ This requires updating every single row in the `projects` table so that
indexes in a table. This in turn acquires enough locks on the table for it to
effectively block any other queries.
Adding a column with a default value _can_ be done without requiring downtime
when using the migration helper method
`Gitlab::Database::MigrationHelpers#add_column_with_default`. This method works
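A hedged migration sketch using this helper follows; the table, column, and default are illustrative, and the Rails version tag may differ:
```ruby
# Illustrative only: adds a column with a default without long-held locks.
class AddRandomValueToProjects < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  disable_ddl_transaction!

  def up
    add_column_with_default(:projects, :random_value, :integer, default: 0)
  end

  def down
    remove_column(:projects, :random_value)
  end
end
```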
...@@ -311,8 +305,7 @@ migrations](background_migrations.md#cleaning-up).
## Adding Indexes
Adding indexes is an expensive process that blocks INSERT and UPDATE queries for
the duration. You can work around this by using the `CONCURRENTLY` option:
```sql
CREATE INDEX CONCURRENTLY index_name ON projects (column_name);
...@@ -336,17 +329,9 @@ end
Note that `add_concurrent_index` can not be reversed automatically, thus you
need to manually define `up` and `down`.
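For example, a migration using `add_concurrent_index` with explicit `up` and `down` might look like the following sketch (table and column names are illustrative):
```ruby
# Illustrative only: a concurrent index needs disable_ddl_transaction!
# and cannot be reversed automatically, so define up and down yourself.
class AddIndexToProjectsColumnName < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  disable_ddl_transaction!

  def up
    add_concurrent_index :projects, :column_name
  end

  def down
    remove_concurrent_index :projects, :column_name
  end
end
```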
## Dropping Indexes
Dropping an index does not require downtime.
## Adding Tables
...@@ -370,7 +355,7 @@ transaction this means this approach would require downtime.
GitLab allows you to work around this by using
`Gitlab::Database::MigrationHelpers#add_concurrent_foreign_key`. This method
ensures that no downtime is needed.
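A minimal sketch of a migration using this helper (tables and column are illustrative):
```ruby
# Illustrative only: adds a foreign key without blocking writes for the
# whole duration of the operation.
class AddFkProjectsToNamespaces < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  disable_ddl_transaction!

  def up
    add_concurrent_foreign_key :projects, :namespaces, column: :namespace_id
  end

  def down
    remove_foreign_key :projects, column: :namespace_id
  end
end
```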
## Removing Foreign Keys
......
...@@ -9,7 +9,7 @@ type: reference
This setting allows you to rate limit the requests to raw endpoints, defaults to `300` requests per minute.
It can be modified in **Admin Area > Network > Performance Optimization**.
For example, requests over `300` per minute to `https://gitlab.com/gitlab-org/gitlab-ce/raw/master/app/controllers/application_controller.rb` will be blocked. Access to the raw file will be released after 1 minute.
![Rate limits on raw endpoints](img/rate_limits_on_raw_endpoints.png)
...@@ -18,3 +18,5 @@ This limit is:
- Applied independently per project, per commit and per file path.
- Not applied per IP address.
- Active by default. To disable, set the option to `0`.
Requests over the rate limit are logged into `auth.log`.
# Analytics workspace
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/issues/12077) in GitLab 12.2 (enabled using `analytics` feature flag).
The Analytics workspace will make it possible to aggregate analytics across
GitLab, so that users can view information across multiple projects and groups
in one place.
To access the centralized analytics workspace:
1. Ensure it is enabled. A GitLab administrator must enable it with the `analytics` feature flag (see the console sketch after these steps).
1. Once enabled, click on **Analytics** from the top navigation bar.
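For self-managed instances, a hedged sketch of toggling the flag from a Rails console (assuming console access; the exact console invocation depends on your installation type):
```ruby
# From a Rails console session on the GitLab instance, for example:
#   sudo gitlab-rails console   (Omnibus installations)
Feature.enable(:analytics)

# To turn the workspace off again:
Feature.disable(:analytics)
```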
## Available analytics
From the centralized analytics workspace, the following analytics are available:
- [Cycle Analytics](cycle_analytics.md).
NOTE: **Note:**
Project-level Cycle Analytics are still available at a project's **Project > Cycle Analytics**.
...@@ -112,57 +112,6 @@ Below are the shared Runners settings.
The full contents of our `config.toml` are:
**Google Cloud Platform**
```toml
...@@ -178,20 +127,25 @@ sentry_dsn = "X"
token = "SHARED_RUNNER_TOKEN"
executor = "docker+machine"
environment = [
"DOCKER_DRIVER=overlay2",
"DOCKER_TLS_CERTDIR="
]
limit = X
[runners.docker]
image = "ruby:2.5"
privileged = true
volumes = [
"/certs/client",
"/dummy-sys-class-dmi-id:/sys/class/dmi/id:ro" # Make kaniko builds work on GCP.
]
[runners.machine]
IdleCount = 50
IdleTime = 3600
OffPeakPeriods = ["* * * * * sat,sun *"]
OffPeakTimezone = "UTC"
OffPeakIdleCount = 15
OffPeakIdleTime = 3600
MaxBuilds = 1 # For security reasons we delete the VM after job has finished so it's not reused.
MachineName = "srm-%s"
MachineDriver = "google"
MachineOptions = [
...@@ -202,17 +156,18 @@ sentry_dsn = "X"
"google-tags=gitlab-com,srm",
"google-use-internal-ip",
"google-zone=us-east1-d",
"engine-opt=mtu=1460", # Set MTU for container interface, for more information check https://gitlab.com/gitlab-org/gitlab-runner/issues/3214#note_82892928
"google-machine-image=PROJECT/global/images/IMAGE", "google-machine-image=PROJECT/global/images/IMAGE",
"engine-registry-mirror=http://INTERNAL_IP_OF_OUR_REGISTRY_MIRROR" "engine-opt=ipv6", # This will create IPv6 interfaces in the containers.
"engine-opt=fixed-cidr-v6=fc00::/7",
"google-operation-backoff-initial-interval=2" # Custom flag from forked docker-machine, for more information check https://github.com/docker/machine/pull/4600
]
[runners.cache]
Type = "gcs"
Shared = true
[runners.cache.gcs]
CredentialsFile = "/path/to/file"
BucketName = "bucket-name"
```
## Sidekiq
......
...@@ -59,15 +59,14 @@ Once [Single sign-on](index.md) has been configured, we can:
### Azure
The SAML application that was created during [Single sign-on](index.md) setup now needs to be set up for SCIM.
1. Check the configuration for your GitLab SAML app and ensure that **Name identifier value** (NameID) points to `user.objectid` or another unique identifier. This will match the `extern_uid` used on GitLab.
   ![Name identifier value mapping](img/scim_name_identifier_mapping.png)
1. Set up automatic provisioning and administrative credentials by following the
   [Provisioning users and groups to applications that support SCIM](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/use-scim-to-provision-users-and-groups#provisioning-users-and-groups-to-applications-that-support-scim) section in Azure's SCIM setup documentation.
During this configuration, note the following:
...@@ -97,6 +96,7 @@ You can then test the connection by clicking on **Test Connection**. If the conn
NOTE: **Note:** If you used a unique identifier **other than** `objectId`, be sure to map it instead to both `id` and `externalId`.
1. Below the mapping list click on **Show advanced options > Edit attribute list for AppName**.
1. Leave the `id` as the primary and only required field.
NOTE: **Note:**
...@@ -129,8 +129,7 @@ When testing the connection, you may encounter an error: **You appear to have en
When checking the Audit Logs for the Provisioning, you can sometimes see the
error `Namespace can't be blank, Name can't be blank, and User can't be blank.`
This is likely because not all required fields (such as first name and last name) are present for all users being mapped.
As a workaround, try an alternate mapping:
......