1. 16 Dec, 2018 1 commit
  2. 13 Dec, 2018 3 commits
  3. 12 Dec, 2018 4 commits
  4. 11 Dec, 2018 3 commits
    • Yorick Peterse's avatar
      Refactor Project#create_or_update_import_data · 26378511
      Yorick Peterse authored
      In https://gitlab.com/gitlab-org/release/framework/issues/28 we found
      that this method was changed a lot over the years: 43 times if our
      calculations were correct. Looking at the method, it had quite a few
      branches going on:
    def create_or_update_import_data(data: nil, credentials: nil)
      return if data.nil? && credentials.nil?

      project_import_data = import_data || build_import_data

      if data
        project_import_data.data ||= {}
        project_import_data.data = project_import_data.data.merge(data)
      end

      if credentials
        project_import_data.credentials ||= {}
        project_import_data.credentials =
          project_import_data.credentials.merge(credentials)
      end
    end
      If we turn the || and ||= operators into regular if statements, we can
      see a bit more clearly that this method has quite a lot of branches:
    def create_or_update_import_data(data: nil, credentials: nil)
      if data.nil? && credentials.nil?
        return
      end

      project_import_data =
        if import_data
          import_data
        else
          build_import_data
        end

      if data
        if project_import_data.data
          # nothing
        else
          project_import_data.data = {}
        end
        project_import_data.data =
          project_import_data.data.merge(data)
      end

      if credentials
        if project_import_data.credentials
          # nothing
        else
          project_import_data.credentials = {}
        end
        project_import_data.credentials =
          project_import_data.credentials.merge(credentials)
      end
    end
      The number of if statements and branches here makes it easy to make
      mistakes. To resolve this, we refactor this code in such a way that we
      can get rid of all but the first `if data.nil? && credentials.nil?`
      statement. We can do this by simply sending `to_h` to `nil` in the right
      places, which removes the need for statements such as `if data`.
      Since this data gets written to a database, in ProjectImportData we do
      make sure not to write empty Hash values. This requires an `unless`
      (which is really an `if !`), but the resulting code is still very easy to read.
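      As a sketch of the shape the refactor takes: `nil.to_h` returns `{}`, so the
      caller no longer needs any `if data` / `if credentials` branches. The helper
      names below are illustrative simplifications, not the exact production code:

      ```ruby
      # Simplified sketch of the refactor: merge helpers live on the import
      # data object and refuse to persist an empty Hash.
      class ImportData
        attr_accessor :data, :credentials

        # Merge +hash+ into data, but never write an empty Hash.
        def merge_data(hash)
          self.data = data.to_h.merge(hash) unless hash.empty?
        end

        def merge_credentials(hash)
          self.credentials = credentials.to_h.merge(hash) unless hash.empty?
        end
      end

      def create_or_update_import_data(import_data, data: nil, credentials: nil)
        return if data.nil? && credentials.nil?

        # to_h turns nil into {}, removing the remaining nil-check branches.
        import_data.merge_data(data.to_h)
        import_data.merge_credentials(credentials.to_h)
        import_data
      end
      ```

      The only branch left is the early return; everything else is straight-line code.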
    • Gilbert Roulot's avatar
      Generalise test compare service · e6226e8c
      Gilbert Roulot authored
      It adds a base class for CompareTestReportsService containing code
      shared with CompareLicenseManagementReportsService, which is present
      in GitLab Enterprise Edition.
    • Stan Hu's avatar
      Revert "Merge branch '28682-can-merge-branch-before-build-is-started' into 'master'" · 1bd7f7cb
      Stan Hu authored
      This reverts commit 793be43b, reversing
      changes made to 8d0b4872.
      For projects not using any CI, enabling "merge only when pipeline
      succeeds" caused merge requests to be stuck in an unmergeable state,
      which caused significant confusion.
      See https://gitlab.com/gitlab-org/gitlab-ce/issues/55144 for more details.
  5. 09 Dec, 2018 1 commit
  6. 07 Dec, 2018 7 commits
    • Zeger-Jan van de Weg's avatar
      Allow public forks to be deduplicated · 896c0bdb
      Zeger-Jan van de Weg authored
      When a project is forked, the new repository used to be a deep copy of
      everything stored on disk, leveraging `git clone`. This works well and makes
      isolation between repositories easy. However, at the start the clone is 100%
      identical to the origin repository, and for the objects in the object
      directory this almost always means a lot of duplication.
      Object Pools are a way to create a third repository that essentially only exists
      for its 'objects' subdirectory. This third repository's object directory is
      set as an alternate object location, which means that when an object is
      missing in the local repository, Git will look in another location: the
      object pool repository.
      When Git performs garbage collection, it's smart enough to check the
      alternate location. When objects are duplicated, this allows Git to
      throw one copy away: the copy in the local repository, while the pool
      remains as is.
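      The alternates lookup can be sketched in plain Ruby. This is a simplified
      model, not Gitaly's code; real Git shards objects into `objects/ab/…`
      subdirectories, which is omitted here:

      ```ruby
      require 'fileutils'

      # Simplified model of Git's alternates mechanism: if an object is
      # missing locally, consult objects/info/alternates for further object
      # directories to search (e.g. the pool repository's).
      def find_object(repo_dir, oid)
        local = File.join(repo_dir, 'objects', oid)
        return local if File.exist?(local)

        alternates = File.join(repo_dir, 'objects', 'info', 'alternates')
        return nil unless File.exist?(alternates)

        # Each line names another object directory to fall back to.
        File.readlines(alternates, chomp: true).each do |alt_dir|
          candidate = File.join(alt_dir, oid)
          return candidate if File.exist?(candidate)
        end
        nil
      end
      ```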
      These pools have an origin location, which for now will always be a
      repository that itself is not a fork. When the root of a fork network is
      forked by a user, the fork still clones the full repository.
      Asynchronously, the pool repository is created.
      Either one of these processes can finish before the other. To handle
      this race condition, the Join ObjectPool operation is idempotent:
      given that it's idempotent, we can schedule it twice with the same
      effect.
      To accommodate the holding of state, two migrations have been added.
      1. Added a state column to the pool_repositories table. This column is
      managed by the state machine, allowing for hooks on transitions.
      2. pool_repositories now has a source_project_id. This column is
      convenient to have for multiple reasons: it has a unique index,
      allowing the database to handle race conditions when creating a new
      record, and it's nice to know who the host is, as that's a short link
      to the fork network's root.
      Object pools are only available for public projects which use hashed
      storage, and only when forking from the root of the fork network (that
      is, the project being forked from is not itself a fork).
      In this commit message I use both ObjectPool and PoolRepository, which
      are alike but different from each other. ObjectPool refers to whatever
      is stored on disk and managed by Gitaly; PoolRepository is
      the record in the database.
    • Steve Azzopardi's avatar
      Add endpoint to download single artifact by ref · 401f65c4
      Steve Azzopardi authored
      Add a new endpoint, which is close to the web URL for consistency's
      sake. This endpoint can be used to download a single file from the
      artifacts of the specified ref and job.
      closes https://gitlab.com/gitlab-org/gitlab-ce/issues/54626
    • Tiago Botelho's avatar
    • Tomasz Maczukin's avatar
    • Kamil Trzciński's avatar
      Encrypt CI/CD builds tokens · a910c09b
      Kamil Trzciński authored
      Brings back 1e8f1de0 reverted in !23644
      Closes #52342
      See merge request gitlab-org/gitlab-ce!23436
    • Stan Hu's avatar
    • Robert Speicher's avatar
      Revert "Merge branch 'fix/gb/encrypt-ci-build-token' into 'master'" · 950b9130
      Robert Speicher authored
      This reverts commit 1e8f1de0, reversing
      changes made to 62d97112.
  7. 06 Dec, 2018 6 commits
    • Jan Provaznik's avatar
      Use FastDestroy for deleting uploads · 239fdc78
      Jan Provaznik authored
      It gathers a list of file paths to delete before destroying
      the parent object. Then, after the parent object is destroyed,
      these paths are scheduled for deletion asynchronously.
      Carrierwave needed an associated model for deleting an upload file.
      To avoid this requirement, a simple Fog/File layer is used directly
      for file deletion, which allows us to use just a simple list of paths.
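      The gather-then-delete pattern can be sketched roughly like this (the
      class and method names are illustrative, not the actual FastDestroy API):

      ```ruby
      # Gather-then-delete: snapshot the plain file paths while the records
      # still exist, destroy the parent, then hand only the path strings to
      # an asynchronous deletion job. No model objects are needed at delete
      # time, which is what removes the Carrierwave requirement.
      class ParentWithUploads
        def initialize(upload_paths)
          @upload_paths = upload_paths
          @destroyed = false
        end

        def destroy_with_fast_destroy(deletion_queue)
          paths = @upload_paths.dup    # 1. gather paths first
          @upload_paths.clear          # 2. destroy parent + child records
          @destroyed = true
          deletion_queue.concat(paths) # 3. schedule async deletion of paths
        end

        def destroyed?
          @destroyed
        end
      end
      ```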
    • Kamil Trzciński's avatar
      Log and pass correlation-id between Unicorn, Sidekiq and Gitaly · 39c1731a
      Kamil Trzciński authored
      The Correlation ID is taken from the received X-Request-ID, or
      generated if it's absent. It is then passed to all executed services
      (Sidekiq workers and Gitaly calls).
      The Correlation ID is logged in all structured logs as `correlation_id`.
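      A rough sketch of the idea, with illustrative names rather than GitLab's
      actual middleware:

      ```ruby
      require 'securerandom'

      # Take the Correlation ID from the incoming X-Request-ID header, or
      # generate a fresh one if it's missing.
      def correlation_id(headers)
        id = headers['X-Request-ID']
        id.nil? || id.empty? ? SecureRandom.uuid : id
      end

      # A structured log entry tagged with the correlation ID, so that
      # Unicorn, Sidekiq and Gitaly lines for one request can be joined up.
      def structured_log(message, cid)
        { message: message, correlation_id: cid }
      end
      ```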
    • Dylan Griffith's avatar
      Introduce Knative Serverless Tab · 2c80a1c0
      Dylan Griffith authored
    • James Lopez's avatar
      Resolve "Can add an existing group member into a group project with new... · 64c11f10
      James Lopez authored
      Resolve "Can add an existing group member into a group project with new permissions but permissions are not overridden"
    • Shinya Maeda's avatar
      Expose merge request pipeline variables · fab30c11
      Shinya Maeda authored
      Introduce the following variables
    • Stan Hu's avatar
      Remove unnecessary includes of ShellAdapter · e96fd232
      Stan Hu authored
      Determined by running the script:
      # Files that include ShellAdapter but never call gitlab_shell:
      included = `git grep --name-only ShellAdapter`.chomp.split("\n")
      used = `git grep --name-only gitlab_shell`.chomp.split("\n")
      included - used
  8. 05 Dec, 2018 6 commits
  9. 04 Dec, 2018 9 commits
    • Thong Kuah's avatar
      Eager load clusters to prevent N+1 · 6c642c08
      Thong Kuah authored
      This also means we need to apply the `current_scope`, otherwise this
      method will return all clusters associated with the groups, regardless
      of any scopes applied to this method.
    • Thong Kuah's avatar
      Unify into :group_clusters feature flag · ebf87fd9
      Thong Kuah authored
      With this MR, group clusters is now functional, so default to enabled.
      Have a single setting on the root ancestor group to enable or disable
      the group clusters feature as a whole.
    • Thong Kuah's avatar
      Various improvements to hierarchy sorting · f85440e6
      Thong Kuah authored
      - Rename ordered_group_clusters_for_project ->
      - Improve name of order option. It makes much more sense to have `hierarchy_order: :asc`
      and `hierarchy_order: :desc`
      - Allow ancestor_clusters_for_clusterable for group
      - Re-use code already present in Project
    • Thong Kuah's avatar
      Create k8s namespace for project in group clusters · d54791e0
      Thong Kuah authored
      AFAIK the only relevant place is Projects::CreateService, which gets
      called when a user creates a new project, forks a project, or does
      those things via the API.
      Also create k8s namespace for new group hierarchy
      when transferring project between groups
      Uses new Refresh service to create k8s namespaces
      - Ensure we use Cluster#cluster_project
      If a project has multiple clusters (EE), using Project#cluster_project
      is not guaranteed to return the cluster_project for this cluster, so
      switch to using Cluster#cluster_project; at this stage a cluster can
      only have one cluster_project.
      Also, remove the rescue so that Sidekiq can retry.
    • Thong Kuah's avatar
      Teach Cluster about #all_projects · 8419b7dd
      Thong Kuah authored
      For project level, it's the project directly associated. For group
      level, it's the projects under that group.
    • Thong Kuah's avatar
      Teach Project about #all_clusters · 9c5977c8
      Thong Kuah authored
      This returns a union of the project level clusters and group level
      clusters associated with this project.
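      As an illustrative sketch (plain arrays rather than the actual
      Active Record scopes), the union could look like:

      ```ruby
      # Union of a project's own clusters with the clusters of all its
      # ancestor groups (closest ancestor first). Array#| also removes any
      # duplicates while preserving order.
      def all_clusters(project_clusters, ancestor_groups_clusters)
        project_clusters | ancestor_groups_clusters.flatten
      end
      ```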
    • Thong Kuah's avatar
      Add association project -> kubernetes_namespaces · 703233e1
      Thong Kuah authored
      kubernetes_namespaces is not needed for project import/export as it
      tracks internal state of kubernetes integration
    • Thong Kuah's avatar
      Assert all_projects work for child groups · e9eccee9
      Thong Kuah authored
    • Thong Kuah's avatar
      Deploy to clusters for a project's groups · 5bb2814a
      Thong Kuah authored
      Look for matching clusters starting from the closest ancestor, then go
      up the ancestor tree.
      Then use Ruby to get clusters for each group in order. Not that
      efficient, considering we will be doing up to `NUMBER_OF_ANCESTORS_ALLOWED`
      queries, but it's a finite number.
      Explicitly order query by depth
      This allows us to control ordering explicitly and also to reverse the
      order which is useful to allow us to be consistent with
      Clusters::Cluster.on_environment (EE) which does reverse ordering.
      Puts querying of group clusters behind a feature flag, just in case we
      have performance issues, so we can easily disable it.
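      The closest-ancestor-first lookup with a reversible order can be sketched
      as follows (assumed data shapes, not the real depth-ordered query):

      ```ruby
      # Walk the ancestor chain from the closest group upwards, collecting
      # each group's clusters in hierarchy order; :desc reverses the walk,
      # matching the reverse ordering used by the EE code mentioned above.
      def ancestor_clusters(hierarchy, clusters_by_group, hierarchy_order: :asc)
        ordered = hierarchy_order == :desc ? hierarchy.reverse : hierarchy
        ordered.flat_map { |group| clusters_by_group.fetch(group, []) }
      end
      ```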