1. 25 Oct, 2018 1 commit
    • Add experimental support for Puma · 1065f8ce
      Andrew Newdigate authored
      This allows us (and others) to test-drive Puma without it affecting
      all users. Puma can be enabled by setting the environment variable
      "EXPERIMENTAL_PUMA" to a non-empty value.
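      A minimal sketch of how such an opt-in toggle can be read; the
      helper name is hypothetical, the commit only specifies the
      variable name and the "non-empty" semantics:

```ruby
# Returns true when EXPERIMENTAL_PUMA is set to any non-empty value.
# The helper name `experimental_puma_enabled?` is invented for this
# illustration; only the variable name comes from the commit message.
def experimental_puma_enabled?
  value = ENV['EXPERIMENTAL_PUMA']
  !value.nil? && !value.empty?
end
```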
  2. 18 Oct, 2018 1 commit
  3. 10 Oct, 2018 1 commit
    • Remove Git circuit breaker · 30b4ce94
      Zeger-Jan van de Weg authored
      The circuit breaker was introduced when GitLab still relied on NFS,
      which is no longer required in most cases. With this removal, the
      API it backed will return empty responses. The interface has to be
      removed in the next major release, expected to be 12.0.
  4. 21 Sep, 2018 1 commit
  5. 17 Sep, 2018 1 commit
  6. 15 Aug, 2018 1 commit
  7. 11 Jul, 2018 1 commit
  8. 25 Jun, 2018 1 commit
  9. 14 May, 2018 4 commits
  10. 17 Apr, 2018 1 commit
  11. 10 Apr, 2018 1 commit
    • [Rails5] Fix running spinach tests · 1a455f3d
      blackst0ne authored
      1. Add support for `RAILS5=1|true` for the `bin/spinach` command.
      2. Synchronize the spinach versions used for both rails4 and rails5.
      For rails5, spinach 0.10.1 was accidentally used instead of 0.8.10,
      which caused problems when running spinach tests.
      Example of failure message:
      NoMethodError: undefined method `line' for #<Spinach::Scenario:0x000000000c86ba80>
      Did you mean?  lines
        /builds/gitlab-org/gitlab-ce/features/support/env.rb:52:in `before_scenario_run'
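      The `RAILS5=1|true` support described in point 1 could be read
      like this; the helper name is hypothetical, only the accepted
      values come from the commit message:

```ruby
# Accepts RAILS5=1 or RAILS5=true (case-insensitive), per the
# `RAILS5=1|true` support above. `rails5_enabled?` is an invented
# name for illustration.
def rails5_enabled?
  %w[1 true].include?(ENV['RAILS5'].to_s.strip.downcase)
end
```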
  12. 03 Apr, 2018 1 commit
  13. 21 Mar, 2018 1 commit
  14. 20 Feb, 2018 1 commit
  15. 26 Jan, 2018 2 commits
  16. 19 Jan, 2018 1 commit
  17. 08 Dec, 2017 1 commit
    • Move the circuitbreaker check out into a separate process · f1ae1e39
      Bob Van Landuyt authored
      Moving the check out of the general requests makes sure it does not
      slow down regular requests.
      To keep the process performing these checks small, the check is
      still performed inside a Unicorn process, but that process is called
      from a separate process running on the same server.
      Because the checks are now done outside of normal requests, we can
      use a simpler failure strategy:
      The check is now performed in the background every
      `circuitbreaker_check_interval`. Failures are logged in Redis and
      are reset when a check succeeds. Per check we will try
      `circuitbreaker_access_retries` times within
      `circuitbreaker_storage_timeout` seconds.
      When the number of failures exceeds
      `circuitbreaker_failure_count_threshold`, we will block access to
      the storage.
      After `failure_reset_time` of no checks, we will clear the stored
      failures. This can happen when the process that performs the checks
      is not running.
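      The failure strategy above can be sketched as follows. This is an
      illustrative sketch only: the class and method names are invented,
      and a plain in-memory Hash stands in for the Redis storage used by
      the real implementation:

```ruby
# Hypothetical sketch of the background circuit-breaker check.
# A Hash stands in for Redis; all names are invented for illustration.
class StorageCircuitBreaker
  def initialize(failure_count_threshold:, access_retries:)
    @failure_count_threshold = failure_count_threshold
    @access_retries = access_retries
    @failures = Hash.new(0) # storage name => recorded failure count
  end

  # Called every `circuitbreaker_check_interval` by a separate process.
  # The block performs one access attempt and returns true on success.
  def check(storage)
    success = @access_retries.times.any? { yield }
    if success
      @failures.delete(storage) # failures are reset when a check succeeds
    else
      @failures[storage] += 1   # one failure recorded per failed check
    end
  end

  # Access is blocked once failures exceed the configured threshold.
  def blocked?(storage)
    @failures[storage] > @failure_count_threshold
  end
end
```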
  18. 13 Oct, 2017 1 commit
  19. 09 Aug, 2017 1 commit
  20. 22 Jul, 2017 1 commit
    • Let's start labeling our CHANGELOG entries · eb2b895a
      Jacopo authored
      This adds a type attribute to CHANGELOG entries. When you create a
      new entry, the tool asks for the category of the change and sets
      the associated type in the file.
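      A sketch of what an entry carrying the new type attribute might
      look like; the exact field names and file layout are assumptions,
      only the existence of a type field comes from the commit message:

```ruby
require 'yaml'

# Hypothetical CHANGELOG entry with the new `type` attribute.
# Field names other than `type` are assumptions for illustration.
entry = {
  'title'         => "Let's start labeling our CHANGELOG entries",
  'merge_request' => nil,
  'author'        => 'Jacopo',
  'type'          => 'added'
}

puts entry.to_yaml
```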
  21. 23 Jun, 2017 1 commit
  22. 21 Mar, 2017 1 commit
  23. 06 Feb, 2017 2 commits
  24. 10 Jan, 2017 1 commit
  25. 09 Dec, 2016 1 commit
  26. 02 Dec, 2016 1 commit
  27. 29 Nov, 2016 1 commit
  28. 03 Nov, 2016 1 commit
  29. 02 Nov, 2016 1 commit
  30. 31 Oct, 2016 2 commits
  31. 21 Oct, 2016 1 commit
    • Re-organize queues to use for Sidekiq · 97731760
      Yorick Peterse authored
      Dumping too many jobs into the same queue (e.g. the "default"
      queue) is a dangerous setup. Jobs that take a long time to process
      can effectively block all other work if enough of them are queued.
      Furthermore, it becomes harder to monitor the jobs, as a single
      queue can contain jobs for different workers. In such a setup the
      only reliable way of getting counts per job is to iterate over all
      jobs in a queue, which is a rather time-consuming process.
      By using separate queues for various workers we have better control over
      throughput, we can add weight to queues, and we can monitor queues
      better. Some workers still use the same queue whenever their work is
      related. For example, the various CI pipeline workers use the same
      "pipeline" queue.
      This commit includes a Rails migration that moves Sidekiq jobs from the
      old queues to the new ones. This migration also takes care of doing the
      inverse if ever needed. This does require downtime as otherwise new jobs
      could be scheduled in the old queues after this migration completes.
      This commit also includes an RSpec test that blacklists the use of the
      "default" queue and ensures cron workers use the "cronjob" queue.
      Fixes gitlab-org/gitlab-ce#23370
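      The routing described above (related workers sharing a queue,
      everything else falling back to "default") can be sketched as a
      simple mapping; the worker names here are hypothetical examples,
      not the actual migration's list. In Sidekiq itself, a worker is
      assigned to a named queue via `sidekiq_options queue: :name`.

```ruby
# Illustrative per-worker queue routing (worker names invented):
# related workers share a queue, e.g. CI pipeline workers all use
# "pipeline", cron workers use "cronjob".
QUEUE_FOR_WORKER = {
  'PipelineProcessWorker' => :pipeline,
  'PipelineSuccessWorker' => :pipeline,
  'StuckCiBuildsWorker'   => :cronjob
}.freeze

def queue_for(worker_name)
  # Anything not explicitly routed still lands in "default".
  QUEUE_FOR_WORKER.fetch(worker_name, :default)
end
```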
  32. 12 Aug, 2016 2 commits
  33. 17 Jun, 2016 1 commit