Mutual Exclusion with Redis and Ruby
Redis is usually used for caching in web applications. About 2 or 3 years ago, we encountered a concurrency issue that forced us to protect a document without relying on database locks. After some research, we found that Redis can also be used to implement a distributed mutex.

If the concept is new to you, think of a mutex (short for mutual exclusion) as the key to a room that only one person can use at a time. If another person needs to enter the room, they must wait for the person inside to come out and release the key.
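
The same idea exists in-process: Ruby's built-in Mutex hands the "key" to one thread at a time. As a minimal sketch (plain Ruby threads, no Redis involved), a counter updated by five threads ends at the expected value when each read-modify-write is wrapped in synchronize:

```ruby
counter = 0
lock = Mutex.new   # Mutex ships with Ruby's core; no require needed

threads = 5.times.map do
  Thread.new do
    lock.synchronize do
      # Only the thread currently holding the "key" runs this block.
      current = counter
      sleep(rand / 100.0)      # simulate work while holding the lock
      counter = current + 10
    end
  end
end

threads.each(&:join)
counter # => 50
```

A plain Mutex only works inside a single process, though; coordinating several processes or machines is exactly the gap the Redis-based lock in this article fills.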

The Problem
Suppose we have a simple Resource class with two fields: label and value. We create a resource called "Shared Resource" with an initial value of zero. This resource will later be processed by multiple threads at the same time.

        

          class Resource
            include Mongoid::Document

            field :label, type: String
            field :value, type: Integer, default: 0
          end

          # create a shared resource
          resource = Resource.new(label: "Shared Resource", value: 0)
          resource.save
        
      

We then create a class that processes the resource created earlier. The some_operation! method assigns the resource's value to a temporary variable, adds 10 to it, and saves the new value back to the same resource.

        

          class ResourceProcessor
            def initialize(resource:)
              @resource = resource
            end

            def some_operation!
              sleep(rand())
                  
              @resource.reload
              puts "A"

              temp_storage = @resource.value
              sleep(rand())

              puts "B"

              @resource.value = temp_storage + 10
              @resource.save

              puts "C"
              puts @resource.value
            end
          end
        
      

We add a process! method which spawns 5 threads, each running some_operation! concurrently.

        

          class ResourceProcessor
            ...

            def process!
              threads = []

              5.times do
                threads << Thread.new do
                  some_operation!
                end
              end

              threads.each do |thread|
                thread.join
              end

              @resource.reload
              puts "FINAL VALUE: #{@resource.value}"
            end

            ...
          end
        
      

Finally, we execute the following lines of code to see the update process in action. We pass the resource to the ResourceProcessor and run the process! method.

        

          processor = ResourceProcessor.new(resource: resource)
          processor.process!

          # OUTPUT:
          # A
          # A
          # B
          # C
          # 10
          # A
          # B
          # C
          # 20
          # B
          # C
          # 10
          # A
          # A
          # B
          # C
          # 20
          # B
          # C
          # 20
          # FINAL VALUE: 20
        
      

What happened? A race condition occurred: @resource.value was changed by one thread while another thread was still working with the value it had read earlier. The final value is 20, not the expected 50. This is a real hazard whenever you update a field with a non-atomic read-modify-write instead of an atomic operation such as a counter increment.
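
The lost update is easier to see when the unlucky interleaving is replayed by hand. The sequential sketch below performs the same reads and writes two threads would perform, in the order that loses an update:

```ruby
value = 0               # the shared resource

read_a = value          # thread A reads 0
read_b = value          # thread B also reads 0, before A has written
value  = read_a + 10    # thread A writes 0 + 10
value  = read_b + 10    # thread B overwrites with 0 + 10 -- A's update is lost
value # => 10, not 20
```

Each thread did exactly what it was told; the problem is purely in the ordering of the reads and writes.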

The Solution
To solve this problem, we need to lock the resource with a mutex. A mutex (mutual exclusion) ensures that no two concurrent processes operate on the same resource at the same time.

We'll be using the redis-semaphore gem to wrap our method in a mutex lock.
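
If the gem is not yet installed, it can be added to the Gemfile under its RubyGems name:

```ruby
# Gemfile
gem "redis-semaphore"
```

followed by `bundle install`, with `require "redis-semaphore"` in the code that creates the lock.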

        

          class ResourceProcessor
            ...

            def lock
              mutex = Redis::Semaphore.new(@resource.id, 
                                           connection: ENV['REDIS_HOST'], 
                                           port: ENV['REDIS_PORT'])

              mutex.lock do
                puts "LOCK: START"
                yield
                puts "LOCK: END"
              end
            end

            ...

            def process!
              threads = []

              5.times do
                threads << Thread.new do
                  lock do
                    some_operation!
                  end
                end
              end

              threads.each do |thread|
                thread.join
              end

              @resource.reload
              puts "FINAL VALUE: #{@resource.value}"
            end

            ...
          end
        
      

We simply use the ID of the resource as the lock key. This means two processors can work on two different resources at the same time, since their keys differ.
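
As an in-process analogue of keying the lock by resource ID, the hypothetical registry below (lock_for is an illustration, not part of redis-semaphore) hands out one Mutex per ID, so operations on different resources never block each other:

```ruby
# One lock per resource ID, handed out by a thread-safe registry.
LOCK_REGISTRY = {}
REGISTRY_GUARD = Mutex.new

def lock_for(resource_id)
  # The registry itself must be guarded so two threads can't
  # create two different locks for the same ID.
  REGISTRY_GUARD.synchronize { LOCK_REGISTRY[resource_id] ||= Mutex.new }
end

lock_for("res-1").equal?(lock_for("res-1")) # => true:  same resource, same lock
lock_for("res-1").equal?(lock_for("res-2")) # => false: different resources never block each other
```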

After applying the lock, the other threads must wait until the lock is released. This way, no race condition can occur, since the resource is protected for the duration of each update. After the last thread has finished executing, the final value of the Resource is 50, as expected.

        

          processor = ResourceProcessor.new(resource: resource)
          processor.process!

          # OUTPUT:
          # LOCK: START
          # A
          # B
          # C
          # 10
          # LOCK: END
          # LOCK: START
          # A
          # B
          # C
          # 20
          # LOCK: END
          # LOCK: START
          # A
          # B
          # C
          # 30
          # LOCK: END
          # LOCK: START
          # A
          # B
          # C
          # 40
          # LOCK: END
          # LOCK: START
          # A
          # B
          # C
          # 50
          # LOCK: END
          # FINAL VALUE: 50
        
      

Take note that locks make your code slower, so it is important to use them properly. For example, it is usually unnecessary to wrap a big block of code in a lock if the update itself is atomic. Remember that the goal of a lock is to protect your data, not your code. You may also be tempted to lock an entire table; most of the time, you only need to lock the entry or row. Finally, keep in mind that locking is just one of several ways to solve concurrency issues.
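
For readers curious what such a lock looks like underneath, Redis-based locks are typically built on an atomic "set if not exists" operation (SET key value NX in Redis). The toy in-memory store below stands in for the real server purely to illustrate the idea; a production lock would also need an expiry so a crashed holder cannot block others forever:

```ruby
# A toy "Redis": a thread-safe hash standing in for the real server,
# used only to illustrate the set-if-not-exists locking primitive.
class ToyStore
  def initialize
    @data = {}
    @guard = Mutex.new
  end

  # Atomic "set if not exists", like Redis's SET key value NX.
  # Returns true if the key was set (lock acquired), false otherwise.
  def setnx(key, value)
    @guard.synchronize do
      return false if @data.key?(key)
      @data[key] = value
      true
    end
  end

  def del(key)
    @guard.synchronize { @data.delete(key) }
  end
end

store = ToyStore.new
store.setnx("lock:resource-1", "owner-a") # => true  (lock acquired)
store.setnx("lock:resource-1", "owner-b") # => false (already held, must wait)
store.del("lock:resource-1")              # release the lock
store.setnx("lock:resource-1", "owner-b") # => true  (now available again)
```

Because the check and the write happen as one atomic step, two clients can never both believe they acquired the lock, which is the property the room-and-key analogy describes.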