Thread safety in iOS: avoiding race conditions, read/write problems, and deadlocks in Swift

Varun Tomar
5 min read · Mar 23, 2021


Multithreading is one of the most important concepts in any software or hardware system. But it adds complexity and introduces common issues like race conditions and deadlocks. Sometimes the race condition is called the read-and-write problem.

Both these problems are related to managing access to shared resources.

Let's first understand these problems and how they arise.

Race Condition : A race condition is an undesirable situation that occurs when a program attempts to perform two or more operations at the same time on a shared resource, producing unpredictable results.

To simulate this, let's say we have a shared array that is being accessed from two independent threads.
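A rough sketch of such a setup (the queue label and array contents here are assumptions; the original gist may differ):

```swift
import Foundation

// A shared array mutated from two tasks on a concurrent queue.
var arr = [1, 2, 3, 4, 5]
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)

concurrentQueue.async {
    arr.removeLast()   // races with removeAll() below
}
concurrentQueue.async {
    arr.removeAll()    // may win the race and empty the array first
}
```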

As shown in the gist above, we create an integer array "arr" that is modified by two different tasks. Because we run these modification tasks on a concurrent queue, two separate threads can execute the operations simultaneously.

What do you think the output should be 🧐? The first operation removes the last element from the array, and the second removes all elements, so we should end up with an empty array. Don't tell, but it's not 🤫🤫. Below is the output we get:

Our application crashes (typically with a fatal error like "Can't remove last element from an empty collection"), but why 🧐🧐?

Because of a race condition 🏃‍♂️🏃‍♂️ between the two threads modifying the same shared array. We enqueued the removeLast() call first and removeAll() second, but the second thread won the race and removed all the elements. So when the first thread tried to remove the last element, there was nothing left to remove, and the app crashed 🤷‍♂️. This is not fair, right? Of course it's not fair to get undesirable results.

This is a small example of a race condition; there can be far more complex scenarios where a shared resource is accessed by many threads, and we never know how our program will react under different conditions. 😮 OK, we get it — hold on and tell us how to fix this 🤨🤨?

Surely we will fix it 😀😀, but have some patience and let's understand the second problem first, i.e. deadlock.

Deadlock : A deadlock is a condition in which a program cannot access a resource it needs to continue because threads are waiting for each other to release shared resources, leaving them blocked forever.
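A minimal sketch of this situation (the queue label is an assumption): a serial queue with two nested sync calls.

```swift
import Foundation

let serialQueue = DispatchQueue(label: "com.example.serial")

serialQueue.sync {
    print("First sync block started")
    // The queue is still busy running the outer block, and the outer
    // block cannot finish until this inner sync returns: deadlock ☠️.
    serialQueue.sync {
        print("Second sync block")  // never reached
    }
}
```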

Here we created a serial queue and dispatched two nested sync operations. When the first sync block is submitted, the caller thread is blocked until it finishes; but after the first print statement, the second sync block is submitted to the same queue, blocking the same caller thread again, and a deadlock happens ☠️ 🔐.

Now let's come to the solutions for avoiding race conditions and deadlocks 🤩🤩 :

1. Locks (NSLock) : First we need to identify the critical section in our code, i.e. where our shared resource is accessed or modified. We will work on the same example we used for the race condition, so we can take a lock where we modify our shared array and unlock it when we're done. Below is the modified code:
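A sketch of the locked version, assuming the same array and queue as before. Note the isEmpty guard added here: the lock serialises access, but the order of the two tasks is still arbitrary, so the removal must be guarded.

```swift
import Foundation

var arr = [1, 2, 3, 4, 5]
let lock = NSLock()
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)
let group = DispatchGroup()

concurrentQueue.async(group: group) {
    lock.lock()
    if !arr.isEmpty { arr.removeLast() }  // the other task may have run first
    lock.unlock()
}
concurrentQueue.async(group: group) {
    lock.lock()
    arr.removeAll()
    lock.unlock()
}
group.wait()
print(arr)  // [] either way, with no crash
```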

Please walk through the gist above: we create an NSLock and take it whenever we set or modify our shared array. That means only a single thread can access the array at a time; once unlock() is called, another waiting thread gets its chance to modify the array. This is one way to avoid race conditions 😀😀. But be careful, as NSLock can sometimes cause a deadlock. Let's say we have two threads running concurrently:

  1. When ThreadA is executing, it acquires lockA to access functionA.
  2. ThreadB is executing concurrently and acquires another lock, lockB, to synchronise access to functionB.
  3. Before releasing lockA, ThreadA tries to access functionB, so it has to wait for lockB to be released.
  4. Before releasing lockB, ThreadB tries to access functionA, so it has to wait for lockA to be released.
  5. Now ThreadA holds lockA and waits for ThreadB to release lockB, but ThreadB will not release lockB until lockA is released first. Both threads end up waiting for each other forever. A deadlock.

Please refer to the code below:
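A minimal sketch of the steps above, assuming each "function" is simply a lock, a print, and an attempt to take the other lock while still holding the first:

```swift
import Foundation

let lockA = NSLock()
let lockB = NSLock()

// Thread A: takes lockA, then (still holding it) wants lockB.
Thread {
    lockA.lock()
    print("Me in functionA called from thread A")
    lockB.lock()   // blocks forever once thread B holds lockB
    print("Me in functionB called from thread A")
    lockB.unlock()
    lockA.unlock()
}.start()

// Thread B: takes lockB, then (still holding it) wants lockA.
Thread {
    lockB.lock()
    print("Me in functionB called from thread B")
    lockA.lock()   // blocks forever once thread A holds lockA
    print("Me in functionA called from thread B")
    lockA.unlock()
    lockB.unlock()
}.start()
```

Run as written, each thread prints its first line and then hangs waiting for the other's lock.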

Expected output from the above code:

Me in functionA called from thread A

Me in functionB called from thread A

Me in functionA called from thread B

Me in functionB called from thread B

Actual output:

Me in functionA called from thread A

Me in functionB called from thread B

NOTE : This discrepancy is due to a deadlock: ThreadA has locked functionA and is waiting for functionB, which has already been locked by ThreadB, while ThreadB is waiting for functionA (locked by ThreadA). So we should be careful while using NSLock.

2. Dispatch Semaphore : DispatchSemaphore is a counting semaphore. Calling signal() increments its count, and calling wait() decrements it. We can set the number of concurrent operations we want to allow into a particular section of code.

Now let's see how DispatchSemaphore can help us avoid the race condition. We take the same example.

So while modifying our shared array we add a semaphore wait() call, and when the modification is done we call signal(). That increases the semaphore's count again so another thread can enter.
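The approach can be sketched as follows (names are illustrative); value: 1 means only one thread at a time may be between wait() and signal():

```swift
import Foundation

var arr = [1, 2, 3, 4, 5]
let semaphore = DispatchSemaphore(value: 1)   // one thread at a time
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)
let group = DispatchGroup()

concurrentQueue.async(group: group) {
    semaphore.wait()                      // decrement: enter critical section
    if !arr.isEmpty { arr.removeLast() }  // the other task may have run first
    semaphore.signal()                    // increment: let the next thread in
}
concurrentQueue.async(group: group) {
    semaphore.wait()
    arr.removeAll()
    semaphore.signal()
}
group.wait()
print(arr)  // [] either way
```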

If we want two threads to be able to access the critical section at a time, we set the concurrency value to 2: let semaphore = DispatchSemaphore(value: 2)

Aaaha! It's simple and clean, right 😀😀?

3. Serial Queue : We can use a serial queue to synchronise access to a shared resource, but in that case every piece of code that modifies the resource must go through that same serial queue. Please refer to the code below: we set the shared array via serialQueue, so only one thread accesses it at a time. We can also go through serialQueue when reading the shared resource.
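A sketch with assumed names, funnelling both writes and the read through one serial queue:

```swift
import Foundation

var arr = [1, 2, 3, 4, 5]
let serialQueue = DispatchQueue(label: "com.example.serial")

// Writes run one at a time, in submission order.
serialQueue.async {
    arr.removeLast()   // runs first, so the array is still non-empty
}
serialQueue.async {
    arr.removeAll()
}

// Reads also go through the queue; sync returns the value safely.
let snapshot = serialQueue.sync { arr }
print(snapshot)  // []
```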

4. Dispatch Barrier with Concurrent Queue : What if we don't want a serial queue forcing everything to execute one by one? Then we have to use a concurrent queue — but with concurrent execution, how do we make our write operation thread safe 🧐🧐? Here Dispatch Barrier comes to the rescue 🪂🪂. When dispatching a code block to a concurrent queue, you can flag it as a barrier task, meaning that when it is time to execute this task, it must be the only item executing on that queue. Below is the code to refer to:
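A sketch of the barrier approach (names assumed): writes carry the .barrier flag, while reads stay plain and can overlap with each other.

```swift
import Foundation

var arr = [1, 2, 3, 4, 5]
let concurrentQueue = DispatchQueue(label: "com.example.concurrent",
                                    attributes: .concurrent)

// Each barrier write waits for in-flight work, then runs as the only
// item on the queue, in submission order.
concurrentQueue.async(flags: .barrier) {
    arr.removeLast()
}
concurrentQueue.async(flags: .barrier) {
    arr.removeAll()
}

// A plain sync read: it waits for the barriers submitted before it.
let snapshot = concurrentQueue.sync { arr }
print(snapshot)  // []
```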

Conclusion : The idea behind every approach is the same: identify the critical code section and make it accessible by only one thread at a time.

Hope you enjoyed this article. If you did, please share it, give some claps 👏👏, and stay tuned 😄😄 for the next articles. Thanks for reading 🙏🙏. Any feedback in the comment section would be highly appreciated.

You can follow me for fresh articles.

Let's connect on LinkedIn
