
I always love digging into fundamental technologies and surfacing "golden" surprises: things you think you know well, but actually don't.
Today I want to talk about Grand Central Dispatch (GCD) and the way iOS manages threads.
This article was written with credit to John Sundell, Moses Kim, …
DispatchQueue
Synchronous and Asynchronous
Synchronous and asynchronous largely describe themselves: a synchronous call blocks the caller until its work item has finished, while an asynchronous call returns immediately and the work item runs at some later point. So how many threads can run at a time? Roughly speaking, as long as there is available memory, GCD can keep creating worker threads and running work items on them.
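A minimal sketch of the difference (the queue label is made up for illustration):

```swift
import Dispatch

// A hypothetical serial queue used to contrast sync and async dispatch.
let queue = DispatchQueue(label: "com.example.demo")

// sync blocks the caller until the closure has finished executing…
queue.sync {
    print("runs before the next line")
}

// …while async returns immediately and the closure runs later.
queue.async {
    print("runs at some later point")
}
```
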
System-Provided Queues
The system automatically creates several queues when an app launches: a special queue called the main queue, and a number of global concurrent dispatch queues.
Serial and Concurrent
Serial (consecutive): work items are executed one at a time.
The system pulls the closure at the front of the queue and runs it until it finishes, then pulls the next element, and so on.
Concurrent (multithreaded): work items are dequeued in order, but can run all at once and finish in any order.
The system pulls the closure at the front of the queue and starts executing it on some thread. If it has access to more resources, it picks the next element from the queue and launches it on a different thread while the first is still running. This way the system can run a number of work items at once.
Both serial and concurrent queues process work items in first-in, first-out (FIFO) order.
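The two behaviors can be sketched like this (queue labels are made up for illustration); on the serial queue the items always finish in order 1, 2, 3, while on the concurrent queue they may finish in any order:

```swift
import Dispatch

let serial = DispatchQueue(label: "com.example.serial")
let concurrent = DispatchQueue(label: "com.example.concurrent",
                               attributes: .concurrent)

// Serial: items are dequeued AND finished one at a time, in order.
for i in 1...3 {
    serial.async { print("serial item \(i)") }
}

// Concurrent: items are dequeued in FIFO order, but may run
// simultaneously and finish in any order.
for i in 1...3 {
    concurrent.async { print("concurrent item \(i)") }
}
```
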
Main queue, global queues and queues you created
The main queue: a system-provided serial queue that runs on the main thread.
The global queues: system-provided concurrent queues that can run on any thread. Their worker threads are handed back to the system as soon as all jobs are done.
You can set the priority of a global queue: high, default, low, and background (on modern systems these map onto quality-of-service classes). Within the same priority, FIFO is the rule.
Queues you create yourself: run on any thread, and can be either serial or concurrent.
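A quick sketch of obtaining each kind of queue (the label is a made-up example):

```swift
import Dispatch

// System-provided serial queue bound to the main thread.
let main = DispatchQueue.main

// System-provided concurrent queue with a quality-of-service class.
let global = DispatchQueue.global(qos: .background)

// A queue you create yourself; omit the attribute for a serial queue.
let custom = DispatchQueue(label: "com.example.work",
                           attributes: .concurrent)

// A common pattern: heavy work off the main thread, then hop back
// to the main queue for UI updates.
global.async {
    // …do expensive work here…
    DispatchQueue.main.async {
        // …then update the UI here
    }
}
```
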
Managing Units of Work
Delaying a cancellable task with DispatchWorkItem
One common misconception about GCD is that "once you schedule a task it cannot be cancelled; you need the Operation API for that". While that used to be true, iOS 8 and macOS 10.10 introduced DispatchWorkItem, which provides this exact functionality in a very easy-to-use API.
Let’s say our UI has a search bar, and when the user types a character we perform a search by calling our backend. Since the user can type quite rapidly, we don’t want to start our network request right away (that could waste a lot of data and server capacity), and instead we’re going to “debounce” those events and only perform a request once the user hasn’t typed for 0.25 seconds.
This is where DispatchWorkItem comes in. By encapsulating our request code in a work item, we can very easily cancel it whenever it’s replaced by a new one, like this:
class SearchViewController: UIViewController, UISearchBarDelegate {
    // We keep track of the pending work item as a property
    private var pendingRequestWorkItem: DispatchWorkItem?

    func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
        // Cancel the currently pending item
        pendingRequestWorkItem?.cancel()

        // Wrap our request in a work item
        let requestWorkItem = DispatchWorkItem { [weak self] in
            self?.resultsLoader.loadResults(forQuery: searchText)
        }

        // Save the new work item and execute it after 250 ms
        pendingRequestWorkItem = requestWorkItem
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(250),
                                      execute: requestWorkItem)
    }
}
As we can see above, using DispatchWorkItem is actually a lot simpler and nicer in Swift than having to use a Timer or Operation, thanks to trailing closure syntax and how well GCD imports into Swift. We don’t need @objc marked methods or #selector – it can all be done with closures.
DispatchGroup
Grouping and chaining tasks
Sometimes we need to perform a group of operations before we can move on with our logic. For example, let’s say we need to load data from a group of data sources before we can create a model. Rather than having to keep track of all the data sources ourselves, we can easily synchronize the work with a DispatchGroup.
Using dispatch groups also gives us a big advantage in that our tasks can run concurrently, in separate queues. That enables us to start off simple, and then easily add concurrency later if needed, without having to rewrite any of our tasks. All we have to do is make balanced calls to enter() and leave() on a dispatch group to have it synchronize our tasks.
Let’s take a look at an example, in which we load notes from local storage, iCloud Drive and a backend system, and then combine all of the results into a NoteCollection:
// First, we create a group to synchronize our tasks
let group = DispatchGroup()

// NoteCollection is a thread-safe collection class for storing notes
let collection = NoteCollection()

// The 'enter' method increments the group's task count…
group.enter()
localDataSource.load { notes in
    collection.add(notes)
    // …while the 'leave' method decrements it
    group.leave()
}

group.enter()
iCloudDataSource.load { notes in
    collection.add(notes)
    group.leave()
}

group.enter()
backendDataSource.load { notes in
    collection.add(notes)
    group.leave()
}

// This closure will be called when the group's task count reaches 0
group.notify(queue: .main) { [weak self] in
    self?.render(collection)
}
The above code works, but it has a lot of duplication in it. Let’s instead refactor it into an extension on Array, using a DataSource protocol as a same-type constraint for its Element type:
extension Array where Element == DataSource {
    func load(completionHandler: @escaping (NoteCollection) -> Void) {
        let group = DispatchGroup()
        let collection = NoteCollection()

        // De-duplicate the synchronization code by using a loop
        for dataSource in self {
            group.enter()
            dataSource.load { notes in
                collection.add(notes)
                group.leave()
            }
        }

        group.notify(queue: .main) {
            completionHandler(collection)
        }
    }
}
With the above extension, we can now reduce our previous code to this:
let dataSources: [DataSource] = [localDataSource, iCloudDataSource, backendDataSource]

dataSources.load { [weak self] collection in
    self?.render(collection)
}
DispatchSemaphore
Waiting for asynchronous tasks
While DispatchGroup provides a nice and easy way to synchronize a group of asynchronous operations while still remaining asynchronous, DispatchSemaphore provides a way to synchronously wait for a group of asynchronous tasks. This is very useful in command line tools or scripts, where we don’t have an application run loop, and instead just execute synchronously in a global context until done.
Like DispatchGroup, the semaphore API is very simple in that we only increment or decrement an internal counter, by either calling wait() or signal(). Calling wait() before a signal() will block the current queue until a signal is received.
Let’s create another overload in our extension on Array from before, that returns a NoteCollection synchronously, or else throws an error. We’ll reuse our DispatchGroup-based code from before, but simply coordinate that task using a semaphore.
extension Array where Element == DataSource {
    func load() throws -> NoteCollection {
        let semaphore = DispatchSemaphore(value: 0)
        var loadedCollection: NoteCollection?

        // We create a new queue to do our work on, since calling wait() on
        // the semaphore will cause it to block the current queue
        let loadingQueue = DispatchQueue.global()

        loadingQueue.async {
            // We extend 'load' to perform its work on a specific queue
            self.load(onQueue: loadingQueue) { collection in
                loadedCollection = collection

                // Once we're done, we signal the semaphore to unblock its queue
                semaphore.signal()
            }
        }

        // Wait with a timeout of 5 seconds
        _ = semaphore.wait(timeout: .now() + 5)

        guard let collection = loadedCollection else {
            throw NoteLoadingError.timedOut
        }

        return collection
    }
}
Using the above new method on Array, we can now load notes synchronously in a script or command line tool like this:
let dataSources: [DataSource] = [localDataSource, iCloudDataSource, backendDataSource]

do {
    let collection = try dataSources.load()
    output(collection)
} catch {
    output(error)
}
DispatchSource
Observing changes in a file
The final “lesser known” feature of GCD that I want to bring up is how it provides a way to observe changes in a file on the file system. Like DispatchSemaphore, this is something which can be super useful in a script or command line tool, if we want to automatically react to a file being edited by the user. This enables us to easily build developer tools that have “live editing” features.
Dispatch sources come in a few different variants, depending on what we want to observe. In this case we’ll use DispatchSourceFileSystemObject, which lets us observe events from the file system.
Let’s take a look at an example implementation of a simple FileObserver that lets us attach a closure to be run every time a given file is changed. It works by creating a dispatch source from a file descriptor, performs the observation on a DispatchQueue, and uses the File type from the Files library to refer to the file to observe:
class FileObserver {
    private let file: File
    private let queue: DispatchQueue
    private var source: DispatchSourceFileSystemObject?

    init(file: File) {
        self.file = file
        self.queue = DispatchQueue(label: "com.myapp.fileObserving")
    }

    func start(closure: @escaping () -> Void) {
        // We can only convert an NSString into a file system representation
        let path = (file.path as NSString)
        let fileSystemRepresentation = path.fileSystemRepresentation

        // Obtain a descriptor from the file system
        let fileDescriptor = open(fileSystemRepresentation, O_EVTONLY)

        // Create our dispatch source
        let source = DispatchSource.makeFileSystemObjectSource(
            fileDescriptor: fileDescriptor,
            eventMask: .write,
            queue: queue
        )

        // Assign the closure to it, and resume it to start observing
        source.setEventHandler(handler: closure)
        source.resume()

        self.source = source
    }
}
We can now use FileObserver like this:
let observer = FileObserver(file: file)

observer.start {
    print("File was changed")
}
