HTTP in Swift, Part 9: Resetting
Part 9 in a series on building a Swift HTTP framework:
- HTTP in Swift, Part 1: An Intro to HTTP
- HTTP in Swift, Part 2: Basic Structures
- HTTP in Swift, Part 3: Request Bodies
- HTTP in Swift, Part 4: Loading Requests
- HTTP in Swift, Part 5: Testing and Mocking
- HTTP in Swift, Part 6: Chaining Loaders
- HTTP in Swift, Part 7: Dynamically Modifying Requests
- HTTP in Swift, Part 8: Request Options
- HTTP in Swift, Part 9: Resetting
- HTTP in Swift, Part 10: Cancellation
- HTTP in Swift, Part 11: Throttling
- HTTP in Swift, Part 12: Retrying
- HTTP in Swift, Part 13: Basic Authentication
- HTTP in Swift, Part 14: OAuth Setup
- HTTP in Swift, Part 15: OAuth
- HTTP in Swift, Part 16: Composite Loaders
- HTTP in Swift, Part 17: Brain Dump
- HTTP in Swift, Part 18: Wrapping Up
There are a couple remaining changes we need to make to our loading interface, and one of them is to allow resetting.
Resetting is the idea of taking the current state of the loading chain and wiping it clean. You can think of it as analogous to “logging out”. You might be wondering why we need this at all. If we’re going to be “starting over”, then can’t we simply throw away the loading chain and create a new one?
This is a great question to ask. In many cases, throwing away the loading chain and creating a new one would be sufficient. However, there are a couple key cases where it’s not:
- a loader that maintains persisted state (i.e., it keeps state on disk) needs a chance to throw away that state when requested. Saved data (usernames, passwords, authentication tokens) or cached data all count as “persisted”.
- any in-flight HTTPRequest will continue executing even as its chain is thrown away, which may be undesirable behavior.
Resetting
A first pass at allowing resets might look like this:
open class HTTPLoader {
...
open func reset(completionHandler: @escaping () -> Void) {
if let next = nextLoader {
next.reset(completionHandler: completionHandler)
} else {
completionHandler()
}
}
}
This works fine for the naïve case, but quickly falls apart as loaders get more complicated. The problem becomes clear when we ask ourselves: which loader in a chain is responsible for executing the completion handler?
It’s not necessarily the terminal loader (typically the URLSession-based loader), because there might be another loader in the chain that hasn’t finished its reset work yet. It’s also likely not the first loader in the chain, because it can’t (easily) know whether all the other loaders below it have finished either.
Having a single completion handler like this is complicated. We could imagine a situation where a loader (A) asks its downstream loader (B) to reset using a new completion handler. A would then only execute the completion handler it was given once it has finished its own reset logic (if it has any) and B has also indicated its own completed resetting by invoking the completion handler it was given. We could definitely make this work, but there’s an easier way: a DispatchGroup.
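Concretely, that manual bookkeeping might look something like the sketch below (performMyOwnCleanup is a made-up async helper, and this still ignores thread-safety). It’s exactly the tedium a DispatchGroup spares us from:
override func reset(completionHandler: @escaping () -> Void) {
    var myCleanupDone = false
    var downstreamDone = (nextLoader == nil) // nothing below us to wait for
    
    let finishIfReady = {
        if myCleanupDone && downstreamDone {
            completionHandler()
        }
    }
    
    performMyOwnCleanup { // hypothetical async cleanup work for this loader
        myCleanupDone = true
        finishIfReady()
    }
    
    nextLoader?.reset {
        downstreamDone = true
        finishIfReady()
    }
}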
A DispatchGroup is a way to “group” related work together, without knowing ahead of time how much work there actually is. All you know is that things can “join” (or enter()) the group as they start their work, and leave() the group when they’re done. The group’s creator assigns a closure to execute once everything has left the group. This is exactly the behavior we want to model: we want to allow an unknown number of loaders to each perform an unknown number of tasks as part of a single “group” of resetting. And when it’s all done, we want to be notified.
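As a standalone illustration of that pattern (not part of the loader chain; the names here are made up), this is how enter(), leave(), and notify(queue:execute:) fit together:
import Foundation

let group = DispatchGroup()

for name in ["profile", "settings", "avatar"] {
    group.enter() // a piece of work joins the group
    DispatchQueue.global().async {
        // ... do some asynchronous work for `name` ...
        print("finished \(name)")
        group.leave() // and leaves the group when it's done
    }
}

// runs only after every enter() has been balanced by a leave()
group.notify(queue: .main) {
    print("everything is done")
}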
So instead of using a completion handler, we’ll define the API in terms of a DispatchGroup:
open class HTTPLoader {
    ...
    
    open func reset(with group: DispatchGroup) {
        nextLoader?.reset(with: group)
    }
}
And because our users deserve nice things, we’ll provide a convenience method to handle group creation for us:
extension HTTPLoader {
    ...
    
    public final func reset(on queue: DispatchQueue = .main, completionHandler: @escaping () -> Void) {
        let group = DispatchGroup()
        self.reset(with: group)
        group.notify(queue: queue, execute: completionHandler)
    }
}
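Calling it then looks like any other completion-based API. For example (a hypothetical call site, where chain is the first loader in your chain and presentLoginScreen() is a stand-in for whatever your app does next):
// wipe the chain when the user logs out
chain.reset {
    // every loader in the chain has finished its cleanup
    presentLoginScreen()
}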
Adopting this logic in a custom loader is straightforward:
class MyCustomLoader: HTTPLoader {
    ...
    
    override func reset(with group: DispatchGroup) {
        group.enter() // this loader has work to include in this group
        
        DispatchQueue.global(qos: .userInitiated).async {
            // do whatever cleanup this loader needs
            group.leave() // we are done with the work
        }
        
        // make sure loaders beneath us can reset as well
        super.reset(with: group)
    }
}
Even though we are calling super before we have finished our work, the top-level DispatchGroup will not complete until we leave the group. DispatchGroup has a hard-and-fast rule: every call to enter() must be balanced by a matching call to leave(). If we forget a call to leave(), our group will never notify that it has finished. If we leave() too many times, the Dispatch framework will crash our app. We must be diligent about properly balancing our enter() and leave() calls.
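One small habit that helps with that balance (my suggestion, not something the loader code above requires) is to pair each enter() with a deferred leave() inside the asynchronous work, so every exit path leaves the group exactly once:
override func reset(with group: DispatchGroup) {
    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        defer { group.leave() } // runs no matter how we exit the closure
        
        // do whatever cleanup this loader needs,
        // including any early returns on failure
    }
    super.reset(with: group)
}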
Food for thought: This requirement of balancing enter() and leave() is identical in concept to how manual memory management used to work with -retain and -release calls. Too many -retain calls = a memory leak. Too many -release calls = a crash.
The ResetGuard Loader
There’s a very useful loader we can make that works in conjunction with this behavior; I call it the “Reset Guard” loader. The basic idea of this loader is that it stops people from resetting a loader chain while another reset call is already happening. This could happen because of a client error, such as allowing a user to tap a “Log Out” button multiple times, but there are also situations where the way we compose loaders makes a reset guard useful.
The general idea is simple:
- If the loader is not resetting, then allow requests to load.
- If the loader is not resetting, then allow a reset to begin.
- If the loader is resetting, then fail attempts to load a request.
- If the loader is resetting, then attempts to start another reset should do nothing.
With these simple requirements, we can start stubbing out an implementation (ignoring issues of thread-safety):
public class ResetGuard: HTTPLoader {
    private var isResetting = false
    
    public override func load(request: HTTPRequest, completion: @escaping (HTTPResult) -> Void) {
        // TODO: make this thread-safe
        if isResetting == false {
            super.load(request: request, completion: completion)
        } else {
            let error = HTTPError(code: .resetInProgress, request: request)
            completion(.failure(error))
        }
    }
    
    ...
}
The implementation of the reset(with:) method is a little bit trickier, because we need to know when the loaders below us have finished resetting. Our reset(...) methods can tell us this, but the trick here is to realize we’re going to need a second DispatchGroup:
public class ResetGuard: HTTPLoader {
    ...
    
    public override func reset(with group: DispatchGroup) {
        // TODO: make this thread-safe
        if isResetting == true { return }
        guard let next = nextLoader else { return }
        
        group.enter()
        isResetting = true
        next.reset {
            self.isResetting = false
            group.leave()
        }
    }
}
To make this class thread-safe, we’d need to add some sort of barrier when reading and writing the isResetting property. Either a DispatchQueue or an NSLock seems like a reasonable choice.
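Here’s a minimal sketch of one option, using an NSLock. The check-and-flip of isResetting is done as a single atomic operation so two threads can’t both “win” the right to start a reset; everything else matches the ResetGuard above. This is just one reasonable locking strategy, not the only one:
import Foundation

public class ResetGuard: HTTPLoader {
    private let lock = NSLock()
    private var isResetting = false
    
    // Atomically flips isResetting from `expected` to `newValue`.
    // Returns false if the current value didn't match `expected`.
    private func transition(from expected: Bool, to newValue: Bool) -> Bool {
        lock.lock()
        defer { lock.unlock() }
        guard isResetting == expected else { return false }
        isResetting = newValue
        return true
    }
    
    public override func load(request: HTTPRequest, completion: @escaping (HTTPResult) -> Void) {
        lock.lock()
        let resetting = isResetting
        lock.unlock()
        
        if resetting {
            let error = HTTPError(code: .resetInProgress, request: request)
            completion(.failure(error))
        } else {
            super.load(request: request, completion: completion)
        }
    }
    
    public override func reset(with group: DispatchGroup) {
        // if another reset is already in progress, do nothing
        guard transition(from: false, to: true) else { return }
        
        guard let next = nextLoader else {
            _ = transition(from: true, to: false)
            return
        }
        
        group.enter()
        next.reset {
            _ = transition(from: true, to: false)
            group.leave()
        }
    }
}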
One thing I want to point out here is the behavior of a second call to reset(...) while one is already in progress. As the implementation currently stands, a second call to reset(with:) does nothing. That means a second request to reset will almost certainly report completion before the first reset actually finishes.
Food for thought: Instead of ignoring subsequent requests to reset, how would you aggregate them, so that a second call to reset gets merged with the first one?
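One possible answer, sketched below (my own illustration, still ignoring thread-safety as above): have every caller join a list of pending groups, and complete them all together when the in-flight reset finishes.
public class AggregatingResetGuard: HTTPLoader {
    private var isResetting = false
    private var pendingGroups = [DispatchGroup]()
    
    public override func reset(with group: DispatchGroup) {
        // every caller joins, whether or not a reset is already underway
        group.enter()
        pendingGroups.append(group)
        
        // if a reset is already in progress, just wait for it to finish
        guard isResetting == false else { return }
        
        guard let next = nextLoader else {
            finishReset()
            return
        }
        
        isResetting = true
        next.reset {
            self.isResetting = false
            self.finishReset()
        }
    }
    
    private func finishReset() {
        // complete everyone who asked to reset while this one was in flight
        pendingGroups.forEach { $0.leave() }
        pendingGroups.removeAll()
    }
}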
This ResetGuard class is a handy loader that helps stop me from making mistakes, and it typically ends up being the first loader in my chain:
let chain = resetGuard --> applyEnvironment --> ... --> urlSessionLoader
In the next post, we’ll make our (hopefully) final modification to the HTTPLoader API to allow for cancelling requests, along with a custom loader that uses it.