git checkout
will do. Sometimes it’s a pre-built `~/.gitconfig` file that has lots of tweaks, display customizations, and options configured. Over the years I’ve built up two that I use extremely frequently: `git ui` and `git identity`.
There are a number of Git clients out there, and in my opinion Fork is the best one. I’ve been using it for several years and LOVE it. At one point I even advocated that it shouldn’t be free because good apps deserve to earn money.
Regardless of which Git app you use, I frequently find myself trolling around the command line doing stuff (cloning, pulling, etc) and then will come up against a task I want to do that, frankly, would be easier in an app. Interactive file staging is one such example. Enter `git ui`.
I created a file called `git-ui.zsh` and tossed it in my `$PATH` (`~/Applications` is where I happen to have it). This file is pretty simple:
#!/bin/zsh
GIT_FOLDER="$(git rev-parse --show-toplevel 2>/dev/null)"

if [ "$1" = "-r" ]; then
    # Recursively open every repository found under the current repo
    find "$GIT_FOLDER" -name .git -execdir open -a "$EDITOR_GIT" . \;
else
    if [ -e "$GIT_FOLDER/.git" ]; then
        echo "Opening $GIT_FOLDER in $(basename "$EDITOR_GIT")"
        open -a "$EDITOR_GIT" "$GIT_FOLDER"
    else
        echo "Could not find git repository in $(pwd) or any parent"
    fi
fi
As part of the setup, this script relies on an environment variable I defined in my `~/.zshrc` called `EDITOR_GIT`:
export EDITOR_GIT="/Applications/Fork.app"
(I have several other `EDITOR_...` values defined, such as `EDITOR_MD` for markdown files, `EDITOR_PLIST` for property lists, etc. Then I’ve built up other commands for dealing with markdown and plist files specifically. This one uses `EDITOR_GIT` for working with git repositories.)
`git-ui.zsh` first uses `git rev-parse` to locate the top-level directory of whatever git repository you happen to be in. For example, if your git repo is cloned to `~/Code/MyRepo`, then it will return `/Users/yourname/Code/MyRepo`, no matter which sub-directory of the repo you might be `cd`’d into. This value gets stashed into the `GIT_FOLDER` variable.
Then, if you passed a `-r` flag to this command (`git ui -r`), it recursively finds all `.git` folders inside your git repository and asks them to open using your `$EDITOR_GIT` app. Otherwise, it looks for a `.git` folder inside your repository root and, if it finds one, asks your `$EDITOR_GIT` to open up that folder. (`open` is a macOS command-line tool, and `open -a` means “open the passed-in stuff using the specified application”.)
By itself this script is cool, but when I also add an alias to my `~/.gitconfig`, it gets better:
[alias]
ui = !sh git-ui.zsh
(i.e. the `ui` subcommand is equivalent to invoking the `git-ui.zsh` script)
The end result is that I can be anywhere inside my git repo in Terminal and, upon realizing I need to do anything non-trivial, can type `git ui` to immediately pop up the current repository in my Git app-of-choice. I can also do `git ui -r` to open up all repositories in the current folder; I use this when I’ve got a folder of several related cloned repositories that I work with together.
The other main customization I’ve created is the `identity` command. Many developers have multiple GitHub accounts: a personal one for their own projects, and a work one associated with their current employer (for example). The problem arises when you want to work with multiple repositories from the same host (github.com) but under different accounts: you want to work with Repository A under your personal account, and Repository B with your work account.
There are weird hacks you can do to your `~/.ssh/config` file and custom hosts to make this work, but it’s a bit arcane, kind of weird to set up, and difficult to remember how it works. Fortunately, there’s a better way.
Git has a way to customize the command it uses to connect via SSH to the remote server, called `core.sshCommand`. If you change this, you can alter the way that `git` pushes and pulls from remotes. And since this is a configuration value, it can be specified on a per-repository basis.
What you want is to end up with an `sshCommand` that looks like this in your repository’s `.git/config` file:
[core]
sshCommand = ssh -i ~/.ssh/id_myworksshkey -F /dev/null -o 'IdentitiesOnly yes'
If you have this, then pushes and pulls to the repository will only use your `id_myworksshkey` identity; it won’t try to fall back to another identity or the keychain. And since this setup is a bit awkward to remember, we can make an alias to simplify it in our `~/.gitconfig` file:
[alias]
identity = "!f() { git config core.sshCommand \"ssh -i $1 -F /dev/null -o 'IdentitiesOnly yes'\"; }; f"
Now I can create a repo and run `git identity ~/.ssh/id_myworksshkey` to configure it with the right identity for pushing and pulling!
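If you want to confirm what the alias wrote, you can read the value back with `git config`. Here’s a quick sketch (it uses a throwaway repo, and the key path is the example one from above; substitute your own):

```shell
# Create a throwaway repo and configure it the same way the alias does.
repo="$(mktemp -d)"
git init -q "$repo"
git -C "$repo" config core.sshCommand \
  "ssh -i ~/.ssh/id_myworksshkey -F /dev/null -o 'IdentitiesOnly yes'"

# Reading the value back confirms the repository-local setting took effect.
git -C "$repo" config core.sshCommand
```

Because the value lands in that repository’s `.git/config`, other clones on the same machine are unaffected.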
(Side note: not all Git apps will correctly use the specified `sshCommand` for a repo, but Fork does!)
What handy `git` customizations have you come up with?
Last night’s puzzle was something new. The problem itself was pretty straight-forward (finding values that are common in multiple collections), but it resulted in a 45-minute debugging session that culminated in finding a bug in Swift’s implementation of `Set.intersection(_:)`.
The nature of last night’s problem was that, when I had a bunch of inputs, I could expect that there was only a single common element between all of them. After getting lucky and solving the problem correctly, I started golfing my code to make it terser. That’s when I started noticing something odd.
My initial version of the code looked something like this:
let firstGroup: String = ...
let secondGroup: String = ...
let thirdGroup: String = ...
let uniqueLettersInFirstGroup = Set(firstGroup)
let uniqueLettersInSecondGroup = Set(secondGroup)
let uniqueLettersInThirdGroup = Set(thirdGroup)
let commonLetters = uniqueLettersInFirstGroup.intersection(uniqueLettersInSecondGroup).intersection(uniqueLettersInThirdGroup)
let commonLetter = commonLetters.first! // safe to unwrap, because if this crashes the input is bad
// ... do processing with the common letter
This worked great, but it’s also a lot of code. So I started combining things:
let firstGroup: String = ...
let secondGroup: String = ...
let thirdGroup: String = ...
let commonLetters = Set(firstGroup).intersection(secondGroup).intersection(thirdGroup)
let commonLetter = commonLetters.first!
Eventually I decided to make an extension, since this sort of algorithm (“find what’s in common between these groups of things”) is a pretty normal thing to encounter in Advent of Code:
extension Collection where Element: Collection, Element.Element: Hashable {
    var commonElements: Set<Element.Element> {
        guard let first = self.first else { return [] }
        // Start with the first group's unique elements,
        // then intersect each remaining group into it.
        var common = Set(first)
        for group in dropFirst() {
            common.formIntersection(group)
        }
        return common
    }
}
let firstGroup: String = ...
let secondGroup: String = ...
let thirdGroup: String = ...
let commonLetters = [firstGroup, secondGroup, thirdGroup].commonElements
let commonLetter = commonLetters.first!
It was about this point that I started noticing something weird: every time I ran my code, I’d get a different answer.
This is not how Advent of Code works. It is very deterministic: for each input, there is a single correct output. And as luck would have it, I’d already found the correct output. What I was getting now was everything except correct.
After putting in a hefty number of log statements, I realized: my set of “common elements” would sometimes have more than one element in it.
Starting with gnmCjzwnmCPTPhBwPjzBgqPjllJJSWlhfhQDSrpJRhDSlfJl
Intersecting rLHNHrLHVNbVHMMctZFHsbcsDSDWpSDSGfSRsRWSRllfGSSG
Intersecting NNtdMVrLNdZNvLvLZrzCndqBgwwPmwgjggBn
ERROR
Common Elements??? - ["j", "r", "B", "m", "z", "n", "q", "w", "g", "C", "P"]
As a general principle that’s fine. `["apple", "pear", "papaya"].commonElements` would have multiple things in it: `["a", "p"]`. But in this case it was not what I wanted, because I knew that I should only be getting a single element in common (due to the nature of this particular Advent of Code puzzle).
This problem of "getting many things back when I was only expecting one" also fully explains why I kept getting a different answer every time the code ran. The `Set` of common elements was returning many things, but I was asking for the `.first` element in the Set. Sets, by their very nature, do not have a specific ordering, so when I ran the code over and over again, the "first" thing would be different each time. That meant that, each time, I'd get a different final result.
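In hindsight, encoding the “there should be exactly one” expectation in code, instead of reaching for a force-unwrapped `first!`, would have surfaced the bug immediately. A small sketch (not from the original solution):

```swift
let commonLetters: Set<Character> = ["r"]

// Sets are unordered, so `.first` is only trustworthy when the set has
// exactly one element. Make that expectation explicit:
guard commonLetters.count == 1, let commonLetter = commonLetters.first else {
    fatalError("Expected exactly one common letter, got \(commonLetters)")
}
print(commonLetter) // "r"
```

With this guard in place, the buggy multi-element result would have crashed loudly on the first run instead of silently producing a different wrong answer each time.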
After staring at my code for a good 10 minutes, I did the sane thing: I asked for help.
Some of the others who were also up doing Advent of Code in this particular forum graciously popped into the thread and started doing what I was doing: digging apart every single line of code, trying to identify assumptions or gaps in logic. The questions started simple: “do you have typos?”. Then they started digging in to my `commonElements` implementation. “What’s this extension? What if you use `dropFirst()`? Are there assumptions that fail because this will be a `SubSequence` instead of an `Array`? Are you using the Swift Algorithms version of `.chunks(ofCount:)` or your own?”. Each question and its corresponding answer confirmed that, by all accounts, the code looked correct.
Then on a whim I asked:
is it possible there’s a bug in `Set`? 😛
Now, the likelihood of this actually being the case is very very very very small. So many people are using `Set`, a fundamental collection type, every single day that the idea of there being a bug in its implementation–especially in something as important as set intersection–seemed laughably absurd.
WTF. Did Set intersection break and we’re just noticing?
We tried different versions of Xcode. We even considered manually downloading Swift:
Try downloading a recent swift toolchain
And yet, the code was still broken. And it continued to be broken when pulled out into a Swift playground, removing all of my fancy extensions:
let a = "gnmCjzwnmCPTPhBwPjzBgqPjllJJSWlhfhQDSrpJRhDSlfJl"
let b = "rLHNHrLHVNbVHMMctZFHsbcsDSDWpSDSGfSRsRWSRllfGSSG"
let c = "NNtdMVrLNdZNvLvLZrzCndqBgwwPmwgjggBn"
var s = Set(a)
s.formIntersection(b)
s.formIntersection(c)
print(s) // ["C", "q", "m", "r", "j", "n", "z", "g", "w", "P", "B"]
At this point, we had to conclude that something really strange was going on. Then one poster hit on an idea:
Yikes, try converting a b and c to sets before intersecting
This change was simple. Instead of `s.formIntersection(a)`, I changed it to `s.formIntersection(Set(a))`. The change here is that instead of using the generic “find things in common with this other sequence of characters” method, I was now using the “find things in common with this other `Set`” method. These have different implementations because, due to the nature of how sets work, the second can be done much more “cheaply” than the first.
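Applied to the earlier playground snippet, the workaround looks like this, and it yields the single expected answer:

```swift
let a = "gnmCjzwnmCPTPhBwPjzBgqPjllJJSWlhfhQDSrpJRhDSlfJl"
let b = "rLHNHrLHVNbVHMMctZFHsbcsDSDWpSDSGfSRsRWSRllfGSSG"
let c = "NNtdMVrLNdZNvLvLZrzCndqBgwwPmwgjggBn"

var s = Set(a)
// Wrapping the arguments in Set(...) routes to the Set-to-Set overload,
// side-stepping the buggy generic-sequence code path.
s.formIntersection(Set(b))
s.formIntersection(Set(c))
print(s) // ["r"]
```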
I think you found a (known) bug
So I tried it, and suddenly my code started working. The one who suggested this then dug up this pull request on the Swift repo: PR #59422: Fix handling of duplicate items in generic Set.intersection. It turns out, there was a bug in `Set.intersection(_:)`, but it had only been discovered this past June, and the fix only applies to macOS Ventura and later (my machine is still running Monterey). The scope of the bug is fairly limited: it only showed up if you were using the general intersection method, and the sequence had “exactly as many duplicate items as items missing from `self`”. As it turned out, Advent of Code happened to provide me with exactly the right input to hit this multiple times.
In the end, my workaround was simple: I could simply make sure I pass in `Set(item)` instead of just `item`. But the adventure of digging this deep into my code, questioning assumption after assumption, and coming up with ways to test those assumptions was quite exhilarating.
And it proves that maybe… just maybe… it really is a bug in the standard library.
Edit: An earlier version of this post claimed the bug was not fixed yet. This is incorrect. The bug is fixed in Swift 5.7, but my computer is running macOS Monterey (12.6) and thus using an earlier version of Swift. I have since confirmed that the code works as expected on macOS Ventura.
But, I do remember my first “real” app. I wrote it during the 2001–2002 school year, when I was taking trigonometry in high school. I had a TI-83+ graphing calculator and a smattering of games to play on it, but one day early on it occurred to me that I could, in theory, make my calculator do my homework for me. My homework usually consisted of endless applications of the Law of Sines and Law of Cosines, and the constant repetition seemed like an ideal thing to make something else do for me.
So, I dragged out the manual that came with my calculator and started creating a program that would ask for three pieces of information about a triangle (some combination of angles and sides) and would give me the rest of the information.
I iterated on this program a LOT. I remember working on my bedroom floor with graph paper, sketching out all of the interaction and (what I later learned was called) control flow. It was a lot to keep in my 15-year-old brain, and the paper helped a lot. The program was enormously helpful, but it had the unfortunate side effect of making me learn the material better than if I had just done the homework myself, because of all the time I spent picking apart the algorithms.
A couple years ago, I rediscovered my calculator in a box of old stuff. On a whim, I bought a data cable off Amazon, put in fresh batteries, and Lo And Behold, there was my program!
This is a really important program to me. The code itself is straight-forward (lots of `goto` statements, asking for input, and then doing math and printing the result), but what it represents is so much more. First, it triggered a deeper interest in math, leading me to go off and derive a new formula related to solving triangles. But more importantly, it was the catalyst to get me really interested in programming: I had a need, and I discovered that I could make something to satisfy that need and make my life (and the lives of my classmates, who also got copies of this program) so much easier. That core epiphany has stayed with me and continues to drive me, two decades later.
So if you’re curious, I’ve put the code online. I also translated it into C to make it more readable, and so you can run it on your computer:
As I was mulling these over, an idea occurred to me: I can improve the process of removing backwards compatibility shims by using conditional compilation to remind me when they’re no longer necessary!
SwiftUI was introduced in iOS 13/macOS 10.15, and we commonly refer to that release as “SwiftUI 1.0”. Over the intervening years, we’ve had SwiftUI 2.0 and SwiftUI 3.0. Each release has added more features, as well as provided additional opportunities for app developers to back-deploy features as they’re building apps and adopting new APIs. In my blog post on backwards compatibility, I introduced the idea of a `Backport` type to serve as a namespace for these sorts of compatibility shims.
But … when those shims are no longer necessary, how do we remember that we should take them out? It’s really easy to forget that they’re there and allow unnecessary cruft to build up in a codebase over time.
Wouldn’t it be cool if we could use the compiler to help us know when the code wasn’t necessary anymore?
We’ve seen in previous posts how we can provide “compilation conditions” to use with `#if` statements in our codebase, like `#if BUILDING_FOR_DEVICE`, `#if BUILDING_FOR_APP_EXTENSION`, and so on. We’re going to come up with a way that will allow us to specify `#if TARGETING_SWIFTUI_1` or `#if TARGETING_SWIFTUI_2` in our code, and use that to leave messages to our future selves.
Every app you build in Xcode has a “deployment target”, which is the minimum operating system version you allow your app to run on. This value is defined by the `MACOSX_DEPLOYMENT_TARGET` build setting (or the `IPHONEOS_DEPLOYMENT_TARGET`, `TVOS_DEPLOYMENT_TARGET`, or `WATCHOS_DEPLOYMENT_TARGET` settings, depending on the platform you’re targeting). The value of this build setting is the operating system version number, like `10.15` or `8.3` or whatever.
We can append this value to other build settings via substitution, but we quickly run into some issues:
_SWIFTUI_VERSION_10.15 = 1
_SWIFTUI_VERSION_11.0 = 2
_SWIFTUI_VERSION_12.0 = 3
_SWIFTUI_VERSION = $(_SWIFTUI_VERSION_$(MACOSX_DEPLOYMENT_TARGET))
If we do this, we get a compilation error! As it turns out, `.` is not a legal character in build setting names. Fortunately, we can transform the build setting value before substituting it.
Transformation operators are appended to the build setting name, after a `:` character. The list of supported operators is below¹.
| Operator | Transformation |
|---|---|
| `identifier` | A C identifier representation suitable for use in source code. |
| `c99extidentifier` | Like `identifier`, but with support for extended characters allowed by C99. |
| `rfc1034identifier` | A representation suitable for use in a DNS name. |
| `quote` | A representation suitable for use as a shell argument. |
| `lower` | A lowercase representation. |
| `upper` | An uppercase representation. |
| `standardizepath` | The equivalent of calling `-stringByStandardizingPath` on the string. |
| `base` | The base name of a path - the last path component with any extension removed. |
| `dir` | The directory portion of a path. |
| `file` | The file portion of a path. |
| `suffix` | The extension of a path, including the `.` divider. |
And, these operators can be chained by concatenating another `:` and operator name. We’ll use these transformations to come up with a better format for our deployment target value.
If we look at the value, such as `10.15`, we’ll see that it kind of looks like a file name: a file named `10` with an extension of `15`. We can leverage some of the file-based operators to extract the major and minor values:
DEPLOYMENT_TARGET_NUMBER_MAJOR = $(MACOSX_DEPLOYMENT_TARGET:base)
DEPLOYMENT_TARGET_NUMBER_MINOR = $(MACOSX_DEPLOYMENT_TARGET:suffix:c99extidentifier)
DEPLOYMENT_TARGET_NUMBER = $(DEPLOYMENT_TARGET_NUMBER_MAJOR)$(DEPLOYMENT_TARGET_NUMBER_MINOR)
If we do this, we end up with `DEPLOYMENT_TARGET_NUMBER` defined as `10_15`. Unfortunately, we can’t use `c99extidentifier` directly, because that strips the leading number of the deployment target. So we resort to this “file” approach (getting the “basename” of the value and its “suffix”, and then using `c99extidentifier` to turn the `.` into a `_`) to get our transformed value.
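The “file name” trick can feel opaque, so here’s a rough shell analogy of what the `:base` and `:suffix` split accomplishes. This is purely an illustration; Xcode performs the real transformation with the operators described above:

```shell
target="10.15"

major="${target%.*}"    # like :base, everything before the last dot, "10"
minor="_${target#*.}"   # like :suffix plus c99extidentifier, "_15"

echo "${major}${minor}" # prints "10_15"
```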
Now that we can transform our deployment target into a safe value, we can build up our settings:
_SWIFTUI_VERSION_10_15 = 1
_SWIFTUI_VERSION_11_0 = 2
_SWIFTUI_VERSION_12_0 = 3
_SWIFTUI_VERSION = $(_SWIFTUI_VERSION_$(DEPLOYMENT_TARGET_NUMBER))
For completeness, we can define these values for every platform:
_PLATFORM =
_PLATFORM[sdk=mac*] = MACOSX
_PLATFORM[sdk=iphone*] = IPHONEOS
_PLATFORM[sdk=appletv*] = TVOS
_PLATFORM[sdk=watch*] = WATCHOS
// Sanitize the numeric deployment target value
_DEPLOYMENT_TARGET = $($(_PLATFORM)_DEPLOYMENT_TARGET)
DEPLOYMENT_TARGET_NUMBER_MAJOR = $(_DEPLOYMENT_TARGET:base)
DEPLOYMENT_TARGET_NUMBER_MINOR = $(_DEPLOYMENT_TARGET:suffix:c99extidentifier)
DEPLOYMENT_TARGET_NUMBER = $(DEPLOYMENT_TARGET_NUMBER_MAJOR)$(DEPLOYMENT_TARGET_NUMBER_MINOR)
// The naming scheme is "_SWIFTUI_VERSION_" + platform name + "_" + os version
_SWIFTUI_VERSION_MACOSX_10_15 = 1
_SWIFTUI_VERSION_MACOSX_11_0 = 2
_SWIFTUI_VERSION_MACOSX_12_0 = 3
_SWIFTUI_VERSION_IPHONEOS_13_0 = 1
_SWIFTUI_VERSION_IPHONEOS_14_0 = 2
_SWIFTUI_VERSION_IPHONEOS_15_0 = 3
_SWIFTUI_VERSION_TVOS_13_0 = 1
_SWIFTUI_VERSION_TVOS_14_0 = 2
_SWIFTUI_VERSION_TVOS_15_0 = 3
_SWIFTUI_VERSION_WATCHOS_6_0 = 1
_SWIFTUI_VERSION_WATCHOS_7_0 = 2
_SWIFTUI_VERSION_WATCHOS_8_0 = 3
// Get the SwiftUI version based on the platform and deployment target
_SWIFTUI_VERSION = $(_SWIFTUI_VERSION_$(_PLATFORM)_$(DEPLOYMENT_TARGET_NUMBER))
// Define the values to be used as compilation conditions, based on the SwiftUI version
_SWIFTUI_1 = TARGETING_SWIFTUI_1 TARGETING_SWIFTUI_2 TARGETING_SWIFTUI_3
_SWIFTUI_2 = TARGETING_SWIFTUI_2 TARGETING_SWIFTUI_3
_SWIFTUI_3 = TARGETING_SWIFTUI_3
SWIFTUI = $(_SWIFTUI_$(_SWIFTUI_VERSION))
SWIFT_ACTIVE_COMPILATION_CONDITIONS = $(inherited) $(SWIFTUI)
Whew, that’s a lot! But, we’ve got something pretty cool now. Let’s put it to use!
With values like `TARGETING_SWIFTUI_1` or `TARGETING_SWIFTUI_3` in the `SWIFT_ACTIVE_COMPILATION_CONDITIONS`, we can use them as part of `#if` conditionals:
extension Backport where Content: View {
#if TARGETING_SWIFTUI_2 || TARGETING_SWIFTUI_1
// we're deploying to macOS < 12
@ViewBuilder func badge(_ count: Int) -> some View {
if #available(macOS 12, *) {
content.badge(count)
} else {
content
}
}
#else
#error("We're only targeting SwiftUI 3+. Backporting `.badge(_:)` is unnecessary and should be removed.")
#endif
}
Now as we adjust our deployment target, the active compilation conditions will change depending on the OS version (and platform) we’re targeting. If we move our deployment target up such that we’re no longer targeting SwiftUI 2 (ie, macOS 11.0, iOS 14, tvOS 14, or watchOS 7), then the compiler will stop building this `badge(_:)` method and instead will produce an error telling us to clean up the unnecessary code.
This screenshot shows what happens when we update our deployment target to macOS 12:
This does mean that the first time we change our deployment target, we’ll get a bunch of compilation errors. But given the nature of how `Backport` is implemented, this should be a relatively quick process to move past. (And of course, you’re welcome to use `#warning` instead of `#error`.)
There are a couple of small drawbacks with this specific approach.
First, every new SwiftUI version will need new values in your configuration file. As new SDKs are released, there’s a small amount of bookkeeping necessary to make sure the various condition values get defined.
Second, if you’re targeting a specific minor OS version (macOS 12.3, for example), then you also have to fill out more values for the SwiftUI versions. You could probably work around this by only keying off the major OS version number, but that’s a decision that depends on your use-case and how far back you need to deploy.
Finally, using `#error` (as demonstrated above) means that the task of updating a deployment target now becomes a little tedious: you have to fix all of these build errors before continuing; adopting changes in a piecemeal fashion becomes more difficult (although this can be mitigated by using `#warning` instead).
When we write code, it’s always nice to leave things for future maintainers to guide them down the correct path and avoid pitfalls. This typically takes the form of comments, but with a bit of clever application we can use the compiler to help as well. This allows us to leave guideposts that can keep our code clean, or leave warnings and reminders. Maybe you want to see if a particular workaround is still necessary in a system framework? Leave a `#if` in your code like this that reminds you to check the next time you update your base SDK (`SDK_VERSION`). Working around a specific bug in Xcode? Leave a reminder for yourself based on the `XCODE_VERSION_MAJOR` (or `XCODE_VERSION_MINOR` or `XCODE_VERSION_ACTUAL`) to check if it’s still necessary. Maybe you want to remind yourself to revisit some code when your `MARKETING_VERSION` goes from `1.x` to `2.0`? Leave yourself a compiler note!
Your future self will thank you.
¹ - This table was taken from Matt Stevens’ blog post here: http://codeworkshop.net/posts/xcode-build-setting-transformations. ↑
A little while ago I was thinking about this problem and came up with a technique that helps clarify some (but not all) aspects of this problem.
When I design APIs, I like to start from the end result and design backwards. I start with the question of “how do I want to use this?” or “what is the least-intrusive way to make this?”. Good APIs have minimal interfaces that allow clients to quickly solve their problems without unnecessarily burdening them with boilerplate or complicated configuration and control flow.
With this in mind, I came up with `Backport`.

The definition of `Backport` is trivial:
public struct Backport<Content> {
public let content: Content
public init(_ content: Content) {
self.content = content
}
}
At first glance, this doesn’t look very useful; it’s a struct that holds a single value, and it doesn’t do anything. This is by design. `Backport` exists to serve as a holding space (namespace) for shims: the conditional code we must write in order to do proper availability checking. Let’s look at a specific example for how we can do this.
In iOS 15, SwiftUI added some modifiers to describe badges on list rows and tab items:
extension View {
public func badge(_ count: Int) -> some View
public func badge(_ label: Text?) -> some View
public func badge(_ key: LocalizedStringKey?) -> some View
public func badge<S>(_ label: S?) -> some View where S : StringProtocol
}
These are really useful, but have the downside that, if you’re supporting iOS 14 or earlier, you get a compiler error if you try to use these directly:
someTabView
.badge(42) // error: 'badge' is only available in iOS 15.0 or newer
This is where `Backport` can help:
extension View {
var backport: Backport<Self> { Backport(self) }
}
extension Backport where Content: View {
@ViewBuilder func badge(_ count: Int) -> some View {
if #available(iOS 15, *) {
content.badge(count)
} else {
content
}
}
}
Here we’re doing a couple of things:

- Every `View` gets a `.backport` property that returns a `Backport` which holds the view
- Because that `Backport` holds a view, it gets this `badge(_ count: Int)` method
- The method checks whether the real `.badge()` is available. If it is, it passes the parameter on to the system implementation

This means that we can now do:
someTabView
.backport.badge(42) // no error, and will have a badge if the app is running on iOS 15 or later
This idea of using `Backport` as a namespace can apply to types as well. iOS 15 also added a new kind of view, called `AsyncImage`. This is a view that can handle loading an image from an asynchronous source, like a URL. In many respects, it is similar to the popular SDWebImage (and similar) packages.
However, like with `.badge(…)`, if you try to use it while also supporting iOS 14, you’ll get a compiler error:
AsyncImage(url: someImageURL) // error: 'AsyncImage' is only available in iOS 15.0 or newer
Again, let’s start with what we want the final result to look like and use that to work towards an implementation:
Backport.AsyncImage(url: someImageURL)
There are two ways we could implement this; I’ll show both. The first way is what you might expect, with a nested type:
// a same-type requirement of "Content == Any" is needed to help
// the compiler figure out what to do.
// Thanks to type inference, we won't need to actually type
// "Backport<Any>"; the compiler can figure it out when we
// type `Backport.AsyncImage…`
//
// We'd only need to specify the `<Any>` parameter if a different
// extension also declared a nested `AsyncImage` symbol.
extension Backport where Content == Any {
struct AsyncImage: View {
let url: URL?
var body: some View {
if #available(iOS 15, *) {
SwiftUI.AsyncImage(url: url)
} else {
MyCustomAsyncImage(url: url)
}
}
}
}
This approach works well when you have a new type that has a single main entry point for configuration (ie, some sort of designated initializer). However, `AsyncImage` also lets you specify other parameters and options, and it might make the intermediate `Backport.AsyncImage` struct a bit unwieldy to support them all. So, another approach you could take is to use a static function that looks like a type:
extension Backport where Content == Any {
@ViewBuilder static func AsyncImage(url: URL?) -> some View {
if #available(iOS 15, *) {
SwiftUI.AsyncImage(url: url)
} else {
MyCustomAsyncImage(url: url)
}
}
}
This has the same callsite syntax as the nested type (ie, they look identical at the point-of-use), but it doesn’t have the intermediate struct that has to potentially worry about holding a bunch of different configuration values or variations.
Pick the approach that feels simplest and easiest to maintain to you.
Note: This trick of using a capitalized function to “fake” a type name is one I find pretty handy. It bends the naming convention rules a bit, but in my opinion, the goal of “clarity at the callsite” is one that overrides all others.
Unfortunately, I have not come up with a good way to backport things like specific properties on SwiftUI’s `EnvironmentValues`, such as `.headerProminence`. In theory it should be similar to backporting methods; however, every time I’ve attempted to implement this, I’ve run up against walls. The main problem I’ve found is one of types: for pre-iOS 15 devices, I’d need the type of the property to be `Backport.Prominence`, and in the other case I’d want it to be `SwiftUI.Prominence`. I’ve yet to find a satisfactory way of making this work.
Of course, if I ever come up with a way, I’ll make another post.
Eventually, your app will drop support for iOS 14 and these shims will no longer be necessary. At this point, the easiest thing to do is to delete the no-longer-necessary types in your `Backport` extensions and rebuild your app. You’ll get errors that (for example) `Value of type 'Backport<Text>' has no member 'badge'`. This will reveal all the points in your code where you were using the backported `.badge` method. Go through these one by one, delete the intermediate `.backport` part of the call, and everything should compile again.
Alternatively, you could remove the `if #available(…)` check from the backport implementation and leave all those calls in place, but you would end up with an extraneous level of indirection in a bunch of callsites.
Even though I’ve not found a way to use this in every scenario, I have found this to be useful in most scenarios. Using `Backport` like this has some really nice advantages:

- It’s discoverable: I can type `backport.` and autocomplete finds the backported methods and types.
- Since `Backport<Content>` isn’t a SwiftUI `View`, it doesn’t interfere with SwiftUI’s underlying graph engine. In fact, SwiftUI never sees any intermediate view (unless I explicitly make one, such as one of the `Backport.AsyncImage` options).
- Since `Backport<Content>` has no limits on its `Content`, this can be used for backporting just about any kind of new functionality from a system framework.
- By naming things inside `Backport` the same as their framework counterparts, searching through the codebase for how a system API gets used still points to the “right” points in my code.
- When the shims are no longer needed, I delete the `backport.` characters and it Just Works™.

How do you do compatibility shims and availability checking in your apps? Have you found something that works well for you? What challenges do you face adopting new APIs each year?
This originally appeared as a few posts on my twitter account:
A handy Swift convenience thing I came up with recently (1/n):
— Dave DeLong (@davedelong) October 7, 2021
struct Backport<Content> {
let content: Content
}
extension View {
var backport: Backport<Self> { Backport(content: self) }
}
Also as a blog post from Ralf Ebert, which he wrote with my blessing after learning about this technique from me:
There is an even better way to use iOS-15-only View modifiers in older iOS versions: https://www.ralfebert.de/swiftui/backporting-ios-view-modifiers/
— Ralf Ebert (@RalfEbert) October 7, 2021
Thanks @davedelong https://twitter.com/davedelong/status/1446151822800945155 pic.twitter.com/ELsFjWuuGM
I really like Core Data. It’s an incredibly full-featured object persistence layer with an enormous number of really cool features. I know that it tends to get a bad rap amongst developers, but as I outlined in my post on the “Laws” of Core Data, I believe that most of the negative press comes from mis-using the framework.
I’m not saying that Core Data is without flaw; far from it. There are definitely changes I’d love to see in the API (a few of which I touch on in that post). But I do not believe it deserves the derision I so often hear from developers.
So when it comes to storing offline data in an app, my natural inclination is to use Core Data for the persistence layer. It’s pretty straight-forward to describe a schema in the editor and use the code generation features of Xcode to get it up and running fairly quickly.
Where I hesitate is introducing Core Data into the view layer. Despite the existence of things like @FetchRequest
, I believe that it’s good to keep the layers of my personal apps separate. Core Data, dealing so closely with data organization and persistence, belongs at one of the lowest levels of my apps, and not in the UI.
This is a large part of why I suggest creating an abstraction layer for Core Data. An abstraction layer allows me to hide the nitty gritty details of data mutation and persistence from the parts of the app that deal with displaying that data in the UI.
An initial approach might suggest that creating a Core Data abstraction layer requires forgoing all the nice affordances we have for quickly and expressively extracting information from the persistent store. Not so! The setup we’ll be going over allows me to have the abstraction layer while still using cool things (like property wrappers) for retrieving data.
When designing a new API, I often like to start with the final syntax of how I want to use a thing, and then work backwards to make that syntax possible. As I was imagining a data abstraction layer, I came up with this:
struct ContactList: View {
@Query(.all) var contacts: QueryResults<Contact>
var body: some View {
List(contacts) { contact in
Text(contact.givenName)
}
}
}
This is of course heavily inspired by @FetchRequest
, but that’s not a bad thing. It’s pretty minimal: I define a query to fetch everything (.all
the contacts), and I’ve got a basic body
implementation. Like with @FetchRequest
, I do need a special “results” type, which we’ll get to later.
We’re going to end up with a few different things to make this possible, which I’ll outline here:
- the DataStore class
- the Queryable protocol
- the QueryFilter protocol
- the Query property wrapper
- the QueryResults struct

First up is the DataStore. In my post on Core Data, I pointed out that it’s not necessary to create a “stack” object, since an NSManagedObjectContext
has everything you need to get and manipulate the underlying objects. So, it might seem odd that the very first thing we’re going to do is wrap up the Core Data stack.
If you think about it though, it makes sense in this situation. We want to provide an abstraction layer on top of Core Data. The entire point is to hide Core Data from the rest of the app, and that implies hiding the persistence controllers from the rest of the app as well (the context, the store coordinator, etc). Thus, I create a DataStore
object that encapsulates this. It manages the details of loading the persistent store and, as necessary, modifying the objects therein.
All mutations to the objects happen via the DataStore. I’ve come to really like the “one-way data flow” principle, where data always flows in a single direction. SwiftUI itself largely operates on this principle: as the state changes, the new UI description “flows” out of the new state. When it comes to my model layer, I do the same: the model objects flow out of the DataStore, and if something needs to change, I send a command to the DataStore instructing it to make the change, and new/modified values flow out of it.
The next two bits are closely related, so I’ll lump them together: The Queryable
and QueryFilter
protocols. They look like this:
public protocol Queryable {
associatedtype Filter: QueryFilter
init(result: Filter.ResultType)
}
public protocol QueryFilter: Equatable {
associatedtype ResultType: NSFetchRequestResult
func fetchRequest(_ dataStore: DataStore) -> NSFetchRequest<ResultType>
}
The Queryable
protocol is the main type we’ll use for defining the in-memory struct
values that we’ll be pulling out of the data store. It has an associated Filter
type, which is the type that describes which values to fetch.
It also has an initializer, which takes the actual NSManagedObject
instance and uses it to populate its properties. This must unavoidably be part of the interface; I’ve not found a good way to hide this yet.
The QueryFilter
protocol defines the associated result type (typically NSManagedObject
or one of your app-specific subclasses), as well as a function for turning the filter itself into an NSFetchRequest
.
You’ll notice that I pass the DataStore
in to the fetch request method. In my experience, the DataStore
tends to end up holding cached information that a particular type might find useful when generating the fetch request, and being able to have that information on-hand during predicate creation has proven to be useful.
Adopting this protocol generally is as you’d expect:
public struct Contact: Queryable {
public typealias Filter = ContactFilter
public let givenName: String
public let familyName: String?
public init(result: MyCoreDataContactObject) {
self.givenName = result.…
self.familyName = result.…
}
}
public struct ContactFilter: QueryFilter {
public static let all = ContactFilter()
public func fetchRequest(_ dataStore: DataStore) -> NSFetchRequest<MyCoreDataContactObject> {
let f = MyCoreDataContactObject.fetchRequest() as NSFetchRequest<MyCoreDataContactObject>
f.predicate = NSPredicate(value: true)
f.sortDescriptors = [NSSortDescriptor(key: "givenName", ascending: true)]
return f
}
}
With this, we’ve got a simple immutable Contact
struct that defines a filter with a single value (.all
). The fetch request has a predicate to fetch everything, and it sorts the results in a particular order.
You can imagine having more bells and whistles on a particular QueryFilter
type that are unique to the attributes of that type. You will need to implement the NSPredicate
generation of course, but you’d have to write those predicates at some layer anyway.
So, let’s start implementing the basics of @Query
:
@propertyWrapper
public struct Query<T: Queryable>: DynamicProperty {
@Environment(\.dataStore) private var dataStore: DataStore
@StateObject private var core = Core()
private let baseFilter: T.Filter
public var wrappedValue: QueryResults<T> { core.results }
public init(_ filter: T.Filter) {
self.baseFilter = filter
}
public mutating func update() {
core.executeQuery(dataStore: dataStore, filter: baseFilter)
}
}
OK, so far this looks pretty normal. We’ve got a custom property wrapper that will trigger a view update pass in SwiftUI when either the dataStore
in the Environment changes, or this “Core” object publishes a “will change” notification. Let’s make a simple first pass at this Core
class.
public struct Query<T: Queryable>: DynamicProperty {
…
private class Core: ObservableObject {
private(set) var results: QueryResults<T> = QueryResults()
func executeQuery(dataStore: DataStore, filter: T.Filter) {
let fetchRequest = filter.fetchRequest(dataStore)
let context = dataStore.viewContext
// you MUST leave this as an NSArray
let results: NSArray = (try? context.fetch(fetchRequest)) ?? NSArray()
self.results = QueryResults(results: results)
}
}
}
This is a really bare-bones implementation. So far, there’s nothing in here that couldn’t be done in the update()
method of the property wrapper directly, and we’re always unconditionally executing the fetch request. We’re also not taking advantage of being an ObservableObject
and broadcasting changes.
So for round 2, we’ll adjust this. For readability, I’ll leave off the outer layer of indentation.
private class Core: NSObject, ObservableObject, NSFetchedResultsControllerDelegate {
var results = QueryResults<T>()
var dataStore: DataStore?
var filter: T.Filter?
private var frc: NSFetchedResultsController<T.Filter.ResultType>?
func fetchIfNecessary() {
guard let ds = dataStore else {
fatalError("Attempting to execute a @Query but the DataStore is not in the environment")
}
guard let f = filter else {
fatalError("Attempting to execute a @Query without a filter")
}
var shouldFetch = false
let request = f.fetchRequest(ds)
if let controller = frc {
if controller.fetchRequest.predicate != request.predicate {
controller.fetchRequest.predicate = request.predicate
shouldFetch = true
}
if controller.fetchRequest.sortDescriptors != request.sortDescriptors {
controller.fetchRequest.sortDescriptors = request.sortDescriptors
shouldFetch = true
}
} else {
let controller = NSFetchedResultsController(fetchRequest: request,
managedObjectContext: ds.viewContext,
sectionNameKeyPath: nil, cacheName: nil)
controller.delegate = self
frc = controller
shouldFetch = true
}
if shouldFetch {
try? frc?.performFetch()
let resultsArray = (frc?.fetchedObjects as NSArray?) ?? NSArray()
results = QueryResults(results: resultsArray)
}
}
func controllerWillChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
objectWillChange.send()
}
func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
let resultsArray = (controller.fetchedObjects as NSArray?) ?? NSArray()
results = QueryResults(results: resultsArray)
}
}
This is looking better. The Core
class is now an NSObject
subclass, so it can be the delegate of an NSFetchedResultsController
; this allows us to get notifications from Core Data itself when the underlying data set changes. Or in other words, if we have the results of a @Query
shown on screen and the underlying data store mutates the data, the Core
object will get a notification, broadcast the objectWillChange
signal, and the on-screen view will update to show the new or updated data. We’re checking in the fetchIfNecessary()
method to make sure we don’t need to re-fetch data if things haven’t changed.
We’ll need to update the Query
wrapper a bit as well:
@propertyWrapper
public struct Query<T: Queryable>: DynamicProperty {
@Environment(\.dataStore) private var dataStore: DataStore
@StateObject private var core = Core()
private let baseFilter: T.Filter
…
public mutating func update() {
if core.dataStore == nil { core.dataStore = dataStore }
if core.filter == nil { core.filter = baseFilter }
core.fetchIfNecessary()
}
}
Before getting to the last bit, let’s step back and take a look at what’s going on:
- The Query wrapper retrieves the data store from the environment, so it knows where to look for data
- The Query wrapper is created with a filter that describes what should be fetched from the data store
- The Core class uses the data store and filter to create an NSFetchedResultsController; this controller provides the results from the underlying Core Data store, as well as notifies of changes to the data, so our views can properly update. (That update is happening because the Core object calls its objectWillChange.send() method; SwiftUI is seeing this ObservableObject held in a @StateObject and is tracking it for changes.)
- The Query wrapper provides the results back out to the view via this QueryResults type.

The last things to notice before we move on are:

- The results we get back are in an NSArray, not an Array<T.Filter.ResultType>
- The results are NSManagedObject instances, not values of type T
In order to make our data work nicely with things like List(…)
and ForEach(…)
, it needs to conform to the RandomAccessCollection
protocol. So we need to take the array of results we got from Core Data and wrap it up. Fortunately, this only requires a few lines of code:
public struct QueryResults<T: Queryable>: RandomAccessCollection {
private let results: NSArray
internal init(results: NSArray = NSArray()) {
self.results = results
}
public var count: Int { results.count }
public var startIndex: Int { 0 }
public var endIndex: Int { count }
public subscript(position: Int) -> T {
let object = results.object(at: position) as! T.Filter.ResultType
return T(result: object)
}
}
This is also where using an NSArray
(instead of a Swift Array
) is important. When fetching data from Core Data, we don’t always know how many values we’ll be getting back. There could be 5 or 5 million. Core Data solves this problem by using a subclass of NSArray
that will dynamically pull in data from the underlying store on demand.
On the other hand, a Swift Array
requires having every element in the array all at once, and bridging an NSArray
to a Swift Array
requires retrieving every single value. So if you execute a query that returns 5 million values and then turn the results into an Array<NSManagedObject>
, your app will grind to a halt while it fetches 5 million objects into memory.
We avoid this by avoiding the NSArray
-to-Array
conversion. This is fine for our cases, because the Core Data array is smart enough to act as a “random access collection”, so we can easily turn around and ask for the value at the 3 millionth position.
Once we’ve retrieved the value, we run it through the initializer defined on the Queryable
protocol, and we end up with the value to show publicly in the UI.
Food for thought: This QueryResults type re-builds the T value every time. How would you add caching so that if it’s built the value once, it doesn’t need to build it again?
It’d be nice to have a way to modify the filter from the UI, so we can control things like sort order or change aspects of the predicate. For example, if you’re showing a list of contacts, it’d be nice to have a search field so you can search for contacts that match some user-provided text. Let’s imagine that the ContactFilter
type has a var searchString = ""
property that gets translated into a predicate that searches for that text in the underlying data store.
As before, let’s invent the syntax we want, and then write the code to make it happen:
struct ContactList: View {
@Query(.all) var contacts: QueryResults<Contact>
var body: some View {
TextField("Search", text: $contacts.searchString)
List(contacts) { contact in
Text(contact.givenName)
}
}
}
That looks pretty neat! If we can pull out a mutable filter (Binding<T.Filter>
), we can use the existing mechanism on Binding
to scope it down to a single field (@dynamicMemberLookup
) and bind the filter directly into a TextField
. Let’s make this happen.
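The scoping mechanism itself can be sketched with a hand-rolled stand-in for Binding (all names here are hypothetical; SwiftUI’s real Binding differs internally): @dynamicMemberLookup plus a writable key path projects a getter/setter pair down to a single field.

```swift
// Minimal sketch of the @dynamicMemberLookup trick Binding uses.
// `Box` is a hypothetical stand-in for SwiftUI's Binding.
@dynamicMemberLookup
struct Box<Value> {
    let get: () -> Value
    let set: (Value) -> Void

    // `box.someField` works because dynamic member lookup
    // projects a Box scoped down to one writable property
    subscript<T>(dynamicMember keyPath: WritableKeyPath<Value, T>) -> Box<T> {
        Box<T>(
            get: { get()[keyPath: keyPath] },
            set: { newValue in
                var value = get()
                value[keyPath: keyPath] = newValue
                set(value)
            }
        )
    }
}

struct Filter { var searchString = "" }

var storage = Filter()
let filterBox = Box<Filter>(get: { storage }, set: { storage = $0 })
let searchBox = filterBox.searchString   // Box<String>, scoped to one field
searchBox.set("Dave")
print(storage.searchString) // "Dave"
```

Writing through the scoped box routes the mutation back through the parent’s setter, which is exactly how `$contacts.searchString` can drive a TextField while the filter lives elsewhere.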
The $contacts
syntax is some nifty compiler magic to access the projectedValue
on the property wrapper (if it has one). We want to implement this on Query
and provide a Binding<T.Filter>
. Getting the value should return either the current modified filter, or the initial filter. Setting the value should change it on the Core
object:
public struct Query<T: Queryable>: DynamicProperty {
@Environment(\.dataStore) private var dataStore: DataStore
@StateObject private var core = Core()
private let baseFilter: T.Filter
public var wrappedValue: QueryResults<T> { core.results }
public var projectedValue: Binding<T.Filter> {
return Binding(get: { core.filter ?? baseFilter },
set: {
if core.filter != $0 {
core.objectWillChange.send()
core.filter = $0
}
})
}
…
}
There we go! We return a Binding
(which is really a wrapper around getter
and setter
closures) that can get the value by using the core
’s filter, and falling back to the baseFilter
if the Core doesn’t have one. Setting the value will check to make sure it’s a different value (so we don’t inadvertently trigger unnecessary UI updates if nothing is changing), then will broadcast an imminent change to SwiftUI, and set the new value into the Core
object.
Broadcasting the change means the containing view will be re-built, the Core
will notice the changed filter, and the NSFetchedResultsController
will be updated and re-executed to provide new results!
On the surface, this seems like it’s doing a decent amount of work with little discernible gain. All this, just so I can have a struct
value? Well yes… but actually no. As you develop the model layer in your apps, your Filter
objects will inevitably become more complex, and having a single place where you build your fetch requests will be invaluable. It makes mutating the underlying managed object model much simpler, because you only have one or two places where you need to change code. You come to appreciate how the compiler stops you from inadvertently mutating values, and since Queryable
values typically are structs, you can avoid a decent amount of the queue restriction rules that plague novice Core Data adopters.
In my own usage of this pattern, I’ve found it to be extremely powerful. One neat thing that may not be immediately obvious is that you can have multiple Queryable
types that are backed by the same kind of underlying NSManagedObject
. So if you have a particularly complex managed object and only need certain parts of it in certain situations, you can construct different Queryable
representations for those situations. You can also do things like make your filter conform to Codable
to easily make persistable filters.
I’ve also come to really appreciate how this guarantees that the values used to populate my UI are read-only: they’re immutable structs! I couldn’t mutate them if I wanted to! This also allows me to build in-memory values that can be supplemented with additional information. For example, if I update the Queryable
initializer to take the DataStore
as well as the backing managed object, I have the opportunity to fill out rich value types in ways that Core Data can’t do itself.
Overall, the boilerplate I have to put in to make types be Queryable
seems like a good tradeoff for the features, safety, and expressivity I get in return.
So, the big question… Should you use this code?
NO you should not use this code.
There are two main reasons you should hesitate to lift this code into your project:
I wrote all of this in a text editor. None of it has been checked for completeness. I’ve pulled portions of it from my personal implementation, but I’m sure I’ve missed some edge cases that you’ll want to figure out.
Before you take all of this and splat it into your code base, you need to evaluate what problems you’re trying to solve. I came up with this code in an attempt to follow my personal Laws of Core Data, and the design of this code is a reflection of those constraints. Your code may well be working with a different set of constraints, and so it is imperative that you figure out what is best for your code. Maybe it’s this exactly. Maybe it’s something similar. But maybe it’s completely different. That’s fine.
The easiest way to make a custom property wrapper that triggers SwiftUI view updates is to simply wrap another property wrapper:
@propertyWrapper
public struct ReinventedState<T>: DynamicProperty {
@State var wrappedValue: T
}
There’s one crucial piece to make this work, and that’s to declare that your property wrapper conforms to DynamicProperty
. If you do this, then the SwiftUI runtime will discover your property wrapper instance and start tracking it for changes. Since your property wrapper is using other built-in property wrappers, SwiftUI will recognize that your wrapper is dependent on the others, and when those update, yours will too.
There is an update()
method that’s part of the DynamicProperty
protocol, but you typically don’t need to implement it (the default implementation does nothing) unless your wrapper is tracking some internal state that it needs to update before the .wrappedValue
can be invoked in a var body
somewhere. update()
is called right before the view’s body is called.
But this is really all there is to it.
One of the most useful property wrappers I’ve made is a simple “feature flag” trigger. It’s fairly common to have feature flags to control whether app-global features are available. Some apps go so far as to download a set of feature flag values and use those to dynamically adjust behavior. I like it for doing things like enabling a hidden debug menu.
The idea behind @Feature
is that I can use it to track a particular key path on a Features
singleton (or environment object) and then when that feature flag changes, the @Feature
wrapper triggers a change at the view level. It looks something like this:
public class Features: ObservableObject {
public static let shared = Features()
private init() { }
@Published public var isDebugMenuEnabled = false
}
@propertyWrapper
public struct Feature<T>: DynamicProperty {
@ObservedObject private var features: Features
private let keyPath: KeyPath<Features, T>
public init(_ keyPath: KeyPath<Features, T>, features: Features = .shared) {
self.keyPath = keyPath
self.features = features
}
public var wrappedValue: T {
return features[keyPath: keyPath]
}
}
Now in a view, I can use it like so:
struct AppSettingsView: View {
@Feature(\.isDebugMenuEnabled) var showDebugMenu
var body: some View {
Form {
if showDebugMenu {
NavigationLink(destination: DebugSettings()) { Text("Debug Settings") }
}
…
}
}
}
Now any time the Features
object publishes a change from anywhere in the app, my @Feature
wrapper will pick it up, and the AppSettingsView
will be automatically re-built (if it’s on-screen).
A couple notes about this:

- You could use an @EnvironmentObject instead of an @ObservedObject in the implementation and you’d get the same result.
- Since the whole Features object is being observed, technically a change to any feature will trigger a change to every @Feature wrapper. You could imagine working around this by doing something clever with specializing on T: Equatable and checking for differences on a particular key path before triggering a change. If you did this, you wouldn’t be observing the Features type directly, but some sort of object that wraps it and re-publishes a particular value.
- @Feature relies on a KeyPath to know what value to pull out. This guarantees some type-safety, and it also means I don’t end up with stringly-typed keys scattered around my app (more on this shortly).
- Using @Feature instead of observing the Features value directly with @EnvironmentObject or something makes it easier to search through the codebase to find all uses of a feature: I can search for @Feature or .isDebugMenuEnabled and be confident that I’ll have found everything.

@AppSetting
and @SceneSetting
are two that are nearly identical in construction to @Feature
, but differ only in where they pull their values from. @Feature
relies on a Features
class, whereas @AppSetting
and @SceneSetting
would rely on AppSettings
and SceneSettings
instances, respectively.
I won’t go too much into the code here, beyond saying that I built these because I don’t like scattering the string keys for UserDefaults
values everywhere, and I much prefer the type safety of using key paths.
So, I built an AppSettings
class:
public class AppSettings: ObservableObject {
@AppStorage("appSetting1") public var appSetting1: Int = 42
@AppStorage("appSetting2") public var appSetting2: String = "Hello"
…
}
and a SceneSettings
class:
public class SceneSettings: ObservableObject {
@SceneStorage("sceneValue1") public var sceneSetting1: Int = 54
@SceneStorage("sceneValue2") public var sceneSetting2: String = "World"
…
}
Their respective property wrappers are basically identical to @Feature
, but obviously use a key path rooted in the corresponding type, and then use an @ObservedObject
/@EnvironmentObject
of that type. They also implement the projectedValue
property so that I can create bindings to use these values using the $…
syntax.
struct SomeDetailView: View {
@SceneSetting(\.shouldShowThatOneInfoSection) var showInfo
var body: some View {
DisclosureGroup("Detail Info", isExpanded: $showInfo) {
SomeInfoView()
}
…
}
}
Like with @Feature
, these have the same benefits of type safety, refactorability, and avoiding stringly-typed keys.
I’ll sometimes create one-off property wrappers to access specific things in the environment or specific global values. I do these mainly to draw attention to their usage in a particular view. For example, it can be easier to scan through a view and spot an @OpenURL var openURL
value than it is to scan through a list of a handful of @Environment(\.…)
values and notice the right key path.
When deciding what to use, I always try to err on the side of readability. The compiler doesn’t care what I type, as long as it can correctly generate the executable code. So when deciding what to do, I remember that more often than not, I’m writing this for me, not for the compiler. Therefore, I write it in a way that will (hopefully) make the most sense to me when I come back to this code in weeks, months, or years and have inevitably forgotten what it does.
This is a quick overview of creating custom property wrappers that trigger SwiftUI view updates, and a couple of examples of some that I’ve found to be useful.
The thing I really want to share requires having a bit of background around creating custom property wrappers. I call it @Query
and it is how I follow my Laws of Core Data in SwiftUI.
In Swift, string interpolation is the functionality that lets us do substitutions into strings using the \(...)
syntax:
let name = "world"
let greeting = "Hello, \(name)!"
This works because of a couple of underlying pieces provided by the standard library and the compiler: the ExpressibleByStringInterpolation protocol, and a compile-time transformation of the \(...) syntax.

The ExpressibleByStringInterpolation protocol follows the builder pattern, which describes constructing a value by first creating an intermediate “builder”, tossing some configuration calls at it, and then asking it to “build” the final output value.
Thus, the definition of ExpressibleByStringInterpolation
is straightforward:
public protocol ExpressibleByStringInterpolation: ExpressibleByStringLiteral {
associatedtype StringInterpolation: StringInterpolationProtocol
init(stringInterpolation: Self.StringInterpolation)
}
This protocol expresses three requirements for adopting types:

- an associated type StringInterpolation that itself conforms to the StringInterpolationProtocol protocol. This is the “builder” used to construct stuff
- an initializer that takes a finished builder
- conformance to the ExpressibleByStringLiteral protocol

The last requirement is a bit odd at first, but it makes sense. If you allow your types to be constructed like this…
// build an instance of MyType using ExpressibleByStringInterpolation
let myType: MyType = "Hello, \(name)!"
…then presumably you should also allow your types to be constructed like this:
// build an instance of MyType using ExpressibleByStringLiteral
let myType: MyType = "Hello, world!"
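As a concrete sketch of the literal side alone (`Tag` here is a made-up type, invented for this example):

```swift
// ExpressibleByStringLiteral lets a plain string literal construct your type.
struct Tag: ExpressibleByStringLiteral {
    let raw: String
    init(stringLiteral value: String) {
        raw = value.lowercased()   // e.g. normalize on construction
    }
}

let t: Tag = "SwiftUI"
print(t.raw) // "swiftui"
```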
The StringInterpolationProtocol
is where things start getting weird, because it can’t be fully expressed in Swift syntax. The declaration of the protocol is this:
public protocol StringInterpolationProtocol {
associatedtype StringLiteralType: _ExpressibleByBuiltinStringLiteral
init(literalCapacity: Int, interpolationCount: Int)
mutating func appendLiteral(_ literal: Self.StringLiteralType)
}
A StringLiteralType
(basically, always use String
for this unless you have a really bizarre situation where that’s not right), an initializer (to construct the builder itself), and an appendLiteral
method.
But where’s all the stuff about appending the interpolated values?
The answer is that they’re there, but Swift doesn’t have a way to describe a protocol like this. The gist is that you need a method called appendInterpolation
, but there aren’t any requirements on what the parameters for this method are.
To understand this a bit better, let’s detour and take a look at the transformation that happens when you use string interpolation in code.
When we write an interpolated string, the compiler transforms it to use the builder. If we write something like this:
let value: MyType = "Hello, \(name)!"
Then at compile-time it gets turned into this:
var builder = MyType.StringInterpolation(literalCapacity: 8, interpolationCount: 1)
builder.appendLiteral("Hello, ")
builder.appendInterpolation(name)
builder.appendLiteral("!")
let value = MyType(stringInterpolation: builder)
The literalCapacity
parameter is the number of Characters
present in the literal portion of the string, and the interpolationCount
indicates how many substitutions there are.
What’s interesting here is that appendInterpolation
call. Basically, anything that we put inside the \(…)
part of the interpolation becomes the arguments to the appendInterpolation
method. So \(foo: bar)
becomes ….appendInterpolation(foo: bar)
, \(age, formatter: someNumberFormatter)
becomes ….appendInterpolation(age, formatter: someNumberFormatter)
, and so on.
Thus, we can create string interpolators that limit their accepted substitutions based on what kinds of appendInterpolation
methods we write. You could, for example, create an interpolator that only accepts interpolated integers by only providing an appendInterpolation(_ int: Int)
method.
String interpolation is how SwiftUI is able to build up NSLocalizedString
keys to look up translations in your .strings
files. When we write Text("Hello, \(name)!")
in SwiftUI, we’re not passing a String
instance to the Text
initializer; we’re passing a LocalizedStringKey
instance, and that type happens to be ExpressibleByStringInterpolation
.
And it’s also not a standard interpolation implementation like the one we find on String
. In addition to building up the “fallback” string to use in the case where it can’t find a proper translation, it builds up the key itself. You can imagine how this might work:
extension LocalizedStringKey {
    struct StringInterpolation: StringInterpolationProtocol {
        var key = ""
        var fallback = ""
        init(literalCapacity: Int, interpolationCount: Int) { }
        mutating func appendLiteral(_ literal: String) {
            key.append(literal)
            fallback.append(literal)
        }
        mutating func appendInterpolation<T>(_ value: T) {
            key.append("%@")
            fallback.append("\(value)")
        }
    }
}
With this, you can build up both the fallback value ("Hello, world!"
) and the key to use to look up the translation in your strings file ("Hello, %@!"
).
The fact that interpolation is a syntactic transformation means we can get creative. Let’s take a closer look at the appendInterpolation(…)
bit.
When you have a line of code like this, there are technically three different things that could be happening:
foo.doSomething(bar)
The obvious first case is that you have a func doSomething(_ bar: Bar)
method on the type in question.
The next slightly-less-obvious-but-still-kind-of-common case is when you have a var doSomething: (Bar) -> Void
property on the type. In this case, .doSomething(bar)
is retrieving the closure and executing it, all on the same line.
The least-obvious case is that you’ve got a var doSomething: OtherType
property, and that OtherType
is @dynamicCallable
. @dynamicCallable
is almost never used in Swift, because it’s kind of weird and was added to make it easier to bridge in libraries from other languages. It allows you to have a value and then directly “execute” that value by simply throwing (…)
on the end.
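On its own, @dynamicCallable looks like this (a minimal sketch; `Greeter` is a made-up type for illustration):

```swift
// A value of this type can be "called" directly, with arbitrary labels.
@dynamicCallable
struct Greeter {
    func dynamicallyCall(withKeywordArguments args: KeyValuePairs<String, String>) -> String {
        // each `label: value` at the call site arrives as a (label, value) pair
        args.map { "\($0.key): \($0.value)" }.joined(separator: ", ")
    }
}

let greet = Greeter()
// the compiler rewrites this call into dynamicallyCall(withKeywordArguments:)
let result = greet(name: "world", tone: "friendly")
print(result) // "name: world, tone: friendly"
```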
We can exploit @dynamicCallable
to get a whole lot more information from string interpolation.
Let’s imagine that we wanted to make it easy to declare some sort of “route” functionality for an app, and that we’d want to support dealing with incoming paths and automatically parse out certain things. For example, a path with /person/1234
might result in a ["person": PersonID(1234)]
value. Or a /person/1234/items/42/delete
might result in: ["person": PersonID(1234), "item": ItemID(42), "action": ItemAction.delete]
.
We can imagine how we might express this with string interpolation:
let route: Route = "/person/\(person: PersonID.self)/items/\(item: ItemID.self)/\(action: ItemAction.self)"
With @dynamicCallable
and string interpolation, we can build this.
It starts by having a Route
type that is ExpressibleByStringInterpolation
:
struct Route: ExpressibleByStringInterpolation {
typealias StringInterpolation = RouteMatcher
init(stringInterpolation: RouteMatcher) { … }
}
Our RouteMatcher
will build up a regular expression to do the parsing.
// yes, this is a class. That will be explained shortly.
class RouteMatcher: StringInterpolationProtocol {
internal var pattern = ""
required init(literalCapacity: Int, interpolationCount: Int) { }
private func isSpecialRegexCharacter(_ char: Character) -> Bool { … }
func appendLiteral(_ literal: String) {
for character in literal {
if isSpecialRegexCharacter(character) { pattern.append("\\") }
pattern.append(character)
}
}
}
So far, so good. But now things are going to get hairy. We need to allow an interpolation segment (\(person: PersonID.self)
) where we have a named parameter and accept a type as the value. We can’t just have a whole bunch of appendInterpolation(…)
methods on our RouteMatcher
, because we don’t know what name the user will want to type in. This is where @dynamicCallable
comes in.
So, we define a property called appendInterpolation that returns our @dynamicCallable type:
class RouteMatcher: StringInterpolationProtocol {
var appendInterpolation: Capture { Capture(matcher: self) }
}
@dynamicCallable
struct Capture {
let matcher: RouteMatcher
}
Next, we implement the dynamicallyCall() method that defines the “keyword arguments” syntax. This will allow us to execute the Capture type using named argument parameters:
@dynamicCallable
struct Capture {
// since RouteMatcher is a reference type, modifying it here is modifying the "right" value
let matcher: RouteMatcher
func dynamicallyCall(withKeywordArguments args: KeyValuePairs<String, Any.Type>) {
// args is a collection of (String, Any.Type) tuples
// TODO: relay the information back to the RouteMatcher
}
}
// example:
// let c: Capture = …
// c(foo: bar)
At this point, our matcher is almost complete. If you build this code, you’ll find that Xcode complains about missing protocol requirements:
error: type conforming to 'StringInterpolationProtocol' does not implement a valid 'appendInterpolation' method
The compiler is still looking for an actual method, even though we’re not going to actually be using it. Still, we need to make the compiler happy, so we add this to our RouteMatcher:
class RouteMatcher: StringInterpolationProtocol {
…
@_disfavoredOverload
func appendInterpolation(_ willThisExecute: Never) { }
…
}
This satisfies the compiler (it sees a method called “appendInterpolation”), but there are two things that make sure it never actually gets used:
- The @_disfavoredOverload annotation tells the compiler that “if you have to choose between this thing and another thing, prefer the other thing”. Yes, the underscore technically means it’s “private” and therefore “use it at your own risk”. But, it works and is used all over in SwiftUI. 🤷‍♂️
- The use of Never as the parameter type means that even if the compiler messes up and picks this method, it won’t work, because (gasp) you can’t ever create an instance of Never to pass to the method.
With this, the compiler now happily rewrites the string interpolation code, and our implementation means that we can capture named arguments and the type values provided to them. If you were to follow this through, you might end up with code that looks approximately like this:
protocol PathExtractible {
init?(pathValue: String)
}
struct Route: ExpressibleByStringInterpolation {
private let matcher: (String) -> Dictionary<String, Any>?
init(stringLiteral value: String) {
let m = Matcher(literalCapacity: 0, interpolationCount: 0)
m.appendLiteral(value)
self.matcher = m.build()
}
init(stringInterpolation: Matcher) {
self.matcher = stringInterpolation.build()
}
func match(_ path: String) -> Dictionary<String, Any>? {
return matcher(path)
}
}
class Matcher: StringInterpolationProtocol {
private let escaped = CharacterSet(charactersIn: #"[\^$.|?*+()"#)
fileprivate var pattern = Array<Unicode.Scalar>()
fileprivate var extractions = Array<(String, PathExtractible.Type)>()
var appendInterpolation: Capture { Capture(matcher: self) }
public required init(literalCapacity: Int, interpolationCount: Int) { }
func appendLiteral(_ literal: String) {
for char in literal.unicodeScalars {
if escaped.contains(char) { pattern.append("\\") }
pattern.append(char)
}
}
@_disfavoredOverload
func appendInterpolation(_ willThisExecute: Never) { }
func build() -> (String) -> Dictionary<String, Any>? {
let f = "^" + String(String.UnicodeScalarView(pattern)) + "$"
let regex = try! NSRegularExpression(pattern: f, options: [])
let extractions = self.extractions
return { path in
guard let match = regex.firstMatch(in: path, options: [], range: NSRange(location: 0, length: path.utf16.count)) else {
return nil
}
guard match.numberOfRanges == extractions.count + 1 else {
return nil
}
var extracted = Dictionary<String, Any>()
// capture groups are 1-indexed
for captureGroup in 1 ..< match.numberOfRanges {
let substring = (path as NSString).substring(with: match.range(at: captureGroup))
let (name, type) = extractions[captureGroup - 1]
guard let value = type.init(pathValue: substring) else {
return nil
}
extracted[name] = value
}
return extracted
}
}
@dynamicCallable
struct Capture {
fileprivate let matcher: Matcher
func dynamicallyCall(withKeywordArguments args: KeyValuePairs<String, PathExtractible.Type>) {
for (name, type) in args {
matcher.pattern.append(contentsOf: "(.+?)".unicodeScalars)
matcher.extractions.append((name, type))
}
}
}
}
This uses a “PathExtractible” protocol to know how to construct values extracted from the path. You could do a bit of convenience work to automatically provide an implementation for things that can already be created from strings, like RawRepresentable or LosslessStringConvertible types.
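That convenience work might look something like this sketch, where any LosslessStringConvertible type gets a PathExtractible implementation for free (the protocol is re-declared here so the snippet stands alone):

```swift
// The protocol from the example above, re-declared so this compiles standalone
protocol PathExtractible {
    init?(pathValue: String)
}

// Any type that can round-trip through a String gets the conformance for free
extension PathExtractible where Self: LosslessStringConvertible {
    init?(pathValue: String) { self.init(pathValue) }
}

extension Int: PathExtractible { }
extension Double: PathExtractible { }
extension String: PathExtractible { }

// Int(pathValue: "1234") == 1234
// Int(pathValue: "not-a-number") == nil
```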
Using this code would look something like this:
let route: Route = "/api/profile/\(memberID: Int.self)/lists/\(listID: String.self)"
let validArguments = route.match("/api/profile/1234/lists/5678")
// validArguments = ["memberID": 1234, "listID": "5678"]
let invalidArguments = route.match("/api/feed")
// invalidArguments == nil
There are a couple of downsides to this approach, which is why I’ve yet to come up with a situation where this code would actually be useful:
- The pattern is hidden inside the Route itself. This means that every path that comes in has to be naïvely tried on every possible Route in order to find a match.
- The result of a match is a Dictionary<String, Any>. So even though you know that the types are the right kind, you’ll still have to do some force-casting in order to get them in a useful form. Perhaps there’s something more that could be done here using a custom Decoder type.
I share this with you because string interpolation is neat, the @dynamicCallable stuff is cool, and hopefully you’ll come up with something that uses this that we all can benefit from.
We’ve already taken a look at how to implement a loader that authorizes requests via an OAuth 2 flow, but there’s an abstraction that exists on top of that, called OpenID. With the OAuth loader, we needed to specify things like the login URL, the URL for refreshing tokens, and so on. OpenID allows identity providers to abstract that away by shipping down a manifest that contains all of these URLs (and other idiosyncrasies of the protocol).
If we wanted to implement OpenID ourselves, we’d need a preliminary state in our state machine to first fetch this manifest, and then use it as the basis for subsequent state logic. Alternatively, we could wrap an existing implementation of OpenID (such as the official implementation) in a custom HTTPLoader subclass, and allow that library to perform the complex logic. In this case, our HTTPLoader subclass would serve as an adapter between the API provided by the library and the API wanted by the HTTP loader chain.
Conceptually, a caching loader should be relatively straightforward to understand. When a request enters this loader, it examines the request (and perhaps an HTTPRequestOption to indicate if caching is allowed) and sees if it matches any responses that have been persisted (in memory, on disk, etc). If such a response exists, then the loader returns that response instead of sending the request further down the chain.
If a response doesn’t exist, it continues with typical request execution, but also inserts another completion handler so that it can capture the response and (if the right conditions are met), persist it to use for future requests.
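The storage side of that idea can be sketched as a tiny cache type. This is a hypothetical sketch, not the series’ actual design — the key type and eviction policy are assumptions, and a real implementation would also honor HTTP cache headers:

```swift
// Minimal sketch of a response cache keyed by some Hashable request identity.
// Real caching would also consider cache headers, expiry, and storage limits.
final class ResponseCache<Key: Hashable, Response> {
    private var storage: [Key: Response] = [:]

    // Returns a previously persisted response, if one exists for this key
    func cachedResponse(for key: Key) -> Response? {
        return storage[key]
    }

    // Called from the inserted completion handler to capture a response
    func persist(_ response: Response, for key: Key) {
        storage[key] = response
    }
}

let cache = ResponseCache<String, String>()
cache.persist("cached body", for: "GET /api/person/1")
// cache.cachedResponse(for: "GET /api/person/1") == "cached body"
// cache.cachedResponse(for: "GET /api/person/2") == nil
```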
Deduplication is similar to caching, in that when a request comes in, the loader sees if it’s similar to an already in-progress request. If it is, then the new request is set aside, and when the original request gets a response, that response is duplicated to the second request.
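The bookkeeping for that can be sketched independently of the loader types (which aren’t reproduced here). This hypothetical helper tracks in-flight keys and fans one response out to every waiter:

```swift
// Sketch: coalesce duplicate in-flight requests. The first caller for a key
// "owns" the real request; later callers are set aside until it completes.
final class Deduplicator<Key: Hashable, Value> {
    private var pending: [Key: [(Value) -> Void]] = [:]

    // Returns true if this is the first (original) request for the key,
    // meaning it should actually be sent down the chain.
    func register(_ key: Key, completion: @escaping (Value) -> Void) -> Bool {
        let isFirst = (pending[key] == nil)
        pending[key, default: []].append(completion)
        return isFirst
    }

    // Called when the original request finishes; the response is duplicated
    // to every waiting completion handler.
    func complete(_ key: Key, with value: Value) {
        let waiters = pending.removeValue(forKey: key) ?? []
        for waiter in waiters { waiter(value) }
    }
}

var delivered: [String] = []
let dedup = Deduplicator<String, String>()
let first = dedup.register("/api/feed") { delivered.append($0) }   // true: send it
let second = dedup.register("/api/feed") { delivered.append($0) }  // false: wait
dedup.complete("/api/feed", with: "response")
// delivered == ["response", "response"]
```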
There are a couple of ways to handle redirected requests.
By default, URLSession will follow redirects, unless you specifically override the willPerformHTTPRedirection delegate method on URLSessionTaskDelegate. So, you could do that and then conditionally allow redirection on requests based on a particular HTTPRequestOption you’ve created.
Alternatively, you could unconditionally deny redirections at the URLSession level, and then have a separate RedirectionFollowingLoader that takes incoming requests, duplicates them, and sends the duplicates down the chain. When the duplicate comes back, the loader examines the response and sees if it’s a redirection response. If it is, then it constructs a new request for the redirect, and sends that back down.
Once the loader gets back a non-redirection response, it uses that response as the response for the original request and sends it back out. You would need some logic to detect redirection loops and break out of them, but the key idea here is to send down a copy of a request, so that you get a chance to examine the response before deciding what to do about it.
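The loop-detection piece can be sketched on its own. This hypothetical helper follows a chain of “next locations” and bails out when it would revisit one:

```swift
// Sketch: follow a chain of redirect targets, refusing to revisit a location.
// Returns the final location, or nil if a redirect loop is detected.
func followRedirects(from start: String, nextLocation: (String) -> String?) -> String? {
    var visited: Set<String> = [start]
    var current = start
    while let next = nextLocation(current) {
        if visited.contains(next) { return nil } // redirect loop: break out
        visited.insert(next)
        current = next
    }
    return current
}

let hops = ["/old": "/moved", "/moved": "/final"]
// followRedirects(from: "/old") { hops[$0] } == "/final"

let loop = ["/a": "/b", "/b": "/a"]
// followRedirects(from: "/a") { loop[$0] } == nil
```

A real loader would derive `nextLocation` from the response’s status code and Location header, but the loop guard is the same idea.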
In principle, certificate pinning should look like any other HTTPLoader: a request comes in, and before it gets sent to the next one, the certificate for the target server is validated against a certificate attached to the request as an HTTPRequestOption.
In practice, this is a little bit more difficult, because certificates are only available as the connection to the remote server is being negotiated down in the URLSessionLoader. Because of this, the course of action here is to not have a separate CertificatePinningLoader, but instead to provide a CertificateValidator value to an HTTPRequest that can be used if a loader needs to do some certificate validation (similar to Alamofire’s ServerTrustEvaluating protocol).
Then our URLSessionLoader needs to be updated to use a delegate, implement the delegate method that handles a URLAuthenticationChallenge, and consult that option on the request when it receives the .serverTrust challenge.
A peer-to-peer loader is interesting, because it stems from the realization that the contract for the HTTPLoader says nothing about which device the response comes from. We’ve already seen examples of loaders that will return fake responses (for mocking) or re-use responses (caching and de-duplication). A P2P loader is one that can decide to ship a request off to another device, and allow that device to provide a response.
This could be done via a myriad of technologies, ranging from something like MultipeerConnectivity to Bluetooth or direct socket connections. The possibilities here are pretty vast.
The astute observer will also realize that the URLSessionLoader we created early on fits in this sort of category. That’s a loader that “off loads” the responsibility of producing a response to another device. It happens to be a device that is also an HTTP server, but our loading stack doesn’t directly have to know that.
One area where this framework does not work terribly well is with streaming responses. This is pretty apparent: we’ve built everything around the expectation that a discrete and finite request has a discrete and finite response. Streaming kind of breaks that expectation. We can do a streamed body in an upload, because the process of sending that stream is part of sending our single discrete request.
There are some kinds of streamed bodies we could handle, such as file downloads. For these, we’d want to provide an OutputStream (or similar) to say “put any bytes you get back here”; by default this could be a stream to an in-memory Data value. This would allow us to stream a response directly to a file, instead of going through an in-memory Data value.
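As a sketch of that default case, Foundation’s OutputStream can already collect written bytes into an in-memory Data value; a file-based stream could be swapped in for direct-to-disk downloads:

```swift
import Foundation

// Sketch: an in-memory OutputStream as the default "put any bytes you get
// back here" destination. OutputStream(url:append:) would target a file instead.
let stream = OutputStream(toMemory: ())
stream.open()

let chunk: [UInt8] = Array("response bytes".utf8)
chunk.withUnsafeBufferPointer { buffer in
    // In a real loader, this write would happen per received data chunk
    _ = stream.write(buffer.baseAddress!, maxLength: buffer.count)
}

stream.close()
let collected = stream.property(forKey: .dataWrittenToMemoryStreamKey) as! Data
// String(data: collected, encoding: .utf8) == "response bytes"
```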
For a live video stream, we could provide an OutputStream that pipes the data into an AVSession. However, we’d be explicitly foregoing some of the semantics of “single request, single response” in order to make this work. We would also need to be very careful about how we implement request duplication (such as would be needed by a redirecting loader).
There is a lot we can do with this framework. A few things are somewhat complicated and require working around/with specific implementation details of system APIs (such as certificate pinning, streamed responses, etc). On the whole though, this approach of modeling networking as “send a request, and eventually get a response” allows us to build extremely flexible, composable, and customizable networking stacks.
In the next (and likely final) post, we’ll be zooming back out to look at the high-level overview of the framework we’ve created, and see how it fits in with other Swift technologies.
I started working on this approach after reading Rob Napier’s blog post on protocols on protocols. In it, he makes the point that we seem to misunderstand the seminal “Protocol Oriented Programming” idea introduced by Dave Abrahams (“Crusty”) at WWDC 2015. We especially miss the point when it comes to networking, and Rob’s subsequent posts go into this idea further.
One of the things I hope you’ve realized throughout this blog post series is that nowhere in it did I ever talk about Codable. Nothing in this series is generic (with the minor exception of making it easy to specify a request body). There is no mention of deserialization or JSON or decoding responses or anything. This is extremely deliberate.
The point of HTTP is simple: You send an HTTP request (which we saw has a very well-defined structure) and you get back an HTTP response (which has a similarly well-defined structure). There’s no opportunity to introduce generics, because we’re not dealing with a general algorithm.
So this raises the question: where do generics come in? How do I use my awesome Codable type with this framework? The answer is: the next layer of abstraction.
Our HTTP stack deals with a concrete input type (HTTPRequest) and a concrete output type (HTTPResponse). There’s no place to put something generic there. We want generics at some point, because we want to use our nice Codable structs, but they don’t belong in the HTTP communication layer.
So, we’ll wrap up our HTTPLoader chain in a new layer that can handle generics. I call this the “Connection” layer, and it looks like this:
public class Connection {
private let loader: HTTPLoader
public init() {
self.loader = ...
}
public func request(_ request: ..., completion: ...) {
// TODO: create an HTTPRequest
// TODO: interpret the HTTPResponse
}
}
In order to interpret a response in a generic way, this is where we’ll need generics, because this is the algorithm we need to make applicable to many different types. So, we’ll define a type that generically wraps an HTTPRequest and can interpret an HTTPResponse:
public struct Request<Response> {
public let underlyingRequest: HTTPRequest
public let decode: (HTTPResponse) throws -> Response
public init(underlyingRequest: HTTPRequest, decode: @escaping (HTTPResponse) throws -> Response) {
self.underlyingRequest = underlyingRequest
self.decode = decode
}
}
We can also provide some convenience methods for when we know the Response is Decodable:
extension Request where Response: Decodable {
// request a value that's decoded using a JSON decoder
public init(underlyingRequest: HTTPRequest) {
self.init(underlyingRequest: underlyingRequest, decoder: JSONDecoder())
}
// request a value that's decoded using the specified decoder
// requires: import Combine
public init<D: TopLevelDecoder>(underlyingRequest: HTTPRequest, decoder: D) where D.Input == Data {
self.init(underlyingRequest: underlyingRequest,
decode: { try decoder.decode(Response.self, from: $0.body) })
}
}
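The decode-closure idea can be exercised in isolation. This standalone sketch uses pared-down stand-ins (FakeHTTPResponse and DecodableRequest are hypothetical names; the series’ real HTTPRequest/HTTPResponse types carry status, headers, and more):

```swift
import Foundation

// Hypothetical minimal stand-in for the series' HTTPResponse type
struct FakeHTTPResponse {
    var body: Data
}

// The same shape as Request<Response>: wrap a decode closure
struct DecodableRequest<Response> {
    let decode: (FakeHTTPResponse) throws -> Response
}

struct Person: Decodable, Equatable {
    let name: String
}

// The closure captures its own decoder, just like the Decodable convenience init
let personRequest = DecodableRequest<Person>(decode: { response in
    try JSONDecoder().decode(Person.self, from: response.body)
})

let fakeResponse = FakeHTTPResponse(body: Data(#"{"name":"Alice"}"#.utf8))
let person = try personRequest.decode(fakeResponse)
// person == Person(name: "Alice")
```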
With this, we have a way to encapsulate the idea of “sending this HTTPRequest should result in a value I can decode using this closure”. We can now implement that request method we stubbed out earlier:
public class Connection {
...
public func request<ResponseType>(_ request: Request<ResponseType>, completion: @escaping (Result<ResponseType, Error>) -> Void) {
let task = HTTPTask(request: request.underlyingRequest, completion: { result in
switch result {
case .success(let response):
do {
let value = try request.decode(response)
completion(.success(value))
} catch {
// something went wrong while deserializing
completion(.failure(error))
}
case .failure(let error):
// something went wrong during transmission (couldn't connect, dropped connection, etc)
completion(.failure(error))
}
})
loader.load(task)
}
}
And using conditionalized extensions, we can make Request construction simple:
extension Request where Response == Person {
static func person(_ id: Int) -> Request<Response> {
return Request(personID: id)
}
init(personID: Int) {
let request = HTTPRequest(path: "/api/person/\(personID)/")
// because Person: Decodable, this will use the initializer that automatically provides a JSONDecoder to interpret the response
self.init(underlyingRequest: request)
}
}
// usage:
// automatically infers `Request<Person>` based on the initializer/static method
connection.request(Request(personID: 1)) { ... }
// or:
connection.request(.person(1)) { ... }
There are some important things at work here:
- A 404 Not Found response is a successful response. It’s a response we got back from the server! Interpreting that response is a client-side problem. So by default, we can blindly attempt to deserialize any response, because every HTTPResponse is a “successful” response. That means dealing with a 404 Not Found or 304 Not Modified response is up to the client.
- By having the Request decode the response, we provide the opportunity for individualized/request-specific deserialization logic. One request might look for errors encoded in a JSON response if decoding fails, while another might just be satisfied with throwing a DecodingError.
- Because each Request uses a closure for decoding, we can capture domain- and contextually-specific values in the closure to aid in the decoding process for that particular request!
- Decoding isn’t limited to JSON; a response could be interpreted with an XMLDecoder or something custom. Each request has the opportunity to decode a response however it wishes.
- Conditionalized extensions on Request mean we have a nice and expressive API of connection.request(.person(42)) { ... }
layer also makes it easy to integrate with Combine. We can provide a method on Connection
to expose sending a request and provide back a Publisher
-conforming type to use in a publisher chain or as part of an ObservableObject
or even with a .onReceive()
modifier in SwiftUI:
import Combine
extension Connection {
// Future<...> is a Combine-provided type that conforms to the Publisher protocol
public func publisher<ResponseType>(for request: Request<ResponseType>) -> Future<ResponseType, Error> {
return Future { promise in
self.request(request, completion: promise)
}
}
// This provides a "materialized" publisher, needed by SwiftUI's View.onReceive(...) modifier
public func publisher<ResponseType>(for request: Request<ResponseType>) -> Future<Result<ResponseType, Error>, Never> {
return Future { promise in
self.request(request, completion: { promise(.success($0)) })
}
}
}
We’ve finally reached the end! I hope you’ve enjoyed this series and that it’s opened your mind to new possibilities. Some things I hope you take away from this:
URLSession-specific trees in the HTTP forest.
Thanks for reading!
Do you have thoughts on the content of this series? Maybe you’ve found that some things work well with this approach, or things that don’t? I’d love to hear about your experience! Feel free to contact me via this site or on Twitter.