Categories
Software

URLSession and Async/Await

My podcast app, PeaPodder, uses a complex algorithm to scan for new content. This post is about an attempt to simplify that code using the async/await syntax that Apple added to Swift (in 2021, I believe).

For a description of the intentions behind this syntax and how to use it, I’d recommend starting with this video.

PeaPodder has a list of subscribed podcasts, and needs to periodically scan for new content. The algorithm works something like this:

  1. Set a state flag scanning to true
  2. For each podcast, starting with the first, do the following:
  3. Does it already have more than 120 minutes of new content? If so, go back to step 2 and start scanning the next podcast
  4. If not, are there any episodes that are known, but not yet downloaded? If there are, proceed to step 5, otherwise, jump to step 6
  5. Start a network request to fetch the first known episode, with a delegate to handle completion and/or errors, and exit this scan
  6. If there are no known episodes, check if it is appropriate¹ to query ListenNotes to see if there are new episodes for the current podcast. If not, go back to step 2 with the next podcast
  7. Start to fetch to see if the podcast has any new episodes, using a delegate to handle completion and/or errors. Also, exit this scan

The delegate mentioned in step 5 will receive callbacks to either add the new episode to the app data store, or report the download error. After this adding/reporting is done, it will restart the scanning process, from the beginning.

Similarly the delegate mentioned in step 7 will either successfully fetch the podcast metadata or report the fetch error. After this is done, it will restart the scanning process, again from the very beginning.

I didn’t love this algorithm for a couple of reasons.

Reason 1: starting at the top of the list each time it comes back from a fetch just feels embarrassingly clunky (inelegant?). But given how much time could elapse waiting for an episode to download, it is possible the user could modify the original list of podcasts. Given that possibility, this approach feels like the simplest ‘safe’ way to perform this task.

Reason 2 is the code complexity required to ensure all the callbacks handle all the cases correctly. The Apple video mentioned above includes a relatively simple example of this complexity. This code gets even more complex because both the podcast metadata fetch and the episode download can be performed outside the automatic scan. This means the fetch code needs to work correctly in both the automatic scanning context and a user-triggered context.

The async/await version of scan does the following:

  1. Iterate through all the podcasts
  2. Does the current podcast have enough minutes? If so, go back to step 1 with the next podcast
  3. If not, are there any known, unplayed episodes? If so, download one or more of them, either until they are all downloaded or until the podcast has enough minutes of content.
  4. If the content limit has not yet been reached, check if it is appropriate to query ListenNotes to see if the podcast has any new episodes
  5. If the query gets sent, and returns with details about one or more new episodes, download it/them, stopping if/when the podcast content limit is reached
  6. Go to the next podcast

All this gets done in one top to bottom code path. The functions that perform the network requests can be reused for fetches that occur outside the scanning process, without any additional code. Errors get handled where they occur. Depending on the error, the scan may either continue, or abort/throw. And the intention will be clear from looking at the code.
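
The shape of that code path might look something like the sketch below. Everything here is a stand-in: Podcast, Episode, contentLimit, and the helper functions are my illustrative names, not PeaPodder’s real types or API.

```swift
import Foundation

// Stand-in types; the real app's models will look different.
struct Episode { let minutes: Int }
final class Podcast {
    var minutesOfNewContent = 0
    var knownUndownloadedEpisodes: [Episode] = []
}

let contentLimit = 120  // minutes, from the algorithm description above

// Stand-ins for the real network operations.
func download(_ episode: Episode, into podcast: Podcast) async throws {
    podcast.minutesOfNewContent += episode.minutes
}
func shouldQueryListenNotes(_ podcast: Podcast) -> Bool { false }
func fetchNewEpisodes(for podcast: Podcast) async throws -> [Episode] { [] }

func scan(podcasts: [Podcast]) async throws {
    for podcast in podcasts {
        // steps 1–2: skip podcasts that already have enough content
        if podcast.minutesOfNewContent >= contentLimit { continue }

        // step 3: download known episodes until done, or the limit is reached
        for episode in podcast.knownUndownloadedEpisodes {
            try await download(episode, into: podcast)
            if podcast.minutesOfNewContent >= contentLimit { break }
        }

        // steps 4–5: optionally query ListenNotes and download what it finds
        if podcast.minutesOfNewContent < contentLimit, shouldQueryListenNotes(podcast) {
            for episode in try await fetchNewEpisodes(for: podcast) {
                try await download(episode, into: podcast)
                if podcast.minutesOfNewContent >= contentLimit { break }
            }
        }
    }
}
```

Errors thrown by the downloads simply propagate out of the loop, and there is no delegate state to restart: the for loop itself remembers where we are.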

Unfortunately my testing uncovered a PLOT TWIST. The URLSession calls that can be used in async tasks cannot be used to perform background fetches. Background configured URLSessions can only use the functions with delegate callbacks. In theory, PeaPodder could use foreground fetches, but it would mean users would need to keep the app running in the foreground while the fetching and downloading is happening.
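
To make the distinction concrete: the async convenience API works on a standard session, while a background-configured session only offers the delegate-based tasks. The identifier and delegate below are placeholders of mine, not PeaPodder’s real code.

```swift
import Foundation

// Foreground: the async convenience API (not available on background sessions)
func foregroundFetch(_ url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}

// Background: completion arrives via delegate callbacks instead
final class FetchDelegate: NSObject, URLSessionDownloadDelegate {
    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // move the downloaded file out of its temporary location here
    }
}

let config = URLSessionConfiguration.background(withIdentifier: "com.example.peapodder.scan")
let session = URLSession(configuration: config, delegate: FetchDelegate(), delegateQueue: nil)
// session.downloadTask(with: someURL).resume()
```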

So I learned a lot. I now feel quite comfortable working with async functions. I also feel I improved my understanding of the inner workings of URLSession. And I’ve shone a bright light on a portion of the PeaPodder app that had become a mysterious black hole to me. Sadly the podcast scanning function still has some gnarly complicated code.

  1. PeaPodder uses the free tier of service at ListenNotes, which limits monthly use to 1000 calls to the API. I’ve used a couple of different techniques to minimize the number of calls that get made, while maximizing the app’s ability to fetch episodes in a timely manner. I’m debating whether this might be the topic for another post. ↩︎
Categories
Hobbies Photography

Focus!

Several months ago, I took my beloved camera to a Hana soccer game, fully intending to take photos. Between the torrential rain and my parents having come to watch, I ended up not taking any photos. My camera stayed in my pack for the entire game. Sadly I hadn’t paid much attention to protecting it from the rain. (spoiler: wrapping a camera in a tattered plastic shopping bag, and putting it in an entry level day pack, will lead to heartbreak if you stand in the rain for 2 hours.)

After the game ended I took a few photos and they were all out of focus. I didn’t think much of it, but the next time I took soccer photos, they were all unsharp. And the time after that, more blurry photos. Time to take my head out of the sand. After trying multiple lenses on both camera bodies, I became reasonably confident both the body and the lens had issues.

I sent the body to Nikon (something I hate doing) but was still unsure what I should do about the lens. I’d bought some time waiting until the body came back, but soon would need to figure out how to ‘fix’ the lens. (My camera body is a Nikon D500 and the lens is an 80-200 AF 2.8)

When my body came back from Nikon, I wanted to figure out how to get my favourite lens back to being useful. My first plan was to set up my fancy test target and try different AF Fine Tune values. I eventually landed on the following setup:

  1. test target in the bright sunshine
  2. camera mounted on a tripod close enough that the target filled the frame (but far enough away that it could focus)
  3. shutter release timer

I forget the exact AF Fine Tune value that resulted from this exercise, but sadly when using the lens wide open, it still tended to focus behind the subject. 🙁

I was able to sort of work around this by:

  1. trying to focus in front of my subjects
  2. stopping down the aperture (eg F4)

But still, even with these adaptations, a large portion of my photos were blurry. Eventually, I tried a different experiment. I:

  1. took my son’s bike to a grassy field and set up the camera on a tripod 20 feet away
  2. set the zoom to 80mm (widest value)
  3. using typical settings (1/1000s, f2.8) took photos using different AF Fine Tune values (-20, -15, -10, -5, 0, 5, 10, 15, 20)
  4. Moved the camera a few different distances from the bike (35′, 45′, 70′) For the middle distances, I used multiple zoom values (typically 80mm, 125mm, 200mm) The further away I went, the longer the focal lengths I used.
  5. In all distance/focalLength combinations, I’d take photos with the same array of AF Fine Tune values

The results of this experiment were revealing. All photos were focused behind the bike. The -20 photos were generally less back-focused, but still much too blurry to be usable.

Unfortunately, the cost of repairing this lens most likely exceeds the cost of replacing it.

In happier news I have now acquired a very modern 70-200 F4 lens. Less shallow depth of field, but also less weight, and much less expensive. While I still dream of switching to mirrorless (I’d love to use a Z6iii) I’m enjoying the sharp photos I’m able to create with my 70-200 and D500. I’ve also enjoyed doing the ‘bike on grass’ experiment with different combinations of lens and body.

Categories
Software

Furigana in SwiftUI (4)

This is part 4. The previous episode is here.

To quickly recap, we are now able to layout text that looks like this:

Hello aka Konnichi wa aka 今日は aka こんにちは
こんにち is furigana that tells readers how to pronounce 今日

We accomplish this by passing an array of (mainText, furigana) pairs into our container view.

But how can we generate this array? For each entry in our strings files we typically have a romaji version and a kana/kanji version. For example “To go” has a romaji version: “iku” and a kana/kanji version: “行く”

I ended up building a two step process:

  1. convert the romaji to hiragana
  2. ‘line up’ the hiragana only string with the kana/kanji string, and infer which hiragana represented the kanji

Aside: I originally imagined a markdown scheme to represent which furigana would decorate which text. Something like this: [[今日((こんにち))]][[は]]

I eventually realized this markdown format wasn’t adding any value, so instead I can generate the (hiragana, kana/kanji) pairs directly from the inputs from the strings files.

Romaji to Hiragana

In an acronym: TDD. Test-driven development was essential to accomplishing this conversion. One of the bigger challenges here is the fact that there is some ambiguity in converting from romaji to hiragana. Ō can mean おお or おう. Ji can mean じ or ぢ. Is tenin てにん or てんいん?

I started using the following process:

  1. start with the last character, and iterate through to the first
  2. at each character, prepend it to any previously unused characters
  3. determine if this updated string of characters mapped to a valid hiragana
  4. if not, assume the previous string of characters did map to a valid hiragana and add it to the final result
  5. remove the used characters from the string of unused characters
  6. go to step 2 and grab the ‘next’ character

Consider the following example: ikimasu

  1. does u have a hiragana equivalent? yup: う
  2. grab another character, s. does su have a hiragana? yup: す
  3. grab another character, a. does asu have a hiragana? nope
  4. add す to our result string, and remove su from our working value
  5. does a have a hiragana? yup: あ
  6. grab the next romaji character, m. does ma have a hiragana? yup: ま
  7. etc.
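
The walk-through above can be sketched in code. This is a minimal version of my approach, assuming a lookup table that covers only the syllables needed for ikimasu; the real table would hold the whole syllabary (plus the digraph and double-consonant wrinkles discussed below).

```swift
// An intentionally tiny table, just enough for the worked example.
let romajiToHiragana: [String: String] = [
    "i": "い", "ki": "き", "ma": "ま", "su": "す", "u": "う", "a": "あ", "n": "ん",
]

func hiragana(fromRomaji romaji: String) -> String {
    var result = ""
    var pending = ""  // romaji characters not yet committed to a kana
    for ch in romaji.reversed() {
        let candidate = String(ch) + pending
        if romajiToHiragana[candidate] != nil {
            pending = candidate  // still a valid (longer) syllable; keep growing
        } else {
            // the previous pending string was the longest valid syllable
            if let kana = romajiToHiragana[pending] {
                result = kana + result
            }
            pending = String(ch)
        }
    }
    if let kana = romajiToHiragana[pending] {  // flush the final syllable
        result = kana + result
    }
    return result
}
```

hiragana(fromRomaji: "ikimasu") returns いきます, matching the walk-through. (Error handling for romaji that maps to nothing is omitted here.)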

There were other wrinkles that came up right away. They included

  • how to handle digraphs like kya, sho, chu (きゃ, しょ, ちゅ)
  • handling longer consonants like the double k in kekkon with っ

Handling these wrinkles often forced me to refactor my algorithm. But thanks to my ever growing collection of TDD test cases, I could instantly see if my courageous changes broke something. I was able to refactor mercilessly which was very freeing.

Writing this, I pictured a different algorithm where step 1 is breaking a string into substrings where each substring ends in a vowel. Then each substring could probably be converted directly using my romaji -> hiragana dictionary. This might be easier to read and maintain. Hmm..
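
The splitting step of that alternative could look something like this sketch; each resulting chunk would then go through the romaji -> hiragana dictionary.

```swift
// Split romaji into chunks that each end at a vowel. Digraphs (kya),
// doubled consonants (kekkon) and a trailing n would still need care.
func vowelChunks(_ romaji: String) -> [String] {
    var chunks: [String] = []
    var current = ""
    for ch in romaji {
        current.append(ch)
        if "aeiou".contains(ch) {
            chunks.append(current)
            current = ""
        }
    }
    if !current.isEmpty { chunks.append(current) }  // e.g. a trailing n
    return chunks
}
```

vowelChunks("ikimasu") gives ["i", "ki", "ma", "su"], which maps straight to いきます.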

Furigana-ify my text

This felt like one of those tasks that is easy for humans to do visually, but hard to solve with a program.

When we see:

and:

みせ に  い きます

店 に  行 きます

humans are pretty good at identifying which chunks of hiragana represent the kanji below. In the happy path, it’s fairly easy to iterate through the two strings and generate the (furigana, mainText) pairs.
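
Here is a happy-path sketch of that iteration (my reconstruction, not the app’s exact code): kana in the display string match the reading one-for-one, and a run of kanji soaks up reading characters until the next kana that appears in both strings.

```swift
import Foundation

extension Character {
    // The same Unicode test shown later in this post.
    var isKanji: Bool {
        let result = try? /\p{Script=Han}/.firstMatch(in: String(self))
        return result != nil
    }
}

// Happy path only: the messy inputs described below would trip this up.
func furiganaPairs(reading: String, display: String) -> [(mainText: String, furigana: String)] {
    var pairs: [(mainText: String, furigana: String)] = []
    let readingChars = Array(reading)
    var r = 0
    let displayChars = Array(display)
    var d = 0
    while d < displayChars.count {
        if displayChars[d].isKanji {
            var kanjiRun = ""
            while d < displayChars.count, displayChars[d].isKanji {
                kanjiRun.append(displayChars[d])
                d += 1
            }
            // Consume reading characters until the display's next kana.
            let stopAt: Character? = d < displayChars.count ? displayChars[d] : nil
            var furigana = ""
            while r < readingChars.count, readingChars[r] != stopAt {
                furigana.append(readingChars[r])
                r += 1
            }
            pairs.append((mainText: kanjiRun, furigana: furigana))
        } else {
            pairs.append((mainText: String(displayChars[d]), furigana: ""))
            d += 1
            r += 1
        }
    }
    return pairs
}
```

For the example above, furiganaPairs(reading: "みせにいきます", display: "店に行きます") pairs 店 with みせ and 行 with い, with the remaining kana carrying no furigana.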

But sadly my input data was not free of errors. There were cases where my furigana didn’t match my romaji. Also some strings included information in brackets. eg. some languages have masculine and feminine versions of adjectives. So if a user was going from Japanese to Croatian, the Japanese string would need to include gender, so the romaji might look like Takai (M) and the kana/kanji version would be 高い (男).

Sometimes this meant cleaning up the input data. Sometimes it meant tweaking the romaji to hiragana conversion. Sometimes it meant tweaking the furigana generation process. In all cases, thanks to my TDD mindset, it meant creating at least one new test case. I loved the fact that I was able to refactor mercilessly and be confident I wasn’t creating any regressions.

This post has been more hand wavy than showing specific code examples, but I did come across one code thing I want to share here.

extension Character {
    var isKanji: Bool {
    // HOW???
    }
}

For better or worse, the answer required some unicode kookiness…

extension Character {
    var isKanji: Bool {
        let result = try? /\p{Script=Han}/.firstMatch(in: String(self))
        return result != nil
    }
}

Implementing similar functionality in String is left as an exercise for the reader.

Alternatively, isHiragana is a more contained problem to solve

    var isHiragana: Bool {
        ("あ"..."ん").contains(self)
    }
Categories
Meta

The Relief of Breathing Out

When I find myself with a few quiet minutes, I’ll often practice a breathing technique.

  1. Breathe in for 4 seconds
  2. Hold for 4 seconds
  3. Breathe out for 4 seconds
  4. Hold for 4 seconds
  5. goto 1

Usually when I’m doing this, holding for 4 seconds feels fine, but sometimes, my lungs are yelling ‘C’mon brain! Breathe!’ But even on the days when holding for 4 seconds feels fine, there is a sense of relief when I start breathing again. I recently noticed something odd in that relief.

The relief feels the same when I breathe in as when I breathe out. This didn’t initially make sense to me. I get why I feel relief in the inhale. My lungs are getting a fresh new supply of O2. Let the feast begin.

But when I breathe in and hold for a few seconds, then exhale, I feel a sense of relief. What’s going on there? As far as my lungs can tell, there is no fresh supply of O2.

I’m very tempted to get teleological and imagine that my body ‘knows’ that breathing out is a necessary step to be able to inhale.

I think ‘results-oriented’ is a controversial concept. Meeting an objective is probably better than not completing something. But what if the situation has changed and the objective no longer makes sense? Perhaps accomplishing the original goal might now be detrimental. Also (as recognized by our clever lungs) there can be steps that don’t accomplish the ultimate goal, but are essential in clearing the way for us to achieve our results.

Spare a grateful thought for the lowly exhale. Even though it doesn’t bring any O2 into our thirsty alveoli, it clears the way for inhalation to waltz in and be the respiratory hero.

Not surprisingly our bodies are not being clever. Our brain gets panicky when the concentration of CO2 in our lungs goes up. That is what happens when we hold our breath. When we exhale we expel CO2, or more specifically reduce its concentration. Our bodies aren’t cleverly realizing exhalation is getting us closer to the O2. On the other hand having our bodies get panicky when CO2 is too prevalent sounds like a clever design decision.

Categories
Software

Furigana in SwiftUI (3)

This is part 3. The previous episode is here.

To quickly recap, we want to layout text that looks like this:

Hello aka Konnichi wa aka 今日は aka こんにちは
こんにち is furigana that tells readers how to pronounce 今日

By putting VStacks in an HStack, we’ve been able to create this:

At this point, I see two possible next steps:

  1. what happens when there’s more text than can fit on one line?
  2. the furigana is SO BIG! (or is the main text too small?)

I held a vote and decided to start with issue #2, fix the sizing. The ten years ago version of me would have been more than happy to do something like:

    let furiGanaSize: CGFloat = 10
    let mainTextSize: CGFloat = 24
    var body: some View {
        VStack(alignment: .center){
            Text(text.furiGana)
               .font(.system(size: furiGanaSize))
            Text(text.mainText)
               .font(.system(size: mainTextSize))
        }
    }

Define default size values that users can override. While this would work, it didn’t feel like the most intuitive way to let users specify the size of their text. It would also prevent use of Dynamic Type sizes, eg .font(.body)

What I really want is:

  1. Let users specify the font and size of the main text using all the usual existing SwiftUI font specification approaches
  2. measure the height of main text (mainTextHeight)
  3. use mainTextHeight to compute a desired height for the furigana text
    let furiganaHeight = mainTextHeight * 0.5

GeometryReader?

That was my naive first thought. Do something like:

VStack() {
    Text(model.furiganaText)
     .font(.system(size: furiganaSize))
    GeometryReader() { proxy in
        Text(model.mainText)
         .preference(key: ViewSizeKey.self, value: proxy.size)
         .onPreferenceChange(ViewSizeKey.self) {
           furiganaSize = $0.height * 0.4
         }
    }
}

Sadly that gets a nope. GeometryReader‘s greediness means all the elements get as wide and tall as possible. I’m still not clear on why GeometryReader needs to be so greedy. Why not just take the rect that the contents say they need? Fool me once GeometryReader, shame on you. Fool me twice, shame on me.

Layout Container?

Hell yes. I have had many positive experiences creating containers that conform to Layout. How did it do with my furigana requirements? It did exactly what I needed. sizeThatFits does the following:

  1. determine the required size for the mainText
  2. pass the mainText height into a Binding<CGFloat>
  3. determine the size for the furigana text
  4. calculate the total height (mainTextHeight + furiganaHeight + spacing)
  5. calculate the width (max(mainTextWidth, furiganaWidth))
    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
        guard subviews.count == 2 else {
            return .zero
        }
        let furiganaSize = subviews[0].sizeThatFits(.unspecified)
        let bodySize = subviews[1].sizeThatFits(.unspecified)
        DispatchQueue.main.async {
            bodyHeight = bodySize.height
        }
        let spacing = subviews[0].spacing.distance(to: subviews[1].spacing, along: .vertical)
        let height = furiganaSize.height + bodySize.height + spacing
        let width = max(furiganaSize.width, bodySize.width)
        return .init(width: width, height: height)
    }

placeSubviews performs similar steps:

  1. determine (again) the sizes for the furigana text and the main text
  2. create size proposals (one for furigana text, the other for the main text)
  3. place the furigana text above the main text, using the size proposals created in the previous step
    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
        guard subviews.count == 2 else {
            return
        }
        let furiganaSize = subviews[0].sizeThatFits(.unspecified)
        let bodySize = subviews[1].sizeThatFits(.unspecified)
        let spacing = subviews[0].spacing.distance(to: subviews[1].spacing, along: .vertical)
        
        let furiganaSizeProposal = ProposedViewSize(furiganaSize)
        let mainTextSizeProposal = ProposedViewSize(bodySize)
        var y = bounds.minY + furiganaSize.height / 2
        
        subviews[0].place(at: .init(x: bounds.midX, y: y), anchor: .center, proposal: furiganaSizeProposal)
        y += furiganaSize.height / 2 + spacing + bodySize.height / 2
        subviews[1].place(at: .init(x: bounds.midX, y: y), anchor: .center, proposal: mainTextSizeProposal)
    }

TextElement wraps a FuriganaContainer and displays a single element of the text. Its code is shown below.

struct TextElement: View {
    
    let textModel: TextWithFuriGana
    @State var bodyHeight: CGFloat = 0
    
    var body: some View {
        FuriganaContainer(bodyHeight: $bodyHeight) {
            Text(textModel.furiGana)
                .font(.system(size: bodyHeight * 0.5))
            Text(textModel.mainText)
        }
    }
}

The parent function looks something like this

struct TextArray: View {
    let fullText: [TextWithFuriGana]

    var body: some View {
        HStack(alignment: .bottom, spacing: 0) {
            ForEach(fullText) { text in
                    TextElement(textModel: text)
            }
        }
    }
}

TextArray gets instantiated something like this

struct ContentView: View {
    var body: some View {
        VStack {
            TextArray(fullText: TextWithFuriGana.testArray)
                .font(.largeTitle)
        }
    }
}

One topic that hasn’t been discussed yet is the code that determines/creates the furigana text. This ended up being an interesting and challenging task that I’ll discuss in my next post.

Categories
Software

Furigana in SwiftUI (2)

This is part 2. The previous episode is here.

To quickly recap: We want to layout text that looks like this:

Hello aka Konnichi wa aka 今日は aka こんにちは
こんにち is furigana that tells readers how to pronounce 今日

When I began looking into implementation options, all roads seemed to be leading toward an AttributedString in a UILabel. There are a variety of descriptions of CoreText support for something called Ruby Text. There is also a markdown scheme for expressing Ruby text, and sample code showing how to tickle the belly of CoreText to lay out furigana text. I was not able to get any CoreText furigana working, and there are likely two reasons for this:

  1. There are reports (not substantiated by me) that Apple removed Ruby Text support from Core Text. This seems like an odd thing, but definitely plausible
  2. I really wanted to do something that was SwiftUI-ish (SwiftUI-y?)

It’s quite possible that if I’d kept hammering away at CoreText I’d have got it to work. My official excuse is my heart wasn’t in it.

To implement this in SwiftUI my first approach included:

  1. a markdown scheme, eg “おはようございます。<<今日((こんにち))>>は。”
  2. code to convert markdown strings (like the one above) into an array of model objects
    struct TextModel {
        let mainText: String
        let furiGana: String
    }
  3. UI code that takes in an array of TextModels and displays them in collection of VStacks in an HStack
struct TextCollection: View {
    let textElements: [TextModel]
    init(markdown: String) {
        self.textElements = markdown.convertedToElements
    }
    var body: some View {
        HStack() {
            ForEach(textElements) { element in
                VStack() {
                    Text(element.furiGana)
                    Text(element.mainText)
                }
            }
        }
    }
}
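
For completeness, here is one way the convertedToElements helper might be written, using a Swift Regex to pick apart the <<mainText((furigana))>> scheme. This is my sketch, not the app’s actual parser (and the real TextModel presumably also conforms to Identifiable so ForEach can use it).

```swift
import Foundation

// Mirrors the TextModel struct shown earlier in this post.
struct TextModel {
    let mainText: String
    let furiGana: String
}

extension String {
    // Plain runs become TextModels with empty furigana;
    // <<main((furi))>> runs carry their furigana along.
    var convertedToElements: [TextModel] {
        var elements: [TextModel] = []
        var rest = Substring(self)
        let pattern = /<<(.+?)\(\((.+?)\)\)>>/
        while let match = rest.firstMatch(of: pattern) {
            if match.range.lowerBound > rest.startIndex {
                elements.append(TextModel(mainText: String(rest[..<match.range.lowerBound]),
                                          furiGana: ""))
            }
            elements.append(TextModel(mainText: String(match.1), furiGana: String(match.2)))
            rest = rest[match.range.upperBound...]
        }
        if !rest.isEmpty {
            elements.append(TextModel(mainText: String(rest), furiGana: ""))
        }
        return elements
    }
}
```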

The result looks like this:

Not yet perfect, but a great start!

In the next part, I’ll discuss my journey to figure out how to size the furigana as a function of the size of the main text. Stay tuned!

Categories
Software

Furigana in SwiftUI

I have been thinking about implementing Furigana in my Flash Cards App for a while now. Until recently it had definitely not been a high priority. But I am in the process of adding some Japanese vocab where I’m using kanji. So far I’ve only been adding the really basic characters, but still… In order to have this app be accessible to all users, adding furigana support has recently become more important.

What the heck is Furigana?!

Written Japanese uses four different types of characters. They are:

  1. Kanji – These are pictograms where each character represents an idea and can have multiple pronunciations. For example, 月 is the character used to express month, and moon. Sometimes it will be pronounced getsu, other times tsuki etc. It can be combined with other characters in longer ‘words’. 月曜日 means Monday. I find it quite interesting that the Japanese word for Monday literally translates to Moon Day. There are thought to be over 50,000 unique Kanji. To be considered vaguely literate, the consensus suggests you need to know how to read and write 1,500 to 2,000 Kanji characters. Yikes!
  2. Hiragana – This is the most common Japanese alphabet. It is composed of approximately 50 characters, all representing specific sounds. Each hiragana character represents a single pronunciation, and the sound of each character is also its name. For example, KA (か) is always pronounced ‘ka.’ This might be a mind-blowing concept to speakers of English (Looking at you, W). Young kids start out reading and writing hiragana. Over time they use more Kanji (see above). Where a beginning Japanese student might write: くうこうにいきます (kuukou ni ikimasu), an adult would more likely write: 空港に行きます. Both of these mean ‘I am going to the airport’ but the first text is all hiragana while the second is a combination of kanji and hiragana
  3. Katakana – This is an alphabet, similar to hiragana, but only used for words that have been introduced to Japanese from other languages: ピアノ (Piano), カラオケ (Karaoke, an interesting word in many ways), ホテル (Hotel)
  4. Romaji – Yet another alphabet that uses the English/western alphabet to express the same sounds available with Hiragana and Katakana. For example SA (romaji) is pronounced the same as さ (hiragana) is pronounced the same as サ (katakana) I think Romaji is primarily used for creating content aimed at non-Japanese speakers (eg maps). However it is also common to see ads aimed at Japanese speakers use romaji, cuz in some circles in Japan, Western/English stuff is seen as ‘cool.’

Great, but I thought this was about furigana…

It is! Furigana is hiragana written above or beside kanji to let readers know how it should be pronounced. (Remember there are thought to be over 50,000 kanji characters and they frequently have multiple context-dependent pronunciations. As a rule of thumb, the more common the kanji the greater the number of different ways it can be pronounced.)

Here is a simple example of furigana

Hello aka Konnichi wa aka 今日は aka こんにちは
こんにち is furigana that tells readers how to pronounce 今日

One other point that bears mentioning is that Japanese can be written in rows (left to right, top to bottom) or in columns (top to bottom, right to left). I’m currently under the impression that SwiftUI only supports the row-based text layout. For the moment, I’m going to focus on row-based layout and ignore column-based layout.

If you want to learn more about what I’ve described here, you could do worse than going to Wikipedia.

The next post will jump into the SwiftUI implementation.

Categories
Software

A Situation Where It Makes Sense to Have Two Sources of Truth?

Don’t be fooled by the title. Writing code where a given value is stored in two different ways (in two different places) feels wrong to me too. But at the moment, it feels like solving my current problem with a single source of truth would result in more complex code.

Background; What am I trying to do?

In my ZoomBurst photo editing extension, the zooming and rotation effects currently use the centre of the image rect as the centre of the effect. For quite some time, I’ve been curious about what it would take to allow users to specify a centre point for the effects. (My current thinking is to let users specify a single point to use as the centre for both zoom and rotation. But never say never…)

How am I Trying to Do it?

In broad strokes, two things need to happen to add this feature.

  1. add UI to allow users to specify the custom centre point
  2. CoreImage filters need to be updated to support effectCenter not being the same as imageCenter

For the UI, I’m adding an overlay marker to the output image. Users can change the centre by dragging this marker around. In SwiftUI, the marker position is updated using the .offset() modifier on View.

I defined a @State variable to store the offset. It will be (0,0) at the top left corner and (imageSize.width, imageSize.height) at the bottom right.

    @State var currentOffset: CGSize = .zero

    func marker(inside size: CGSize) -> some View {
        return Circle()
            .stroke(.black, lineWidth: 10)
            .stroke(.white, lineWidth: 6)
            .frame(width: Self.markerDimension)
            .offset(currentOffset)
    }

So far so good. However the CoreImage code cannot use this because imageSize will be different for preview output vs final image output. So the CombineOptions structure needs to store the centre value using a UnitVector size (ie height and width will be in the range [0..1])

While it’s technically possible to rely on the unitVector size value in CombineOptions, this would create the need for quite a bit of translating between [0..1] and [0..imageSize]. First, there is the value passed to the offset function. But there is also a surprising amount of logic to prevent users from dragging the marker off the side of the preview image.
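
For a sense of that overhead, here is roughly what the back-and-forth would look like. The extension and names are mine for illustration, not ZoomBurst’s actual code.

```swift
import CoreGraphics

// Hypothetical helpers for hopping between the two domains.
extension CGSize {
    // [0..imageSize] -> [0..1]
    func asUnitVector(in imageSize: CGSize) -> CGSize {
        CGSize(width: width / imageSize.width, height: height / imageSize.height)
    }
    // [0..1] -> [0..imageSize]
    func fromUnitVector(in imageSize: CGSize) -> CGSize {
        CGSize(width: width * imageSize.width, height: height * imageSize.height)
    }
}
```

Every offset() call, drag clamp, and marker-radius adjustment would need one of these conversions, which is exactly the jumping back and forth I wanted to avoid.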

 DragGesture()
     .onChanged { gesture in
         let proposedOffset = gesture.translation + baseOffset
         currentOffset = proposedOffset.validSize(using: markerRect)
     }



    func validSize(using rect: CGRect) -> CGSize {
        if rect.contains(self.asPoint) {
            return self
        }
        var result: CGSize = self
        if self.width < 0 {
            result.width = 0
        } else if self.width > rect.width {
            result.width = rect.width
        }
        if self.height < 0 {
            result.height = 0
        } else if self.height > rect.height {
            result.height = rect.height
        }
        return result
    }

Should users be able to drag the marker so that it is half off the preview image? (ie the centre is right on the image edge?) Or will they be OK only dragging the marker to the point where it is still entirely on top of the preview image? All of this code requires adding and subtracting half the width of the marker view in a surprising number of places. The code to do this is much easier to understand if it all takes place in the [0..imageSize] domain, rather than the [0..1] domain.

I’m not saying the more complex thing couldn’t be done. But I was imagining somebody coming into this code 6 months or 6 years from now. I feared that the brain power to understand the code jumping between the two domains would beg the question… ‘why is this jumping back and forth between [0..imageSize] and [0..1] so much?’

In order to minimize the chance of the two values getting out of sync I created a single place in the code where one value gets changed, and it updates the [0..1] value. Come to think of it, this would be a good excuse to add a Psst comment explaining this conundrum to future me or anyone else that happens to have the good fortune to be reading this code in the future. Maybe even link to this post.

Another thought I’ve just had (and don’t think I have enough functioning neurons this late at night to properly tackle) is how would something like this fly in an environment where others are reviewing my code. I feel like there would be a (justifiable) tendency among reviewers to be skeptical of my decision to use two sources of truth. I also suspect my attempts to defend it would be more qualitative than quantitative. And I’m not entirely sure how it would be resolved. I’d like to think it would be more than just a battle of wills because I’m not a fan of battles of wills.

Categories
Meta

A Good Old-Fashioned Internet Rabbit Hole

Prevailing wisdom suggests any chunks of time spent online will be steeped in outrage, anger and division. I recently spent a bit of time online and experienced none of the above. As an added bonus I learned quite a few apparently unrelated facts.

When is a condominium not a condominium? What is the email address for the Austrian Consul-General in BC? And what country governs Pheasant Island?

These are just some of the questions whose answers I learned on my recent random walk. And if you manage to read to the end of this post, you too will learn the answers to these questions and so much more…

It all started when I wanted to find out if there were any diplomatic offices in the building at 800 Pender. (But I’ll save that story for another day.) It turns out the BC Provincial Government keeps a comprehensive list of all consular offices in the province. Well done BC Government.

It turned out there are no consular offices at 800 Pender, but holy cow, the email addresses were all over the map. To be fair, about half of them were explicitly affiliated with the home country government. eg, El Salvador

Of the remainder, they seemed to fall into two categories. In the first, the email seemed to be the personal email of the consul general, using one of the usual providers (eg. gmail, outlook, shaw.ca etc.) Well done Jamaica, scoring a free sfu email address.

The second category is email addresses associated with a website that is to some degree dedicated to the consular office. eg Bosnia and Herzegovina (BiH).

This got me wondering, what do I see if I go to bcbih.com? It’s a pretty snazzy website, with sections for Tourism, the CV of the Consul, investing in BiH, and investing in RS… Wait, what the heck is RS? Oh, the Republic of Srpska. Wait, what the heck is the Republic of Srpska, and who stole all their vowels?!

It turns out (according to Wikipedia) Republic of Srpska (RS) is an entity within BiH. It further turns out BiH contains 2 entities of roughly the same geographical area:

  • Republic of Srpska (primarily populated by ethnic Serbs)
  • Federation of Bosnia and Herzegovina (primarily populated by Bosniaks, and to a lesser extent Croats)

I was definitely not aware that BiH was divided geographically and ethnically in this way. If somebody told me Bosnia and Herzegovina was composed of two approximately same-sized entities, I’d have guessed one was Bosnia, the other Herzegovina. Wrong! Bosnia and Herzegovina is composed of Srpska and Bosnia Herzegovina. I had so many questions here, but I got distracted by Brčko. (pink in the map above)

It turns out Brčko is a condominium. Wait, what? According to Wikipedia, this flavour of condominium is defined as: a territory … in or over which multiple sovereign powers formally agree to share equal dominium … and exercise their rights jointly, without dividing it into “national” zones.

It’s like joint custody, but for countries instead of parents. Antarctica and Post-WWII Germany are two high profile examples of condominia. Wikipedia includes an impressive list of the current condominia. The one that caught my eye was Pheasant Island.

Pheasant Island is near the mouth of a river (Bidasoa) that defines the border of France and Spain. According to Wikipedia, Pheasant Island became a condominium when the Treaty of the Pyrenees was signed in 1659. According to the treaty the island lives with Spain from 1 February until 31 July. For the remainder of the year (1 August until 31 January) it lives with papa France.

In a world where things seem fairly firmly bolted down, geopolitically, I love the fact that there are quirky things like islands that switch from one country to another every six months.

Again, according to Wikipedia “there are no pheasants on Pheasant Island.” I wonder if there are any condos…

Categories
Software

A Simple Way to Conform to Identifiable Without adding an ID to a Legacy Data Model Object

As the title suggests, I have a mature data model, and I want to add conformance to Identifiable but (for backwards compatibility reasons) not actually add an id instance variable.

My model object in question already implements Hashable, so plan A was to use the hashValue to generate the id. Sadly, hashValue is an Int and my id needs to be a UUID. So how does one convert an Int to a UUID?

As is often the case, there was a discussion on StackOverflow offering alternatives. The option that leapt out at me introduced me to two Swift language features I had never used before:

  1. withUnsafeBytes
  2. load(fromByteOffset: as:)

withUnsafeBytes is an Array method that (I believe) exposes all the bytes in the array as a raw buffer. load will pour those bytes into a new value whose type is specified by the as: parameter. As somebody who has done a small amount of C and ZERO C++, this code feels very foreign to me.
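To get a feel for the pair, here is a tiny self-contained example (unrelated to the post’s model) that reinterprets four bytes as a UInt32:

```swift
// Four UInt8 values occupy the same 4 bytes as a single UInt32.
let bytes: [UInt8] = [0x01, 0x00, 0x00, 0x00]

// withUnsafeBytes exposes the array's storage as a raw buffer;
// load(as:) reads MemoryLayout<UInt32>.size bytes starting at
// offset 0 and reinterprets them as a UInt32.
let value = bytes.withUnsafeBytes { raw in
    raw.load(as: UInt32.self)
}

// On a little-endian platform (all current Apple hardware), value == 1
```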

I chose to solve my problems with two pieces of code. First I extended Int like this:

extension Int {
    public var asID: UUID {
        // Pad the Int out to 16 bytes (two Int64s) to match UUID's size
        let vals: [Int64] = [Int64(self), 0]
        // Reinterpret those 16 bytes as a UUID
        return vals.withUnsafeBytes { $0.load(as: UUID.self) }
    }
}

I then added Identifiable conformance like this:

extension Thread: Identifiable {
    public var id: UUID {
        // Note: hashValue is only stable within a single run of the program
        return self.hashValue.asID
    }
}
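Putting the two pieces together, here is a sketch with a hypothetical Episode model (standing in for my real legacy object) showing that equal values produce equal ids, at least within a single run:

```swift
import Foundation

// Same Int-to-UUID trick as in the post, repeated so this demo
// compiles on its own
extension Int {
    public var asID: UUID {
        let vals: [Int64] = [Int64(self), 0]
        return vals.withUnsafeBytes { $0.load(as: UUID.self) }
    }
}

// Hypothetical model object standing in for the real legacy one
struct Episode: Hashable, Identifiable {
    let title: String
    var id: UUID { hashValue.asID }
}

let a = Episode(title: "Pilot")
let b = Episode(title: "Pilot")
// Equal values hash equally, so the derived ids match (within one run)
assert(a.id == b.id)
```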

Solving this problem reminded me just how much complexity there is ‘under the hood’ in almost any code we write. I find it amazing that:

  1. There are smart people who are well versed in the (obscure?) corners of the software engineering information domain.
  2. By invoking an appropriate combination of keywords in the search bar, I can be connected to the helpful guidance created by these smart people.

Yay technology!