Categories
Software

Furigana in SwiftUI (2)

This is part 2. The previous episode is here.

To quickly recap: We want to lay out text that looks like this:

Hello aka Konnichi wa aka 今日は aka こんにちは
こんにち is furigana that tells readers how to pronounce 今日

When I began looking into implementation options, all roads seemed to lead toward an AttributedString in a UILabel. There are a variety of descriptions of CoreText support for something called Ruby Text. There is also a markdown scheme for expressing Ruby text, and sample code showing how to tickle the belly of CoreText to lay out furigana text. I was not able to get any CoreText furigana working, and there are likely two reasons for this:

  1. There are reports (not substantiated by me) that Apple removed Ruby Text support from CoreText. This seems like an odd thing, but it’s definitely plausible.
  2. I really wanted to do something that was SwiftUI-ish (SwiftUI-y?)

It’s quite possible that if I’d kept hammering away at CoreText I’d have got it to work. My official excuse is my heart wasn’t in it.

To implement this in SwiftUI my first approach included:

  1. a markdown scheme, eg “おはようございます。<<今日((こんにち))>>は。”
  2. code to convert markdown strings (like the one above) into an array of model objects
    struct TextModel: Identifiable {
        let id = UUID()
        let mainText: String
        let furiGana: String
    }
  3. UI code that takes in an array of TextModels and displays them as a collection of VStacks in an HStack
struct TextCollection: View {
    let textElements: [TextModel]
    init(markdown: String) {
        self.textElements = markdown.convertedToElements
    }
    var body: some View {
        HStack() {
            ForEach(textElements) { element in
                VStack() {
                    Text(element.furiGana)
                    Text(element.mainText)
                }
            }
        }
    }
}
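The convertedToElements helper isn’t shown in the post, so here is a minimal sketch of how the markdown-to-model conversion could work. The parsing approach here is my own, assuming the <<main((furigana))>> scheme described above, and it restates the model with the Identifiable conformance the ForEach needs.

```swift
import Foundation

struct TextModel: Identifiable {
    let id = UUID()
    let mainText: String
    let furiGana: String
}

extension String {
    // Splits "おはよう<<今日((こんにち))>>は" into TextModel elements,
    // pairing each annotated run with its furigana and leaving plain
    // runs with an empty furigana string.
    var convertedToElements: [TextModel] {
        var result: [TextModel] = []
        var remainder = Substring(self)
        while let open = remainder.range(of: "<<"),
              let furiOpen = remainder.range(of: "((", range: open.upperBound..<remainder.endIndex),
              let close = remainder.range(of: "))>>", range: furiOpen.upperBound..<remainder.endIndex) {
            // Any plain text before the annotation has no furigana.
            let plain = remainder[..<open.lowerBound]
            if !plain.isEmpty {
                result.append(TextModel(mainText: String(plain), furiGana: ""))
            }
            let main = String(remainder[open.upperBound..<furiOpen.lowerBound])
            let furi = String(remainder[furiOpen.upperBound..<close.lowerBound])
            result.append(TextModel(mainText: main, furiGana: furi))
            remainder = remainder[close.upperBound...]
        }
        if !remainder.isEmpty {
            result.append(TextModel(mainText: String(remainder), furiGana: ""))
        }
        return result
    }
}
```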

The result looks like this:

Not yet perfect, but a great start!

In the next part, I’ll discuss my journey to figure out how to size the furigana as a function of the size of the main text. Stay tuned!


Furigana in SwiftUI

I have been thinking about implementing Furigana in my Flash Cards App for a while now. Until recently it has definitely not been a high priority. But I am in the process of adding some Japanese vocab where I’m using kanji. So far I’ve only been adding the really basic characters, but still… In order to have this app be accessible to all users, adding furigana support has recently become more important.

What the heck is Furigana?!

Written Japanese uses four different types of characters. They are:

  1. Kanji – These are pictograms where each character represents an idea and can have multiple pronunciations. For example, 月 is the character used to express month and moon. Sometimes it will be pronounced getsu, other times tsuki, etc. It can be combined with other characters in longer ‘words’: 月曜日 means Monday. I find it quite interesting that the Japanese word for Monday literally translates to Moon Day. There are thought to be over 50,000 unique kanji. To be considered vaguely literate, the consensus suggests you need to know how to read and write 1,500 to 2,000 kanji characters. Yikes!
  2. Hiragana – This is the most common Japanese alphabet. It is composed of approximately 50 characters, each representing a specific sound. Each hiragana character represents a single pronunciation, and the sound of each character is also its name. For example, KA (か) is always pronounced ‘ka.’ This might be a mind-blowing concept to speakers of English (looking at you, W). Young kids start out reading and writing hiragana, and over time they use more kanji (see above). Where a beginning Japanese student might write くうこうにいきます (kuukou ni ikimasu), an adult would more likely write 空港に行きます. Both of these mean ‘I am going to the airport,’ but the first text is all hiragana while the second is a combination of kanji and hiragana.
  3. Katakana – This is an alphabet, similar to hiragana, but only used for words that have been imported into Japanese from other languages: ピアノ (Piano), カラオケ (Karaoke, an interesting word in many ways), ホテル (Hotel).
  4. Romaji – Yet another alphabet, this one using the English/western alphabet to express the same sounds available with hiragana and katakana. For example, SA (romaji) is pronounced the same as さ (hiragana), which is pronounced the same as サ (katakana). I think romaji is primarily used for creating content aimed at non-Japanese speakers (eg maps). However it is also common to see ads aimed at Japanese speakers use romaji, cuz in some circles in Japan, Western/English stuff is seen as ‘cool.’

Great, but I thought this was about furigana…

It is! Furigana is hiragana written above or beside kanji to let readers know how it should be pronounced. (Remember there are thought to be over 50,000 kanji characters and they frequently have multiple context-dependent pronunciations. As a rule of thumb, the more common the kanji the greater the number of different ways it can be pronounced.)

Here is a simple example of furigana:

Hello aka Konnichi wa aka 今日は aka こんにちは
こんにち is furigana that tells readers how to pronounce 今日

One other point that bears mentioning is that Japanese can be written in rows (left to right, top to bottom) or in columns (top to bottom, right to left). I’m currently under the impression that SwiftUI only supports the row-based text layout. For the moment, I’m going to focus on row-based layout and ignore column-based layout.

If you want to learn more about what I’ve described here, you could do worse than going to Wikipedia.

The next post will jump into the SwiftUI implementation.


A Situation Where It Makes Sense to Have Two Sources of Truth?

Don’t be fooled by the title. Writing code where a given value is stored in two different ways (in two different places) feels wrong to me too. But at the moment, it feels like solving my current problem with a single source of truth would result in more complex code.

Background; What am I trying to do?

In my ZoomBurst photo editing extension, the zooming and rotation effects currently use the centre of the image rect as the centre of the effect. For quite some time, I’ve been curious about what it would take to allow users to specify a centre point for the effects. (My current thinking is to let users specify a single point to use as the centre for both zoom and rotation. But never say never…)

How am I Trying to Do it?

In broad strokes, two things need to happen to add this feature.

  1. add UI to allow users to specify the custom centre point
  2. update the CoreImage filters to support effectCenter not being the same as imageCenter

For the UI, I’m adding an overlay marker to the output image. Users can change the centre by dragging this marker around. In SwiftUI, the marker position is updated using the .offset() modifier on View.

I defined a @State variable to store the offset. It will be (0,0) at the top left corner and (imageSize.width, imageSize.height) at the bottom right.

    @State var currentOffset: CGSize = .zero

    func marker(inside size: CGSize) -> some View {
        return Circle()
            .stroke(.black, lineWidth: 10)
            .stroke(.white, lineWidth: 6)
            .frame(width: Self.markerDimension)
            .offset(currentOffset)
    }

So far so good. However the CoreImage code cannot use this value directly, because imageSize will be different for preview output vs final image output. So the CombineOptions structure needs to store the centre value as a unit-vector size (ie height and width will be in the range [0..1]).

While it’s technically possible to rely on only the unit-vector size value in CombineOptions, this would create the need for quite a bit of translating between [0..1] and [0..imageSize]. First there is the value passed to the offset function. But there is also a surprising amount of logic to prevent users from dragging the marker off the side of the preview image.

 DragGesture()
     .onChanged { gesture in
         let proposedOffset = gesture.translation + baseOffset
         currentOffset = proposedOffset.validSize(using: markerRect)
     }
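One small wrinkle: CGSize has no built-in + operator, so the gesture.translation + baseOffset line above assumes a helper along these lines (baseOffset being the marker’s offset at the start of the drag):

```swift
import Foundation

// CGSize doesn't come with arithmetic operators; this makes
// "gesture.translation + baseOffset" in the drag handler compile.
func + (lhs: CGSize, rhs: CGSize) -> CGSize {
    CGSize(width: lhs.width + rhs.width, height: lhs.height + rhs.height)
}
```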



extension CGSize {
    // Clamp a proposed offset so the marker stays inside the given rect.
    // (asPoint is a small helper that converts a CGSize to a CGPoint.)
    func validSize(using rect: CGRect) -> CGSize {
        if rect.contains(self.asPoint) {
            return self
        }
        var result: CGSize = self
        if self.width < 0 {
            result.width = 0
        } else if self.width > rect.width {
            result.width = rect.width
        }
        if self.height < 0 {
            result.height = 0
        } else if self.height > rect.height {
            result.height = rect.height
        }
        return result
    }
}

Should users be able to drag the marker so that it is half off the preview image (ie the centre is right on the image edge)? Or will they only be able to drag the marker to the point where it is still entirely on top of the preview image? All of this code requires adding and subtracting half the width of the marker view in a surprising number of places. The code to do this is much easier to understand if it all takes place in the [0..imageSize] domain, rather than the [0..1] domain.

I’m not saying the more complex thing couldn’t be done. But I was imagining somebody coming into this code 6 months or 6 years from now. I feared that the brain power needed to understand the code would prompt the question: ‘why is this jumping back and forth between [0..imageSize] and [0..1] so much?’

In order to minimize the chance of the two values getting out of sync, I created a single place in the code where one value gets changed, and it updates the [0..1] value. Come to think of it, this would be a good excuse to add a Psst comment explaining this conundrum to future me, or anyone else that happens to have the good fortune to be reading this code in the future. Maybe even link to this post.
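As a sketch of that single update point (the names here are hypothetical, not ZoomBurst’s actual types): the UI works in pixel space, and the unit-space value is only ever derived in one place.

```swift
import Foundation

// Hypothetical sketch: the marker offset in [0..imageSize] is the value the
// UI edits; the [0..1] unit value is only ever written here, so the two
// "sources of truth" can't drift apart.
struct EffectCenter {
    let imageSize: CGSize
    private(set) var pixelOffset: CGSize = .zero   // [0..imageSize]
    private(set) var unitOffset: CGSize = .zero    // [0..1]

    init(imageSize: CGSize) {
        self.imageSize = imageSize
    }

    mutating func update(pixelOffset: CGSize) {
        self.pixelOffset = pixelOffset
        // The only line that writes the unit-space value.
        unitOffset = CGSize(width: pixelOffset.width / imageSize.width,
                            height: pixelOffset.height / imageSize.height)
    }
}
```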

Another thought I’ve just had (and don’t think I have enough functioning neurons this late at night to properly tackle) is how would something like this fly in an environment where others are reviewing my code. I feel like there would be a (justifiable) tendency among reviewers to be skeptical of my decision to use two sources of truth. I also suspect my attempts to defend it would be more qualitative than quantitative. And I’m not entirely sure how it would be resolved. I’d like to think it would be more than just a battle of wills because I’m not a fan of battles of wills.


A Simple Way to Conform to Identifiable Without adding an ID to a Legacy Data Model Object

As the title suggests, I have a mature data model, and I want to add conformance to Identifiable but (for backwards compatibility reasons) not actually add id as an instance variable.

My model object in question already implements Hashable, so plan A was to use the hashValue to generate the id. Sadly hashValue is an Int and id needs to be a UUID. So how does one convert an Int to a UUID?

As is often the case, there was a discussion on StackOverflow offering alternatives. The option that leapt out at me introduced two Swift language features I had never used before.

  1. withUnsafeBytes
  2. load(fromByteOffset: as:)

withUnsafeBytes gives its closure access to the array’s underlying storage as a raw byte buffer. load will pour the bytes into a new value whose type is specified by the as: parameter. As somebody who has done a small amount of C and ZERO C++, this code feels very foreign to me.

I chose to solve my problems with two pieces of code. First I extended Int like this:

extension Int {
    public var asID: UUID {
        // Two Int64s make 16 bytes: exactly the size of a UUID.
        let vals: [Int64] = [Int64(self), 0]
        return vals.withUnsafeBytes { $0.load(as: UUID.self) }
    }
}

I then added Identifiable conformance like this:

extension Thread: Identifiable {
    public var id: UUID {
        return self.hashValue.asID
    }
}
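A quick sanity check of the conversion (repeating the Int extension so this snippet stands alone):

```swift
import Foundation

extension Int {
    public var asID: UUID {
        // Two Int64s make 16 bytes: exactly the size of a UUID.
        let vals: [Int64] = [Int64(self), 0]
        return vals.withUnsafeBytes { $0.load(as: UUID.self) }
    }
}

// The byte reinterpretation is deterministic: equal Ints always produce
// the same UUID, and distinct Ints produce distinct UUIDs.
print(42.asID == 42.asID)    // true
print(42.asID == 43.asID)    // false
```

One caveat worth noting: Swift’s Hashable hashing is randomly seeded per process launch, so ids derived from hashValue are stable within a run but not across runs. That’s fine for Identifiable (which only needs in-memory identity), but these ids shouldn’t be persisted.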

Solving this problem reminded me just how much complexity there is ‘under the hood’ in almost any code we write. I find it amazing that:

  1. There are smart people who are well versed in the (obscure?) corners of the software engineering information domain.
  2. By invoking an appropriate combination of keywords in the search bar, I can be connected to the helpful guidance created by these smart people.

Yay technology!


SwiftUI TextField Focus

When a new view gets presented on the screen (for example a password entry view), most of the time it makes sense to start with the focus in the text field. The keyboard should be visible and the user should be able to just start typing. Sure it’s a minor inconvenience to force users to first tap in the field, and then start typing, but heck we’re not savages are we?

UIKit’s UITextField always had support for programmatically setting focus via UIResponder: [myTextField becomeFirstResponder]

I get the sense this functionality has been gradually evolving in SwiftUI. I feel like the current implementation is more complex than the UIResponder model from yesteryear. I don’t want to sound like I’m grumbling here, and I feel like the complexity is needed to stay true to the SwiftUI model (composability, single source of truth, etc.) Having said that, I did need to iterate way more than anticipated.

The first piece is the @FocusState property wrapper used on a variable in a View struct. There appear to be two ways to use FocusState.

  1. a Binding<Bool> type when there is a single View that either has or does not have focus
  2. a Binding<enum?> type where there are multiple views, and maybe one of them has focus, or maybe none of them have focus.

I should point out that the notion of focus here is broader than just text entry views. Any type of View can have focus, but the expected behaviour for focus in non-textEntry views is beyond the scope of this post.

Here is an example of how to use a Binding<Bool> FocusState

struct AddEntryView: View {
    
    @Binding var unsavedData: String
    @FocusState var addEntryHasFocus: Bool

    var body: some View {
        TextEditor(text: $unsavedData)
            .focused($addEntryHasFocus)
            .defaultFocus($addEntryHasFocus, true)
    }
}

Here is an example of how to use a Binding<enum?> FocusState

struct PasswordView: View {
    enum PasswordField {
        case secure
        case regular
    }
    @FocusState private var passwordField: PasswordField?
    @State var passwordText1: String = ""
    @State var passwordText2: String = ""
    let passwordExists: Bool

    var body: some View {
        VStack() {
            SecureField("Enter password", text: $passwordText1)
                .focused($passwordField, equals: .secure)
            TextField("Create a password", text: $passwordText2)
                .focused($passwordField, equals: .regular)
        }
        .defaultFocus($passwordField, .secure)
    }
}

Unfortunately, my use case is sort of in between these two implementation options. I only want one field, but sometimes I want it to be a TextField, and sometimes I want it to be a SecureField.

struct PasswordView: View {
    @FocusState private var passwordField: ????
    @State var passwordText: String = ""
    let passwordExists: Bool

    var body: some View {
        if passwordExists {
            SecureField("Enter password", text: $passwordText)
                .focused(????)
        } else {
            TextField("Create a password", text: $passwordText)
                .focused(????)
        }
    }
}

Since there will only ever be one field displayed, I naively thought I could just use the same Binding<Bool> var for both SecureField and TextField. Sadly those values just get ignored. So I need to use the enum approach, even though there will only ever be a single field.

And just to add a bit of insult to injury, I wasn’t able to get .defaultFocus working. According to an Apple Engineer in the dev forums, I may have uncovered a bug. woo! All that to say, my next attempt looked something like this.

struct PasswordView: View {
    enum PasswordField {
        case secure
        case regular
    }
    @FocusState private var passwordField: PasswordField?
    @State var passwordText: String = ""
    let passwordExists: Bool

    var body: some View {
        VStack() {
            if passwordExists {
                SecureField("Enter password", text: $passwordText)
                    .focused($passwordField, equals: .secure)
            } else {
                TextField("Create a password", text: $passwordText)
                    .focused($passwordField, equals: .regular)
            }
        }
        .onAppear {
            passwordField = passwordExists ? .secure : .regular
        }
    }
}

BUT, I encountered a life cycle issue: .onAppear gets called before the document gets loaded. In my case, the document contents determine whether to show the TextField or the SecureField. So I needed to replace .onAppear with .onChange. Here’s an approximation of my final code.

struct PasswordView: View {
    enum PasswordField {
        case secure
        case regular
    }
    @FocusState private var passwordField: PasswordField?
    @State var passwordText: String = ""
    @Binding var document: TurtleDocument

    var body: some View {
        VStack() {
            if document.passwordExists {
                SecureField("Enter password", text: $passwordText)
                    .focused($passwordField, equals: .secure)
            } else {
                TextField("Create a password", text: $passwordText)
                    .focused($passwordField, equals: .regular)
            }
        }
        .onChange(of: document.encryptedData) {
            passwordField = document.passwordExists ? .secure : .regular
        }
    }
}

Investigating SwiftUI Document-based Apps

My Turtle app is a document-based app (iOS and macOS) that was written a long time ago. The iOS version relied heavily on UIDocumentBrowserViewController for managing the documents. After a couple of years of neglect, the document management was starting to have problems: the iOS version wasn’t able to create new documents, and closing and saving a document seemed to fail about half of the time.

Eventually these problems made the app unusable enough that I started weighing my options on how best to get things working again. I first considered doing something incremental, focussing on specific issues with the iOS build and sticking with the UIKit based implementation.

But then I saw a WWDC video demonstrating how to get started building a SwiftUI document-based app. I felt some trepidation that there would be a lot of refactoring from UIDocument to SwiftUI’s Document. The UIDocument subclass included metadata that in hindsight could/should have been UI state. The original implementation also included complexity related to supporting one iOS target and one macOS target. Last but not least, Turtle supports two different document formats (Legacy and Threaded). I’m positive I’ve done a bad job of structuring the code to support these two document formats, and moving to a SwiftUI DocumentGroup would require rebuilding this.

So despite all these reasons to stick with the messy UIKit implementation, I jumped into a SwiftUI implementation. While I still have some functionality gaps, on the whole this conversion has been satisfying and even fun.
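For anyone curious what the SwiftUI side looks like, this is roughly the skeleton of a document-based app. It’s a generic sketch with placeholder types (TurtleDocument and DocumentView here are stand-ins, not the app’s real implementation):

```swift
import SwiftUI
import UniformTypeIdentifiers

// A minimal FileDocument: just raw bytes in, raw bytes out.
struct TurtleDocument: FileDocument {
    static var readableContentTypes: [UTType] { [.data] }
    var data = Data()

    init() {}

    init(configuration: ReadConfiguration) throws {
        data = configuration.file.regularFileContents ?? Data()
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: data)
    }
}

@main
struct TurtleApp: App {
    var body: some Scene {
        // DocumentGroup replaces all of the UIDocumentBrowserViewController
        // plumbing from the UIKit version: browsing, creating, and
        // open/save lifecycle all come for free.
        DocumentGroup(newDocument: TurtleDocument()) { file in
            DocumentView(document: file.$document)
        }
    }
}

struct DocumentView: View {
    @Binding var document: TurtleDocument
    var body: some View {
        Text("\(document.data.count) bytes")
    }
}
```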

Future posts will detail some of the issues I’ve encountered along the way.


Camera1 + Camera2 = Lua

I recently used two cameras to take photos at a soccer game: one with a moderately zoom-y lens (80-200mm), the other with a more zoom-y lens (200-500mm).

Rovers vs Altitude FC, Swangard Stadium, Burnaby, August 4, 2024 (exact time TBD)

When I was looking through the uploaded photos, I noticed the time stamps of the two cameras didn’t line up. I’ve seen this before when I have some photos from my phone (which is very smart with time zones and daylight savings) and other photos from an SLR (much less smart with time zones and daylight savings). Lightroom is very good about letting you select the photos from one camera and offsetting their capture times by a specific amount.

But this is where I made a strategic blunder! I adjusted camera 1 to match the clock of camera 2. Next I went to Lightroom to adjust the times of the camera1 photos. This would have been easy if I knew the delta between the clocks in the two cameras. Sadly I had just destroyed that evidence (by synching the camera clocks.)

So I tried estimating, which was close, but there were a few places where photos from both cameras now had the same capture time, and I was unable to judge from the content whether I needed to increase or decrease the compensation.

I was fairly confident that if I could present the capture times visually (as two data series in a graph) it would be obvious how much I’d need to shift one series to sync with the other series. If only I could extract all the time stamps from both sets of photos, and present them in a graph.
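(As an aside, the same alignment could be done numerically rather than visually. Here is a brute-force sketch, with made-up data, that scores each candidate clock offset by how close every shifted camera-1 timestamp lands to its nearest camera-2 timestamp.)

```swift
import Foundation

// Brute-force sketch: try each candidate clock offset (in seconds) and keep
// the one that minimises the total distance from each shifted camera-1
// timestamp to its nearest camera-2 timestamp.
// Assumes camera2 and candidates are non-empty.
func bestOffset(camera1: [TimeInterval],
                camera2: [TimeInterval],
                candidates: [TimeInterval]) -> TimeInterval {
    func score(_ offset: TimeInterval) -> TimeInterval {
        camera1.reduce(0) { total, t in
            let shifted = t + offset
            let nearest = camera2.min { abs($0 - shifted) < abs($1 - shifted) }!
            return total + abs(nearest - shifted)
        }
    }
    return candidates.min { score($0) < score($1) }!
}
```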

But how do I get all the time stamps of a set of photos in Lightroom? A bit of research taught me I would need to either buy or create a Lightroom plug-in. Being both stingy and keen to learn stuff like how to write a Lightroom plug-in, I chose: create!

Martin Fowler has a very helpful page for people that want to create their first Lightroom plug-in. This page also has a line that rings true for me.

“As a programmer, I’m always looking for ways to spend several hours programming to save an hour’s work.”

Martin Fowler

I feel seen.

So I followed the basic arc of Martin’s Lightroom/Lua journey, and was able to extract the needed time stamp values.

For the curious among you, it looked something like this:

I was then able to clean it up and import it into Numbers. (And struggle to get Numbers to show a scatter plot where each series has its own X values. Not intuitive. Thank you YellowBox) And I created a graph like this:

I was definitely going to need to stretch it out, in order to be try out different offset values. Once I stretched it, I was able to shift the blue dots left so that the green dots all fell in gaps.

Of course, next time I’m in this situation I will remember to sync my cameras before I start. And as a fallback, if I do end up using un-synched cameras, I will be sure to fix the photo metadata before I sync the camera clocks.

And for any Lua fans out there, here’s my plug-in code.

local LrApplication = import 'LrApplication'
local LrLogger = import 'LrLogger'
local LrTasks = import 'LrTasks'

local myLogger = LrLogger( 'lightroomLogger' )
myLogger:enable( "logfile" )

local function log( message )
  myLogger:trace( message )
end

local catalog = LrApplication.activeCatalog()

local function findCandidates()
	return catalog:findPhotos {
		searchDesc = {
			criteria = "captureTime",
			operation = "thisMonth",
			-- the date must be quoted: unquoted, 2024-08-04 is
			-- arithmetic that evaluates to 2012
			value = "2024-08-04",
		}
	}
end

local function showDates()
	LrTasks.startAsyncTask(function()
		local photos = findCandidates()
		for i, v in ipairs(photos) do 
			-- local prop = v:getRawMetadata("fileSize")
			local createTime = v:getRawMetadata("dateTime")
			local colourName = v:getRawMetadata("colorNameForLabel")
			local createTimeOriginal = v:getRawMetadata("dateTimeOriginal")
			local fileName = v:getRawMetadata("path")
			local result = "orig, " .. createTimeOriginal .. ", " .. createTime .. ", " .. colourName .. ", " .. fileName
			log(result)
		end
	end )
end

showDates()

How to listen for changes in an @AppStorage value

For my flash cards app, I’ve added a few user settings that I’ve discussed in other posts. For example, users can choose whether they want to use the same ask language for all cards (e.g. Always first show words in Japanese). They can also choose whether they want to use かな or Romaji for Japanese. They can also choose a subset of the cards they want to use (eg numbers, months/daysOfTheWeek etc.)

Any time one of these settings gets changed by the user, the app’s LanguageManager singleton needs to update the list of available words that match the settings. (for example, if a given word does not have a French translation and the user specifies fr as their only answer language, that card can’t be shown to the user.)

Strictly speaking, available cards could be recalculated on demand (ie each time a user switches to a new card). But that would be suboptimal, given that the available cards only change when one of the settings changes. In the spirit of saving cycles, I feel like recalculating available cards only when a setting changes is a worthy cause.

For a starting point, I have the following code where the UI is setting the UserDefault value.

struct CardPickerView: View {
    
    @AppStorage("cardPile") var cardPile: CardPile = .allRandom
    var body: some View {
        VStack {
            Picker("Card Pile", selection: $cardPile) {
                ForEach(CardPile.allCases,
                        id: \.self) { 
                    Text($0.title)
                        .tag($0)
                }
            }
        }
    }
}

And the following code where LanguageManager updates availableCards

class LanguagesManager {
    static let shared = LanguagesManager()

    // The UserDefaults values that will impact which cards/words can be used
    @AppStorage("scriptPickers") var scriptPickers:  ScriptPickers = ScriptPickers.defaultDictionary
    @AppStorage("languageChoice") var languageChoice: LanguageChoice = .all
    @AppStorage("cardPile") var cardPile: CardPile = .allRandom

    // All the words
    let words: Words

    // The words that match the criteria
    var availableWords: Words

    func updateAvailableWords() {
        var result = words.filtered(by: cardPile.wordList)
        let askLanguages = languageChoice.askLanguages
        let answerLanguages = languageChoice.answerLanguages
        result = result.matching(askIdentifiers: askLanguages, answerIdentifiers: answerLanguages)
        availableWords = result
    }
}

This code mostly works as expected, however when a user updates scriptPickers or languageChoice or cardPile, availableWords doesn’t get updated. <frownyFace!!> My first thought was to use didSet (like this)

    @AppStorage("cardPile") var cardPile: CardPile = .allRandom {
        didSet { updateAvailableWords() }
    }

Sadly didSet on @AppStorage only works in very specific situations for me. Based on what I saw, it works if:

  1. your code is in a View
  2. you explicitly update the value (eg. self.cardPile = .all)

Your mileage may vary. Bottom line: didSet in LanguagesManager was not getting called for me.

It would have been possible to add calls to updateAvailableWords() in the UI code any time one of the related settings changed. But relying on the UI code to keep LanguagesManager fresh. Just. Felt. Wrong.

I also tried to use Combine to subscribe to the @AppStorage values, but was not able to get that working. <anotherFrownyFace>

The solution I ended up using was to create a UserDefaults extension that creates a Binding<> for a specific UserDefaults key. Any time the value changes, it fires a notification.

extension UserDefaults {
    func cardPileBinding(for defaultsKey: String) -> Binding<CardPile> {
        return Binding {
            let rawValue = self.object(forKey: defaultsKey) as? Int ?? 0
            return CardPile(rawValue: rawValue) ?? .allRandom
        } set: { newValue in
            self.setValue(newValue.rawValue, forKey: defaultsKey)
            NotificationCenter.default.post(name: .userDefaultsChanged, object: defaultsKey)
        }
    }
}

extension Notification.Name {
    static let userDefaultsChanged = Notification.Name(rawValue: "user.defaults.changed")
}

Next, I used these bindings in the UI code where I’d previously been using @AppStorage.

struct CardPickerView: View {
    let cardPileBinding: Binding<CardPile>
    init(userDefaults: UserDefaults = .standard) {
        self.cardPileBinding = userDefaults.cardPileBinding(for: "cardPile")
    }
    var body: some View {
        VStack {
            Picker("Card Pile", selection: cardPileBinding) {
                ForEach(CardPile.allCases,
                        id: \.self) { 
                    Text($0.title)
                        .tag($0)
                }
            }
        }
    }
}

Then I updated LanguagesManager to listen for this notification.

// (requires import Combine for AnyCancellable and .sink)
class LanguagesManager {
    private var subscriptions = Set<AnyCancellable>()
    init() {
        // a bunch of unrelated init stuff....

        NotificationCenter.default.publisher(for: .userDefaultsChanged)
            .sink(receiveValue: { notification in
                self.updateAvailableWords()
            })
            .store(in: &subscriptions)
    }
}

And this meets my needs, insofar as the UI code doesn’t need to explicitly fiddle with LanguagesManager state any time a user changes their prefs. I don’t like that the UI code needs to use this special binding to do the updating. I also don’t like that the UserDefaults extension needs to define multiple functions to return the different types of Bindings. If you all know of a way to get around this (generics?) I’m all ears!
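On the generics question: one possible shape that might work is below. This is an untested sketch, and it assumes every settings enum is an Int-backed RawRepresentable (the Notification.Name extension is repeated from above so the snippet stands alone).

```swift
import SwiftUI

extension Notification.Name {
    static let userDefaultsChanged = Notification.Name(rawValue: "user.defaults.changed")
}

extension UserDefaults {
    // One generic function replaces the per-type binding helpers, for any
    // RawRepresentable setting whose raw value is an Int.
    func binding<T: RawRepresentable>(for defaultsKey: String,
                                      default defaultValue: T) -> Binding<T>
    where T.RawValue == Int {
        Binding {
            let raw = self.object(forKey: defaultsKey) as? Int ?? defaultValue.rawValue
            return T(rawValue: raw) ?? defaultValue
        } set: { newValue in
            self.set(newValue.rawValue, forKey: defaultsKey)
            NotificationCenter.default.post(name: .userDefaultsChanged, object: defaultsKey)
        }
    }
}

// Hypothetical usage, assuming CardPile is an Int-backed enum:
// let cardPileBinding: Binding<CardPile> =
//     UserDefaults.standard.binding(for: "cardPile", default: .allRandom)
```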

I also don’t like that the getters and setters in the Binding values need to translate to and from rawValue. @AppStorage magically handled the translations between rawValue and encodedValue.

And one last problem with this approach: for at least one of my settings UI implementations, calling the setter in the binding didn’t trigger an update to the UI. (It’s on my todo list to investigate this.) I got around this problem by adding a ‘dummy’ @AppStorage var in the View. Adding this dummy value seems to force the view to redraw when the UserDefault value changes.

    @AppStorage("cardPile") var cardPileDummy: CardPile = .allRandom

A Gory Detail For Xcode Test Targets

Somehow, I’ve gone for years without being aware of this setting in the Test Targets used in Xcode projects. Apologies if this has been obvious to everyone on Earth other than me.

A tangentially related background story that enabled me to discover this setting

In my flash cards app, I needed logic to figure out which cards could be used for a given user’s ask/answer languages setting. For example, if a user wants to always use Korean as their ask language, and all other languages available as answer languages, then only cards that included Korean and at least one other language could be presented.

Also, bear in mind that for languages that have roman and non-roman script versions, they need to be treated as two separate languages, only one of which is relevant, depending on which the user has selected in Settings.

My first pass at the logic to pick the next card erred on the side of simple, rather than safe. It used the following process:

  1. Select the ask language (randomly if the user’s preference specifies multiple ask languages)
  2. Determine an answer languages list (again shuffle them randomly if there is more than 1 answer language)
  3. Find the list of words matching the ask language and the answer languages (a bug here was returning an empty word list when I specified a single answer language and there were no words yet with that language)
  4. Pick a word from the list
  5. Calculate the intersection of available languages for the word and answer language(s) specified by the user
  6. Return an object containing the wordId, the askLanguage and the array of answerLanguages

As I mention in the list, there is a bug in step 3 that can cause a crash later in the process. (Better would be to say at the get-go something like: ‘I don’t have any words that match your language choices.’)

Also, the above has a highly buried dependency on the user’s setting/pref on whether they want to use the roman version or the non-roman version of languages that have their own special script/alphabets.

So (we’re getting VERY close to the topic of this post) I am currently in the process of improving this logic by pulling the user setting dependency out (or at least making it injectable when testing).

The Point

So I wrote a test and attempted to run it. But my single test, with its shiny new injected dependency, never ran, because Xcode first insisted on running my main app, which of course was still crashing when attempting to use the old/broken logic for picking a card.

Thanks to a great answer from the ever helpful Quinn, I was able to discover the Host Application setting for my test target in Xcode.

Once I set it to None, Xcode no longer needed to launch the app first, and I was able to run my standalone test and resume creating my improved card-picking logic.


Flipping #*&$%@! Cards with SwiftUI

Despite the salty title, this was a very low stress investigation. I wanted to find a way to use animation to flip my flash cards. There are many great sources showing how to do something like this:

struct ContentView: View {
    @State var angle: CGFloat = 0
    let frontText = "今日は"
    var body: some View {
        VStack {
            Text(frontText)
                .padding(40)
                .background(.green)
                .rotation3DEffect(.degrees(angle), axis: (x: 0.0, y: 1.0, z: 0.0))
                .animation(.default, value: angle)
            Button("Flip") {
                angle += 180
            }
        }
    }
}

Tapping the button in the above app will flip the card, and show an empty green rectangle. So how would I go about showing different (dynamic) content on the ‘back’ of the card? The idea I’m currently going with is to create a front content view and a back contentView. The back contentView begins pre-rotated, so that when the parent view gets rotated it ends up in the correct orientation.

        VStack {
            ZStack {
                FlippableContent(contentText: frontText)
                FlippableContent(contentText: rearText)
                    .rotation3DEffect(
                        .degrees(180), axis: axis)
            }
            .rotation3DEffect(.degrees(angle), axis: axis)
            .animation(.default, value: angle)
            Button("Flip") {
                angle += 180
            }
        }
...
struct FlippableContent: View {
    let contentText: String
    var body: some View {
        VStack {
            Text(contentText)
        }
        .frame(width: 300, height: 250)
        .background(.green)
    }
}

This is close, but not quite right: it only ever shows the FlippableContent view with rearText (sometimes forwards, sometimes reversed). I experimented with different angles on the FlippableContents, however I’m not entirely clear on exactly how multiple 3d rotation effects get combined. Not my circus, not my monkeys I guess. Tho I’m definitely a bit curious…

To fix my problem, I’ve created logic that creates different opacity values for the front and back content of the card.

extension CGFloat {
    var rearOpacity: CGFloat {
        return (self / 180).truncatingRemainder(dividingBy: 2)
    }
    var frontOpacity: CGFloat {
        return 1 - rearOpacity
    }
}

And these new functions get used in opacity modifiers in the ZStack.

            ZStack {
                FlippableContent(contentText: frontText)
                    .opacity(angle.frontOpacity)
                FlippableContent(contentText: rearText)
                    .rotation3DEffect(
                        .degrees(180), axis: axis)
                    .opacity(angle.rearOpacity)
            }

Now the view gets initialized with angle at zero, which shows the front text. When the user flips the card, angle gets increased by 180 degrees. This animates through the rotation, and hides the front face, and shows the rear face. On the next flip, the rear face gets hidden and the front face gets shown.
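For completeness, here are all the pieces assembled into one self-contained view (the card text is placeholder content):

```swift
import SwiftUI

// The angle-based opacity helpers from above: the rear face is visible
// at odd multiples of 180 degrees, the front face at even multiples.
extension CGFloat {
    var rearOpacity: CGFloat {
        (self / 180).truncatingRemainder(dividingBy: 2)
    }
    var frontOpacity: CGFloat {
        1 - rearOpacity
    }
}

// One face of the card.
struct FlippableContent: View {
    let contentText: String
    var body: some View {
        VStack {
            Text(contentText)
        }
        .frame(width: 300, height: 250)
        .background(.green)
    }
}

struct CardView: View {
    @State private var angle: CGFloat = 0
    private let axis: (x: CGFloat, y: CGFloat, z: CGFloat) = (0, 1, 0)
    let frontText = "今日は"
    let rearText = "こんにちは"

    var body: some View {
        VStack {
            ZStack {
                FlippableContent(contentText: frontText)
                    .opacity(angle.frontOpacity)
                // The rear face starts pre-rotated so it reads correctly
                // once the whole ZStack has been flipped.
                FlippableContent(contentText: rearText)
                    .rotation3DEffect(.degrees(180), axis: axis)
                    .opacity(angle.rearOpacity)
            }
            .rotation3DEffect(.degrees(angle), axis: axis)
            .animation(.default, value: angle)
            Button("Flip") { angle += 180 }
        }
    }
}
```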