Advancing Technology by Lying to Developers

Mobile devices have been increasing in screen size, screen resolution, memory, and other capabilities on a continuous basis from the time I got my first Apple Newton MessagePad in 1993. Back then screens were about 336×240 pixels, and each pixel was either on or off — no color. There was a total of 4.625 MB (that’s MEGA bytes) of memory. My first Windows CE device was probably my HP 620 LX in 1998. It was a “clamshell” design with a 640×240 screen and 16 MB of memory.

The thing we knew intuitively from being involved in personal computing since there was such a thing as personal computing was that “change is the status quo”. Our programs never assumed how big the screen was, because we knew our program would need to run next week on a bigger screen, so we wrote our code so that it queried the operating system to ask how big the screen was before dynamically laying out its user interface to fill all available pixels. We never assumed that devices would always be monochromatic, so we wrote our compressed file format to accommodate “words of Christ in red” before they could even be displayed in anything but black on a greenish screen. And even though the entire Bible wouldn’t fit in memory of those first devices, we plowed ahead with the best compression we could manage and a user interface that supported displaying two Bibles simultaneously, knowing that very soon you’d be able to get not just one of our Bibles but two whole Bibles onto the device at the same time.

Fast forward to the iPhone in 2007. When you work for Apple you apparently get big-headed and begin to think you’re among the smartest programmers in the world. Nobody can match your brilliance. Each generation of device you work on is “magical”. It has capabilities and features that nobody could have imagined even six months ago. Features like more memory and a bigger screen.

Since you couldn’t imagine those features last year, and since you’re God’s gift to technology, you’re positive that nobody else could have imagined those features. So what’s going to happen to all those apps written by people “too dumb to work at Apple” when your new device with a bigger screen comes out? Why, they’ll crash, of course.

Not PocketBible.

You only have to be in this business a week to realize that you can’t hard-code your program to assume a particular screen size. But Apple does this with every single device. Up until iOS 8, we had to prepare a “splash image”, in every possible size and resolution, to display while the program launched. Currently, that means we have to create launch images in 13 different sizes, one for each iPhone screen size that has ever shipped, in both portrait and landscape orientation.

[Image: Current iOS launch image requirements]

If instead they allowed us to manipulate a single image at run-time, we could do all of these with one PNG. But they require us to know every size of every screen we might ever run on (by the way, the image above omits devices prior to the iPhone 4, which would add another half-dozen sizes if they hadn’t already been abandoned by Apple).
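
For perspective, the run-time alternative amounts to just a few lines. Here’s a rough sketch (the “Launch” asset name is hypothetical, and this is ordinary UIKit code, not any special launch-image API): ask the OS how big the screen is and scale a single image to fit.

import UIKit

// Hypothetical sketch: one PNG, scaled at run-time to whatever screen we find ourselves on.
func makeLaunchView() -> UIView {
    let screenBounds = UIScreen.main.bounds                        // ask the OS; never assume a size
    let view = UIView(frame: screenBounds)
    view.backgroundColor = .white

    let imageView = UIImageView(image: UIImage(named: "Launch"))   // hypothetical asset name
    imageView.frame = screenBounds                                 // fill whatever size we were given
    imageView.contentMode = .scaleAspectFill
    view.addSubview(imageView)
    return view
}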

This isn’t about managing lots of images. It’s about a philosophy that can’t think past yesterday.

Because of this philosophy, when a bigger screen comes out, Apple either “letterboxes” old apps (putting black bars in the empty space that the program couldn’t possibly imagine would ever be there) or scales them (allowing them to believe the screen is no bigger than last year’s device, then scaling up everything they draw to fill the bigger screen). They believe they are saving developers from having to re-release their apps every time a new device comes out. But in reality, they are requiring every developer to re-release their app to jump through whatever hoop is required to get Apple to stop letterboxing or scaling their apps.

[Image: iPhone 6 scaling]

Pre-iPhone 6 version of PocketBible on the left gets scaled up. Adding a “launch screen” (which is unrelated to drawing text) tells iOS not to lie to us about the screen size, producing the sharper image on the right with absolutely no changes to PocketBible code!


With iOS 7, there was a special checkbox we had to check to tell the OS that we understood their new semi-transparent user interface elements. With iOS 8, in order to convince iOS not to scale your app, you have to provide a special, scalable launch image that works on any screen size. (Gee whiz, 2015 and we’re finally recognizing that screens might get bigger in the future! Thanks, Apple!) Until you do that (which requires re-releasing your app), iOS will lie to you about the size of the screen, then scale your user interface up to the bigger physical size of the screen, producing blurry text.

Oh, and you still have to provide those 13 launch images for older devices.

The result of this policy of “technical advancement by lying to developers” is that instead of one guy at Apple having to write zero lines of new code, hundreds of thousands of developers have to update and re-release their apps. There would not have been a personal computing revolution in the ’80s and ’90s if Microsoft had taken this approach. Back then, Microsoft would collect commercial software products and use them for regression testing of new versions of DOS and Windows. After all, you wouldn’t want to do something stupid and break every single app the way Apple does with every release of the iPhone.

This industry used to be exciting. I was like a kid in a candy shop. Technology was changing and we were riding the “bleeding edge”. Now I feel like the only grown up in the room. I want to slap some of these Apple and Google kids around and tell them to shape up.

Why I Don’t Care About Swift

Swift is a new programming language created by Apple for use on OS X and iOS devices. The programming world is agog. Apple’s fantastic new language apparently solves all their problems, as evidenced, they say, by the fact that some programmer ported Flappy Bird to it in a few hours.

I’ve been around long enough to see languages come and go. Each claimed to solve all the problems introduced by its predecessors, yet each was replaced by a language that solved all its problems. In some cases, the new language surpassed the success of the language it replaced (C++ and Java); in other cases, the new language faded into obscurity (Modula-2 and Ada).

Lately the motivations for new languages have been dubious. There is a big emphasis on making a language easy to learn and having it hide nasty issues related to memory management and type safety. One review of Swift I read stated, “Apple hopes to make the language more approachable, and hence encourage a new group of self-taught programmers”. While that sounds great, it means that those of us who have mastered our craft after 30 years or more of practice are saddled with the training wheels and water wings that are written into these languages for the noobs.

A classic example is the lack of unsigned integers in Java. The motivation for this was to simplify the language for “new and self-taught programmers” by avoiding errors caused by a lack of understanding of sign extension. However, those of us who showed up for class the day sign extension was taught (that would be day two) are left with a language that unnecessarily limits the range of positive integers and still requires us to have mastered sign extension in order to understand what is happening when we directly manipulate the bits in our integer variables.
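
For the record, here is sign extension in a nutshell, shown in Swift (which, unlike Java, does have unsigned integer types); the values are purely illustrative:

let b: Int8 = -1                 // bit pattern 0xFF
let widened = Int16(b)           // sign-extends: still -1, bit pattern 0xFFFF
let raw = UInt8(bitPattern: b)   // reinterpret the same bits as unsigned: 255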

Explicit vs. Implicit Typing

One of the simplifications Swift makes is that it infers the types of variables from the values assigned to them rather than requiring the programmer to explicitly type variables. If this were true “weak typing” like I’m familiar with in VBScript, it would be great (though it would come with its own set of problems). But all Swift does is infer the type of the variable from the first value you assign to it.

This actually introduces problems, because it’s not always possible to unequivocally determine the type of a literal value. So Swift gives you ways to force it to interpret a literal value as a given type. Rather than removing the necessity of the programmer understanding types, Swift thus requires “new and self-taught” programmers to have a mastery of types so that they can understand how Swift is working behind the scenes and make sure that their variables have the desired type.
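
A small sketch (illustrative names only) of what that looks like in practice:

var count = 42               // inferred as Int from the first value assigned
// count = 3.5               // compile-time error: count is an Int now, not a Double
let ratio = 3.0              // floating-point literals are inferred as Double, not Float
let precise: Float = 3.0     // explicit annotation needed if you actually wanted a Float
let mask = 0xFF as UInt8     // or force the literal's interpretation with 'as'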

Strings

Swift is said to improve string-handling over Objective-C (the current language used on OS X and iOS). There is certainly room for improvement there. When I first started programming in Objective-C, one of the first things I did was bring over my own C++ string class, as I found NSString to be overly complicated and muddled. Over the years I’ve gotten better with NSString.

I would argue, however, that some of the so-called “improvements” in Swift with respect to strings are distinctions without a difference. So instead of this in Objective-C:

[NSString stringWithFormat:@"The value of num is %d", num]

you say this in Swift:

"The value of num is \(num)"

The Swift version is obviously more concise, but it is also less powerful. To add more complex format specifications to Swift you actually have to invoke the functionality of the underlying NSString class, which means the “new and self-taught” programmer, again, needs to understand the details of the implementation in order to do anything beyond the simplest strings.
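
For example (a sketch; the variable is just illustrative), as soon as you need width or precision control you are back in the Foundation formatting machinery:

import Foundation

let num = 3.14159
let simple = "The value of num is \(num)"     // Swift interpolation
let padded = String(format: "%08.3f", num)    // width/precision control falls back to NSString-style formatting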

One of the stated benefits of string handling in Swift is that “all strings are mutable”. One need not worry about whether the string is declared as an NSString (immutable) or NSMutableString (mutable). Well, you don’t have to worry unless you do have to worry — strings assigned to constants are immutable in Swift. So:

var myString1 = "Mutable string"
let myString2 = "Immutable string"
myString1 += myString2    // perfectly legal
myString2 += myString1    // compile-time error

Switch Statements

Swift eliminates the “fall-through” behavior of switch statements, which is said to eliminate bugs caused by omitting the break at the end of each case block. But, oops, sometimes the fall-through behavior is exactly what you want. So Swift adds the fallthrough keyword. It could be argued that Swift eliminates a line of code (the break) while giving the behavior one normally desires. But at the same time, it adds a keyword (fallthrough) that does the opposite. This requires “new and self-taught” programmers to have the same thorough understanding of switch behavior that Objective-C and C++ programmers do.
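
A short sketch of both behaviors (the status-code values are just an arbitrary example):

let httpStatus = 301
switch httpStatus {
case 200:
    print("OK")                  // no break needed; Swift never falls through on its own
case 301:
    print("Moved permanently")
    fallthrough                  // explicitly opt back in to C-style fall-through
case 302:
    print("Redirect")            // also runs for 301 because of the fallthrough above
default:
    print("Something else")
}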

Single-line Blocks

The Swift compiler requires braces around every block (such as the body of an if); it does not allow brace-less, single-statement blocks, thus avoiding this error:

if (x < 0)
    goto fail;
    goto fail;

The code above will always execute one or the other of the goto statements in Objective-C or C++. Even though the second goto is indented, it is not part of the if-block and will be executed if the condition is false.

Swift rejects the missing braces and forces you to write this (goto here is just a stand-in carried over from the example above; Swift itself has no goto statement):

if (x < 0)
    {
    goto fail;
    }
goto fail;

Or, for those of you who don’t do your braces the right way, this…

if (x < 0) {
   goto fail;
}
goto fail;

This is fine, and hard to argue with. The supposition is that the programmer will immediately recognize the flaw or won’t make the mistake in the first place. On the other hand, I would argue that the same C++ programmer who wrote the erroneous code will write this in Swift:

if (x < 0) {
    goto fail; }
    goto fail;

I always put braces around my blocks, even if they are one line, so this doesn’t affect me. It’s ironic, however, that while Swift prides itself on eliminating the unnecessary break statement at the end of a case block, it requires two to four additional lines (braces) in if, for, and while statements, which are more numerous.

PocketBible and Swift

I will be more than happy to learn and use Swift for programming on iOS and OS X. I just don’t believe the hype and won’t convert just for the sake of doing something new.

I am a strong proponent of platform-independent languages like C, C++, Java, and, to a lesser extent, Objective-C (the latter is primarily an Apple language, though it has its origins outside of Apple). Such languages allow me to develop code on one platform and re-use it on another. One of the promises of C++ and Java was that you could develop the code for one platform and use it on many others. Swift is an Apple language (the same way C# is a Microsoft language). It only works on Apple devices. While those are numerous, they’re not the only devices out there. So rather than moving toward the “write once, run anywhere” model promised by Java, we’re back to “write everywhere” as each platform requires its own language.

I don’t mind learning a new language. I already jump from C++ to C# to Java to VBScript to Javascript to MS-SQL on a daily basis. For those of us who write code for a living, being multilingual is a job requirement. This is precisely why I care so very little about the supposed advantages of Swift; this isn’t a religious war for me, it’s just a tool. When someone comes out with a new kind of screwdriver, I may or may not buy it until I need it. And then I’ll just buy one and use it — I won’t try to convert all my screwdriver-toting friends.

So will PocketBible for OS X and/or iOS be re-written in Swift? Probably not today, and probably not until Apple requires it. But Swift depends on Objective-C under the hood, so my guess is that Apple will continue to support Objective-C apps for a long time.

Why iOS 7 is Objectively Bad

A discussion on Facebook with an Apple employee resulted in some comments that I thought would be better presented here. The response to my general complaint about iOS 7 was that it was new, and with anything new it just takes time to learn the differences. I disagree. iOS 7 is demonstrably and objectively wrong. Here are just a few observations.

My problem with iOS 7 isn’t “how do I remove an app from memory?” or “how do I do a search?”, but rather with the overall appearance (and therefore usability).

  1. There is less difference between the container and the content. Look at the Contacts app, where the screen is all white except for small blue labels and small black values. I just tapped a blue label (“home”) to change my daughter’s home phone number and it called her instead. It’s hard to tell what is touchable, what is editable (the captions in this case are, in fact, changeable), and what is fixed (like the navigation bar text at the top of the screen).
  2. There is inconsistent use of color. The Contacts app uses a blue “tint color” (which is what the SDK calls the color that is used for buttons, captions, etc. throughout an app). The Calendar app uses orange. The Notes app uses yellow except (and this is true of all apps) when a system-defined UI element pops up (like the confirmation dialog you get when you select the trash can), in which case you get blue as the accent color (except for the Delete Note button, which uses red, even though the trash can itself was yellow).

    What the system seems to be communicating is that color is irrelevant and that I shouldn’t count on color to tell me anything. But at the same time, red is consistently used for “danger” — like delete confirmations. And blue is always used for alert boxes. And the tap-and-hold menu is always white text on black (even if the text itself that you’re selecting is white on black, making the menu impossible to see). So is color important or not? iOS 7 would say “no” out of one side of its mouth and “yes” out of the other.

  3. Fonts and icons are thin, making them blend into the background. Fonts are sans serif, making them harder to read.
  4. There is a lot of gray-on-gray and white-on-white. Low contrast is hard to see. In an effort to de-accent the container and focus on the content, we’ve made the control elements (buttons and captions) harder to see and read.

These are not “how do I do this in iOS 7” observations. These are objective criticisms of the design of the user interface. I don’t need instructions on how to read a sans serif font or how to see low-contrast text on a similarly colored background. You can’t educate your users past the unarguable flaws in the design of the operating system.

Ironically, it’s not “ugly”. It looks very clean. But even though white road signs with white lettering would look “clean”, we make road signs with high-contrast white-on-blue or black-on-yellow to make them easy to read. iOS 7 fails the readability and therefore usability test.