Alastair’s Place

Software development, Cocoa, Objective-C, life. Stuff like that.

Testing MarsEdit 2.0

I tried MarsEdit a while back, but I didn’t see a significant advantage in using it over MovableType’s built-in post editor. Of course, I wasn’t blogging much back then…

However, Daniel Jalkut just released MarsEdit 2.0, so I thought I’d give it another try. I’m going to write my next few blog posts using MarsEdit to see if I like it. My guess is that I probably will.

Already I see things that I like very much—like the preview window. That’s a really nice feature, because it lets me see in advance how a post is actually going to look on my site. My guess is that pretty soon, $30 of my hard-earned cash is going to be winging its way on its long journey towards Daniel’s pockets :-)

Connectivity Hell

On Friday, my ’Net connection started bouncing up and down every minute or so. Obviously this isn’t supposed to happen, and this is the second time now that it has. Unfortunately, working from home, I rely on my ’Net connection working.

To their credit, Plus.net have always answered the phone promptly and have tried to help, as much as they can. I don’t actually think it’s their fault though. Most likely (particularly as it happened on a Friday last time too) it’s one of BT’s engineers doing something on a Friday at our local exchange.

Anyway, I’ve pretty much made up my mind that I need an office. Extra expense, obviously, but at least it means I’m not reliant on my home broadband connection.

(It has got a bit better, in that it’s now up and down every five minutes or so, rather than every minute, but even so it makes things rather awkward…)

NSCell and Overriding -objectValue

It’s sometimes tempting when writing a cell class to override the -objectValue and -setObjectValue: methods of NSCell and let something other than the cell store the value (perhaps you could even generate the value from some other data).

But please don’t.

The problem is that NSCell doesn’t use the -objectValue accessor in some important cases1, which means that if you override -objectValue, some of the cell’s methods will produce inconsistent results.

The right way to use NSCell is to always let NSCell store the data, which will avoid significant pain if someone is trying to do something fancy in a subclass of your cell.

1 For instance, in the private _stringForEditing and _attributedStringForEditing methods, which are used to get the string to put into the field editor if you’re using NSCell’s text editing support. Arguably the fault here lies with NSCell, but either way, there are no problems if you let NSCell store its own data.

Optimization Without Measurement - a Seductive Trap

Readers of Apple’s cocoa-dev mailing list will have seen a number of messages recently attacking the Objective-C feature of making messages to nil do nothing.

One of these posts (whose author shall remain nameless) made the claim that

…it’s disappointing that people still think that it’s quicker to just send the message to nil, than to do “if (target) [target message];” since it doesn’t matter how fast you make that message dispatcher, it can’t possibly be faster than the two instructions that the conditional takes…

These kinds of arguments are a very seductive trap for the unwary programmer; they seem “obvious”, right? Nevertheless, as I’ll show, they aren’t always as clear cut as they first seem.

If you are considering optimizations to your program (and you should), you absolutely must measure to see where you should spend your time. Indeed, if you write programs for Mac OS X, you are spoilt for choice in terms of the tools available to help you do this. (I’m a particular fan of Apple’s Shark tool… it’s a large part of the reason that iDefrag’s main display is so fast.)

With that in mind I decided to measure the function call overhead both for Objective-C messages and for a few other common types of function call. Apple don’t have anything specific that can help out with this type of measurement, so I fell back on a simple C test rig. I’m not going to reproduce the code for my complete test here, but I will give you a flavour:

#include <sys/time.h>
#include <stdio.h>

static double
hires_time(void)
{
  struct timeval tv;

  gettimeofday (&tv, NULL);

  return tv.tv_sec + tv.tv_usec * 1e-6;
}

#define ITERATION_COUNT 10000000

int
main (void)
{
  double elapsed;
  unsigned n = 0;
  volatile unsigned a = 0;

  elapsed = -hires_time();
  for (n = 0; n < ITERATION_COUNT; ++n) {
    /* The thing you're timing goes in here */
    ++a;
  }
  elapsed += hires_time();

  printf ("Incrementing a variable in memory takes %g ns.\n",
          (elapsed / ITERATION_COUNT) * 1e9);

  return 0;
}

(Note that you may have to be careful when compiling test rigs like the above to make sure that the compiler’s optimizer doesn’t optimize away the thing you want to test. Be very wary about any unexpectedly fast results.)

Anyway, back to the nil issue. For my test program, my Mac Pro (3GHz Dual-Core Xeon) reports the following results:

Message sends take 5.34852 ns.
Nil sends take 6.01709 ns.
IMP sends take 2.67442 ns.
Function calls take 2.00583 ns.
Dylib calls take 2.67425 ns.
Virtual function calls take 2.33997 ns.

A nearby G5 machine (2.3GHz G5) reports:

Message sends take 15.6726 ns.
Nil sends take 9.14313 ns.
IMP sends take 4.35283 ns.
Function calls take 3.91874 ns.
Dylib calls take 7.83509 ns.
Virtual function calls take 5.87761 ns.

These numbers are timings for a call and return to an empty function/method, averaged over 10 billion calls (yes, billion).

Obviously this is an artificial benchmark, but the long and short of it is that message sends are fast. Not as fast as a plain C function call, perhaps, though using an IMP you can get them to go as fast as a call to a function in a dylib, or even faster on PowerPC machines.

The difference between

if (foo)
  [foo bar];

and

[foo bar];

is very small; according to my measurements, on the G5 it increases the time taken for a non-nil send by 1.3ns and reduces the cost of a nil send by around 7.8ns. For this to be a net gain on the G5, therefore, more than one in six of your message sends would have to be to nil.

On the x86 machine, the time taken for a non-nil send increases by 0.33ns, and we reduce the cost of a nil send by 5.7ns. For this to be a net gain, more than one in seventeen of your message sends need to be to nil.

Are one in six message sends in a typical application to a nil object? I doubt it. How about more than one in seventeen? It’s possible, but in high-performance code (the only place this kind of saving would ever matter) it seems unlikely.

Even if you removed the nil checks from the Objective-C runtime, you’d only make non-nil message sends faster by a tiny amount, and for what? You would have to add tests for nil all over the place, and performance-critical code won’t care anyway because it will be using IMPs or C functions. So all you’d have done is make it more likely that the end user will lose their data when a nil does turn up unexpectedly. Before the change, your program might have behaved oddly (though there’s a good chance it would have let them save their work). Now it will crash instead.

Perhaps you could have some sort of error handling routine that triggered on a call to nil, but then the runtime needs to check for nil again, so you’ve erased the saving. On top of that, you’ve made all of your code more complicated in the process, and as a result you’ll now be paying a penalty because branch prediction won’t work as well with thousands of “if” statements as opposed to the one branch in the runtime.

The point? There is no credible performance argument for making sends to nil anything other than a no-op. Even most supposedly high-performance code has no need to shave off the few nanoseconds you might save, and you only save them at all if messages to nil are more common than the figures above suggest, or if you’ve modified the runtime and accepted that a message to nil will cause an outright crash.

In fact, even though it has some justification (because there will be fewer conditional branches in ordinary code), there is no real performance argument in the other direction either; you simply don’t need those few nanoseconds. In the few cases where you might care, you won’t be writing your code that way anyway.

Our Products

My friend and colleague Chris Suter (who you may know from his regular posts on the cocoa-dev and xcode-users developer mailing lists) thinks I should mention our products more on my blog.

He’s probably right; after all, iPartition and iDefrag are pretty useful programs, and both of them have a number of unique features not found in anything similar on any platform. Mind you, just wait until you see iPartition 2.0 when we release it :-)

I promise, by the way, that this blog isn’t going to turn into a stream of advertising for our software. That’s just not interesting. But perhaps I’ll talk more about new features when we add them.

Boosting USB Signals by a Factor of 10?!

I read today on Macworld UK’s website that Yamaha has released a new USB speaker system. That in itself is not terribly interesting (unless you happen to work for Yamaha, or you’re in the market for such a thing); however, Macworld went on to say

The system pushes out 20-watts of sound through use of the company’s amplification technology that boosts signals carried over USB by a factor of ten.

I’m sorry… run that by me again?! It “boosts signals carried over USB by a factor of ten”?!

Signals carried over USB are digital signals. How strong the signal is over your USB cable has no effect on the sound at the output, provided it’s strong enough to be distinguished from noise. The only things that determine the power of the sound at the output are the DAC, the analogue audio amplifier, and the speaker(s) to which it is attached. What’s more, unlike analogue audio signals, there is no inherent voltage level for digital audio signals. A sample value of +32,767 might correspond to 1µV, or it could correspond to 1MV. So it doesn’t even make sense to talk about boosting the signal by a factor of ten on the basis of sample values.

This is probably a sloppy error on somebody’s part, unlike the thoroughly deceitful practices of the makers of so-called “digital cables”. Those are usually an excuse to charge the customer for a cable of quite unnecessarily high quality which, unbeknownst to most consumers, will generally make no difference to the final audio quality: it only eliminates noise from the highly noise-immune (and often error-corrected) digital signal, not from the final analogue output1.

1 Generally speaking, digital signals are pretty much an all-or-nothing affair. If the bit error rate is low enough for the error correction algorithm to deal with it, you get sound (or video). Otherwise, you don’t. If you’re right on the edge, you might get a mixture of both (a bit of audio, then some silence, then a bit more audio, etcetera). The only way these cables could improve matters is if you need an unusually long cable run, or you’re in an environment with a very high level of electrical noise, and an ordinary cable isn’t working for you. Otherwise, save your money and buy the cheapest cable you can find for the digital parts of your audio/video system. The analogue bits still benefit from high-quality cabling, obviously.

“Web Networkers ‘at Risk of Fraud’”

The BBC just published an article about the risks of using social networking sites.

Unfortunately, it’s got the usual “don’t give out your personal information” spin on it.

The fact is that we should be able to give out our personal information free from worry about banking organisations allowing other people to open accounts or government departments letting them apply for documents in our names. The security hole is on their side of the problem, not on ours, and if they want to close it then the right way to do it is for the government (or the banks) to issue identity cards and for the banks (and the government) to use those as our canonical identities. Or, better yet, chip us all (like we do with pets), and use the chips as our identity; that’s much harder to copy, if they do it right.

iPhone Fonts

For those of us not yet able to own an iPhone, it has been interesting reading the remarks from our U.S.-based brethren. The most interesting post so far, though, to my mind, must be John Gruber’s blog on the fonts installed on the iPhone.

I think he has hit the nail on the head with many of his points. I can only wonder what drove them to include the awful Courier New, whilst leaving out Courier. Or why they included both Arial and Helvetica, but left out the wonderful Gill Sans and Futura.

The only thing I disagree with in Gruber’s post, I think, is the part where he laments the inclusion of Trebuchet MS. iPhone clearly had to include all of Microsoft’s “standard” web fonts (well, the ones that aren’t just clones, anyway), because otherwise some websites wouldn’t have rendered correctly. This one included, incidentally (I use Trebuchet for some of the sans-serif text).

Of course, with any luck, this problem will go away all by itself as browsers gain support for SVG fonts. Maybe I’m being over-optimistic about that, though, given the past history of not properly supporting standards in some camps.

Richard K. Morgan on Superstition

I’ve just read a very interesting interview with Richard K. Morgan, in this month’s edition of Interzone. In it, Morgan decries the worrying rise of superstitious beliefs in recent years, noting that

…it’s not religion per se – you get the same kind of thing from otherwise apparently intelligent people who believe in rubbish like homeopathy, the power of crystals and the Gaia hypothesis (these people include the Prince of Wales and, apparently, John Gray, for f**k’s sake).

Incidentally, I think he’s talking here about John Gray, the U.K. academic, rather than John Gray, author of Men are from Mars, Women are from Venus or any other person of that name.

He then goes on to say

It’s one thing to be ignorant: that’s a circumstance that those who suffer are often powerless to change. But it’s quite another to emerge from ignorance, and then choose of your own free will to sink back into the slime. Why? Because enlightenment is just too complicated for your poor ickle head?

He’s right, of course. There seems to be mounting evidence that many members of the human race have an unshakeable desire to believe in fairy stories, no matter how obviously (or even how recently) they were constructed by the hand of man. Often it doesn’t even take much work to see the origins of these misguided beliefs, but there has been a worrying tendency amongst our political leaders to pander to them, seeking to put them on an equal platform with scientific theory under the pretence that doing so is somehow even-handed.

In some cases, things have gone even further, and now those with an officially recognised religion are granted rights over and above those afforded atheistic or agnostic citizens. In the United States, for instance, it is illegal to use, possess, manufacture or distribute (without a license, presumably) the drug dimethyltryptamine, except—under a ruling from the Supreme Court, no less—if you belong to the Centro Espirita Beneficiente Uniao do Vegetal, a church in New Mexico. They didn’t even need to provide evidence to back their case, which centred around a belief that drinking hoasca tea enhanced their understanding of God. This was, of course, the same Supreme Court that has ruled that it is illegal for people to possess cannabis, even for medicinal purposes, even if they believe that it is helping them.1 The only difference here is that one group claims to be religious; the other does not.

We have also seen the resurgence of the “creationist” movement, both in the United States and more recently here in the U.K., even attracting tacit support from none other than Tony Blair (further proof, if it were needed, that the man should never have been Prime Minister in the first place). As Morgan says,

What’s next? Re-instate prosecution for witchcraft? NHS checks to make sure your baby is not a faerie changeling?

Quite.

1 I cannot take credit for this excellent example; I happened upon it in Richard Dawkins’s excellent book, The God Delusion.