Last month, Janie Clayton wrote a blog post about a particularly odd interview she had. A lot of what she writes is spot on - it is ridiculous to interview someone for an iOS developer role and expect them to answer questions in Java, and it’s even more ridiculous to offer to let a candidate use a language you aren’t comfortable with as an interviewer and then tell them that they can’t after all because you don’t know it yourself! Yes, all of that happened. Read the post here.
During this interview, Janie was asked to write a linked list; this is probably the second simplest data structure after an array, and her response to being asked about it was to tell the interviewer that she was
> a hacker who learned programming by writing applications rather than learning algorithms and data structures you only use to pass code interviews at corporate entities
and was slightly incensed when the interviewer responded
> “Oh, so you’re not a programmer. You’re more of a management type.”
I think one of the reasons Janie got a bit of pushback here (which she talks about in her most recent blog post) is that while she’s right that it’s quite unlikely in run-of-the-mill programming jobs that you’ll find yourself needing to implement a linked list, the implication of her response is that this stuff is hard, that it needs a great deal of learning, and that it would be a waste of her time.
None of that is true.
Put another way: there is a reason they teach this stuff in Computer Science degrees. (I do have a CS degree - well, Information Systems Engineering, which included CS and Electronic Engineering - but I learned a lot of this stuff on my own before starting my degree.)
On the Linked List
Let’s deal with the linked list thing first. Even if you know what one is, the chances are very good that it’s the wrong data structure to use. On modern microprocessors, in 99% of cases cache locality is more important than being able to manipulate lists using pointers, so you should use an array instead. Or a CFArray. Or a Python list. Or a C++ std::vector.
If I ever interview you and ask about a linked list, it’s because you said you had a CS degree and quite probably failed to answer a question about a more sophisticated data structure. Either that, or I want you to reason about something and the list itself isn’t really what the question is about; in that case, if you didn’t study CS and said you didn’t know what one was, I’d simply show you, because the point wasn’t the list. (If you did study CS and don’t know what a linked list is, you just failed the interview; whether or not you’ve ever used one in a real program, you were taught about it and you really should know.)
For the benefit of those who don’t know what a linked list is, imagine you want to store the integers 2, 4, 6, 8, 10. You could use an array, but if you wanted to insert, say, 7, into the array, you’d have to resize it and copy data around. On modern architectures, in most cases, that’s actually the right way to implement this, but on older systems, on the less powerful hardware used in embedded systems, or in certain special cases you might instead choose to link the numbers together in a list.
Each number is now stored in a structure with two elements (traditionally called a node); the first is the number, while the second is a pointer to the next structure in the list. This is called a singly-linked list, and it should be apparent that inserting 7 into it is just a matter of allocating a new list node, putting 7 into it, setting its pointer to point at the node containing 8, and then updating the pointer in the node containing 6 to point at it.
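A minimal sketch of that in C++ (the node type and variable names here are mine, purely for illustration):

```cpp
#include <cstdio>

// One node of a singly-linked list: a value plus a pointer to the next node.
struct Node {
    int value;
    Node *next;
};

int main() {
    // Build the list 2 -> 4 -> 6 -> 8 -> 10.
    Node ten   = {10, nullptr};
    Node eight = {8, &ten};
    Node six   = {6, &eight};
    Node four  = {4, &six};
    Node two   = {2, &four};
    Node *head = &two;

    // Insert 7: allocate a new node that points at the node containing 8,
    // then update the node containing 6 to point at it.
    Node seven = {7, &eight};
    six.next   = &seven;

    // Walk the list: prints 2 4 6 7 8 10.
    for (Node *n = head; n != nullptr; n = n->next)
        std::printf("%d ", n->value);
    std::printf("\n");
}
```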
Obviously with a singly-linked list, if you have a pointer to a node, you can easily obtain a pointer to the next node, but you have no way to go backwards through the list; this also makes it hard to remove a node given just a pointer to it. The desire to traverse the list in either direction, and to make node removal as easy as node insertion, leads to the idea of the doubly-linked list, in which each node also points back at its predecessor.
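Sketched the same way (again, the names are illustrative), a doubly-linked node just gains a second pointer, and unlinking a node given only a pointer to it becomes trivial:

```cpp
// One node of a doubly-linked list: a value plus pointers in both directions.
struct DNode {
    int value;
    DNode *prev;
    DNode *next;
};

// Unlink a node given only a pointer to it. With a singly-linked list you'd
// first have to walk from the head to find the predecessor.
void unlink(DNode *node) {
    if (node->prev) node->prev->next = node->next;
    if (node->next) node->next->prev = node->prev;
    node->prev = node->next = nullptr;
}
```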
There’s also a smart-ass variant of the above with only one “pointer” per node, consisting of the exclusive-or of the pointers to the previous and next nodes. It’s neat, but unless you’re on a memory-restricted microcontroller you really shouldn’t use it.
Circular lists
By the way, there is a nice variant that I haven’t seen in any textbooks, namely the circular list, which lets you quickly add elements at either end of the list and also simplifies bookkeeping because there are never any null pointers.
Here’s a singly-linked version.
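A minimal sketch in C++ (the type and function names are mine, and memory management is omitted for brevity):

```cpp
#include <cstdio>

// A circular singly-linked list: the last node points back to the first,
// and we only keep a pointer to the last node (the "tail").
struct Node {
    int value;
    Node *next;
};

struct CircularList {
    Node *tail = nullptr;   // tail->next is the head of the list

    // Insert at the head: link the new node in after the tail,
    // but leave the tail pointer alone.
    void push_front(int v) {
        Node *node = new Node{v, nullptr};
        if (!tail) { node->next = node;       tail = node; }
        else       { node->next = tail->next; tail->next = node; }
    }

    // Insert at the end: exactly the same, except the new node becomes the tail.
    void push_back(int v) {
        push_front(v);
        tail = tail->next;
    }
};

int main() {
    CircularList list;
    int initial[] = {4, 6, 8, 10};
    for (int v : initial) list.push_back(v);
    list.push_front(2);

    // Walk once around the circle starting at the head: prints 2 4 6 8 10.
    Node *head = list.tail->next;
    Node *n = head;
    do { std::printf("%d ", n->value); n = n->next; } while (n != head);
    std::printf("\n");
}
```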
Note that we keep a pointer to the last element; to insert at the head of the list, we update the last element’s pointer but not the tail pointer, whereas to insert at the end of the list, we also update the tail pointer.
If you ever have cause to implement a linked list algorithm, I strongly recommend using the circular variant. And if you are unlucky enough to turn up for an interview where someone really does want you to show them a linked list, draw that kind and explain to them what the benefits are (no null pointers, simplified manipulation, fast insertion/removal at either end with only a single tail pointer to manage). Well, if you want the job, anyway.
Why you should learn about algorithms
Note that I said “learn about”, not “learn”. You do not need to be able to write a Quicksort or Shell sort routine from scratch and I would never ask someone to in an interview; if you need to do that, you’ll be able to look it up.
The main thing to understand here is the idea of algorithmic complexity. Usually we’re talking time complexity but occasionally someone might care about space complexity too. Complexity is a measure of how expensive the algorithm is, and we typically express it using “big O notation”. Some examples:
| Notation | Meaning |
|---|---|
| O(1) | The algorithm takes constant time (best possible) |
| O(log n) | The algorithm takes time proportional to the logarithm of the size of the input (good) |
| O(n) | The algorithm takes time proportional to the size of the input (OK) |
| O(n²) | The algorithm takes time proportional to the square of the size of the input (not great) |
| O(2ⁿ) | The algorithm takes exponential time (bad) |
| O(n!) | The algorithm takes time proportional to the factorial of the size of the input (really bad) |
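To make those figures a little more concrete, here’s a small sketch (the function names are mine) comparing an O(n) linear search with an O(log n) binary search over the same sorted data; for a million elements the difference is roughly a million steps versus about twenty:

```cpp
#include <cstdio>
#include <vector>

// Linear search: O(n) -- in the worst case we look at every element.
int linear_search(const std::vector<int> &v, int target, long &steps) {
    for (size_t i = 0; i < v.size(); ++i) {
        ++steps;
        if (v[i] == target) return (int)i;
    }
    return -1;
}

// Binary search over sorted data: O(log n) -- each step halves the range.
int binary_search(const std::vector<int> &v, int target, long &steps) {
    size_t lo = 0, hi = v.size();
    while (lo < hi) {
        ++steps;
        size_t mid = lo + (hi - lo) / 2;
        if (v[mid] == target) return (int)mid;
        if (v[mid] < target) lo = mid + 1; else hi = mid;
    }
    return -1;
}

int main() {
    std::vector<int> v(1000000);
    for (int i = 0; i < (int)v.size(); ++i) v[i] = i * 2;   // sorted data

    long linear_steps = 0, binary_steps = 0;
    linear_search(v, 1999998, linear_steps);   // worst case: the last element
    binary_search(v, 1999998, binary_steps);

    // Prints something like: linear: 1000000 steps, binary: 20 steps
    std::printf("linear: %ld steps, binary: %ld steps\n", linear_steps, binary_steps);
}
```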
You may also see people talk about worst case, amortised worst case and average case. Worst case and average case are fairly easy; amortised worst case is where you consider the overall cost of an algorithm over a whole sequence of operations rather than a single one - the idea being that the amortised cost will be lower if the expensive worst case is hit infrequently.
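A familiar example is std::vector’s push_back, which is documented as amortised constant time: most appends are cheap, but every so often the vector has to reallocate and copy its contents, and those expensive reallocations become rarer as it grows. A quick sketch that makes the pattern visible:

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    size_t capacity = v.capacity();

    // Append a million elements, printing a line each time the vector reallocates.
    // The expensive reallocations (the worst case) get rarer and rarer as the
    // vector grows, which is why push_back is amortised constant time.
    for (int i = 0; i < 1000000; ++i) {
        v.push_back(i);
        if (v.capacity() != capacity) {
            capacity = v.capacity();
            std::printf("size %zu grew capacity to %zu\n", v.size(), capacity);
        }
    }
}
```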
It’s also important to understand that, in addition to their complexity, many algorithms have a fixed cost, and that there is a general trend towards higher fixed costs for algorithms and data structures with lower time complexity.
How is this useful? Well, many languages and runtime libraries make you choose what kind of container to use to hold your data, and this choice can have a noticeable – and sometimes extreme – impact on your program’s run time and memory usage. To help you make an informed choice, the documentation will hopefully tell you the algorithmic complexity (or cost) of the operations on the container. For instance, looking at std::vector::operator[], we can see that its complexity is listed as “constant” (i.e. O(1)), whereas std::map::operator[] lists its complexity as “logarithmic in the size of the container” (i.e. O(log n)).
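As a quick illustrative sketch (not the documentation’s own example), the same-looking subscript has very different costs on the two containers:

```cpp
#include <cstdio>
#include <map>
#include <vector>

int main() {
    std::vector<int> numbers = {2, 4, 6, 8, 10};
    std::map<int, int> squares = {{2, 4}, {4, 16}, {6, 36}};

    // numbers[i] is documented as constant time: it's essentially pointer arithmetic.
    int third = numbers[2];

    // squares[k] is documented as logarithmic in the size of the container: it
    // walks down a (typically balanced-tree) structure, and it inserts a
    // default-constructed value if the key is missing, which is another
    // difference worth knowing about.
    int square_of_four = squares[4];

    std::printf("%d %d\n", third, square_of_four);   // prints 6 16
}
```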
The C++ STL also has a few other types you could use instead of std::vector, for instance std::deque or std::list. It makes you, the developer, choose, and to make that choice you need some idea of which will be better for your particular application.
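For example (again, just a sketch), if your program mostly inserts at the front of the container, the documented complexities already tell you that std::vector is the wrong choice:

```cpp
#include <deque>
#include <list>
#include <vector>

int main() {
    std::vector<int> vec;
    std::deque<int>  deq;
    std::list<int>   lst;

    for (int i = 0; i < 1000; ++i) {
        vec.insert(vec.begin(), i);  // linear in the current size: shifts every element
        deq.push_front(i);           // constant time
        lst.push_front(i);           // constant time, plus a per-node allocation
    }
}
```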
That’s a bit painful, and on iOS and macOS, we’re very lucky – Core Foundation’s containers are smart and automatically use an appropriate implementation for the number of items they contain. So, for instance, a small CFArray is basically just a C array, but as it grows it changes into a somewhat more sophisticated data structure that allows fast insertion and deletion even when it holds a large number of elements. That said, there will still sometimes be occasions where you need to choose between a CFArray and a CFDictionary, and there may be occasions when you need a tree rather than a hash, in which case you might end up rolling your own.
But…
Learning this stuff will take months?

You can learn the basics very quickly (hopefully reading the above was quite useful).
I could more profitably spend my time learning Core Data?

Yes, maybe, though this stuff will have applications there too.
Those algorithms textbooks are huge and hard to read :-(

Well, some of them are, yes. I’d recommend you pick up a copy of Sedgewick’s Algorithms in <language>. It’s available in a variety of language flavours (I have a C++ copy, but I’ve seen C, Pascal, and Java, and there are probably others too), and it’s short and accessible, with lots of pictures and short example programs. Even skimming it will give you at least some idea of where to look when you need to.
Finally
If you go for an interview for a job as a programmer, it isn’t unreasonable to expect that someone will ask some questions relating to fundamental algorithms or data structures. If someone does ask, they aren’t trying to discriminate against the underprivileged; they’re trying to discriminate between job applicants on grounds of competence. Even if the question seems irrelevant to what you’re going to do, it’s a good bet that someone who gives a good answer is going to be better at the simpler work where you don’t need to know this, and that is something that will factor into the decision about who to hire. (Of course, that somebody may also be more expensive to hire, so bear that in mind too.)
Now, as I said, I wouldn’t ask in an interview about linked lists per se, unless you say you have a CS degree and you’ve just failed to answer a question I think you should know the answer to, in which case I’m probably trying to decide whether you lied about your degree.
I might ask you to show me how you would search a string (but I don’t expect you to know the best answer off the top of your head; the point is to work through it and see how you react). I might ask about the merits of hash tables (e.g. std::unordered_map or CFDictionary) versus trees (e.g. std::map).
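To give a flavour of the trade-off (a sketch, not a model answer): a hash table gives you average constant-time lookup but no useful ordering, while a tree gives you logarithmic lookup plus sorted iteration and cheap range queries:

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // Hash table: average O(1) lookup and insertion, but no meaningful ordering,
    // and a poor hash function can degrade performance badly.
    std::unordered_map<std::string, int> hashed = {{"b", 2}, {"a", 1}, {"c", 3}};

    // Tree-based map: O(log n) lookup and insertion, but iteration visits the
    // keys in sorted order and range queries are cheap.
    std::map<std::string, int> ordered(hashed.begin(), hashed.end());

    for (const auto &kv : ordered)
        std::printf("%s=%d\n", kv.first.c_str(), kv.second);   // a=1, b=2, c=3
}
```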
I would, however, take into account your background when thinking about your answer, and if you didn’t know about something I might explain a bit and see what you had to say. The point, often, is about testing your reasoning skills, not about whether you know the answer and can rattle it off.
One final word of advice: if you respond to a question in an interview, however silly you feel it is, with snark, you probably aren’t going to get the job. Part of the reason for interviewing people is for both parties to decide whether they’d like to work together, and snark is going to put people off.