This points out a deep flaw in many people's understanding of algorithms. CS classes teach all kinds of different sorting algorithms and we learn that N*log(N) is the fastest that any sorting algo can run, but many people forget the initial premise for all of them:
the only thing you can do is compare items and move them. Few problems are really this limited. In this example, we can deduce the order from another data structure.
So when looking at a problem, don't think "hey, this is an instance of problem XYZ that I learned about in school." Instead, think about how this problem is
different from XYZ. You may find that the difference can be exploited with tremendous results.
My best real-world example came years ago when we had a stream of objects and we needed to know if each one was in a set of around 20 million. One guy did an analysis and said we couldn't possibly do it without some huge amount of database hardware. He assumed we had to check the database for each object.
I completely destroyed his argument by pointing out that most objects would NOT be in the database and if we optimized for that case, performance would be fine. The result was an in-memory data structure that is small, very fast, and gives a "maybe or no" answer instead of a "yes or no" one. In other words, if the structure says "no" then you're
guaranteed the object isn't in the database. But if the structure says "maybe" then you have to query the DB for a definitive answer. You can tweak the probability that a "maybe" will ultimately result in "yes", and we have that set to 999/1000. In other words, if we have to go to the expense of checking the database, there's a 99.9% probability that we will find the object there. This is still working now, years later, with the database at about 150 million items.
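A "maybe or no" structure like that reads like a Bloom filter, so here's a minimal sketch in that style. This is my own illustration, not the original implementation: the class name, bit-array size, and hash count are all assumptions you'd tune for your own item count and false-positive budget.

    # Sketch of a "maybe or no" membership filter (Bloom-filter style).
    # All names and parameters here are illustrative assumptions.
    import hashlib

    class MaybeSet:
        def __init__(self, num_bits=200_000_000, num_hashes=10):
            # ~200M bits is roughly 25 MB; about 10 bits per key for ~20M items.
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8 + 1)

        def _positions(self, key):
            # Derive several bit positions from one SHA-256 digest
            # using the standard double-hashing trick.
            digest = hashlib.sha256(key.encode()).digest()
            h1 = int.from_bytes(digest[:8], "big")
            h2 = int.from_bytes(digest[8:16], "big")
            for i in range(self.num_hashes):
                yield (h1 + i * h2) % self.num_bits

        def add(self, key):
            for pos in self._positions(key):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, key):
            # False means "definitely not there" -- skip the database.
            # True means "maybe" -- only then pay for the real DB lookup.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(key))

The point of the design is the asymmetry: the common case (object not in the set) is answered entirely in memory, and the database only sees the rare "maybe" traffic.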
You can sum it up with this: