Daniel Lemire's points about low-level hardware optimization notwithstanding, it's worth pointing out that binary search (or low-level implementation variants) is the best only if you know nothing about the data beyond the fact that it is sorted / monotonic.
If you have priors about the data distribution, then it's possible to design algorithms that use that extra information to perform MUCH better. E.g., a human searching a physical paper dictionary can zoom into the right bunch of pages faster than pure idealized binary search; it's a separate matter that it's hard for humans to keep up the binary search to the very end, and we tend to default to scanning linearly for the last few iterations (cognitive convenience, affordances of human wetware, etc.).
In mathematical language, searching a sorted list is basically inverting a monotonic function, by using a closed-loop control algorithm. Often, we could very well construct a suitable cost function and use gradient descent or its accelerated cousins.
More generally, the best bet for solving a problem more efficiently is always to use more information about the specific problem you want to solve, instead of pulling up the solution for an overly abstract representation. That can offer scalable, orders-of-magnitude speedups compared to the constant-factor speedups from just using the hardware better.
Furthermore, with the vast and immediate knowledge that LLMs have, we could see a proliferation of domain-specific sorting algorithms designed for all types of purposes.
> If you have priors about the data distribution, then it's possible to design algorithms which use that extra information to perform MUCH better.
You don't even need priors. See interpolation search, where knowing the position and value of two elements in a sorted list already lets the search make an educated guess about where the element it's looking for sits, by interpolating between those elements.
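For reference, a minimal sketch of the plain linear-interpolation version (my own illustration, not from the article; assumes 32-bit keys in a sorted array):

    #include <stdint.h>
    #include <stddef.h>

    // Returns the index of key in a[0..n-1], or -1 if absent. The probe is the
    // linear interpolation between the current endpoints instead of the midpoint.
    static ptrdiff_t interpolation_search(const int32_t *a, size_t n, int32_t key) {
        if (n == 0) return -1;
        size_t lo = 0, hi = n - 1;
        while (a[lo] <= key && key <= a[hi]) {
            size_t mid = (a[hi] == a[lo])
                ? lo
                : lo + (size_t)(((double)key - a[lo]) / ((double)a[hi] - a[lo]) * (double)(hi - lo));
            if (a[mid] == key) return (ptrdiff_t)mid;
            if (a[mid] < key)  lo = mid + 1;
            else               hi = mid - 1;  // a[lo] <= key < a[mid] implies mid > lo, so mid >= 1
        }
        return -1;
    }

On roughly uniform data this converges in O(log log n) probes; on adversarial data it degrades toward linear, which is what the guarded variants mentioned further down the thread address.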
It's an interpolation search. You interpolate the values you evaluated by whatever method you'd like. No one forces you to do linear interpolation. You can very easily fit a quadratic polynomial with the last 3 points, for example.
Interpolation search seems to have an expected complexity of O(log log n) on uniformly distributed data. That's pretty efficient.
Say, if you know the function is a polynomial of degree N, with N+1 datapoints you can find it – e.g. with Lagrange's polynomial, although the finite precision of computer numbers might make that more complex.
I swear I read an article about treaps but instead of being used to balance the tree, they used the weights to Huffman encode the search depth to reduce the average access time for heterogenous fetch frequencies.
I did not bookmark it and about twice a year I go searching for it again. Some say he’s still searching to this day.
Huffman coding assumes your corpus is a string of discrete elements (symbol strings) without any continuous structure (e.g. topology/geometry). With that fairly mild assumption, it gives a recipe to reorganize (transform/encode) your data as a prefix tree, to minimize the bits of information needed to communicate the contents of your corpus, i.e. reducing (on average) the bits of information you need to identify a specific item. E.g., to go back to the analogy from my previous comment above... if the function you are inverting via search has long plateaus, then you could simply front-load those as guesses; that's roughly the spirit of Huffman coding, except it eschews monotonicity.
> it's worth pointing out that binary search (or low-level implementation variants) is the best only if you know nothing about the data beyond the fact that it is sorted / monotonic
Also if you do not learn anything about the data while performing the binary search, no? Like, if you are constantly below the estimate, you could guess that the distribution is biased toward large values and adjust your guess based on this prediction.
It's not possible to learn anything about other elements when performing binary search, _except_ the only thing there is to learn: if the target is before or after the recently compared element.
If we would guess that there is a bias in the distribution based on recently seen elements, the guess is at least as likely to be wrong as it is to be right. And if we guess incorrectly, in the worst case, the algorithm degrades to a linear scan.
Unless we have prior knowledge. For example: if there is a particular distribution, or if we know we're dealing with integers without any repetition (i.e. each element is strictly greater than the previous one), etc.
> It's not possible to learn anything about other elements when performing binary search, _except_ the only thing there is to learn: if the target is before or after the recently compared element.
You have another piece of information: you don't only know whether the element was before or after the compared element. You also know the delta between what you looked at and what you're looking for, and you have the delta from the previous item you looked at.
Is the disconnect here that in many datasets there is some implicit distribution? For example if we are searching for english words we can assume that the number of words or sentences starting with "Q" or "Z" is very small while the ones starting with "T" are many. Or if the first three lookups in a binary search all start with "T" we are probably being asked to search just the "T" section of a dictionary.
Depending on the problem space such assumptions can prove right enough to be worth using despite sometimes being wrong. Of course if you've got the compute to throw at it (and the problem is large) take the Contact approach: why do one when you can do two in parallel for twice the price (cycles)?
Assuming your key space is anything like randomly distributed.
Thinking about it--yeah, if you can anticipate anything like a random distribution it's a few extra instructions to reduce the number of values looked up. In the old days that would have been very unlikely to be a good deal, but with so many algorithms dominated by the cache (I've seen more than one case where a clearly less efficient algorithm that reduced memory reads turned out better) I suspect there's a lot of such things that don't go the way we learned them in the stone age.
> If we would guess that there is a bias in the distribution based on recently seen elements, the guess is at least as likely to be wrong as it is to be right.
This is true for abstract and random data. I don't think it's true for real world data.
For example, python's sort function "knows nothing" about the data you're passing in. But, it does look for some shortcuts and these end up saving time, on average.
For a list of sorted values with no other knowledge, binary search is optimal. Provably: it's simple information theory on binary answers.
You can do better if the list is stable by reusing information.
But gathering that information during searches is going to require great complexity to leverage, as searches are an irregular information gathering scheme.
So spend RAM on speedup structures up front.
1) Create a table that maps the first 8 bits to upper and lower indexes in the list. Then binary search over the last 8 bits. That cuts the search time in half.
2) Go all the way, and create an array of 32,768 indexes, with all 1's for misses. Lookups are then O(1), hit or miss.
Stable lists allow for sliding parametric trade offs between RAM-lookup vs. binary search. From full lookup, to full binary.
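A sketch of what I mean by option 1 (names are mine, assuming 16-bit keys in a sorted array): precompute, for each possible high byte, where that prefix starts, then binary search only within that slice.

    #include <stdint.h>
    #include <stdbool.h>

    // start[p] .. start[p+1]-1 is the range of elements whose high byte is p.
    typedef struct {
        const uint16_t *a;
        int32_t n;
        int32_t start[257];
    } prefix_index;

    static void prefix_index_build(prefix_index *ix, const uint16_t *a, int32_t n) {
        ix->a = a;
        ix->n = n;
        int32_t j = 0;
        for (int32_t p = 0; p <= 256; p++) {
            while (j < n && (a[j] >> 8) < p) j++;
            ix->start[p] = j;            // start[0] = 0, start[256] = n
        }
    }

    static bool prefix_index_contains(const prefix_index *ix, uint16_t key) {
        int32_t lo  = ix->start[key >> 8];
        int32_t end = ix->start[(key >> 8) + 1];
        int32_t hi  = end;
        while (lo < hi) {                // plain binary search, but only over ~n/256 elements
            int32_t mid = lo + (hi - lo) / 2;
            if (ix->a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo < end && ix->a[lo] == key;
    }

Option 2 is the same idea taken to the limit: index on all the bits and the "search" becomes a single table load.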
> Also [IFF] you do not learn anything about the data while performing the binary search, no?
Yes, absolutely!
I forgot to share this general perspective above, and it's too late to edit, so I'll add it here...
Since binary search assumes only monotonicity, splitting your interval into two equal parts extracts one bit of information per step, and any other choice would extract less information on average. One bit of information per step is how you end up needing log(n) steps to find the answer.
To accelerate your search, you basically need to extract those log(n) bits as fast as you can. You can think of that as leveraging both the prior, and everything you learn along the way -- to adaptively design each step to be the optimal experiment to extract maximum amount of information. And adaptive local models of your search space (gradient / hessian / etc) allow you to extract many more bits of information from each query / experiment, provided the function you are inverting has some local structure.
PS: That is why we leverage these ideas to "search" for the optimum, among a space of solutions.
1. Check for dense list O(1)
2. Check upper bound
3. Constant trip count binary search
The constant trip count is great for the branch predictor, and the core loop is pretty tightly optimized for the target hardware, avoiding multiplies. Every attempt to get more clever made the loop worse and did not pay for itself. It's hard because it's an array-of-structs format with a size of 12, and mostly pretty small N.
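Not your code, obviously, but for anyone wondering what a constant-trip-count binary search looks like, here's a sketch (plain sorted array of 32-bit keys, names mine): the number of iterations depends only on n, and each step can compile to a conditional move rather than a hard-to-predict branch.

    #include <stdint.h>
    #include <stddef.h>

    // Lower bound with a trip count fixed by n alone: always ceil(log2(n)) halving
    // steps, independent of the data and the key.
    static size_t lower_bound_fixed_trips(const int32_t *a, size_t n, int32_t key) {
        const int32_t *base = a;
        size_t len = n;
        while (len > 1) {
            size_t half = len / 2;
            // branchless step: advance the base iff the left half is entirely < key
            base += (base[half - 1] < key) ? half : 0;
            len -= half;
        }
        return (size_t)(base - a) + (n && base[0] < key);
    }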
Yeah namespaces and public/private would be quite nice, but C doesn't have them, so they get hacked on via macros and prefixing. The syntax was not the hard part of working or analyzing this code, though.
Those are fairly indistinguishable. It's when they start removing letters from words to save... debug symbol bytes or something? That's when c-style naming annoys me.
This is also my pet peeve with a lot of code as well as commands like
npm -g i package-name
Like why would you teach people to do this? I understand people needed to save precious bytes in the sixties so we have cat and ls but saving 192 bytes or whatever with shorter variable names is not a worthwhile tradeoff anymore.
For a pretty small N I've found that less clever can be quite a bit faster. I'd try a linear search - possibly SIMD if you can change the data format to struct-of-arrays. An adaptive approach that uses linear search up to a certain N can also yield some benefit.
If you control the layout, eytzinger layout typically will give you the best of both worlds. As fast as a linear scan for small N, much faster than binary search over a sorted array for large N.
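The core of an Eytzinger-layout lower bound is tiny; a sketch based on the usual write-ups (assumes the keys were pre-permuted into BFS order with the root at index 1, and a GCC/Clang builtin):

    #include <stdint.h>
    #include <stddef.h>

    // a[1..n] holds the keys in Eytzinger (BFS) order: children of i are 2i and 2i+1.
    // Returns the Eytzinger index of the first key >= x, or 0 if every key < x.
    static size_t eytzinger_lower_bound(const int32_t *a, size_t n, int32_t x) {
        size_t k = 1;
        while (k <= n)
            k = 2 * k + (a[k] < x);       // go right when a[k] < x, left otherwise
        // Cancel the trailing run of "right" moves plus the final "left" move.
        k >>= __builtin_ctzll(~k) + 1;
        return k;
    }

The nice property is that each level of the tree is contiguous, so the top levels stay cache- and prefetch-friendly, which is where the "best of both worlds" behaviour comes from.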
The first implementation I encountered was a linear search, starting at the last-found field. Empirically it performed better to do a binary search with early exit and branchless bounds selection, I think due to branch predictor pressure. The data representation could be changed but it's tricky, as there are other traversals that want to go in sorted order, and there are lots of places that pass just one pointer for fields. But I agree any further improvement will probably have to come from that.
SIMD is tricky even with SoA because there is significant latency going between the general registers and the vector units, plus arm little cores can be configured to share a vector unit with another core.
> SIMD is tricky even with SoA because there is significant latency going between the general registers and the vector units
My experience is mostly limited to AMD64, but libraries like glibc use SIMD in many places for faster linear search. Presumably they've done testing and found it worthwhile.
Yeah, ARM little cores are a very different story - they aren't big out-of-order superscalar designs; they can dispatch at most two operations per cycle.
Big cores are more like that dispatching 8 or more operations per cycle, but they're also more expensive, larger, etc.
Sure, but the whole point is that you often don't know anything further about the data.
That's why b-trees are the standard in databases. The data could be anything, and its characteristics could massively change at any time, as you suddenly import a whole bunch of new rows at once.
And while you can certainly design algorithms around e.g. gradient descent to try to accelerate lookup, b-trees are already incredibly fast, and have lots of other benefits like predictable worse-case performance and I/O requirements, supporting range scans, ordered traversal, prefix conditions, etc.
So yes, you can certainly design lookup algorithms that are more efficient for particular data distributions, but they will also often lack other important properties. And b-trees are already so fast, improvements are often negligible -- like even if another algorithm produces a closer initial guess, it may be slower to locate the final item, or it may be faster on average but have horrible worst-case performance that makes it unusable.
Even with a paper dictionary, I've always used pretty much a binary search beyond the first initial guess, which only saves you a couple of hops. And actually, once I get to the right handful of pages I'm probably more linear than I should be, and I'd probably be faster if I tried to do a rigorous binary search, but I have to balance that with how long it takes to flip pages.
The cost plan is a crude approximation of the actual query cost. Sometimes the query planner makes a terrible guess, and your resident DBA won't appreciate being paged at 3 AM on a Sunday because of it. A good strategy is to freeze the query plan once you have a sufficient sample size of data in the involved tables.
Fritz Henglein has done some interesting work on fast sorting/grouping. I think "Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time"[1] is the main paper. Ed Kmett took those ideas and refined them into the discrimination[2] library for Haskell, and gave a very interesting tech talk about it[3].
For humans, binary searching a dictionary is slower because it requires a different physical action vs. scanning, and we have advanced OCR and page-flipping capabilities. Especially recognizing when something hasn't changed, e.g. "still on the Es, keep flipping", is maybe 10ms.
More accurately, binary search is optimal only if you cannot determine distance between the data points (you can compute `<` but not `-`). It's inaccurate to say it's optimal if you know "nothing" about the distribution. If you encounter a point that's much closer to your high pivot vs much closer to your low pivot, there is no possible prior knowledge state uninformed enough to conclude that the best place to search in both cases is in the middle.
Isn't "quaternary" just sort of unrolling the binary search loop by one level? I mean, to find the partition in which the item is located, you still do roughly the same rough number of comparisons. You're just taking them 4 at a time, not 2 at a time. Seems like loop unrolling would give you the same.
Yes, this can be seen as unrolling the loop a bit. It improves performance not by significantly reducing the number of instructions or memory reads, but by relaxing the dependencies between operations so that it doesn't have to be executed purely serially. You could also look at it as akin to speculatively executing both sides of the branch.
Quaternary search effectively performs both of the next loop iteration’s possible comparisons simultaneously with the current iteration’s comparison. This is a little more complex than simple loop unrolling.
Regardless, both kinds of search are O(log N) with different constants. The constants don’t matter so much in algorithms class but in the real world they can matter a lot.
It's trickier than that. Modern processors are speculative, which means that they guess at the result for a comparison and keep going along one side of a branch as far as they can until they are told they guessed wrong or hit some internal limit. If they guessed wrong, they throw away the speculative work, take a penalty of a handful of cycles, and do the same thing again from a different starting point.
Essentially, this means that all loops are already unrolled from the processor's point of view, minus a tiny bit of overhead for the loop itself that can often be ignored. Since in binary search the main cost is grabbing data from memory (or from cache in the "warm cache" examples), the real game is how to get the processor to issue the requests for the data you will eventually need as far in advance as possible, so you don't have to wait as long for it to arrive.
The difference in algorithm for quad search (or anything higher than binary) is that instead of taking one side of each branch (and thus prefetching deeply in one direction), you prefetch all the possible cases but with less depth. This way you are guaranteed to have successfully issued the prefetch you will eventually need, and are spending slightly less of your bandwidth budget on data that will never be used in the actual execution path.
As others are pointing out, "number of comparisons" is an almost useless metric when comparing search algorithms if your goal is predicting real world performance. The limiting factor is almost never the number of comparisons you can do. Instead, the potential for speedup depends on making maximal use of memory and cache bandwidth. So yes, you can view this as loop unrolling, but only if you consider how branching on modern processors works under the hood.
Yea, I get that the actual comparison instruction itself is insignificant. It's everything that goes along with it. Seems like quaternary is fetching more data, however.
For instance, if you have 8 elements, 01234567, and you're looking for 1, with binary, you'd fetch 4, 2, and then 1. With quaternary, you'd fetch 2, 4, 6, then 1. Obviously, if you only have 8 elements, you'd just delegate to the SIMD instruction, but if this was a much larger array, you'd be doing more work.
I guess on a modern processor, eliminating the data dependency is worth it because the processor's branch prediction and speculation only follows effectively a single path.
Would be interesting to see this at a machine cycle level on a real processor to understand exactly what is happening.
It's not about doing more or less work; it's about doing the work faster. For instance, it's relatively common to discover that some recomputation can be faster than caching or lookup tables. Similarly, fetching more from memory also can be faster if it means you make less roundtrips.
Well that's where I thought this link was going to go before it went down the SIMD path... We have a way to beat binary search, it is called B-trees. It has the same basic insight: you can easily take 64 evenly spaced elements from your data set, compare against all of them rapidly, and instead of bifurcating your search space once, you do it six times over; and because you store those 64 elements contiguously in memory, they only take one fetch and you get cache locality... But as you have more elements, you need to repeat this lookup-table step three or four or five times, so it costs a bit of extra space. So what if we make it not cost space by just storing the data in these lookup tables...
A B-tree is not a search algorithm though, it is a data structure. While it would nice to be able to somehow instantly materialize a B-tree from a linear array, CPUs aren't quite there yet. It would also be nice not to have to deal with linear arrays where B-trees would be better fit in the first place, but we are not quite there yet either.
I also wrote recently [1] about Exponential Search [2], which is another algorithm for when you need to repeatedly binary search an array and the elements you're searching for are themselves sorted. It allowed for an 8x speedup in our workload!
Exponential search is useful when you're querying a REST API that addresses resources with sequential IDs, and need the last ID, but there's no dedicated endpoint for it:
HEAD /users/1 -> 200 OK
HEAD /users/2 -> 200 OK
HEAD /users/4 -> 200 OK
...
HEAD /users/2048 -> 200 OK
HEAD /users/4096 -> 404 Not Found
And then a binary search between 2048 and 4096 to find the most recent user (and incidentally, the number of users). Great info to have if you're researching competing SaaS companies.
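In code, the galloping phase plus the final bisection looks something like this sketch (head_exists() is a hypothetical stand-in for the HEAD request; assumes IDs start at 1 with no gaps):

    #include <stdbool.h>
    #include <stdint.h>

    extern bool head_exists(uint64_t id);   // true on 200 OK, false on 404

    static uint64_t last_existing_id(void) {
        if (!head_exists(1)) return 0;       // no resources at all
        uint64_t lo = 1, hi = 2;
        while (head_exists(hi)) {            // exponential phase: 2, 4, 8, ..., 4096
            lo = hi;
            hi *= 2;
        }
        while (hi - lo > 1) {                // binary search on (lo, hi): lo exists, hi doesn't
            uint64_t mid = lo + (hi - lo) / 2;
            if (head_exists(mid)) lo = mid; else hi = mid;
        }
        return lo;                           // highest existing ID
    }

That's O(log k) requests total, where k is the answer, rather than O(log N) for some guessed upper bound N.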
I'm guessing you don't really do this for users since the response for all of them should be 401 on any user that you aren't logged in as? I would argue even for IDs that don't exist, you should get the same error whether they don't exist or you just aren't authorised to see them. It's been a few years since I worked in web but I think that's what I would have done, GitHub does similar for private repos.
I'd like to point out that all the graphs only go to 4,000 elements, which is basically non-data. Basically it'd be like measuring which car wins a 1cm race.
For small workloads binary search is slower than just checking every element.
To add to this, I think people can forget how small log(n) is... it can practically be seen as constant (as the log base 2 of ATOMS IN THE UNIVERSE is ~300).
Since the cpu always accesses a full cache line (64 bytes) at a time, you might as well search the entire cache line (it’s practically free once the data is on-cpu). So I’d like to try a ‘binary’ search that tests all the values in the ‘middle cache line’ and then chooses to go left or right if none match. You can do the cache line search as a single 512bit simd instruction. A cache line is 64 bytes (or 32 16-bit integers); such a search might well be almost 32 times faster than simple binary search; at least it’ll do 32x less memory accesses, which will dominate in most realistic programs.
Searching the upper cache lines in your binary search tree (sorted vector) for your target is unlikely to yield results. Instead you want to use the extra data in the line to shorten the search, which leads you to a B-Tree or B+tree.
For 4 byte keys and 4 byte child pointers (or indexes in to an array) your inner nodes would have 7 keys, 8 child pointers and 1 next pointer, completely filling a 64 byte cache-line and your tree depth for 1 million entries would go down from ~20 to ~7, the top few levels of which are likely to remain cache resident.
With some thought, it's possible to use SIMD on B-tree nodes to speed up the search within the node, but it's all very data dependent.
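Concretely, the node layout described above is just (an illustration of the arithmetic, assuming 4-byte keys and 4-byte child indexes into a node array):

    #include <stdint.h>

    typedef struct {
        uint32_t keys[7];     // 28 bytes: separator keys
        uint32_t child[8];    // 32 bytes: indexes of the 8 children
        uint32_t next;        //  4 bytes: next node on the same level
    } btree_inner_node;       // 64 bytes total = exactly one cache line

    _Static_assert(sizeof(btree_inner_node) == 64, "node should fill one cache line");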
Binary searching a sorted array is isomorphic to a sorted binary tree with implicit child pointers.
It seems to me like there should be a sort order that stores the items as a fully-dense left-shifted binary tree from top to bottom (e.g. like the implicit heap in an in-place heap sort, but a binary search tree instead of a heap). Is there a name for this? Does it show any performance wins in practice?
If you are talking smaller arrays, linear search with a sentinel value at the end is already tough to beat. The thing that sucks about that claim, is that "smaller" is such a nebulous qualifier that it is really hard to internalize.
Prior to the current generation Intel designs, Apple’s branch predictor tables were a good deal larger than Intel’s IIRC, so depending on benchmarking details it’s plausible that Apple Silicon was predicting every branch perfectly in the benchmark, while Intel had a more real-world mispredict rate. Perf counters would confirm.
The algorithm description was a bit confusing for me.
The SIMD part is just in the last step, where it uses SIMD to search the last 16 elements.
The Quad part is that it checks 3 points to create 4 paths, but also it's searching for the right block, not just the right key.
The details are a bit interesting. The author chooses to use the last element in each block for the quad search. I'm curious how the algorithm would change if you used the first element in each block instead, or even an arbitrary element.
The classical canonical Comp Sci algorithms are effectively "designed" for CPUs with no parallelism (either across multiple cores, via Hyper-threading technology, or "just" SIMD style instructions), and also where all memory accesses take the same amount of time (so no concept of L1/L2/L3/etc caches of varying latencies). And all working on general/random data.
As soon as you move away from either (or both) of these assumptions then there are likely to be many tweaks you can make to get better performance.
What the classical algorithms do offer is a very good starting point for developing a more optimal/efficient solution once you know more about the specific shape of data or quirks/features of a specific CPU.
When you start to get at the pointy end of optimising things then you generally end up looking at how the data is stored and accessed in memory, and whether any changes you can make to improve this don't hurt things further down the line. In a job many many years ago I remember someone who spent way too long optimising a specific part of some code only to find that the overall application ran slower as the optimisations meant that a lot more information needed later on had been evicted from the cache.
This reminds me of two excellent articles[1][2] by Paul Khuong, in which he talks about using size-specialized binary search for power-of-two sized arrays (special-casing the first iteration for other sizes).
He uses conditional moves and defines the number of iterations in advance to elide the often-mispredicted branch, and in the second article goes on to fix cache aliasing issues for large vectors using ternary search.
As a teenager I spent a weekend thinking that if binary search was good, because it cuts the search space in half at every step, then wouldn't a ternary search be better? Because we'd cut it into thirds at every step.
So instead of just comparing the middle value, we'd compare the one at the 1/3 point, and if that turns out to be too low then we compare the value at the 2/3 point.
Unfortunately although we cut the search space to 2/3 of what it was for binary search at each step (1/3 vs 1/2), we do 3/2 as many comparisons at each step (one comparison 50% of the time, two comparisons the other 50%), so it averages out to equivalence.
EDIT: See zamadatix reply, it's actually 5/3 as many comparisons because 2/3 of the time you have to do 2.
Isn't it a bit better on average, although not as much as you'd hoped? For example 19 steps of binary search get you down to 1/524288 of the original search space with 19 comparisons. 12 steps of ternary search get you down to 1/3^12 = 1/531441 of the original search space with, on average, 12 * 3/2 = 18 comparisons.
> Unfortunately although we cut the search space to 2/3 of what it was for binary search at each step (1/3 vs 1/2), we do 3/2 as many comparisons at each step (one comparison 50% of the time, two comparisons the other 50%), so it averages out to equivalence.
True, but is there some particular reason that you want to minimize the number of comparisons rather than have a faster run time? Daniel doesn't overly emphasize it, but as he mentions in the article: "The net result might generate a few more instructions but the number of instructions is likely not the limiting factor."
The main thing this article shows is that (at least sometimes, on some processors) a quad search is faster than a binary search _despite_ the fact that it performs theoretically unnecessary comparisons. While some computer scientists might scoff, I'd bet heavily that an optimized ternary search could also frequently outperform.
Those algorithms may not be doing any pairwise comparisons (e.g. between elements being sorted) but they still do plenty of comparisons.
And some of the algorithms, as described, still end up doing pairwise comparisons in all-but-optimal cases.
(Bucket sort requires items that end up in the same bucket to be sorted. This doesn't happen automatically via the algorithm as stated. Radix sort requires the items at each "level" to be sorted. Neither algorithm specifies how this should be done without pairwise comparisons.)
Counting Sort does work without pairwise comparisons, but is only efficient for small ranges of values, and if that's the case then it's obvious you don't need to apply a traditional sort if the number of elements greatly outnumbers the number of possible values.
Also, the algorithms still require some form of comparisons, just not pairwise comparisons.
> All other people live in the real world, and care about real-world performance, and modern computer scientists know that.
Yes, completely agree with that, but traditional "Comp Sci" is built on small building blocks of counting "comparisons" or "memory accesses". It's not designed to analyse prospective performance given modern processors with L1/L2/L3 caches, branch prediction, SIMD instructions, etc.
This ternary approach doesn't actually average 3/2 comparisons per level:
- First third: 1 comparisons
- Second third: 2 comparisons
- Third third: 2 comparisons
(1+2+2)/3 = 5/3 average comparisons. I think the gap starts here at assuming it's 50% of the time because it feels like "either you do 1 comparison or 2" but it's really 33% of the time because "there is 1/3 chance it's in the 1st comparison and 2/3 chance it'll be 2 comparisons".
This lets us show ternary is worse in total average comparisons, just barely: 5/3*Log_3[n] = 1.052... * Log_2[n].
In other words, you end up with fewer levels but doing more comparisons (on average) to get to the end. This is true for all searches of this type (w/ a few general assumptions like the values being searched for are evenly distributed and the cost of the operations is idealized - which is where the main article comes in) where the number of splits is > 2.
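For completeness, the same accounting for a general k-way split (assuming the target is uniformly distributed over the k partitions and the k-1 boundary comparisons are done sequentially, so landing in partition i costs min(i, k-1) comparisons):

    \mathbb{E}[\text{comparisons per level}]
      = \frac{1}{k}\Big(\sum_{i=1}^{k-1} i + (k-1)\Big)
      = \frac{(k-1)(k+2)}{2k}

    \text{total} \approx \frac{(k-1)(k+2)}{2k}\,\log_k n
      = \frac{(k-1)(k+2)}{2k\log_2 k}\,\log_2 n

For k = 2 the leading factor is exactly 1; for k = 3 it is 5/(3 log_2 3) ≈ 1.052 as above, and it only grows for larger k.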
If you sort of squint this idea does work in cases where the cost of comparison is dominated by the cost of going down a level. And that leads you to things like b-trees where fetching a page from disk is expensive but doing the comparisons within that page is basically free.
When you can't seek quickly, e.g. on a disk, you can use a B-tree with say 128-way search. Fetching 128 keys doesn't cost much more than fetching 1 but it saves an additional 7 fetches.
Note that CPUs have also gotten dramatically wider in both execution width and vector capability since you were a teenager. The increased throughput shifts the balance more toward being able to burn operations to reduce dependency chains. It's possible for your idea to have been both non-viable on the CPUs at the time and more viable on CPUs now.
Not for the ternary version of the binary search algorithm, because what you had is just a skewed binary search, not an actual ternary search. Because comparisons are binary by nature, any search algorithm involving comparisons are a type of binary search, and any choice other than the middle element is less efficient in terms of algorithmic complexity, though in some conditions, it may be better on real hardware. For an actual ternary search, you need a 3-way comparison as an elementary operation.
Where it gets interesting is when you consider "radix efficiency" [1], for which the best choice is 3, the natural number closest to e. And it is relevant to tree search, that is, a ternary tree may be better than a binary tree.
This idea is closely related to the famous "Stooge Sort", which is basically quicksort with the pivot at 1/3 rather than 1/2. Naively, one might think it is faster than Quicksort, but of course it isn't.
For years--maybe still?--analyzing its running time was a staple of the first or second problem set in a college-level "Introduction to Algorithms" course.
> the famous "Stooge Sort", which is basically quicksort with the pivot at 1/3 rather than 1/2. Naively, one might think it is faster than Quicksort, but of course it isn't.
The pivot in quicksort isn't located anywhere fixed. You can do quicksort by always choosing the pivot to be the first element in the (sub)list before you do the swapping. If it happens that the pivot you choose always belongs 1/3 of the way through the sorted sublist... that won't even affect the asymptotic running time; it will still be O(n log n).
Stoogesort is nothing like that. It's worse than quadratic.
In quicksort, though, once you've done your thing with the pivot, you break the list into two non-overlapping sublists. If it happened that the pivot broke the list into 1/3 and 2/3, you'd then recur with one sublist of 1/3 the length and one of 2/3 the length.
Stoogesort has a different recursion pattern: you recur on the first 2/3, then you recur on the second 2/3, then you recur on the first 2/3 again.
The processing step isn't similar to quicksort either - in quicksort, you get a sublist, choose an arbitrary pivot, and then process the list such that every element of the list is correctly positioned relative to the pivot, which takes O(n) time. In stoogesort, you process the list by swapping the beginning with the end if those two elements are out of order, which takes O(1) time.
Is it possible to do some sort of Binary* Search ("Binary Star", as in the A* search algorithm, where we use heuristics)?
a: [1,3,5,7,8,9,10,15]
x: 8 (query value)
For this array, we would compare a[0], a[3], a[7] (left/mid/right) by subtracting 8 (the query value).
And we would get d=[-7, -1, 7]
Now, normally, with binary search, because 8 > mid, we would go to (mid+right)/2, BUT we already have some extra information: we see that x is closer to a[3] (diff of 1) than to a[7] (diff of 7), so instead of going to the middle between mid and right, we could choose a new "mid" point that's closer to the desired value (maybe as a ratio of (d[right]-d[mid])).
so left=mid+1, right stays the same, but new mid is NOT half of left and right, it is (left+right)/2 + ratioOffset
Where ratioOffset makes the mid go closer to left or right, depending on d.
The idea is quite obvious, so I am pretty sure it already exists.
But what if we use SIMD with it? So we know not only which block the number is in, but also which part of the block the number is likely in. Or is this what the article actually says?
Oh, that's what the article was referring to with "interpolation".
Weird that I didn't hear about it before; is it not used that much in practice?
One reason I could see is that binary search is fast enough and easy to implement. Even on largest datasets it's still just a few tens of loop iterations.
I thought this would be about how you can beat binary search in the 'Guess Who?' game. There's a cool math paper about it [0] and an approachable video by the author. [1]
You can't beat binary search in Guess Who. From the abstract:
>> Instead, the optimal strategy for the player who trails is to make certain bold plays in an attempt to catch up.
The reason that's optimal, if you're losing, is that you assume that your opponent, who isn't losing, is going to use binary search. They're going to use binary search because it's the optimal way to find the secret.
Since you're behind, if you also use binary search, both players will progress toward the goal at the same rate, and you'll lose.
Trying to get lucky means that you intentionally play badly in order to get more victories. You're redistributing guesses taken between games in a negative-sum manner - you take more total guesses (because your search strategy is inferior to binary search), but they are unevenly distributed across your games, and in the relatively few games where you perform well above expectation, you can score a victory.
You're mixing two different objectives the paper presents. You can't beat binary search when the objective is to minimise the expected number of turns in a single player setting.
However, in a two player setting, using the strategies presented in the paper, you will beat an adversary that uses binary search in more than 50% of the games played.
What do you think you're saying that I didn't already say?
> in a two player setting, using the strategies presented in the paper, you will beat an adversary that uses binary search in more than 50% of the games played.
This is technically true. But 50 percentage points of your "more than 50%" of games played are games where you exclusively use binary search. For the remainder, you're redistributing luck around between potential games in a way that is negative-sum, exactly like I just said.
> But 50 percentage points of your "more than 50%" of games played are games where you exclusively use binary search
Although I think I get your point, saying 'You can't beat binary search in Guess Who' is misleading, considering you would probably describe the optimal strategy yourself as 'play binary search when ahead; when behind, don't'.
> Trying to get lucky means that you intentionally play badly in order to get more victories
That's quite an uncommon definition of good and bad.
The (AI generated?) image on this article is absolutely not helpful, and I think it's wrong based on how I read the article. Better not to have an image at all.
Seriously. It makes it seem like this is going to be a blog post either intended for elementary school students, or more likely for teachers on how to better explain some arithmetic concept to elementary school students.
It's absolutely bizarre. Images communicate meaning. Much better to have no image than to have an image that is completely misleading about the target audience or level of technical sophistication.
Seems like you can use the core intuition here of SIMD comparison of multiple elements on more than just the terminal scale.
The outline would be:
a) use a gather to grab multiple elements from 16 evenly spaced locations
b) compare these in parallel using a SIMD instruction
c) focus in on the correct block
d) if the block is small revert to linear search, else repeat the gather/compare cycle
Even though the gather instruction is reading from non-contiguous memory and reading more than you normally would need to with a binary search, enabling a multiway compare and collapsing 4 levels of binary search should be a win on large tables.
You also may not be able to do a full 16-way comparison for all data types. Searching for float64 will limit you to 8 way comparisons (with AVX-512) but int32, float32 will allow 16 way comparisons.
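Roughly, one step of that outline in AVX-512 terms might look like this sketch (my own, untested; assumes 32-bit keys, at least 16 elements in the current range, and -mavx512f):

    #include <immintrin.h>
    #include <stdint.h>
    #include <stddef.h>

    // One 16-way partition step: probe 16 evenly spaced keys in [lo, hi) at once
    // and return how many of them are < key (0..16). The caller narrows to the
    // corresponding sub-range and repeats, or falls back to a linear scan.
    static size_t simd16_partition_step(const int32_t *a, size_t lo, size_t hi,
                                        int32_t key) {
        size_t stride = (hi - lo) / 16;
        int32_t idx[16];
        for (int i = 0; i < 16; i++)
            idx[i] = (int32_t)(lo + (size_t)(i + 1) * stride - 1);  // probe positions
        __m512i vidx  = _mm512_loadu_si512(idx);
        __m512i probe = _mm512_i32gather_epi32(vidx, a, 4);  // 16 non-contiguous loads
        __m512i vkey  = _mm512_set1_epi32(key);
        __mmask16 lt  = _mm512_cmplt_epi32_mask(probe, vkey);
        return (size_t)_mm_popcnt_u32(lt);                   // partition index
    }

Whether the gather pays for itself is very hardware-dependent; on some cores a gather is little better than 16 scalar loads, which is presumably why the article sticks to contiguous SIMD loads at the leaf level.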
The range is 1..4096, so 4096 bits = 512 byte bitmap would suffice.
That is, if you're only ever going to test for membership in the set.
If you need metadata then ... You could store that in a packed array and use a population count of the bit-vector before the lookup bit as index into it.
For each word of bits, store the accumulated population count of the words before it to speed up lookup.
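A sketch of that layout (names mine; assuming values 0..4095 rather than 1..4096, and a GCC/Clang popcount builtin):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t bits[64];    // membership bitmap: 4096 values -> 64 words = 512 B
        uint16_t rank[64];    // rank[w] = number of set bits in words 0..w-1
        // metadata lives in a packed side array, one entry per set bit,
        // indexed by meta_index() below
    } bitset4096;

    static bool contains(const bitset4096 *s, uint16_t v) {
        return (s->bits[v >> 6] >> (v & 63)) & 1;
    }

    // Position of v's metadata in the packed side array (valid only if v is present).
    static uint32_t meta_index(const bitset4096 *s, uint16_t v) {
        uint64_t below = s->bits[v >> 6] & ((1ULL << (v & 63)) - 1);
        return s->rank[v >> 6] + (uint32_t)__builtin_popcountll(below);
    }

The rank array is the "accumulated population count" mentioned above; it costs a couple of bytes per word and turns the metadata lookup into two loads and a popcount.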
Modern CPUs are memory-bound so I don't think SIMD would help much over using 64-bit words. For 4096 bits / 64, that would be 64 additional bytes.
I did little experiments with search in small arrays (16-32 items) and binary search is one of the worst methods because it requires a lot of branches. The fastest method for small arrays was a linear branchless search (you walk over all elements without breaking out of the loop; for example, if you want to know whether the array contains a number, you logically OR the checks for all items). I didn't use SIMD, but branches are very expensive for small arrays, and simply checking all elements without branching is faster.
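For concreteness, the branchless membership test I mean is just (a sketch, not the exact benchmark code):

    #include <stdint.h>
    #include <stdbool.h>

    // No early exit: the loop trip count depends only on n, and the only
    // data-dependent work is accumulating the comparison results with OR.
    static bool contains_branchless(const uint16_t *a, int n, uint16_t key) {
        int found = 0;
        for (int i = 0; i < n; i++)
            found |= (a[i] == key);    // each comparison yields 0 or 1
        return found != 0;
    }

The compiler can unroll and/or vectorize this freely because nothing breaks out of the loop.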
I think the problem is with branches misprediction. Binary and linear searches use a lot of unpredictable branches that ruin performance. No-branch versions of search work faster. I wrote the details here: https://news.ycombinator.com/item?id=47983102
How did you do it? Hopefully you generated thousands of such arrays and measured the cost of searching all of them as a single iteration? Because depending on what you use to measure time, the overhead of reading the timer would likely dwarf the cost of searching such a small array. The best case is a dedicated cycle counter instruction/register (e.g. rdtsc) and even that may cost hundreds of cycles. Cache hits cost less than a cycle so if your timing code gated a small number of searches you essentially didn't measure the search at all.
That aside your findings point to the prefetcher having identified your linear searches as sequential access so practically every single access was a cache hit (you effectively measured the latency between a CPU and its L1 cache). If you wanted to test this you could do something like make each element some number of cache lines wide. Stream prefetchers have a maximum stride, so variables more than that many bytes apart won't be prefetched. Google's multichase benchmark uses 256 bytes IIRC.
I often benchmark my DIY data structures, as well as check the assembly on godbolt. I think I wrote the code using ML model (they save lot of time), and then fixed the issues and extended it myself. I have the code here: https://pastebin.com/GzY4HMsZ
I do the search multiple (thousands) times and calculate total time spent, so the timer is called only 2 times. I search across the same array, but for the different number each time. So the array should be completely within L1 cache. It's also important to use the result of the search otherwise the compiler simply omits the code.
I use clock_gettime() which takes on the order of 1us. The timestamp counter (which is faster) is not available to me because every CPU core has its own counter and they show different values, so Linux refuses to use it. The first core is 600 ms ahead of the other cores. It amazes me that in 2025 we still can't make a boring, consistent counter. Or maybe this is intentional, to prevent using cheaper consumer hardware for professional purposes.
Here are some numbers:
Array size 8, 200 000 iterations, time ns/search: linear 8 ns, linear no-branch <1 ns (I wonder if it is an error), binary 11 ns, binary no-branch 7 ns.
Array size 128, 100 000 iterations, time: linear 34 ns, linear no-branch 22 ns, binary 35 ns, binary no-branch 16 ns.
As you can see, the branches are what ruin the performance here. Every time the CPU mispredicts a branch, it wastes several cycles.
Looking through disassembly on godbolt, the compiler inlined and vectorized the measurement loop for no-branch code, which might explain the numbers. The SIMD assembly is hard to read and I don't quite understand what it does: https://godbolt.org/z/dofKnT3W3
I'll be happy to read if you point me at any mistakes in my benchmark. If you want to compile the code, I used gcc with "-O3" and maybe with "-march=native".
Interesting, thanks for sharing. I actually assumed ?: desugared to an if-else and never checked. I was a bit confused by the if-else chains you have e.g. on line 111 where it looks like you could just use the else branch for every case?
You could circumvent the counter issue by pinning to a specific core, otherwise you'd have to use scheduler events (ftrace/perf on Linux) to know which CPU you were running on at specific times and when so you could subtract the right counters (I've had to do this before and it isn't pretty). It would also prevent issues where your task gets preempted and moved to another core and you have to warm the caches again. If you use Linux, options like isolcpus and nohz_full ensure you're not measuring the scheduler/other tasks but have to be set at boot. But if your loops between clock_gettime take substantially more than clock_gettime itself then at least the timer overhead is probably not that important.
As for the branch prediction, I would think most of the branches would be predicted correctly because usually you'll have runs of "not x" before each "is x". Switching between many loops of course hurts the prediction.
My SIMD knowledge only extends as far as which compiler options make my code faster.
No shame in using LLMs, I use them extensively, but I find I have to write some code myself because I make noticeably more mistakes coding if I've let the LLM do everything for a couple of weeks.
> I was a bit confused by the if-else chains you have e.g. on line 111 where it looks like you could just use the else branch for every case?
The point of "ifs" there is to replace a variable "size" with a constant like 8, 16 so that the compiler can hardcode it in the code and maybe make it more efficient (for example, instead of counting the iterations the compiler can just repeat the instruction 8 times when iteration count is 8). I expect that the compiler would generate customized "linear_search" function versions instead of using generic version with unknown number of iterations. You can see the example in godbolt assembly code at line 825, where the compiler unrolled a loop for "linear_search" function with size = 8 - there is no loop, just 8 comparisons.
> As for the branch prediction, I would think most of the branches would be predicted correctly because usually you'll have runs of "not x" before each "is x". Switching between many loops of course hurts the prediction.
I don't think so. In binary search there is no predictable pattern, so the predictor would miss in 50% of cases. And in linear search there is a guaranteed miss in the last iteration.
> You could circumvent the counter issue by pinning to a specific core,
I think I'll try this in future.
> but I find I have to write some code myself
I can write it myself, but it would take 20-30 minutes including time to find and read the docs for functions while for LLM it takes less than a minute. But anyway I have to edit the result a lot, for example LLM "forgot" that you need to use the result of the search to prevent compiler from throwing the code away. Also, free version of ChatGPT became much dumber couple of months ago.
Some of the plots would have been much more helpful if instead of absolute value in seconds, the y-axis were the multiplier w.r.t binary search (and eyeballing suggests a relatively constant multiplier).
Obviously, this isn't changing the big-Oh complexity, but in the "real world", still nice to see a 2-4x speedup.
One thing that makes me nervous about the increased use of SIMD in new ways is that more processes will be using SIMD. This, in turn, make context switches that much more expensive as the SIMD registers must be saved and restored when switching to a new process that also uses SIMD.
I once did have a need for binary search in memory mapped files and I experimented with Eytzinger layout (which I learned from https://bannalia.blogspot.com/2015/06/cache-friendly-binary-...). It turned out that it was slower than plain binary search, I think because keys I was looking up were often clumped together thus it played quite well with cache anyway.
The title is slightly misleading. I mean, yes, naive binary search might have a larger constant, but the algorithm is still O(log(n)). This is still a "divide and conquer" style algorithm, just with a bunch of CPU-specific optimizations. Also, this works well with simple data types, like integers; with more complex objects (custom comparators) it matters less.
> The title is slightly misleading, I mean yes, naive binary search might have larger constant but the algorithm is still O(log(n)).
I think the title is not misleading since the Big O notation is only supposed to give a rough estimate of the performance of an algorithm.
(I agree though that binary search is already extremely fast, so making something twice as fast won't move the needle for the vast majority of applications where the speed bottleneck is elsewhere. Even infinite speed, i.e. instant sorted search, would likely not be noticeable for most software.)
For me it's slightly misleading because it's almost like saying "I wrote a faster quicksort implementation, so it beats quicksort!". In this case the fundamental binary-search idea of "divide and conquer" is still there; the article just does micro-optimizations (which seem to be not very portable and are less relevant/applicable for more complex data structures) in order to reduce the constant part.
Yes, algorithmic complexity is theoretical, it often ignores the real world constants, but they are usually useful when comparing algorithms for larger inputs, unless we are talking about "galactic algorithms" with insanely large constants.
The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n). This algorithm just uses modern and current processor architecture artifacts to "improve" it on arrays of up to 4096 elements.
> The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n)
> So not exactly "n" as in O(n).
For large enough inputs the algorithm with better Big O complexity will eventually win (at least in the worst cases). Yes, sometimes it never happens in practice when the constants are too large. But say 100 * n * log(n) will eventually beat 5 * n for large enough n. Some advanced algorithms can use algorithms with worse Big O complexity but smaller constants for small enough sub-problems to improve performance. But it's more like to optimization detail rather than a completely different algorithm.
> This algorithm just uses modern and current processor architecture artifacts to "improve" it on arrays of up to 4096
Yes, that's my point. It's basically "I made binary search for integers X times faster on some specific CPUs". "Beating binary search" is somewhat misleading; it's more like "micro-optimizing binary search".
I remember I had a pedagogy class in Uni taught by psychology faculty, and was messing with them by proposing a mock syllabus where we'd teach students binary search, then the advanced advanced ones ternary search, and the very advanced, Quaternary, with a big Q, as in the geological period. Jokes on me now, I suppose.
So is the SIMD the magic piece here, or is it the interpolation search? If the data is evenly distributed, that is pretty optimal for the interpolation search..
This was the entry level project we did in a hardware optimization course I took maybe 15 years ago, using SIMD instructions. Lots of things can be naively optimized by unrolling any loops like this. Compilers do some of this themselves.
TL;DR the author developed an algorithm to solve this specific problem:
> The popular Roaring Bitmap format uses arrays of 16-bit integers of size ranging from 1 to 4096. We sometimes have to check whether a value is present.
There's no claim that this algorithm is universal and performs equally well for other problems.
In fact, note how the compare operation on the data types involved (16-bit integers) is quite cheap for modern CPUs. A similar problem with strings instead of integers would get no benefits from the author's ideas and would actually fare worse, due to useless comparisons per cycle.
You can improve interpolated search by monitoring progress and, if it's not converging fast enough, alternating with bisection steps (and, as is clear from the article, switching to linear/vector scanning when the range is small enough).
Often when an interpolated search is wrong the interpolation will tend to nail you against one side or the other of the range-- so the worst case is linear. By allowing only a finite number of failed probes (meaning they only move the same boundary as last time, an optimally working search will on average alternate hi/lo) you can maintain the log guarantee of bisection.
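A sketch of the guard I have in mind (my own formulation; assumes 32-bit keys): interpolate as usual, but if consecutive probes only ever move the same boundary, force a bisection step, which preserves the O(log n) worst case.

    #include <stdint.h>
    #include <stddef.h>

    static ptrdiff_t guarded_interpolation_search(const int32_t *a, size_t n, int32_t key) {
        size_t lo = 0, hi = n;               // search in [lo, hi)
        int same_side = 0;                    // consecutive probes moving the same bound
        int last_moved_lo = -1;
        while (lo < hi) {
            size_t mid;
            if (same_side >= 2 || a[hi - 1] == a[lo]) {
                mid = lo + (hi - lo) / 2;     // bisection step restores the log bound
            } else {
                double frac = ((double)key - a[lo]) / ((double)a[hi - 1] - a[lo]);
                if (frac < 0.0) frac = 0.0;
                if (frac > 1.0) frac = 1.0;
                mid = lo + (size_t)(frac * (double)(hi - lo - 1));
            }
            if (a[mid] == key) return (ptrdiff_t)mid;
            if (a[mid] < key) {
                lo = mid + 1;
                same_side = (last_moved_lo == 1) ? same_side + 1 : 0;
                last_moved_lo = 1;
            } else {
                hi = mid;
                same_side = (last_moved_lo == 0) ? same_side + 1 : 0;
                last_moved_lo = 0;
            }
        }
        return -1;   // not found
    }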
>"Virtually all processors today have data parallel instructions (sometimes called SIMD) that can check several values at once.
[...]
The binary search checks one value at a time. However, recent processors can load and check more than one value at once. They have excellent memory-level parallelism. This suggests that instead of a binary search, we might want to try a quaternary search..."
First of all, brilliant observations! (Overall, a great article too!)
Yes, today's processors indeed have parallelism that was not conceived of at the time the original mathematicians, then-to-be computer scientists, came up with binary search...
Now I myself wonder if these ideas might be extended to GPU's, that is, if the massively parallel execution capability of GPU's could be extended to search for data like Binary Search does, and what such an appropriately parallelized algorithm/data structure would look like... keep in mind, if we consider an updateable data structure, then that means that parts of it may need to be appropriately locked at the same time that multiple searches and updates are occurring simultaneously... so what data structure/algorithm would be the most efficient for a massively parallel scenario like that?
40x Faster Binary Search -
This talk will first expose the lie that binary search takes O(lg n) time — it very much does not! Instead, we will see that binary search has only constant overhead compared to an oracle. Then, we will exploit everything that modern CPUs have to offer (SIMD, ILP, prefetching, efficient caching) in order to gain 40x increased throughput over the Rust standard library implementation.
/* Author: aam@fastmail.fm
*
* Apple M4 Max (P-core) variant of simd_quad which uses a key spline
* to great effect (blog post summary incoming!)
 */
bool simd_quad_m4(const uint16_t *carr, int32_t cardinality, uint16_t pos) {
enum { gap = 64 };
if (cardinality < gap) {
if (cardinality >= 32) {
// 32 <= n < 64: NEON-compare the first 32 as a single x4 load,
// sweep the remainder.
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t v = vld1q_u16_x4(carr);
uint16x8_t hit = vorrq_u16(
vorrq_u16(vceqq_u16(v.val[0], needle), vceqq_u16(v.val[1], needle)),
vorrq_u16(vceqq_u16(v.val[2], needle), vceqq_u16(v.val[3], needle)));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 32; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 16) {
// 16 <= n < 32: paired x2 load + sweep tail.
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x2_t v = vld1q_u16_x2(carr);
uint16x8_t hit = vorrq_u16(vceqq_u16(v.val[0], needle),
vceqq_u16(v.val[1], needle));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 16; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 8) {
// 8 <= n < 16: single 128-bit compare + sweep tail.
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8_t v = vld1q_u16(carr);
if (vmaxvq_u16(vceqq_u16(v, needle)) != 0) return true;
for (int32_t j = 8; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
for (int32_t j = 0; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
return false;
}
int32_t num_blocks = cardinality / gap;
int32_t base = 0;
int32_t n = num_blocks;
while (n > 3) {
int32_t quarter = n >> 2;
int32_t k1 = carr[(base + quarter + 1) * gap - 1];
int32_t k2 = carr[(base + 2 * quarter + 1) * gap - 1];
int32_t k3 = carr[(base + 3 * quarter + 1) * gap - 1];
int32_t c1 = (k1 < pos);
int32_t c2 = (k2 < pos);
int32_t c3 = (k3 < pos);
base += (c1 + c2 + c3) * quarter;
n -= 3 * quarter;
}
while (n > 1) {
int32_t half = n >> 1;
base = (carr[(base + half + 1) * gap - 1] < pos) ? base + half : base;
n -= half;
}
int32_t lo = (carr[(base + 1) * gap - 1] < pos) ? base + 1 : base;
if (lo < num_blocks) {
const uint16_t *blk = carr + lo * gap;
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t a = vld1q_u16_x4(blk);
uint16x8x4_t b = vld1q_u16_x4(blk + 32);
uint16x8_t h0 = vorrq_u16(
vorrq_u16(vceqq_u16(a.val[0], needle), vceqq_u16(a.val[1], needle)),
vorrq_u16(vceqq_u16(a.val[2], needle), vceqq_u16(a.val[3], needle)));
uint16x8_t h1 = vorrq_u16(
vorrq_u16(vceqq_u16(b.val[0], needle), vceqq_u16(b.val[1], needle)),
vorrq_u16(vceqq_u16(b.val[2], needle), vceqq_u16(b.val[3], needle)));
return vmaxvq_u16(vorrq_u16(h0, h1)) != 0;
}
for (int32_t j = num_blocks * gap; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
return false;
}
/*
* Spine variant, M4 edition.
*
* pack the interpolation probe keys into a dense contiguous region so the
* cold-cache pointer chase streams through consecutive cache lines:
*
* n=4096 -> 64 spine keys -> 128 B = 1 M4 cache line
* n=2048 -> 32 spine keys -> 64 B = half a line
* n=1024 -> 16 spine keys -> 32 B
*
* The entire interpolation phase for a max-sized Roaring container now
* lives in one cache line. The final SIMD block check still loads from
* carr.
*
* The num_blocks <= 3 fallback:
* with very few blocks the carr-based probes accidentally prime the final
* block's lines, which the spine path disrupts.
 */
bool simd_quad_m4_spine(const uint16_t *carr, const uint16_t *spine,
int32_t cardinality, uint16_t pos) {
enum { gap = 64 };
if (cardinality < gap) {
// Same fast paths as simd_quad_m4 -- spine is irrelevant here.
if (cardinality >= 32) {
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t v = vld1q_u16_x4(carr);
uint16x8_t hit = vorrq_u16(
vorrq_u16(vceqq_u16(v.val[0], needle), vceqq_u16(v.val[1], needle)),
vorrq_u16(vceqq_u16(v.val[2], needle), vceqq_u16(v.val[3], needle)));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 32; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 16) {
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x2_t v = vld1q_u16_x2(carr);
uint16x8_t hit = vorrq_u16(vceqq_u16(v.val[0], needle),
vceqq_u16(v.val[1], needle));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 16; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 8) {
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8_t v = vld1q_u16(carr);
if (vmaxvq_u16(vceqq_u16(v, needle)) != 0) return true;
for (int32_t j = 8; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
for (int32_t j = 0; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
return false;
}
int32_t num_blocks = cardinality / gap;
if (num_blocks <= 3) {
return simd_quad_m4(carr, cardinality, pos);
}
int32_t base = 0;
int32_t n = num_blocks;
// Pull the whole spine into L1 up front. For n in [256, 4096] this is
// 1 line (128 B); for smaller n it is a partial line. Cheap on cold.
__builtin_prefetch(spine);
while (n > 3) {
int32_t quarter = n >> 2;
int32_t k1 = spine[base + quarter];
int32_t k2 = spine[base + 2 * quarter];
int32_t k3 = spine[base + 3 * quarter];
int32_t c1 = (k1 < pos);
int32_t c2 = (k2 < pos);
int32_t c3 = (k3 < pos);
base += (c1 + c2 + c3) * quarter;
n -= 3 * quarter;
}
while (n > 1) {
int32_t half = n >> 1;
base = (spine[base + half] < pos) ? base + half : base;
n -= half;
}
int32_t lo = (spine[base] < pos) ? base + 1 : base;
if (lo < num_blocks) {
const uint16_t *blk = carr + lo * gap;
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t a = vld1q_u16_x4(blk);
uint16x8x4_t b = vld1q_u16_x4(blk + 32);
uint16x8_t h0 = vorrq_u16(
vorrq_u16(vceqq_u16(a.val[0], needle), vceqq_u16(a.val[1], needle)),
vorrq_u16(vceqq_u16(a.val[2], needle), vceqq_u16(a.val[3], needle)));
uint16x8_t h1 = vorrq_u16(
vorrq_u16(vceqq_u16(b.val[0], needle), vceqq_u16(b.val[1], needle)),
vorrq_u16(vceqq_u16(b.val[2], needle), vceqq_u16(b.val[3], needle)));
return vmaxvq_u16(vorrq_u16(h0, h1)) != 0;
}
for (int32_t j = num_blocks * gap; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
return false;
}
// Build the spine for a given carr. Caller allocates cardinality/64 u16s.
void simd_quad_m4_build_spine(const uint16_t *carr, int32_t cardinality,
                              uint16_t *spine) {
enum { gap = 64 };
int32_t num_blocks = cardinality / gap;
for (int32_t i = 0; i < num_blocks; i++) {
    spine[i] = carr[(i + 1) * gap - 1];
}
}
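A minimal usage sketch for the functions above (hypothetical driver; assumes the same includes as the code above, e.g. <arm_neon.h> and <stdbool.h>): build the spine once per container, then reuse it for every membership query.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    enum { n = 4096 };
    uint16_t *carr = malloc(n * sizeof *carr);
    uint16_t spine[n / 64];
    for (int32_t i = 0; i < n; i++)
        carr[i] = (uint16_t)(i * 13);  // sorted, strictly increasing, max 53235
    simd_quad_m4_build_spine(carr, n, spine);
    printf("%d %d\n", simd_quad_m4_spine(carr, spine, n, 13),   // expect 1
                      simd_quad_m4_spine(carr, spine, n, 14));  // expect 0
    free(carr);
    return 0;
}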
I dunno but I once didn't get a job because I argued with the interviewer about my (Perl) implementation of binary search[0] - he said it was buggy, I proved it wasn't, he insisted it was, I proved it wasn't some more, I was correct, he was miffed. No job for me.
[0] A nonsense thing to ask people to implement in an interview
Since binary search is already very fast with its O(log n) time complexity: are there any real world applications which could practically benefit from this improvement?
With "practically benefit" I meant a speedup that is noticable. Is there any software that is significantly bottlenecked by the speed of sorted search?
I guess it matters if you have to do the lookup in a tight loop. If you only do this occasionally, I think it's not worth it, especially for complex objects with custom comparators. The algorithm is still O(log n), just a more advanced "divide and conquer" with a smaller constant.
I would expect the standard library of various languages to provide an optimised implementation such as this. Then everyone downstream benefits, and benefits from future improvements when compiled for a newer version of the language / executed under a newer version of the runtime.
You see this in rust, where they replaced the hash tables many years ago, the channel a couple of years ago, and most recently the sort implementations for both stable and unstable sort. I expect other languages / runtimes do similar things over time as well as CPUs change and new approaches are discovered.
Some languages, such as C++, allow for specialisation via templates and compile-time evaluation (constexpr). It would be possible to detect when the size of the data type matches one of the integer types, is a POD, is comparable via memcmp, etc., and use SIMD-optimised algorithms.
It is looking like C++ 26 will get compile time reflection, which would make things like this even more feasible.
Daniel Lemire's points about low-level hardware optimization notwithstanding, it's worth pointing out that binary search (or low-level implementation variants) is the best only if you know nothing about the data beyond the fact that it is sorted / monotonic.
If you have priors about the data distribution, then it's possible to design algorithms which use that extra information to perform MUCH better. eg: a human searching a physical paper dictionary can zoom into the right bunch of pages faster than pure idealized binary search; it's a separate matter it's hard for humans to continue binary search till the very end and we might default to scanning linearly for the last few iterations (cognitive convenience / affordances of human wetware / etc).
In mathematical language, searching a sorted list is basically inverting a monotonic function, by using a closed-loop control algorithm. Often, we could very well construct a suitable cost function and use gradient descent or its accelerated cousins.
More generally, the best bet to solving a problem more efficiently is always to use more information about the specific problem you want to solve, instead of pulling up the solution for an overly abstract representations. That can offer scalable orders of magnitude speedup compared to constant factor speedups from just using hardware better.
Furthermore, with the vast and immediate knowledge that LLMs have, we could see a proliferation of domain-specific sorting algorithms designed for all types of purposes.
> If you have priors about the data distribution, then it's possible to design algorithms which use that extra information to perform MUCH better.
You don't even need priors. See interpolation search, where knowing the position and value of two elements in a sorted list already allows the search to make an educated guess about where the element it's searching for is by estimating the likely place it would be by interpolating the elements.
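For concreteness, a minimal sketch of interpolation search with plain linear interpolation over a sorted uint16_t array (illustrative code, not from the article):

#include <stdint.h>

// Returns the index of key, or -1 if absent.
int32_t interpolation_search(const uint16_t *a, int32_t n, uint16_t key) {
    int32_t lo = 0, hi = n - 1;
    while (lo <= hi && key >= a[lo] && key <= a[hi]) {
        if (a[hi] == a[lo])                         // plateau: avoid dividing by zero
            return a[lo] == key ? lo : -1;
        // Guess where key would sit if values were evenly spread across [lo, hi].
        int32_t mid = lo + (int32_t)((int64_t)(key - a[lo]) * (hi - lo)
                                     / (a[hi] - a[lo]));
        if (a[mid] < key)      lo = mid + 1;
        else if (a[mid] > key) hi = mid - 1;
        else                   return mid;
    }
    return -1;
}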
> knowing the position and value of two elements in a sorted list
That's a prior about the distribution, if a relatively weak one (in some sense, at least).
https://en.wikipedia.org/wiki/Empirical_Bayes_method
This relies on knowledge of the distribution, just querying in the middle of A = [1, 2, 4, 8, 16, ..., 2^(n-1)] is slower than binary search
> just querying in the middle
It's an interpolation search. You interpolate the values you evaluated by whatever method you'd like. No one forces you to do linear interpolation. You can very easily fit a quadratic polynomial with the last 3 points, for example.
Interpolation search seems to have a convergence rate of log log n. That's pretty efficient.
> use that extra information to perform MUCH better
Do you mean using a better estimator for the median value? Or something else?
Say, if you know the function is a polynomial of degree N, with N+1 datapoints you can find it – e.g. with Lagrange's polynomial, although the finite precision of computer numbers might make that more complex.
I swear I read an article about treaps but instead of being used to balance the tree, they used the weights to Huffman encode the search depth to reduce the average access time for heterogenous fetch frequencies.
I did not bookmark it and about twice a year I go searching for it again. Some say he’s still searching to this day.
https://arxiv.org/abs/2206.12110 ?
Huffman coding assumes your corpus is a string of discrete elements (symbol strings) without any continuous structure (eg. topology/geometry). With that fairly mild assumption, it gives a recipe to reorganize (transform/encode) your data as a prefix-tree, to minimize the bits of information needed to communicate the contents of your corpus i.e. reducing (on average) the bits of information you need to identify a specific item. Eg. To go back to the analogy from my previous comment above... if the function you are inverting via search has long plateaus then you could simply front-load those as guesses; that's roughly the spirit of Huffman coding, except it eschews monotonicity.
> In mathematical language, searching a sorted list is basically inverting a monotonic function, by using a closed-loop control algorithm.
Never thought about it this way. Brilliant!
> it's worth pointing out that binary search (or low-level implementation variants) is the best only if you know nothing about the data beyond the fact that it is sorted / monotonic
Also if you do not learn anything about the data while performing the binary search, no? Like, if you are constantly below the estimate, you could guess that the distribution is biased toward large values and adjust your guess based on this prediction.
It's not possible to learn anything about other elements when performing binary search, _except_ the only thing there is to learn: if the target is before or after the recently compared element.
If we would guess that there is a bias in the distribution based on recently seen elements, the guess is at least as likely to be wrong as it is to be right. And if we guess incorrectly, in the worst case, the algorithm degrades to a linear scan.
Unless we have prior knowledge. For example: if there is a particular distribution, or if we know we're dealing with integers without any repetition (i.e. each element is strictly greater than the previous one), etc.
> It's not possible to learn anything about other elements when performing binary search, _except_ the only thing there is to learn: if the target is before or after the recently compared element.
You have another piece of information: you don't only know whether the element was before or after the compared element. You also know the delta between what you looked at and what you're looking for. And you also have the delta from the previous item you looked at.
And you always start off knowing the total length of the array, and the width of the datatype.
Actually deciding what to do with that information without incurring a bunch more cache misses in the process may be tricky.
Is the disconnect here that in many datasets there is some implicit distribution? For example if we are searching for english words we can assume that the number of words or sentences starting with "Q" or "Z" is very small while the ones starting with "T" are many. Or if the first three lookups in a binary search all start with "T" we are probably being asked to search just the "T" section of a dictionary.
Depending on the problem space such assumptions can prove right enough to be worth using despite sometimes being wrong. Of course if you've got the compute to throw at it (and the problem is large) take the Contact approach: why do one when you can do two in parallel for twice the price (cycles)?
Assuming your key space is anything like randomly distributed.
Thinking about it--yeah, if you can anticipate anything like a random distribution it's a few extra instructions to reduce the number of values looked up. In the old days that would have been very unlikely to be a good deal, but with so many algorithms dominated by the cache (I've seen more than one case where a clearly less efficient algorithm that reduced memory reads turned out better) I suspect there's a lot of such things that don't go the way we learned them in the stone age.
> If we would guess that there is a bias in the distribution based on recently seen elements, the guess is at least as likely to be wrong as it is to be right.
This is true for abstract and random data. I don't think it's true for real world data.
For example, python's sort function "knows nothing" about the data you're passing in. But, it does look for some shortcuts and these end up saving time, on average.
For a list of sorted values with no other knowledge, the binary search is optimal. Provably, it is simple information theory on binary information.
You can do better if the list is stable by reusing information.
But gathering that information during searches is going to require great complexity to leverage, as searches are an irregular information gathering scheme.
So build RAM lookup structures for the speedup up front.
1) Create a table that maps the first 8 bits to upper and lower indexes in the list, then binary search over the last 8 bits. That roughly halves the number of search steps.
2) Go all the way, and create an array of 32,768 indexes, with all 1's for misses. Either way (hit or miss), the lookup returns in O(1).
Stable lists allow for sliding parametric trade-offs between RAM lookup and binary search, from full lookup to full binary.
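A sketch of option 1, with hypothetical names: a 256-entry table keyed on the high byte stores bucket boundaries, and each lookup only binary searches the values sharing that high byte.

#include <stdint.h>

typedef struct {
    uint32_t lo[257];  // lo[b]..lo[b+1] is the slice of values whose high byte is b
} bucket_index;

void build_buckets(const uint16_t *arr, uint32_t n, bucket_index *idx) {
    uint32_t pos = 0;
    for (uint32_t b = 0; b <= 256; b++) {
        while (pos < n && (uint32_t)(arr[pos] >> 8) < b) pos++;
        idx->lo[b] = pos;
    }
}

int bucketed_contains(const uint16_t *arr, const bucket_index *idx, uint16_t key) {
    uint32_t lo = idx->lo[key >> 8], hi = idx->lo[(key >> 8) + 1];
    while (lo < hi) {                  // ordinary binary search, but only inside the bucket
        uint32_t mid = lo + (hi - lo) / 2;
        if (arr[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo < idx->lo[(key >> 8) + 1] && arr[lo] == key;
}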
> Also [IFF] you do not learn anything about the data while performing the binary search, no?
Yes, absolutely!
I forgot to share this general perspective above, and it's too late to edit, so I'll add it here...
Since binary search assumes only monotonicity, splitting your interval into two equal parts extracts one bit of information per step, and any other choice would extract less information on average. One bit of information per step is how you end up needing log(n) steps to find the answer.
To accelerate your search, you basically need to extract those log(n) bits as fast as you can. You can think of that as leveraging both the prior, and everything you learn along the way -- to adaptively design each step to be the optimal experiment to extract maximum amount of information. And adaptive local models of your search space (gradient / hessian / etc) allow you to extract many more bits of information from each query / experiment, provided the function you are inverting has some local structure.
PS: That is why we leverage these ideas to "search" for the optimum, among a space of solutions.
I've spent some brainpower on binary search and have not been able to beat this:
https://github.com/protocolbuffers/protobuf/blob/44025909eb7...
1. Check for dense list O(1)
2. Check upper bound
3. Constant trip count binary search
The constant trip count is great for the branch predictor, and the core loop is pretty tightly optimized for the target hardware, avoiding multiplies. Every attempt to get more clever made the loop worse and did not pay for itself. It's hard because it's an array-of-structs format with a size of 12, and mostly pretty small N.
I know protobuf code is extremely high quality, but I really can't stand the c-style naming conventions.
I know people train themselves into grokking this and reading and emitting this way, but it sounds like writing "bork bork bork bork" runes to me.
I'm glad Rust feels more like Ruby and Python and that method and field names are legible.
My eyes just glaze over:
I think this needs way more "upb" and "UPB" to make it clear that it is, in fact, dealing with UPBs. Whatever these are.
> μpb (often written 'upb') is a small protobuf implementation written in C.
Yeah namespaces and public/private would be quite nice, but C doesn't have them, so they get hacked on via macros and prefixing. The syntax was not the hard part of working or analyzing this code, though.
> I really can't stand the c-style naming conventions.
Honestly I don't see much difference between
and
Those are fairly indistinguishable. It's when they start removing letters from words to save... debug symbol bytes or something? That's when c-style naming annoys me.
In their defence "hi" sounds very much like "high" in my mind's ear and "lo" like "low" :)
This is also my pet peeve with a lot of code as well as commands like
Like why would you teach people to do this? I understand people needed to save precious bytes in the sixties so we have cat and ls but saving 192 bytes or whatever with shorter variable names is not a worthwhile tradeoff anymore.
What exactly bothers you about this and what would you prefer to see?
I would prefer to see full names like
At least the hi and lo can get more meaningful names. And over time we can write this in another language with private/scoped methods.
For a pretty small N I've found that less clever can be quite a bit faster. I'd try a linear search - possibly SIMD if you can change the data format to struct-of-arrays. An adaptive approach that uses linear search up to a certain N can also yield some benefit.
If you control the layout, eytzinger layout typically will give you the best of both worlds. As fast as a linear scan for small N, much faster than binary search over a sorted array for large N.
The first implementation I encountered was a linear search, starting at the last-found field. Empirically it performed better to do a binary search with early exit and branchless bounds selection, I think due to branch predictor pressure. The data representation could be changed but it's tricky, as there are other traversals that want to go in sorted order, and there are lots of places that pass just one pointer for fields. But I agree any further improvement will probably have to come from that.
SIMD is tricky even with SoA because there is significant latency going between the general registers and the vector units, plus arm little cores can be configured to share a vector unit with another core.
> SIMD is tricky even with SoA because there is significant latency going between the general registers and the vector units
My experience is mostly limited to AMD64, but libraries like glibc use SIMD in many places for faster linear search. Presumably they've done testing and found it worth while.
Yeah arm little cores are a very different story - they aren't superscalar out of order architectures, they can dispatch up to two operations per cycle.
Big cores are more like that dispatching 8 or more operations per cycle, but they're also more expensive, larger, etc.
Sure, but the whole point is that you often don't know anything further about the data.
That's why b-trees are the standard in databases. The data could be anything, and its characteristics could massively change at any time, as you suddenly import a whole bunch of new rows at once.
And while you can certainly design algorithms around e.g. gradient descent to try to accelerate lookup, b-trees are already incredibly fast, and have lots of other benefits like predictable worse-case performance and I/O requirements, supporting range scans, ordered traversal, prefix conditions, etc.
So yes, you can certainly design lookup algorithms that are more efficient for particular data distributions, but they will also often lack other important properties. And b-trees are already so fast, improvements are often negligible -- like even if another algorithm produces a closer initial guess, it may be slower to locate the final item, or it may be faster on average but have horrible worst-case performance that makes it unusable.
Even with a paper dictionary, I've always used pretty much a binary search beyond the first initial guess, which only saves you a couple of hops. And actually, once I get to the right handful of pages I'm probably more linear than I should be, and I'd probably be faster if I tried to do a rigorous binary search, but I have to balance that with how long it takes to flip pages.
Databases often use table statistics to try to do better at generating query plans. I wonder if they use them to make indexes faster as well?
The cost plan is a crude approximation of the actual query cost. Sometimes, the query planner makes a terrible guess. Your resident DBA won't appreciate being sometimes paged at 3 AM on a Sunday. A good strategy is to freeze the query plan once you have sufficient sample size of data in the involved tables.
Fritz Henglein has done some interesting work on fast sorting/grouping. I think Generic Discrimination Sorting and Partitioning Unshared Data in Linear Time[1] is the main paper. Ed Kmett took those ideas and refined them into the discrimination[2] library for Haskell, and gave a very interesting tech talk about it[3].
[1]: https://dl.acm.org/doi/epdf/10.1145/1411203.1411220
[2]: https://hackage.haskell.org/package/discrimination
[3]: https://www.youtube.com/watch?v=cB8DapKQz-I
For humans, binary searching a dictionary is slower because it requires a different physical action vs. scanning, and we have advanced OCR and page-flipping capabilities. In particular, recognizing that something hasn't changed (e.g. still on the Es, keep flipping) takes maybe 10 ms.
No the point is you know exactly where T is just by looking at the dictionary (or at least, you learn this if you use a dictionary a lot).
IOW your prior on the data distribution lets you skip the first 4-5 binary chops.
>More generally, the best bet to solving a problem more efficiently is always to use more information about the specific problem you want to solve
It is both obvious and profound, the more information you already have, the more information you already have.
More accurately, binary search is optimal only if you cannot determine distance between the data points (you can compute `<` but not `-`). It's inaccurate to say it's optimal if you know "nothing" about the distribution. If you encounter a point that's much closer to your high pivot vs much closer to your low pivot, there is no possible prior knowledge state uninformed enough to conclude that the best place to search in both cases is in the middle.
Isn't "quaternary" just sort of unrolling the binary search loop by one level? I mean, to find the partition in which the item is located, you still do roughly the same rough number of comparisons. You're just taking them 4 at a time, not 2 at a time. Seems like loop unrolling would give you the same.
Yes, this can be seen as unrolling the loop a bit. It improves performance not by significantly reducing the number of instructions or memory reads, but by relaxing the dependencies between operations so that it doesn't have to be executed purely serially. You could also look at it as akin to speculatively executing both sides of the branch.
Quaternary search effectively performs both of the next loop iteration’s possible comparisons simultaneously with the current iteration’s comparison. This is a little more complex than simple loop unrolling.
Regardless, both kinds of search are O(log N) with different constants. The constants don’t matter so much in algorithms class but in the real world they can matter a lot.
Sort of, yes, but you're also removing a data dependency between the unrolled stages.
It's trickier than that. Modern processors are speculative, which means that they guess at the result for a comparison and keep going along one side of a branch as far as they can until they are told they guessed wrong or hit some internal limit. If they guessed wrong, they throw away the speculative work, take a penalty of a handful of cycles, and do the same thing again from a different starting point.
Essentially, this means that all loops are already unrolled from the processors point of view, minus a tiny bit of overhead for the loop itself that can often be ignored. Since in binary search the main cost is grabbing data from memory (or from cache in the "warm cache" examples) this means that the real game is how to get the processor to issue the requests for the data you will eventually need as far in advance as possible so you don't have to wait as long for it to arrive.
The difference in the algorithm for quad search (or anything higher than binary) is that instead of taking one side of each branch (and thus prefetching deeply in one direction), you prefetch all the possible cases but with less depth. This way you are guaranteed to have successfully issued the prefetch you will eventually need, and you spend slightly less of your bandwidth budget on data that will never be used in the actual execution path.
As others are pointing out, "number of comparisons" is almost useless metric when comparing search algorithms if your goal is predicting real world performance. The limiting factor is almost never the number of comparisons you can do. Instead, the potential for speedup depends on making maximal use of memory and cache bandwidth. So yes, you can view this as loop unrolling, but only if you consider how branching on modern processors works under the hood.
Yea, I get that the actual comparison instruction itself is insignificant. It's everything that goes along with it. Seems like quaternary is fetching more data, however.
For instance, if you have 8 elements, 01234567, and you're looking for 1, with binary, you'd fetch 4, 2, and then 1. With quaternary, you'd fetch 2, 4, 6, then 1. Obviously, if you only have 8 elements, you'd just delegate to the SIMD instruction, but if this was a much larger array, you'd be doing more work.
I guess on a modern processor, eliminating the data dependency is worth it because the processor's branch prediction and speculation only follows effectively a single path.
Would be interesting to see this at a machine cycle level on a real processor to understand exactly what is happening.
It's not about doing more or less work; it's about doing the work faster. For instance, it's relatively common to discover that some recomputation can be faster than caching or lookup tables. Similarly, fetching more from memory can also be faster if it means you make fewer roundtrips.
Well, that's where I thought this link was going to go before it went down the SIMD path... We have a way to beat binary search; it is called B-trees. It has the same basic insight: you can take 64 evenly spaced elements from your data set, compare against all of them rapidly, and instead of bifurcating your search space once, you do the equivalent of six bisections at once. Because you store the 64 elements contiguously in memory, they only take one fetch and you get cache locality... But as you have more elements, you need to repeat this lookup table three or four or five times, so it costs a bit of extra space. So what if we make it not cost space by just storing the data in these lookup tables...
A B-tree is not a search algorithm though, it is a data structure. While it would nice to be able to somehow instantly materialize a B-tree from a linear array, CPUs aren't quite there yet. It would also be nice not to have to deal with linear arrays where B-trees would be better fit in the first place, but we are not quite there yet either.
It is because processors do not do what one might naively think they do.
I also wrote recently [1] about Exponential Search [2], which is another useful algorithm if you need to repeatedly binary search an array where the elements you're searching for are themselves sorted. It allowed for an 8x speedup in our workload!
[1] https://lalitm.com/post/exponential-search/ [2] https://en.wikipedia.org/wiki/Exponential_search
Exponential search is useful when you're querying a REST API that addresses resources with sequential IDs, and need the last ID, but there's no dedicated endpoint for it:
And then a binary search between 2048 and 4096 to find the most recent user (and incidentally, the number of users). Great info to have if you're researching competing SaaS companies.
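A sketch of that probing pattern; user_exists() is a hypothetical stand-in for the HTTP request, and the endpoint behaviour is assumed, not taken from any particular API.

// Returns nonzero if GET /users/<id> succeeds (hypothetical helper).
extern int user_exists(long id);

long last_user_id(void) {
    long hi = 1;
    while (user_exists(hi))      // probe 1, 2, 4, 8, ... until the first miss
        hi *= 2;
    long lo = hi / 2;            // last ID known to exist (0 if even ID 1 is missing)
    while (lo + 1 < hi) {        // binary search in the open interval (lo, hi)
        long mid = lo + (hi - lo) / 2;
        if (user_exists(mid)) lo = mid; else hi = mid;
    }
    return lo;                   // highest existing ID
}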
I'm guessing you don't really do this for users since the response for all of them should be 401 on any user that you aren't logged in as? I would argue even for IDs that don't exist, you should get the same error whether they don't exist or you just aren't authorised to see them. It's been a few years since I worked in web but I think that's what I would have done, GitHub does similar for private repos.
I'd like to point out that all the graphs only go to 4,000 elements, which is basically non-data. Basically it'd be like measuring which car wins a 1cm race.
For small workloads binary search is slower than just checking every element.
To add to this, I think people can forget how small log(n) is... it can practically be seen as constant (as the log base 2 of ATOMS IN THE UNIVERSE is ~300).
Since the cpu always accesses a full cache line (64 bytes) at a time, you might as well search the entire cache line (it’s practically free once the data is on-cpu). So I’d like to try a ‘binary’ search that tests all the values in the ‘middle cache line’ and then chooses to go left or right if none match. You can do the cache line search as a single 512bit simd instruction. A cache line is 64 bytes (or 32 16-bit integers); such a search might well be almost 32 times faster than simple binary search; at least it’ll do 32x less memory accesses, which will dominate in most realistic programs.
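As a sketch, the single-compare test of one cache line of 16-bit values could look like this on x86 (assumes AVX-512BW; the function name is illustrative):

#include <immintrin.h>
#include <stdbool.h>
#include <stdint.h>

// One 64-byte load covers 32 sorted uint16_t values; one compare yields a 32-bit hit mask.
static inline bool line_contains(const uint16_t *line, uint16_t key) {
    __m512i v = _mm512_loadu_si512((const void *)line);
    __mmask32 hits = _mm512_cmpeq_epi16_mask(v, _mm512_set1_epi16((short)key));
    return hits != 0;
}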
Searching the upper cache lines in your binary search tree (sorted vector) for your target is unlikely to yield results. Instead you want to use the extra data in the line to shorten the search, which leads you to a B-Tree or B+tree.
For 4-byte keys and 4-byte child pointers (or indexes into an array), your inner nodes would have 7 keys, 8 child pointers and 1 next pointer, completely filling a 64-byte cache line, and your tree depth for 1 million entries would go down from ~20 to ~7, the top few levels of which are likely to remain cache resident.
With some thought, it's possible to use SIMD on B-tree nodes to speed up the search within the node, but it's all very data dependent.
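A sketch of that node layout (illustrative, not from any particular library):

#include <stdint.h>

typedef struct {
    uint32_t keys[7];   // 28 bytes: separator keys
    uint32_t child[8];  // 32 bytes: child indexes into a node pool
    uint32_t next;      //  4 bytes: next node on the same level, for range scans
} inner_node;           // 64 bytes = one cache line

_Static_assert(sizeof(inner_node) == 64, "inner node should fill one cache line");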
Binary searching a sorted array is isomorphic to a sorted binary tree with implicit child pointers.
It seems to me like there should be a sort order that stores the items as a fully dense, left-shifted binary tree from top to bottom (e.g. like the implicit heap in an in-place heap sort, but a binary search tree instead of a heap). Is there a name for this? Does it show any performance wins in practice?
There's Eytzinger order: https://algorithmica.org/en/eytzinger
Thanks for that name; that's the exact layout I was considering.
See also https://arxiv.org/abs/1509.05053
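For reference, the core of an Eytzinger-layout lower bound, roughly as in the algorithmica.org write-up linked above (1-indexed array, a[1] is the root, children of node k at 2k and 2k+1):

#include <stdint.h>

static int32_t eytzinger_lower_bound(const int32_t *a, int32_t n, int32_t x) {
    int32_t k = 1;
    while (k <= n)
        k = 2 * k + (a[k] < x);   // branchless descent: right if a[k] < x, else left
    k >>= __builtin_ffs(~k);      // strip the trailing "went right" steps
    return k;                     // index of the first element >= x, or 0 if none
}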
If you are talking smaller arrays, linear search with a sentinel value at the end is already tough to beat. The thing that sucks about that claim is that "smaller" is such a nebulous qualifier that it is really hard to internalize.
This is simply not true - if you look at this article’s excellent benchmarking, linear search falls behind somewhere around 200-400 elements.
In general I love this article, it took what I’ve often wondered about and did a perfect job exploring with useful ablation studies.
Except on Apple, where binary search always wins. Does anyone know why?
Prior to the current generation Intel designs, Apple’s branch predictor tables were a good deal larger than Intel’s IIRC, so depending on benchmarking details it’s plausible that Apple Silicon was predicting every branch perfectly in the benchmark, while Intel had a more real-world mispredict rate. Perf counters would confirm.
For that machine and compiler version, yes.
I don't think std::find typically uses a sentinel, though?
I don't really see how this implies the above commenter's statement is "simply not true".
That's not what the article is about.
Fair. I had meant my point to be an "in addition" and a pointer to more fun things to look up on it.
The algorithm description was a bit confusing for me.
The SIMD part is just in the last step, where it uses SIMD to search the last 16 elements.
The Quad part is that it checks 3 points to create 4 paths, but also it's searching for the right block, not just the right key.
The details are a bit interesting. The author chooses to use the last element in each block for the quad search. I'm curious how the algorithm would change if you used the first element in each block instead, or even an arbitrary element.
The classical canonical Comp Sci algorithms are effectively "designed" for CPUs with no parallelism (either across multiple cores, via Hyper-threading technology, or "just" SIMD style instructions), and also where all memory accesses take the same amount of time (so no concept of L1/L2/L3/etc caches of varying latencies). And all working on general/random data.
As soon as you move away from either (or both) of these assumptions then there are likely to be many tweaks you can make to get better performance.
What the classical algorithms do offer is a very good starting point for developing a more optimal/efficient solution once you know more about the specific shape of data or quirks/features of a specific CPU.
When you start to get at the pointy end of optimising things then you generally end up looking at how the data is stored and accessed in memory, and whether any changes you can make to improve this don't hurt things further down the line. In a job many many years ago I remember someone who spent way too long optimising a specific part of some code only to find that the overall application ran slower as the optimisations meant that a lot more information needed later on had been evicted from the cache.
(This is probably just another way of stating Rob Pike's 5th rule of programming which was itself a restatement of something by Fred Brooks in _The Mythical Man Month_. Ref: https://www.cs.unc.edu/~stotts/COMP590-059-f24/robsrules.htm...)
This reminds me of two excellent articles[1][2] by Paul Khuong, in which he talks about using size-specialized binary search for power-of-two sized arrays (special-casing the first iteration for other sizes).
He uses conditional moves and defines the number of iterations in advance to elide the often-mispredicted branch, and in the second article goes on to fix cache aliasing issues for large vectors using ternary search.
[1]: https://pvk.ca/Blog/2012/07/03/binary-search-star-eliminates...
[2]: https://pvk.ca/Blog/2012/07/30/binary-search-is-a-pathologic...
As a teenager I spent a weekend thinking that if binary search was good, because it cuts the search space in half at every step, then wouldn't a ternary search be better? Because we'd cut it into thirds at every step.
So instead of just comparing the middle value, we'd compare the one at the 1/3 point, and if that turns out to be too low then we compare the value at the 2/3 point.
Unfortunately although we cut the search space to 2/3 of what it was for binary search at each step (1/3 vs 1/2), we do 3/2 as many comparisons at each step (one comparison 50% of the time, two comparisons the other 50%), so it averages out to equivalence.
EDIT: See zamadatix reply, it's actually 5/3 as many comparisons because 2/3 of the time you have to do 2.
Did you continue by fantasizing about CPU's that contain ternary comparators?
Isn't it a bit better on average, although not as much as you'd hoped? For example 19 steps of binary search get you down to 1/524288 of the original search space with 19 comparisons. 12 steps of ternary search get you down to 1/3^12 = 1/531441 of the original search space with, on average, 12 * 3/2 = 18 comparisons.
Maybe! But you can see the other comment that points out I was wrong and it is actually 5/3 comparisons so it still works out worse.
> Unfortunately although we cut the search space to 2/3 of what it was for binary search at each step (1/3 vs 1/2), we do 3/2 as many comparisons at each step (one comparison 50% of the time, two comparisons the other 50%), so it averages out to equivalence.
True, but is there some particular reason that you want to minimize the number of comparisons rather than have a faster run time? Daniel doesn't overly emphasize it, but as he mentions in the article: "The net result might generate a few more instructions but the number of instructions is likely not the limiting factor."
The main thing this article shows is that (at least sometimes on some processors) a quad search is faster than a binary search _despite_ the fact that that it performs theoretically unnecessary comparisons. While some computer scientists might scoff, I'd bet heavily that an optimized ternary search could also frequently outperform.
You normally measure runtime of a sorting algorithm in terms of the number of comparisons it has to do.
Obviously real-world performance depends on other things as well.
Not “normally”, but “in computer science” and even then, mostly “in the past” and even then, only “typically” (there are sorting algorithms that make zero comparisons. See for example https://pages.cs.wisc.edu/~paton/readings/Old/fall01/LINEAR-...)
All other people live in the real world, and care about real-world performance, and modern computer scientists know that.
Those algorithms may not be doing any pairwise comparisons (e.g. between elements being sorted) but they still do plenty of comparisons.
And some of the algorithms, as described, still end up doing pairwise comparisons in all-but-optimal cases.
(Bucket sort requires items that end up in the same bucket to be sorted. This doesn't happen automatically via the algorithm as stated. Radix sort requires the items at each "level" to be sorted. Neither algorithm specifies how this should be done without pairwise comparisons.)
Counting Sort does work without pairwise comparisons, but is only efficient for small ranges of values, and if that's the case then it's obvious you don't need to apply a traditional sort if the number of elements greatly outnumbers the number of possible values.
Also, the algorithms still require some form of comparisons, just not pairwise comparisons.
> All other people live in the real world, and care about real-world performance, and modern computer scientists know that.
Yes, completely agree with that, but traditional "Comp Sci" is built on small building blocks of counting "comparisons" or "memory accesses". It's not designed to analyse prospective performance given modern processors with L1/L2/L3 caches, branch prediction, SIMD instructions, etc.
Imagine if you split the search space N times, no middles. Then you could just compare the value.
This ternary approach doesn't actually average 3/2 comparisons per level:
- First third: 1 comparison
- Second third: 2 comparisons
- Third third: 2 comparisons
(1+2+2)/3 = 5/3 average comparisons. I think the gap starts here at assuming it's 50% of the time because it feels like "either you do 1 comparison or 2" but it's really 33% of the time because "there is 1/3 chance it's in the 1st comparison and 2/3 chance it'll be 2 comparisons".
This lets us show ternary is worse in total average comparisons, just barely: 5/3*Log_3[n] = 1.052... * Log_2[n].
In other words, you end up with fewer levels but doing more comparisons (on average) to get to the end. This is true for all searches of this type (w/ a few general assumptions like the values being searched for are evenly distributed and the cost of the operations is idealized - which is where the main article comes in) where the number of splits is > 2.
Oh yeah!
If you sort of squint this idea does work in cases where the cost of comparison is dominated by the cost of going down a level. And that leads you to things like b-trees where fetching a page from disk is expensive but doing the comparisons within that page is basically free.
When you can't seek quickly, e.g. on a disk, you can use a B-tree with say 128-way search. Fetching 128 keys doesn't cost much more than fetching 1 but it saves an additional 7 fetches.
Note that CPUs have also gotten dramatically wider in both execution width and vector capability since you were a teenager. The increased throughput shifts the balance more toward being able to burn operations to reduce dependency chains. It's possible for your idea to have been both non-viable on the CPUs at the time and more viable on CPUs now.
It turns out that teenager you had something.
Not for the ternary version of the binary search algorithm, because what you had is just a skewed binary search, not an actual ternary search. Because comparisons are binary by nature, any search algorithm involving comparisons is a type of binary search, and any choice other than the middle element is less efficient in terms of algorithmic complexity, though in some conditions it may be better on real hardware. For an actual ternary search, you need a 3-way comparison as an elementary operation.
Where it gets interesting is when you consider "radix efficiency" [1], for which the best choice is 3, the natural number closest to e. And it is relevant to tree search, that is, a ternary tree may be better than a binary tree.
[1] https://en.wikipedia.org/wiki/Optimal_radix_choice
This idea is closely related to the famous "Stooge Sort", which is basically quicksort with the pivot at 1/3 rather than 1/2. Naively, one might think it is faster than Quicksort, but of course it isn't.
For years--maybe still?--analyzing its running time was a staple of the first or second problem set in a college-level "Introduction to Algorithms" course.
https://en.wikipedia.org/wiki/Stooge_sort
> the famous "Stooge Sort", which is basically quicksort with the pivot at 1/3 rather than 1/2. Naively, one might think it is faster than Quicksort, but of course it isn't.
The pivot in quicksort isn't located anywhere fixed. You can do quicksort by always choosing the pivot to be the first element in the (sub)list before you do the swapping. If it happens that the pivot you choose always belongs 1/3 of the way through the sorted sublist... that won't even affect the asymptotic running time; it will still be O(n log n).
Stoogesort is nothing like that. It's worse than quadratic.
In quicksort, though, once you've done your thing with the pivot, you break the list into two non-overlapping sublists. If it happened that the pivot broke the list into 1/3 and 2/3, you'd then recur with one sublist of length n/3 and one of length 2n/3.
Stoogesort has a different recursion pattern: you recur on the first 2/3, then you recur on the second 2/3, then you recur on the first 2/3 again.
The processing step isn't similar to quicksort either - in quicksort, you get a sublist, choose an arbitrary pivot, and then process the list such that every element of the list is correctly positioned relative to the pivot, which takes O(n) time. In stoogesort, you process the list by swapping the beginning with the end if those two elements are out of order, which takes O(1) time.
Is it possible to do some sort of Binary* Search (Binary Star, as in the A* search algorithm, where we use heuristics)?
For this array, we would compare a[0], a[3], a[7] (left/mid/right) by subtracting 9.
And we would get d=[-7, -1, 7]
Now, normally, with binary search, because 8 > mid, we would go to (mid+right)/2, BUT we already have some extra information: we see that x is closer to a[3] (diff of 1) than a[7] (diff of 7), so instead of going to the middle between mid and right, we could choose a new "mid" point that's closer to the desired value (maybe as a ratio of (d[right]-d[mid]).
so left=mid+1, right stays the same, but new mid is NOT half of left and right, it is (left+right)/2 + ratioOffset
Where ratioOffset makes the mid go closer to left or right, depending on d.
The idea is quite obvious, so I am pretty sure it already exists.
But what if we use SIMD with it? So we know not only which block the number is in, but also which part of the block the number is likely in. Or is this what the article actually says?
Yeah this is basically interpolation search
Oh, that's what the article was referring to with "interpolation".
Weird that I hadn't heard about it before; is it not used much in practice?
One reason I could see is that binary search is fast enough and easy to implement. Even on largest datasets it's still just a few tens of loop iterations.
I thought this would be about how you can beat binary search in the 'Guess Who?' game. There's a cool math paper about it [0] and an approachable video by the author. [1]
[0] https://arxiv.org/abs/1509.03327
[1] https://www.youtube.com/watch?v=_3RNB8eOSx0
You can't beat binary search in Guess Who. From the abstract:
>> Instead, the optimal strategy for the player who trails is to make certain bold plays in an attempt to catch up.
The reason that's optimal, if you're losing, is that you assume that your opponent, who isn't losing, is going to use binary search. They're going to use binary search because it's the optimal way to find the secret.
Since you're behind, if you also use binary search, both players will progress toward the goal at the same rate, and you'll lose.
Trying to get lucky means that you intentionally play badly in order to get more victories. You're redistributing guesses taken between games in a negative-sum manner - you take more total guesses (because your search strategy is inferior to binary search), but they are unevenly distributed across your games, and in the relatively few games where you perform well above expectation, you can score a victory.
You're mixing two different objectives the paper presents. You can't beat binary search when the objective is to minimise the expected number of turns in a single player setting.
However, in a two player setting, using the strategies presented in the paper, you will beat an adversary that uses binary search in more than 50% of the games played.
Here's another visual demonstration: https://www.youtube.com/watch?v=zmvn4dnq82U
What do you think you're saying that I didn't already say?
> in a two player setting, using the strategies presented in the paper, you will beat an adversary that uses binary search in more than 50% of the games played.
This is technically true. But 50 percentage points of your "more than 50%" of games played are games where you exclusively use binary search. For the remainder, you're redistributing luck around between potential games in a way that is negative-sum, exactly like I just said.
> But 50 percentage points of your "more than 50%" of games played are games where you exclusively use binary search
Although I think I get your point, saying 'You can't beat binary search in Guess Who' is misleading, considering you yourself would probably describe the optimal strategy as 'play binary search when ahead; when behind, don't'.
> Trying to get lucky means that you intentionally play badly in order to get more victories
That's quite an uncommon definition of good and bad.
The (AI generated?) image on this article is absolutely not helpful, and I think it's wrong based on how I read the article. Better not to have an image at all.
Agreed, it threw me off at first but the rest of the article was quite nice.
Seriously. It makes it seem like this is going to be a blog post either intended for elementary school students, or more likely for teachers on how to better explain some arithmetic concept to elementary school students.
It's absolutely bizarre. Images communicate meaning. Much better to have no image than to have an image that is completely misleading about the target audience or level of technical sophistication.
Seems like you can use the core intuition here of SIMD comparison of multiple elements on more than just the terminal scale.
The outline would be:
a) use a gather to grab multiple elements from 16 evenly spaced locations
b) compare these in parallel using a SIMD instruction
c) focus in on the correct block
d) if the block is small revert to linear search, else repeat the gather/compare cycle
Even though the gather instruction is reading from non-contiguous memory and reading more than you normally would need to with a binary search, enabling a multiway compare and collapsing 4 levels of binary search should be a win on large tables.
You also may not be able to do a full 16-way comparison for all data types. Searching for float64 will limit you to 8 way comparisons (with AVX-512) but int32, float32 will allow 16 way comparisons.
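Purely to make the outline concrete, one gather-and-compare narrowing step for int32 keys might look like this (assumes AVX-512; whether it beats a scalar binary search on real hardware is a separate question):

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

// Steps a)-c): gather 16 evenly spaced probes, compare them all against the key,
// and use the popcount of the "probe < key" mask to pick the surviving sub-block.
// The caller loops until *hi - *lo is small, then scans linearly (step d).
static void narrow_16way(const int32_t *arr, size_t *lo, size_t *hi, int32_t key) {
    size_t step = (*hi - *lo) / 17;                 // caller guarantees *hi - *lo >= 17
    int32_t offs[16];
    for (int i = 0; i < 16; i++)
        offs[i] = (int32_t)(*lo + (size_t)(i + 1) * step);
    __m512i vidx = _mm512_loadu_si512(offs);
    __m512i probes = _mm512_i32gather_epi32(vidx, arr, 4);
    __mmask16 below = _mm512_cmplt_epi32_mask(probes, _mm512_set1_epi32(key));
    size_t k = (size_t)__builtin_popcount(below);   // probes strictly below the key
    size_t new_hi = *lo + (k + 1) * step + 1;       // first probe >= key, kept inclusive
    *lo += k * step;                                // last probe below the key (or old *lo)
    if (new_hi < *hi) *hi = new_hi;
}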
Gather is extremely slow. Anyone aiming for efficiency will avoid gathers.
I bet you a binary search is in fact faster than any gather based methodology.
If you are storing 16-bit integers, wouldn't an 8kB bitmap be even faster?
The range is 1..4096, so 4096 bits = 512 byte bitmap would suffice.
That is, if you're only ever going to test for membership in the set. If you need metadata then ... You could store that in a packed array and use a population count of the bit-vector before the lookup bit as index into it. For each word of bits, store the accumulated population count of the words before it to speed up lookup. Modern CPU's are memory-bound so I don't think SIMD would help much over using 64-bit words. For 4096 bits / 64, that would be 64 additional bytes.
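A sketch of that layout for the full 16-bit key space (names illustrative): an 8 KiB membership bitmap plus a per-word running popcount, so a set bit can be turned into an index into a packed metadata array.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t bits[1024];    // 65,536 bits = 8 KiB membership bitmap
    uint16_t prefix[1024];  // prefix[i] = popcount of words 0..i-1 (max 65,472 fits)
} bitset16;

static inline bool bitset16_contains(const bitset16 *s, uint16_t key) {
    return (s->bits[key >> 6] >> (key & 63)) & 1;
}

// Rank = number of set bits strictly below key; use it to index packed metadata.
static inline uint32_t bitset16_rank(const bitset16 *s, uint16_t key) {
    uint64_t before = s->bits[key >> 6] & ((1ULL << (key & 63)) - 1);
    return s->prefix[key >> 6] + (uint32_t)__builtin_popcountll(before);
}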
The size is 1..4096, the range is implied to be the full 16-bit integer range.
The library the author is talking about selects between bitmap and array dynamically depending on density.
https://roaringbitmap.org/
That explains the maximum size of 4096 elements (exactly where a bitmap would be smaller).
I did little experiments with search in small arrays (16-32 items), and binary search is one of the worst methods because it requires a lot of branches. The fastest method for small arrays was linear branchless search: you walk over all elements without breaking out of the loop; for example, if you want to know whether the array contains a number, you logically OR the equality checks for all items. I didn't use SIMD, but branches are very expensive for small arrays, and simply checking all elements without branching is faster.
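A minimal version of that branchless linear check:

#include <stdbool.h>
#include <stdint.h>

// No early exit: OR together all per-element equality checks, so there is
// nothing for the branch predictor to get wrong.
static bool contains_branchless(const uint16_t *a, int n, uint16_t key) {
    int hit = 0;
    for (int i = 0; i < n; i++)
        hit |= (a[i] == key);
    return hit;
}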
I wonder if this is faster because it makes the prefetcher happy.
I think the problem is branch misprediction. Binary and linear searches use a lot of unpredictable branches that ruin performance. No-branch versions of search work faster. I wrote the details here: https://news.ycombinator.com/item?id=47983102
How did you do it? Hopefully you generated thousands of such arrays and measured the cost of searching all of them as a single iteration? Because depending on what you use to measure time, the overhead of reading the timer would likely dwarf the cost of searching such a small array. The best case is a dedicated cycle counter instruction/register (e.g. rdtsc) and even that may cost hundreds of cycles. Cache hits cost less than a cycle so if your timing code gated a small number of searches you essentially didn't measure the search at all.
That aside your findings point to the prefetcher having identified your linear searches as sequential access so practically every single access was a cache hit (you effectively measured the latency between a CPU and its L1 cache). If you wanted to test this you could do something like make each element some number of cache lines wide. Stream prefetchers have a maximum stride, so variables more than that many bytes apart won't be prefetched. Google's multichase benchmark uses 256 bytes IIRC.
I often benchmark my DIY data structures, as well as check the assembly on godbolt. I think I wrote the code using an ML model (they save a lot of time), and then fixed the issues and extended it myself. I have the code here: https://pastebin.com/GzY4HMsZ
I do the search multiple (thousands) times and calculate total time spent, so the timer is called only 2 times. I search across the same array, but for the different number each time. So the array should be completely within L1 cache. It's also important to use the result of the search otherwise the compiler simply omits the code.
I use clock_gettime(), which takes on the order of 1 µs. The timestamp counter (which is faster) is not available for me because every CPU core has its own counter and they show different values, so Linux refuses to use it. The first core is 600 ms ahead of the other cores. That we cannot make a boring counter work in 2025 amazes me. Or maybe this is intentional, to prevent using cheaper consumer hardware for professional purposes.
Here are some numbers:
Array size 8, 200 000 iterations, time ns/search: linear 8 ns, linear no-branch <1 ns (I wonder if it is an error), binary 11 ns, binary no-branch 7 ns.
Array size 128, 100 000 iterations, time: linear 34 ns, linear no-branch 22 ns, binary 35 ns, binary no-branch 16 ns.
As you can see, branches are what ruin the performance here. Every time the CPU mispredicts a branch, it wastes several cycles.
Looking through disassembly on godbolt, the compiler inlined and vectorized the measurement loop for no-branch code, which might explain the numbers. The SIMD assembly is hard to read and I don't quite understand what it does: https://godbolt.org/z/dofKnT3W3
I'll be happy to read if you point me at any mistakes in my benchmark. If you want to compile the code, I used gcc with "-O3" and maybe with "-march=native".
Interesting, thanks for sharing. I actually assumed ?: desugared to an if-else and never checked. I was a bit confused by the if-else chains you have e.g. on line 111 where it looks like you could just use the else branch for every case?
You could circumvent the counter issue by pinning to a specific core, otherwise you'd have to use scheduler events (ftrace/perf on Linux) to know which CPU you were running on at specific times and when so you could subtract the right counters (I've had to do this before and it isn't pretty). It would also prevent issues where your task gets preempted and moved to another core and you have to warm the caches again. If you use Linux, options like isolcpus and nohz_full ensure you're not measuring the scheduler/other tasks but have to be set at boot. But if your loops between clock_gettime take substantially more than clock_gettime itself then at least the timer overhead is probably not that important.
As for the branch prediction, I would think most of the branches would be predicted correctly because usually you'll have runs of "not x" before each "is x". Switching between many loops of course hurts the prediction.
My SIMD knowledge only extends as far as which compiler options make my code faster.
No shame using LLMs, I use them extensively, but I find I have to write some code myself because I make noticeably more mistakes coding if I have let the LLM do everything for a couple of weeks.
> I was a bit confused by the if-else chains you have e.g. on line 111 where it looks like you could just use the else branch for every case?
The point of "ifs" there is to replace a variable "size" with a constant like 8, 16 so that the compiler can hardcode it in the code and maybe make it more efficient (for example, instead of counting the iterations the compiler can just repeat the instruction 8 times when iteration count is 8). I expect that the compiler would generate customized "linear_search" function versions instead of using generic version with unknown number of iterations. You can see the example in godbolt assembly code at line 825, where the compiler unrolled a loop for "linear_search" function with size = 8 - there is no loop, just 8 comparisons.
> As for the branch prediction, I would think most of the branches would be predicted correctly because usually you'll have runs of "not x" before each "is x". Switching between many loops of course hurts the prediction.
I don't think so. In binary search there is no predictable pattern, so the predictor would miss in 50% of cases. And in linear search there is a guaranteed miss in the last iteration.
> You could circumvent the counter issue by pinning to a specific core,
I think I'll try this in future.
> but I find I have to write some code myself
I can write it myself, but it would take 20-30 minutes including time to find and read the docs for functions, while for an LLM it takes less than a minute. But anyway I have to edit the result a lot; for example the LLM "forgot" that you need to use the result of the search to prevent the compiler from throwing the code away. Also, the free version of ChatGPT became much dumber a couple of months ago.
Some of the plots would have been much more helpful if instead of absolute value in seconds, the y-axis were the multiplier w.r.t binary search (and eyeballing suggests a relatively constant multiplier).
Obviously, this isn't changing the big-Oh complexity, but in the "real world", still nice to see a 2-4x speedup.
One thing that makes me nervous about the increased use of SIMD in new ways is that more processes will be using SIMD. This, in turn, makes context switches that much more expensive, as the SIMD registers must be saved and restored when switching to a new process that also uses SIMD.
Is this an actual, measurable, major issue or just a gut feeling? Context switches in general are suboptimal but pretty normal.
If your data follows a relatively uniform distribution, you can try out interpolation search: https://en.wikipedia.org/wiki/Interpolation_search
On optimizing binary search: https://en.algorithmica.org/hpc/data-structures/binary-searc...
I once did have a need for binary search in memory mapped files and I experimented with Eytzinger layout (which I learned from https://bannalia.blogspot.com/2015/06/cache-friendly-binary-...). It turned out that it was slower than plain binary search, I think because keys I was looking up were often clumped together thus it played quite well with cache anyway.
The title is slightly misleading, I mean yes, naive binary search might have larger constant but the algorithm is still O(log(n)). This is still a "divide and conquer" style algorithm, just with a bunch of CPU-specific optimizations. Also, this works well for simple data types like integers; with more complex objects (custom comparators) it matters less.
> The title is slightly misleading. I mean, yes, naive binary search might have a larger constant, but the algorithm is still O(log(n)).
I think the title is not misleading since the Big O notation is only supposed to give a rough estimate of the performance of an algorithm.
(I agree though that binary search is already extremely fast, so making something twice as fast won't move the needle for the vast majority of applications where the speed bottleneck is elsewhere. Even infinite speed, i.e. instant sorted search, would likely not be noticeable for most software.)
For me it's slightly misleading because it's almost like saying "I wrote a faster quicksort implementation, so it beats quicksort!". In this case the fundamental "divide and conquer" idea of binary search is still there; the article just does micro-optimizations (which seem to be not very portable and are less relevant/applicable for more complex data structures) in order to reduce the constant factor.
Yes, algorithmic complexity is theoretical and often ignores real-world constants, but it is usually useful when comparing algorithms on larger inputs, unless we are talking about "galactic algorithms" with insanely large constants.
The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n). This algorithm just uses modern processor architecture features to "improve" it on arrays of up to 4096 elements.
So not exactly "n" as in O(n).
Also: only for 16-bit integers.
> The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n)
> So not exactly "n" as in O(n).
For large enough inputs the algorithm with better Big O complexity will eventually win (at least in the worst cases). Yes, sometimes it never happens in practice when the constants are too large. But, say, 100 * n will eventually beat 5 * n * log(n) for large enough n (the crossover here is at log_2(n) > 20, i.e. n above roughly a million). Some advanced algorithms use sub-algorithms with worse Big O complexity but smaller constants on small enough sub-problems to improve performance. But that's more of an optimization detail than a completely different algorithm.
> This algorithm just uses modern and current processor architecture artifacts to "improve" it on arrays of up to 4096
Yes, that's my point. It's basically "I made binary search for integers X times faster on some specific CPUs". "Beating binary search" is somewhat misleading; it's more like "micro-optimizing binary search".
A beautiful algorithm.
Would there be any value in using SIMD to check the whole cache line that you fetch for exact matches during the narrowing phase, for an early out?
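Presumably something along these lines; a minimal AArch64 NEON sketch that checks 8 of the 32 16-bit values in a 64-byte line per call (the function name is made up):

    #include <arm_neon.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Check 8 contiguous 16-bit values (16 bytes) for an exact match at once. */
    static bool block_contains(const uint16_t *block, uint16_t key) {
        uint16x8_t vals = vld1q_u16(block);       /* load 8 candidates        */
        uint16x8_t keys = vdupq_n_u16(key);       /* broadcast the key        */
        uint16x8_t eq   = vceqq_u16(vals, keys);  /* 0xFFFF in matching lanes */
        return vmaxvq_u16(eq) != 0;               /* did any lane match?      */
    }

Four such checks would cover a full cache line once the range is narrow enough.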
…for 16-bit integers, and it’s still a binary search with the same asymptotic complexity, just a constant-factor speedup.
See also: Static search trees: 40x faster than binary search
- https://curiouscoding.nl/slides/p99-text/
- https://curiouscoding.nl/posts/static-search-tree/
- https://news.ycombinator.com/item?id=42562847 (656 points; 232 comments)
I remember I had a pedagogy class in Uni taught by psychology faculty, and was messing with them by proposing a mock syllabus where we'd teach students binary search, then the advanced advanced ones ternary search, and the very advanced, Quaternary, with a big Q, as in the geological period. Jokes on me now, I suppose.
So is the SIMD the magic piece here, or is it the interpolation search? If the data is evenly distributed, that is pretty much optimal for interpolation search.
In the Intel CPU + cold cache case, the quad search matters. In the other three cases, only the SIMD matters.
To put it another way: this is addressed in the article.
If you know about the distribution of keys you can do even better by factoring that knowledge into where you split.
This was the entry-level project we did in a hardware optimization course I took maybe 15 years ago, using SIMD instructions. Lots of things can be naively optimized by unrolling loops like this. Compilers do some of this themselves.
And here I thought this was going to be related to quaternions
TL;DR the author developed an algorithm to solve this specific problem:
> The popular Roaring Bitmap format uses arrays of 16-bit integers of size ranging from 1 to 4096. We sometimes have to check whether a value is present.
There's no claim that this algorithm is universal and performs equally well for other problems.
In fact, note how the compare operation on the data types involved (16-bit integers) is quite cheap for modern CPUs. A similar problem with strings instead of integers would get no benefit from the author's ideas and would actually fare worse, since the extra comparisons issued each step would be wasted work.
Surprisingly simple concept!
I always wondered if we could get any faster than O(log n). Glad we're making progress!
What about non-binary search?
you know things are bad when lemire.me feels he needs to post an AI slop image :(
You can improve interpolation search by monitoring progress and, if it's not converging fast enough, alternating with bisection steps (and, as is clear from the article, switching to linear/vector scanning when the range is small enough).
Often, when an interpolation probe is wrong, the interpolation will tend to nail you against one side or the other of the range, so the worst case is linear. By allowing only a finite number of failed probes (meaning probes that only move the same boundary as the last one; an optimally working search will on average alternate hi/lo), you can maintain the log guarantee of bisection.
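A minimal sketch of that guarded scheme (illustrative, not anyone's production code): interpolate by default, but force a bisection step after two consecutive probes that move the same boundary.

    #include <stdbool.h>
    #include <stdint.h>

    /* Interpolation search that falls back to a bisection step after two
       consecutive probes move the same boundary, preserving the O(log n)
       worst case. Returns true if key is present in the sorted array. */
    static bool guarded_interpolation_search(const uint16_t *arr, int32_t n, uint16_t key) {
        int32_t lo = 0, hi = n - 1;
        int side = 0;  /* +1/-1: which boundary the last probe moved; +2/-2: twice in a row */
        while (lo <= hi && key >= arr[lo] && key <= arr[hi]) {
            int32_t mid;
            if (arr[hi] == arr[lo] || side == 2 || side == -2) {
                mid = lo + (hi - lo) / 2;  /* forced bisection step */
            } else {
                /* estimate the probe position from the boundary values */
                mid = lo + (int32_t)((int64_t)(key - arr[lo]) * (hi - lo)
                                     / (arr[hi] - arr[lo]));
            }
            if (arr[mid] == key) return true;
            if (arr[mid] < key) { lo = mid + 1; side = (side >= 1) ? 2 : 1; }
            else                { hi = mid - 1; side = (side <= -1) ? -2 : -1; }
        }
        return false;
    }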
>"Virtually all processors today have data parallel instructions (sometimes called SIMD) that can check several values at once.
[...]
The binary search checks one value at a time. However, recent processors can load and check more than one value at once. They have excellent memory-level parallelism. This suggests that instead of a binary search, we might want to try a quaternary search..."
First of all, brilliant observations! (Overall, a great article too!)
Yes, today's processors indeed have a degree of parallelism that was unimagined at the time the original mathematicians, the computer scientists to be, conceived of binary search...
Now I myself wonder whether these ideas might be extended to GPUs, that is, whether the massively parallel execution capability of GPUs could be harnessed to search data the way binary search does, and what such an appropriately parallelized algorithm/data structure would look like... Keep in mind that if we consider an updateable data structure, parts of it may need to be locked while multiple searches and updates are occurring simultaneously. So what data structure/algorithm would be the most efficient for a massively parallel scenario like that?
Anyway, great article and brilliant observations!
Previous related: https://news.ycombinator.com/item?id=47726340
40x Faster Binary Search - This talk will first expose the lie that binary search takes O(lg n) time — it very much does not! Instead, we will see that binary search has only constant overhead compared to an oracle. Then, we will exploit everything that modern CPUs have to offer (SIMD, ILP, prefetching, efficient caching) in order to gain 40x increased throughput over the Rust standard library implementation.
Here's my version with a key spline improvement. I should really write this up...
    #include <stdbool.h>
    #include <stdint.h>
    #include <arm_neon.h>

    /* Author: aam@fastmail.fm
     *
     * Apple M4 Max (P-core) variant of simd_quad which uses a key spline
     * to great effect (blog post summary incoming!)
     */
    bool simd_quad_m4(const uint16_t *carr, int32_t cardinality, uint16_t pos) {
        enum { gap = 64 };
        /* ... */
    }

    /*
     * Spine variant, M4 edition.
     *
     * Pack the interpolation probe keys into a dense contiguous region so the
     * cold-cache pointer chase streams through consecutive cache lines:
     *
     *   n=4096 -> 64 spine keys -> 128 B = 1 M4 cache line
     *   n=2048 -> 32 spine keys ->  64 B = half a line
     *   n=1024 -> 16 spine keys ->  32 B
     *
     * The entire interpolation phase for a max-sized Roaring container now
     * lives in one cache line. The final SIMD block check still loads from
     * carr.
     *
     * The num_blocks <= 3 fallback:
     * with very few blocks the carr-based probes accidentally prime the final
     * block's lines, which the spine path disrupts.
     */
    bool simd_quad_m4_spine(const uint16_t *carr, const uint16_t *spine,
                            int32_t cardinality, uint16_t pos) {
        enum { gap = 64 };
        /* ... */
    }

    // Build the spine for a given carr. Caller allocates cardinality/64 u16s.
    void simd_quad_m4_build_spine(const uint16_t *carr, int32_t cardinality,
                                  uint16_t *spine) {
        enum { gap = 64 };
        int32_t num_blocks = cardinality / gap;
        for (int32_t i = 0; i < num_blocks; i++) {
            spine[i] = carr[(i + 1) * gap - 1];
        }
    }
Will I get a job if i say i can beat binary search?
I dunno but I once didn't get a job because I argued with the interviewer about my (Perl) implementation of binary search[0] - he said it was buggy, I proved it wasn't, he insisted it was, I proved it wasn't some more, I was correct, he was miffed. No job for me.
[0] A nonsense thing to ask people to implement in an interview
Since binary search is already very fast with its O(log n) time complexity: are there any real world applications which could practically benefit from this improvement?
This is a drop-in improvement for essentially any binary search over 16-bit integer members.
With "practically benefit" I meant a speedup that is noticable. Is there any software that is significantly bottlenecked by the speed of sorted search?
I think it's possible to come up with a situation where you want to do a sorted search per every pixel in the screen, for every frame.
That sounds promising. I think ray tracing checks for ray intersection over an unsorted polygon soup. Sorted data seems hard to come by.
I guess it matters if you have to do lookups in a tight loop. If you only do this occasionally, I think it's not worth it, especially for complex objects with custom comparators. The algorithm is still O(log(n)), just a more advanced "divide and conquer" with a smaller constant.
I would expect the standard library of various languages to provide an optimised implementation such as this. Then everyone downstream benefits, and benefits from future improvements when compiled for a newer version of the language / executed under a newer version of the runtime.
You see this in Rust, where they replaced the hash tables many years ago, the channel implementation a couple of years ago, and most recently the sort implementations for both stable and unstable sort. I expect other languages / runtimes do similar things over time as well, as CPUs change and new approaches are discovered.
I wouldn't. This is very specialized to the type of the elements.
Some languages, such as C++, allow for specialisation via templates and compile-time evaluation (constexpr). It would be possible to detect when the size of the data type matches one of the integer types, it is a POD, it is comparable via memcmp, etc., and use SIMD-optimised algorithms.
It is looking like C++26 will get compile-time reflection, which would make things like this even more feasible.