While digging into the skip list implementation, I realized that the time complexity of ZCOUNT could be reduced to O(log(N)+log(M)) if it took the skip (span) values into account and did a second binary search for the end element.
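Roughly what I have in mind, as a sketch rather than working Redis code (the struct and function names here are made up, and the node layout is simplified from Redis's actual zskiplist, which stores a span per level): each forward pointer records how many nodes it jumps over, so the count becomes the difference of two rank descents instead of a walk over the matching elements.

```c
/* Hypothetical sketch: counting a score range in a skip list via spans.
 * Names (sknode, skiplist, sl_rank, sl_count_range) are mine, not Redis's. */
#define SKIPLIST_MAXLEVEL 32

typedef struct sknode {
    double score;
    struct sklevel {
        struct sknode *forward;
        unsigned long span;   /* nodes skipped (incl. destination) by this pointer */
    } level[SKIPLIST_MAXLEVEL];
} sknode;

typedef struct skiplist {
    sknode *header;           /* sentinel node; its score field is unused */
    int level;                /* highest level currently in use */
    unsigned long length;     /* total number of elements */
} skiplist;

/* Rank descent: count elements with score < `score` (or <= when `inclusive`),
 * accumulating spans on the way down instead of visiting each node. */
static unsigned long sl_rank(skiplist *sl, double score, int inclusive) {
    sknode *x = sl->header;
    unsigned long rank = 0;
    for (int i = sl->level - 1; i >= 0; i--) {
        while (x->level[i].forward &&
               (inclusive ? x->level[i].forward->score <= score
                          : x->level[i].forward->score <  score)) {
            rank += x->level[i].span;
            x = x->level[i].forward;
        }
    }
    return rank;
}

/* Count elements with min <= score <= max as a difference of two ranks. */
unsigned long sl_count_range(skiplist *sl, double min, double max) {
    if (min > max) return 0;
    return sl_rank(sl, max, 1) - sl_rank(sl, min, 0);
}
```

Each descent costs at most O(log(N)), so the count never has to walk the M elements in between.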
I'm not sure how much sense it makes to do this, given that min and max in my use cases are often pretty close together, but counting across large datasets could yield better average performance.
Is this worthwhile? I don't consider myself skilled enough with C to make this change and test the performance implications; I'd probably spend a day getting hung up on why my pointers aren't working.