Sven Groot wrote:

@evildictaitor: Your implementation means that both Count and getting an item are no longer O(1), and that the items are not stored in contiguous memory (which is what is usually expected from a List<T>).

In the worst case, with a list constructed by adding items one by one, these operations now have a worst case complexity of O(n).

I was optimising for the benchmark :)

But more seriously, Count can be cached, so in a loop like for(int i = 0; i < Count; i++) { Something(i); } you get an amortized cost of O(1). Indexing is more of a problem, because you can't have both the indexer and Add at O(1). What you can realize, though, is that the constant c in the indexer's O(cn) is the reciprocal of the size of the prefix blocks: by collapsing up to P adjacent single-item prefixes into one block, you incur an O(P) operation once every P Adds, i.e. an amortized O(1) Add. Your indexer remains O(n), but the constant is small if you make P large.
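To make that concrete, here's a minimal sketch of the idea, assuming the list is a chain of "prefix" nodes where each Add normally just links a new single-item node onto the previous prefix. The names PrefixList, Node and CollapseThreshold are made up for illustration; this is not the implementation under discussion.

using System;

public class PrefixList<T>
{
    // Each node holds a block of items plus a link to the preceding prefix.
    private sealed class Node
    {
        public T[] Items;
        public Node Prev;
    }

    private const int CollapseThreshold = 32; // the "P" from the discussion

    private Node _head;           // most recent prefix
    private int _count;           // cached so Count stays O(1)
    private int _pendingSingles;  // single-item nodes added since the last collapse

    public int Count => _count;   // O(1): just the cached value

    public void Add(T item)
    {
        _head = new Node { Items = new[] { item }, Prev = _head };
        _count++;
        // Once every P Adds, collapse the last P single-item nodes into one
        // block: an O(P) operation that amortizes to O(1) per Add.
        if (++_pendingSingles == CollapseThreshold)
        {
            Collapse();
            _pendingSingles = 0;
        }
    }

    private void Collapse()
    {
        var block = new T[CollapseThreshold];
        Node n = _head;
        // The P most recent nodes each hold one item; store them oldest-first.
        for (int i = CollapseThreshold - 1; i >= 0; i--)
        {
            block[i] = n.Items[0];
            n = n.Prev;
        }
        _head = new Node { Items = block, Prev = n };
    }

    public T this[int index]
    {
        get
        {
            if (index < 0 || index >= _count)
                throw new ArgumentOutOfRangeException(nameof(index));
            // Walk back from the newest prefix. With blocks of size P this
            // visits roughly n / P nodes: still O(n), but with a 1/P constant.
            int remaining = _count - 1 - index; // items newer than the target
            Node n = _head;
            while (remaining >= n.Items.Length)
            {
                remaining -= n.Items.Length;
                n = n.Prev;
            }
            // Within a block, items are stored oldest-first.
            return n.Items[n.Items.Length - 1 - remaining];
        }
    }
}

Nothing here changes Sven's point about contiguous memory, of course; the blocks are still scattered on the heap. It only shrinks the indexing constant from 1 to 1/P.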

If you get caught up in trying to reduce the big O for indexing (and miss the point that the constant factor dominates until you get really big lists, and most lists are small), you could make P grow with n, making the indexer O(lg n), but you'd then incur a really expensive aggregation operation on some Adds, putting your Add back at O(lg n) as well.
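One plausible shape for that variant, again just a sketch rather than anyone's actual implementation: keep blocks of distinct power-of-two sizes and merge equal-sized neighbours like a binary-counter carry chain, so the occasional Add pays for a large merge. The name GrowingPrefixList is invented.

using System;
using System.Collections.Generic;

public class GrowingPrefixList<T>
{
    // Blocks of distinct power-of-two sizes, newest first. At most
    // lg n + 1 blocks exist, so the indexer scans O(lg n) of them.
    private readonly List<T[]> _blocks = new List<T[]>();
    private int _count;

    public int Count => _count;

    public void Add(T item)
    {
        var block = new[] { item };
        // Carry-merge equal-sized neighbours. Most Adds do nothing here,
        // but each element is recopied O(lg n) times over its lifetime,
        // so Add amortizes to O(lg n).
        while (_blocks.Count > 0 && _blocks[0].Length == block.Length)
        {
            var older = _blocks[0];
            _blocks.RemoveAt(0);
            var merged = new T[older.Length + block.Length];
            Array.Copy(older, 0, merged, 0, older.Length);
            Array.Copy(block, 0, merged, older.Length, block.Length);
            block = merged;
        }
        _blocks.Insert(0, block);
        _count++;
    }

    public T this[int index]
    {
        get
        {
            int remaining = _count - 1 - index;  // items newer than the target
            foreach (var block in _blocks)       // at most O(lg n) blocks
            {
                if (remaining < block.Length)
                    return block[block.Length - 1 - remaining];
                remaining -= block.Length;
            }
            throw new ArgumentOutOfRangeException(nameof(index));
        }
    }
}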

Hopefully the lists are heavily benchmarked :)