###

Question:

We have large (100,000+ elements) ordered vectors of structs (`operator<` is overloaded to provide the ordering):

```cpp
std::vector<MyType> vectorMyTypes;
std::sort(vectorMyTypes.begin(), vectorMyTypes.end());
```

My problem is that we're seeing performance problems when adding new elements to these vectors while preserving sort order. At the moment we're doing something like:

```cpp
for ( /* a very large set */ ) {
    vectorMyTypes.push_back(newType);
    std::sort(vectorMyTypes.begin(), vectorMyTypes.end());
    ...
    ValidateStuff(vectorMyTypes); // this method expects the vector to be ordered
}
```

This isn't *exactly* what our code looks like, since I know this example could be optimised in different ways; however, it gives you an idea of how performance could be a problem, because I'm sorting after every `push_back`.

I think I essentially have two options to improve performance:

1. Use a (hand-crafted?) *insertion sort* instead of `std::sort` to improve the sort performance (insertion sorts on a partially sorted vector are blindingly quick).
2. Create a heap by using `std::make_heap` and `std::push_heap` to maintain the sort order.

My questions are:

**Should I implement an insertion sort? Is there something in Boost that could help me here?**

**Should I consider using a heap? How would I do this?**

**Edit:**

Thanks for all your responses. I understand that the example I gave was far from optimal and it doesn't fully represent what I have in my code right now. It was simply there to illustrate the performance bottleneck I was experiencing - perhaps that's why this question isn't seeing many up-votes :)

Many thanks to you Steve, it's often the simplest answers that are the best, and perhaps it was my over analysis of the problem that blinded me to perhaps the most obvious solution. I do like the neat method you outlined to insert directly into a pre-ordered vector.

As I've commented, I'm constrained to using vectors right now, so std::set, std::map, etc aren't an option.

###

Solution 1:

Ordered insertion doesn't need Boost:

```cpp
vectorMyTypes.insert(
    std::upper_bound(vectorMyTypes.begin(), vectorMyTypes.end(), newType),
    newType);
```

`upper_bound` provides a valid insertion point provided that the vector is sorted to start with, so as long as you only ever insert elements in their correct place, you're done. I originally said `lower_bound`, but if the vector contains multiple equal elements, then `upper_bound` selects the insertion point which requires less work.

This does have to copy O(n) elements, but you say insertion sort is "blindingly fast", and this is faster. If it's not fast enough, you have to find a way to add items in batches and validate at the end, or else give up on contiguous storage and switch to a container which maintains order, such as `set` or `multiset`.

A heap does not maintain order in the underlying container, but is good for a priority queue or similar, because it makes removal of the maximum element fast. You say you want to maintain the vector in order, but if you never actually iterate over the whole collection in order then you might not need it to be fully ordered, and that's when a heap is useful.

###

Solution 2:

According to Item 23 of Meyers' *Effective STL*, you should use a sorted vector if your application uses its data structures in three phases. From the book, they are:

1. **Setup.** Create a new data structure by inserting lots of elements into it. During this phase, almost all operations are insertions and erasures. Lookups are rare or nonexistent.
2. **Lookup.** Consult the data structure to find specific pieces of information. During this phase, almost all operations are lookups. Insertions and erasures are rare or nonexistent. There are so many lookups, the performance of this phase makes the performance of the other phases incidental.
3. **Reorganize.** Modify the contents of the data structure, perhaps by erasing all the current data and inserting new data in its place. Behaviorally, this phase is equivalent to phase 1. Once this phase is completed, the application returns to phase 2.

If your use of your data structure resembles this, you should use a sorted vector and then use `binary_search` as mentioned. If not, a typical associative container should do it; that means a *set, multiset, map or multimap*, as those structures **are ordered by default**.

###

Solution 3:

Why not just use a binary search to find where to insert the new element? Then you will insert exactly into the required position.

###

Solution 4:

If you need to insert a lot of elements into a sorted sequence, use `std::merge`, potentially sorting the new elements first:

```cpp
void add( std::vector<Foo> & oldFoos, const std::vector<Foo> & newFoos ) {
    std::vector<Foo> merged;
    // precondition: oldFoos _and newFoos_ are sorted
    merged.reserve( oldFoos.size() + newFoos.size() ); // only for std::vector
    std::merge( oldFoos.begin(), oldFoos.end(),
                newFoos.begin(), newFoos.end(),
                std::back_inserter( merged ) );
    // apply std::unique, if wanted, here
    merged.erase( std::unique( merged.begin(), merged.end() ), merged.end() );
    oldFoos.swap( merged ); // commit changes
}
```

###

Solution 5:

Using a binary search to find the insertion location isn't going to speed up the algorithm much because it will still be O(N) to do the insertion (consider inserting at the beginning of a vector - you have to move every element down one to create the space).

A tree (such as a heap) will be O(log(N)) to insert, giving much better performance.

See http://www.sgi.com/tech/stl/priority_queue.html

Note that a tree will still have worst case O(N) performance for insert unless it is balanced, e.g. an AVL tree.

###

Solution 6:

Why not use `boost::multi_index`?

NOTE: `boost::multi_index` does not provide memory contiguity, a property of `std::vector` by which elements are stored adjacent to one another in a single block of memory.

###

Solution 7:

There are a few things you need to do.

1. You may want to consider making use of `reserve()` to avoid excessive re-allocation of the entire vector. If you have knowledge of the size it will grow to, you may gain some performance by doing `reserve()`s yourself (rather than having the implementation do them automatically using the built-in heuristic).
2. Do a binary search to find the insertion location. Then `resize` and shift everything following the insertion point up by one to make room.
3. Consider: do you really want to use a vector? Perhaps a `set` or `map` is better.

The advantage of binary search over `lower_bound` is that if the insertion point is close to the end of the vector you don't have to pay the Θ(n) complexity.

###

Solution 8:

If you want to insert an element into the "right" position, why do you plan on using **sort**? Find the position using `lower_bound` and insert, using, well, the `insert` method of the vector. That will still be O(N) to insert a new item.

A heap is not going to help you, because a heap is not sorted. It allows you to get at the smallest element quickly, and then quickly remove it and get the next smallest element. However, the data in a heap is not stored in sorted order, so if you have algorithms that must iterate over the data in order, it will not help.

I am afraid your description skimped on too much detail, but it seems like a plain vector is just not the right container for the task. `std::deque` is much better suited for insertion in the middle, and you might also consider `std::set`. I suggest you explain why you need to keep the data sorted to get more helpful advice.

###

Solution 9:

You might want to consider using a BTree or a Judy Trie.

- You don't want to use contiguous memory for large collections, insertions should not take O(n) time;
- You want to use at least binary insertion for single elements, multiple elements should be presorted so you can make the search boundaries smaller;
- You do not want your data structure wasting memory, so nothing with left and right pointers for each data element.

###

Solution 10:

As others have said, I'd probably have created a B-tree out of a linked list instead of using a vector. Even if you get past the sorting issue, vectors have the problem of fully reallocating when they need to grow, assuming you don't know your maximum size beforehand.

If you are worried about a list allocating on different memory pages and causing cache-related performance issues, preallocate your nodes in an array (pool the objects) and insert these into the list.

You can add a value in your data type that denotes whether it is allocated off the heap or from a pool. This way, if you detect that your pool runs out of room, you can start allocating off the heap and throw an assert or something to yourself so you know to bump up the pool size (or make this a command-line option to set).

Hope this helps, as I see you already have lots of great answers.
