I have a small segment of code that sorts a dictionary file using std::set, relying on the set to remove any duplicates along the way. My approach was to read the dictionary's elements into a vector, insert them into a set, and then copy them back into the vector.
I also have a timing routine that measures the elapsed milliseconds; it currently reports around 850 ms. I'd like to optimize this as much as possible, but I'm not sure how to go about it.
Here's the code:
template <typename T>
void
sorting (vector<T> & dict)
{
    set<T> s;
    for (size_t i = 0; i < dict.size(); ++i)
    {
        s.insert (dict[i]);
    }
    // assign() replaces the contents and shrinks dict to the set's size;
    // the original std::copy left stale elements at the tail whenever
    // duplicates had been removed, because dict kept its old size.
    dict.assign (s.begin(), s.end());
}
Any help to optimize this is greatly appreciated! Thank you!
Indeed, I would follow norm b's advice and use std::sort in combination with std::unique, rather than inserting every element into a set and then copying every element back.
I can't imagine std::sort being as slow as the set-based approach: building the set costs an O(log n) tree insertion per element plus a node allocation each time, whereas sorting a vector in place touches contiguous memory and allocates nothing.
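A minimal sketch of that suggestion, keeping the original function's name and signature (the call to `erase` on the `unique` iterator is the standard "erase-remove" idiom for trimming the duplicates that `std::unique` shifts to the tail):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sort-and-unique replacement for the set-based version: sort the vector
// in place, then std::unique moves duplicates past a returned iterator,
// and erase() trims them off so dict ends up sorted and duplicate-free.
template <typename T>
void sorting (std::vector<T> & dict)
{
    std::sort (dict.begin(), dict.end());
    dict.erase (std::unique (dict.begin(), dict.end()), dict.end());
}
```

For example, `sorting` applied to `{"pear", "apple", "pear", "banana"}` yields `{"apple", "banana", "pear"}`. Note that `std::unique` only removes *adjacent* duplicates, which is why the sort must come first.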