It depends on what those quantities represent. Now I see that ne555 may have had a point when he said that you may need to transform the data somehow. Specifically, if the leading numbers are more significant than the trailing ones, i.e. the first column is on a different scale from the last column, or something like that, then a lexicographical sort that compares with a tolerance will not be consistent, and you will need to work harder to make the comparison consistent.
For example, let's suppose the tolerance is 0.1 and we have:
0.9 1.5
1.0 1.0
1.1 0.5
0.9 ~= 1.0 and 1.5 > 1.0, so you get (0.9 1.5) > (1.0 1.0), which is good, because the 0.1 difference is probably just error.
1.1 ~= 1.0 and 0.5 < 1.0, so you get (1.0 1.0) > (1.1 0.5), which is good, because the 0.1 difference is probably just error again.
From the two above it follows that (0.9 1.5) > (1.1 0.5). This is, however, inconsistent: 1.1 - 0.9 exceeds the tolerance, so a direct comparison of the first columns gives 0.9 < 1.1 and hence (0.9 1.5) < (1.1 0.5), the opposite result.
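To make the inconsistency concrete, here is a minimal sketch of such a tolerance-based scalar comparison (the name fuzzy_less and the fixed tolerance are illustrative, not from your code):

// NOT a strict weak ordering: "equal within tolerance" is not
// transitive, and std::sort / std::set require a strict weak ordering.
bool fuzzy_less(double a, double b, double tol = 0.1)
{
    return a < b - tol; // "definitely less", i.e. beyond the tolerance
}

With these values, fuzzy_less reports 0.9 equivalent to 1.0 and 1.0 equivalent to 1.1 (neither compares less in either direction), yet fuzzy_less(0.9, 1.1) is true, so the induced equivalence is not transitive and std::sort given such a comparator has undefined behaviour.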
One solution is to compute some kind of overall value for the entire bin and sort with that value as the sorting key. For example, you may accumulate the columns with different scales. You may object: why sever useful information by merging the columns like that? Because the information is not entirely reliable to begin with; otherwise you wouldn't be having this problem at all. You simply produce one useful overall value from a set of arbitrarily unreliable sources.
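A sketch of that idea, assuming (my assumption, not your data) that each leading column is ten times more significant than the next, with minimal stand-ins for your Bin and Item types:

#include <cstddef>
#include <vector>

struct Item { double Duzina; };            // minimal stand-in for your type
struct Bin  { std::vector<Item> vItems; }; // minimal stand-in for your type

// Accumulate the columns into one overall key; leading columns are
// weighted more heavily. The factor of 10 is a placeholder weight.
double OverallKey(const Bin& bin)
{
    double key = 0.0;
    double weight = 1.0;
    for (std::size_t i = bin.vItems.size(); i-- > 0; ) // last column first
    {
        key += weight * bin.vItems[i].Duzina;
        weight *= 10.0;
    }
    return key;
}

Sorting by OverallKey(left) < OverallKey(right) is consistent again, because plain operator< on doubles is transitive.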
Also, it depends on whether shorter bins are compared as if padded on the left (like decimal numerals, where fewer digits means a smaller number) or on the right (like words in a dictionary, where a prefix sorts first); a sketch of the dictionary-like variant follows the code below.
Other than that, the simple and naive solution with numeral-like comparison is this:
bool operator () (const Bin& left, const Bin& right)
{
    if (left.vItems.size() == right.vItems.size())
    {
        // Same length: compare element by element, most significant first.
        for (std::size_t i = 0; i < left.vItems.size(); ++i)
        {
            if (left.vItems[i].Duzina != right.vItems[i].Duzina)
            {
                return left.vItems[i].Duzina < right.vItems[i].Duzina;
            }
        }
        return false; // all elements equal, so neither bin is "less"
    }
    // Different lengths: like numerals, the shorter one is the smaller.
    return left.vItems.size() < right.vItems.size();
}
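For the dictionary-like (right-padded) variant mentioned above, the standard library already does the element-by-element walk; a sketch, again assuming the same Bin and Item types:

#include <algorithm>

// Dictionary-like ordering: a shorter sequence that is a prefix of a
// longer one comes first, which is exactly what
// std::lexicographical_compare implements.
bool DictLess(const Bin& left, const Bin& right)
{
    return std::lexicographical_compare(
        left.vItems.begin(),  left.vItems.end(),
        right.vItems.begin(), right.vItems.end(),
        [](const Item& a, const Item& b) { return a.Duzina < b.Duzina; });
}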
But if you need robustness, you will need to do much more.
Regards