std::sort assert error

I got the following error message when trying to sort:

...file...algorithm
Expression: invalid operator <
ABORT ... RETRY...


This is the relevant code that fails:

bool CloseEnough(double a, double b)
{
	return (a + 0.000001 >= b) && (a - 0.000001 <= b);
}

//functor passed to std::sort
class HaveSameElements
{
public:
	bool operator () (const Bin& left, const Bin& right)
	{
		bool result = false;
		if(left.vItems.size() == right.vItems.size())
		{
			for(unsigned long int i = 0; i < left.vItems.size(); ++i)
			{
				result = CloseEnough(left.vItems[i].Duzina, right.vItems[i].Duzina);
				if(!result)
				{
					break;
				}
			}
		}
		return result;
	}
};

I was looking & looking & looking... and can't see what the hell is wrong!
Which compiler/version/options are you using?
How are you calling std::sort?
MSVC++ 2010 express edition
Debug build / default options
Calling std::sort is not the problem, but here you go:

std::vector<Bin> vBIN;
...
std::sort(vBIN.begin(), vBIN.end(), HaveSameElements ());
The problem is that CloseEnough is not the right kind of relation. First, it resembles an equivalence relation, while the sorting routine expects a strict weak ordering - it is like the difference between == and <. Second, CloseEnough is not even an equivalence relation, because it lacks transitivity. Namely, a and b may be close enough, and b and c may also be close enough, but a and c may nonetheless not be close enough.
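A quick sketch that demonstrates the missing transitivity (the specific values are made up for illustration):

#include <iostream>

bool CloseEnough(double a, double b)
{
	return (a + 0.000001 >= b) && (a - 0.000001 <= b);
}

int main()
{
	double a = 1.0;
	double b = 1.0000008; // within 0.000001 of a
	double c = 1.0000016; // within 0.000001 of b, but not of a

	std::cout << CloseEnough(a, b) << '\n'; // prints 1
	std::cout << CloseEnough(b, c) << '\n'; // prints 1
	std::cout << CloseEnough(a, c) << '\n'; // prints 0 - transitivity fails
}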

Regards
But what then should I use as an "operator ==" for the double type?
I see your point. Actually, using a tolerance with floating-point numbers is not something you do everywhere. There is no recipe for working with inexact computations by always giving them a tolerance. You decide when and how to apply tolerance, and whether the tolerance must be symmetric, and so on.

Particularly in your case, since you are sorting, you can simply use the built-in less-than operator (<) for double. You will not sort better in any measurable sense by providing tolerances (leaving aside the fact that it also breaks the algorithm). And again - sorting requires a less-than comparison, not an equals-to comparison. I mean, it's understandable. Sorting is ordering things according to some pre-defined order, which requires testing that order, not testing distinctness.
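A minimal sketch of what that means in practice - sorting raw lengths with the default comparison (written for C++03, since you are on MSVC++ 2010):

#include <algorithm>
#include <vector>

int main()
{
	std::vector<double> lengths;
	lengths.push_back(2.1);
	lengths.push_back(9.5);
	lengths.push_back(0.2);

	// std::sort defaults to operator<, which is a valid strict weak
	// ordering for ordinary (non-NaN) doubles
	std::sort(lengths.begin(), lengths.end());
}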
To compare doubles, that is fine. If transitivity is really an issue, choose another tolerance (or transform the data so that transitivity stops being an issue).

What is the output that you expect? Do you want to remove "repeated" elements?
@ne555

But why would anyone use tolerances when sorting, especially with a non-stable sort? What is the point - ideological consistency? And I don't want to sound overly pedantic, but transitivity is always an issue when you sort. I mean, the whole point of ordering things is that the order is something transitive.
OK, suppose I have this:
Bin[0] = 2.1, 1.3, 1.0, 0.2
Bin[1] = 9.5, 4.8, 3.0, 1.6
Bin[2] = 2.1, 1.3, 1.0, 0.2
Bin[3] = 9.5, 4.8, 3.0, 1.6
Bin[4] = 8.4, 6.2, 3.3, 2.2


I need the sort to produce this:
Bin[0] = 2.1, 1.3, 1.0, 0.2
Bin[1] = 2.1, 1.3, 1.0, 0.2
Bin[2] = 9.5, 4.8, 3.0, 1.6
Bin[3] = 9.5, 4.8, 3.0, 1.6
Bin[4] = 8.4, 6.2, 3.3, 2.2

How?

EDIT: Bin doesn't have a fixed item count! It could also look like this:
Bin[0] = 8.1, 1.3 
Bin[1] = 4.1, 8.3, 1.0, 0.2
Bin[2] = 6.5, 9.8, 3.0
Bin[3] = 9.5, 4.8, 3.0, 1.6
Bin[4] = 1.4
It depends on what those quantities represent. Now I see that ne555 may have had a point when he said that you may need to transform the data somehow. Specifically, if the leading numbers are more significant than the trailing ones, i.e. the first column is on a different scale from the last column, or something like that, then a tolerance-based lexicographical sort will not be consistent, and you will need to work harder to make the comparison well-behaved.

For example, let's suppose that the tolerance is 0.1 and we have:
0.9 1.5
1.0 1.0
1.1 0.5

0.9 ~= 1.0 and 1.5 > 1.0, so you have (0.9 1.5) > (1.0 1.0), which is good, because the 0.1 difference is probably an error.
1.1 ~= 1.0 and 0.5 < 1.0, so you have (1.0 1.0) > (1.1 0.5), which is good, because the 0.1 difference is probably an error again.
From the two above it follows that (0.9 1.5) > (1.1 0.5). That is, however, inconsistent, because 1.1 - 0.9 exceeds the tolerance and 0.9 < 1.1.
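Here is that contradiction in code (a sketch; ApproxLess is a made-up name, and results exactly at the tolerance boundary depend on the usual round-to-nearest behavior of doubles):

#include <iostream>

// hypothetical tolerance-based lexicographic "less than" for pairs
bool ApproxLess(const double a[2], const double b[2])
{
	const double tol = 0.1;
	if(a[0] + tol < b[0]) return true;   // a[0] definitely smaller
	if(b[0] + tol < a[0]) return false;  // a[0] definitely larger
	return a[1] < b[1];                  // "equal" within tolerance
}

int main()
{
	const double x[2] = { 0.9, 1.5 };
	const double y[2] = { 1.0, 1.0 };
	const double z[2] = { 1.1, 0.5 };

	std::cout << ApproxLess(y, x) << '\n'; // 1: (1.0 1.0) < (0.9 1.5)
	std::cout << ApproxLess(z, y) << '\n'; // 1: (1.1 0.5) < (1.0 1.0)
	std::cout << ApproxLess(z, x) << '\n'; // 0: but NOT (1.1 0.5) < (0.9 1.5)
}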

One solution is to compute some kind of overall value for the entire bin and use it as the sorting key, so to speak. For example, you may accumulate the columns with different scales. You may ask - why sever useful information by merging the columns like that? Because it is not entirely useful; otherwise you wouldn't be having this problem to begin with. You simply produce one useful overall value from a set of arbitrarily unreliable sources.
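For instance, a sketch of that idea (the weight factor and the function names are mine, purely for illustration; it assumes the Bin/Item classes shown later in the thread):

// collapse each bin to one overall value and sort by that
double OverallValue(const Bin& bin)
{
	double total = 0.0;
	double weight = 1.0;
	for(unsigned long int i = 0; i < bin.vItems.size(); ++i)
	{
		total += weight * bin.vItems[i].Duzina;
		weight *= 0.01; // each later column counts 100 times less
	}
	return total;
}

struct CompareByOverall
{
	bool operator () (const Bin& left, const Bin& right) const
	{
		// induced by < on double, so it is a strict weak ordering
		return OverallValue(left) < OverallValue(right);
	}
};

// usage: std::sort(vBIN.begin(), vBIN.end(), CompareByOverall());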

Also, it depends on whether you are padding to the left (like decimal numerals) or to the right (like a language dictionary).

Other than that, the simple and naive solution with numeral-like comparison is this:
bool operator () (const Bin& left, const Bin& right)
{
	if(left.vItems.size() == right.vItems.size())
	{
		// same item count: the first differing length decides the order
		for(unsigned long int i = 0; i < left.vItems.size(); ++i)
		{
			if(left.vItems[i].Duzina != right.vItems[i].Duzina)
			{
				return left.vItems[i].Duzina < right.vItems[i].Duzina;
			}
		}

		return false; // all lengths equal: neither bin is "less"
	}

	return left.vItems.size() < right.vItems.size(); // fewer items sorts first
}
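For reference, here is a self-contained sketch of the comparison above in action (the bare-bones Item/Bin stand-ins and the LexicographicLess name are mine, not your real classes):

#include <algorithm>
#include <iostream>
#include <vector>

struct Item
{
	double Duzina;
	Item(double d) : Duzina(d) {}
};

struct Bin
{
	std::vector<Item> vItems;
};

struct LexicographicLess
{
	bool operator () (const Bin& left, const Bin& right) const
	{
		if(left.vItems.size() != right.vItems.size())
			return left.vItems.size() < right.vItems.size();
		for(unsigned long int i = 0; i < left.vItems.size(); ++i)
			if(left.vItems[i].Duzina != right.vItems[i].Duzina)
				return left.vItems[i].Duzina < right.vItems[i].Duzina;
		return false; // equal bins: neither is less
	}
};

int main()
{
	// the sample data from this thread
	const double data[5][4] = {
		{ 2.1, 1.3, 1.0, 0.2 },
		{ 9.5, 4.8, 3.0, 1.6 },
		{ 2.1, 1.3, 1.0, 0.2 },
		{ 9.5, 4.8, 3.0, 1.6 },
		{ 8.4, 6.2, 3.3, 2.2 }
	};

	std::vector<Bin> vBIN(5);
	for(int b = 0; b < 5; ++b)
		for(int i = 0; i < 4; ++i)
			vBIN[b].vItems.push_back(Item(data[b][i]));

	std::sort(vBIN.begin(), vBIN.end(), LexicographicLess());

	// duplicate bins end up adjacent: 2.1..., 2.1..., 8.4..., 9.5..., 9.5...
	for(unsigned long int b = 0; b < vBIN.size(); ++b)
	{
		for(unsigned long int i = 0; i < vBIN[b].vItems.size(); ++i)
			std::cout << vBIN[b].vItems[i].Duzina << ' ';
		std::cout << '\n';
	}
}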

But if you need robustness, you will need to do much more.

Regards
First, thanks for your help so far and for your time.
I am not a native English speaker, so I might have difficulty explaining what I need.

This is related to my previous topic about the carpenter & beams problem, with which you helped me.


Depends on what those quantities represent...

It represents the lengths that need to be cut from a beam. I'll put in some more code to clear this up:
// in this case this represents a length to cut
class Item
{
public:
	double Duzina; // is length
	... // and other methods/operators
};

// this represents a beam
class Bin
{
public:
	double CurrentLength; // sum of lengths (Duzina)
	std::vector<Item> vItems;
	... // and other methods/operators
};


Before sorting the Bins, I sorted each Bin.vItems vector (longest first: 9.3, 6.5, 4.1, 2.0...),
and then tried to sort vBIN, which produced the error.
I need some sort of "equals operator" on doubles with some tolerance, because this is a real-world problem and values that aren't exactly equal might still need to be treated as equal.
It's quite late now, so I'll try your "naive" solution tomorrow.
Thanks again