A question on using "++" and "+="

The thing is that beginners (like myself) should avoid using those expressions if they are not really necessary; first you need to learn how they work.
justAbeginner, sorry, but you translated it in Google Translate? I have some trouble believing that. What language was this in originally? Czech?

And what was the name of the guy who wrote it?

-Albatross
It was in Croatian, and I did translate it with Google...
(you were close with the Czech :P)

His name is Dr. Željko Jurić; he is the "programming techniques" professor at the Faculty of Electrical Engineering and Computer Science in Sarajevo.

No, I believe the part where you said you translated it with Google. It was shabby enough. Screw-up on my part. Ah... when will I ever manage to get the English language straight in my head?

I checked this guy's name, and he has some impressive credentials. That said, the last thing that's needed to convince me he's right is an experiment...

-Albatross
I don't really get what you are trying to say...
Albatross
You need to back off. Though badly translated (as is always the case with machine translation), the text is correct, though a bit scattered (it ranges across a wide body of theory without being careful to distinguish where the underlying assumptions change).

The problem is that justAbeginner has misunderstood some of the information given to him. For example
the thing is that in some complex calculations such as b = a++ * c you do not know what will happen.
The thing is that that calculation is precisely defined; the value of b is deterministic. Were this not the case, no program could be reasoned about, and programming would be just luck instead of careful thought.

What is not defined is when the variable a's value is actually modified by the computer: before, during, or after the calculation is finished and assignment to b is done. In this particular calculation, it makes absolutely no difference to the programmer -- so long as everything is properly completed before the next line of code begins.
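
To make that concrete (the types and starting values here are my own assumptions, not from the original post), a minimal sketch:

#include <iostream>

int main() {
    int a = 3, c = 5;
    int b = a++ * c;  // the old value of a (3) is used in the multiplication;
                      // the increment of a is a side effect of the expression
    std::cout << b << ' ' << a << '\n';  // prints "15 4" with any conforming compiler
}

The increment may physically happen at any point during the statement, but the value used in the multiplication is always the old one, so b is the same everywhere.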

When it does make a difference is when you endeavor to apply multiple side effects to the same object between two adjacent sequence points, such as:
b = a++ * ++a;
You cannot know what b's value will be because you are relying on a's value during a time when its value is not clearly defined -- remember, the compiler can modify a at any point before, during, or after the calculation. And since it can be modified that way, there is no way for the computer to reasonably know what values to use in the calculation.
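
As a deliberately broken sketch (again assuming a is an int starting at 3):

#include <iostream>

int main() {
    int a = 3;
    int b = a++ * ++a;  // undefined behaviour: a is modified twice between
                        // adjacent sequence points
    std::cout << b << ' ' << a << '\n';  // any output (or none) is permitted;
                                         // different compilers, or even different
                                         // builds, may disagree
}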

Hope this helps.
Duoas

thanks :)
Disclaimer:
After I realized my mistake earlier (my mental parser for the English language shuts down at about 8:30), I would like to point out that I wasn't attempting to bash justAbeginner, only Google Translate for its successive failures to correctly translate... stuff. I do appreciate the fact that you attempted to save either the theory or justAbeginner, whichever you perceived as being bashed.

As for the theory, once I manage to get all of it through my head, I will try to disprove it. I do this for all theories, no exceptions; I'm a scientist as well as an engineer. If I cannot disprove it in a reasonable amount of time, I will accept it... For your example, is a defined to be an integer, or has it only been declared but not yet defined? If it has been defined as an integer, then at least on i686-darwin10 using GCC 4.2.1, the result is (a+2)*(a+1).

-Albatross

EDIT: P.S.- Are you Croatian, justAbeginner? Just wondering.
Nope, I'm Bosnian, just another student...

This isn't theory. It is fact. The C++ language standard states quite explicitly that the behavior is undefined when modifying (and accessing) the same object more than once between adjacent sequence points. Hence, my helpful link way back in post #4 of this thread:
http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.15

By "undefined" is meant that the compiler can behave however it wants to accomplish its goal. Hence, only a specific compiler can be said to behave a specific way. In the case of the GCC on every platform the compiler is designed to have a non-deterministic behavior -- meaning that the next time you compile it on an i686-darwin10 using GCC 4.2.1, it could be (a+1)*(a+2), and the next time (a+2)*(a+2). That is a specific feature of a specific compiler. A simpler compiler may do the same thing each time.


The whole point of this behavior has absolutely nothing to do with the language. It has everything to do with how the language is implemented on specific hardware or in specific systems, which is a whole field of study in itself, and is usually not of interest to the programmer. C and C++ (and other derived languages) are unique in that the issue does creep into the programmer's domain, and so the language standard is careful enough to say, "Don't do that!"


Further, my response was not to a "perceived" attack, but to your actual aggressive responses to justAbeginner's posts. Rather than simply "bash" and complain, why not offer real help: explain to justAbeginner where he was misunderstanding the text.

It was not, and is not, my intention to "bash" you, only to ask you to calm down a bit.


Hope this helps.