Methods have preconditions and postconditions. When the precondition of a method is not met, there is a choice between handling that error (e.g. throwing an exception, or returning a bool indicating whether the operation succeeded) and not handling it, which leads to undefined behaviour.
Good software engineering practice says a class should have low coupling, i.e. be as independent of other classes as possible.
Now my question is: is it better to handle that error or not?
The way I see it, handling the error would be more elegant, but at the cost of efficiency, and there might be duplication of error checking among the different classes (since each class is treated independently, no assumptions can be made about whether the value a class is holding is valid).
Typically, an error handling policy (exception, error code, halt, etc.) is put into place for a given module. The policy should only be changed at module boundaries.
There is no single "best practice" as to which is appropriate. C code doesn't understand exceptions, for example.
As far as coupling goes, one thing about propagating errors (error codes or exceptions) is that their types might have to be accessible to callers (increasing coupling). For exceptions, I believe inheriting (virtually) from std::exception is the norm. I'd like to hear more about this myself, because while you'd be able to "ignorantly" catch exceptions by the base class, you probably can't do anything with them (aside from printing an error message). Unfortunately, I have no professional experience with exceptions -- they were forbidden in my past positions.
Undefined behavior is not your friend. If a function cannot perform its task because a precondition is not met, it must inform the caller of that fact. The penalty for not doing so is that in a large application, when something goes wrong, you'll spend days tracking it down to the fact that someone passed a bad parameter to a function, which then silently did nothing at all but pretended it worked.
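A minimal sketch of that point (the class and its names are hypothetical): the caller is told that the precondition failed instead of the call silently pretending it worked.

```cpp
#include <stdexcept>

class Account {                 // hypothetical example class
    double balance_;
public:
    Account() : balance_(0.0) {}

    // Precondition: 0 < amount <= balance. Violations are reported
    // to the caller rather than silently ignored.
    void withdraw(double amount)
    {
        if (amount <= 0.0)
            throw std::invalid_argument("withdraw: amount must be positive");
        if (amount > balance_)
            throw std::runtime_error("withdraw: insufficient funds");
        balance_ -= amount;
    }

    void deposit(double amount) { balance_ += amount; }
};
```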
You are right that there will be error checking at multiple levels if there are nested function calls going on. But the recovery at each step will be slightly different. Besides, the penalty for checking for the error is simply a comparison of a return value (or maybe catching an exception -- see my response to the OP's other post here: http://www.cplusplus.com/forum/general/36402/).
Software must always be in a coherent, known state. If you call a member function of an object and that member function cannot do what it is supposed to do, it needs to leave the object in a well-defined state. Perhaps it leaves it unchanged. This goes hand-in-hand with the levels of exception guarantees (http://en.wikipedia.org/wiki/Exception_guarantees). You should always strive for the no-throw guarantee (i.e., nothing can go wrong) or the strong guarantee. The basic guarantee is terrible, because essentially it leaves you with a time bomb of an object -- you can't even safely destroy it.
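For instance, one common way to get the strong guarantee is to do all of the throwing work on a copy and commit with a non-throwing swap (a sketch with a hypothetical class):

```cpp
#include <vector>

class Widget {                          // hypothetical example class
    std::vector<int> data_;
public:
    // Strong guarantee: either the whole replacement succeeds,
    // or *this is left exactly as it was.
    void replace_data(const std::vector<int>& src)
    {
        std::vector<int> tmp(src);      // may throw (allocation/copy)
        data_.swap(tmp);                // no-throw commit
    }
};
```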
Best practice is to prevent as much undefined behavior as is possible. Validate all preconditions and post-conditions. Always maintain class invariants.
However, there are times when a class must meet certain performance criteria that rule out precondition and postcondition checks. Good examples here are the STL containers. The cases where the contract cannot be verified need to be clearly spelled out for the user of that class.
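One common compromise (a sketch, not how any particular STL implementation does it) is to document the precondition and verify it only in debug builds, so release builds pay nothing:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Precondition: i < v.size(). Checked only when NDEBUG is not defined,
// mirroring the unchecked-by-contract style of operator[].
inline int& unchecked_at(std::vector<int>& v, std::size_t i)
{
    assert(i < v.size());   // compiled out in release builds
    return v[i];
}
```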
Error codes and exceptions are essentially different beasts, so you must try to distinguish between them.
Error codes are for events that inhibit the main operation but are mild and probably common. They are very close to a class of exceptions called runtime exceptions, but are supposed to be milder. It is all in the details though; some people will even say it is subjective. I mean, if the user supplies you with some file name and you try to open that file, and there is no file with that name, then fine -- this is just an error, you will report it to the user, and overall all is good. On the other hand, if you try to open some resource that must come with every distribution of the program and it is not there, then this is not fine. This is a rare event and it causes major damage to the functionality of your program, so it is a runtime exception. So how do you approach the problem? Well, why not create two interfaces: OpenUserFile, which returns an error code, and OpenResourceFile, which throws an exception.
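A rough sketch of those two interfaces (the function names come from the paragraph above; the details are only an illustration):

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// A missing user-supplied file is an ordinary, expected error:
// report it through the return value and let the caller decide.
bool OpenUserFile(const std::string& name, std::ifstream& out)
{
    out.open(name.c_str());
    return out.is_open();
}

// A missing resource file ships with every installation, so its
// absence is exceptional: throw and let a higher level deal with it.
void OpenResourceFile(const std::string& name, std::ifstream& out)
{
    out.open(name.c_str());
    if (!out.is_open())
        throw std::runtime_error("missing resource file: " + name);
}
```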
Errors are supposed to be checked immediately (using return values or other facilities) after executing the respective operation. Generally speaking, you may or may not delegate the error to higher levels on the call stack, depending on how your algorithm handles the condition. As I said, it is supposed to be something mild, so you may recover on the same level. An error may turn into an exception if the mild condition of the error is exceptional in the context of your operation's assumptions. For example, OpenFile may return an error if the file is missing, and OpenResourceFile may turn the result from OpenFile into an exception.
Exceptions correspond to unlikely events that obstruct the operation of the software. There is no point in checking for them at each level, because it is hard to recover from them immediately. This is why they are designed as a multi-level transfer of control flow. Runtime exceptions are those that occur under stress from external factors, while logic exceptions occur due to design flaws. You can try to partially recover the system from both, but you must log logic exceptions somewhere and fix them.
Error and exception handling are vital. A project can be saved, even resurrected, by improvements in error feedback; the system will converge to stability much faster. Graceful degradation is usually treated as secondary by some projects, but you must have some, or a system with the slightest bug will be totally dysfunctional and the financial penalties will be much more severe. Working with exception handling mechanisms is hard. There are different types of reactions, called "guarantees", that must be declared in the specification. They determine the consistency of the system after failure.
That being said, exception handling can increase coupling, not decrease it. Exceptions are not output; they are side effects. And the problem with side effects is that they are (generally) not part of the output specification, but exactly "side" effects of the implementation. You cannot say "this function will never throw an overflow exception". What if tomorrow the implementation of the function changes and now it can throw one -- would you change the specification? What if the implementation of some supporting function changes -- do you change the specification of exceptions again? If so, the interface will be altered every day, not to mention that it is hard to trace all those dependencies. Also, what if a function may throw some arithmetic exception? Does that mean you will handle arithmetic exceptions specifically? What will you do -- try to disable the FPU?
That is why, IMO, exceptions must be handled broadly, without discriminating on the exact point of origin or the exact cause. You must have a proper hierarchy set up for this to work. It must allow you to discriminate when you want to (because there is a specific recovery mechanism for some circumstances) and to generalize when you want to handle entire classes of situations. It is not so important what the nature of the operation that causes the exception is; every exception must be designed with a specific purpose. There is no point in having a missing-file exception, as I said before. But having a missing-resource-file exception is different, because now you know that the condition is permanent (or at least until the installation is repaired). In other words, the infrastructure requires a design separate from any individual function.
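A sketch of what such a hierarchy might look like (the names are hypothetical): deriving from std::runtime_error lets callers catch narrowly when they have a specific recovery, and broadly -- even as std::exception -- when they only want to log and move on.

```cpp
#include <stdexcept>
#include <string>

struct AppError : std::runtime_error {               // broad application base
    explicit AppError(const std::string& msg) : std::runtime_error(msg) {}
};

struct MissingResourceError : AppError {              // permanent until reinstall
    explicit MissingResourceError(const std::string& path)
        : AppError("missing resource file: " + path) {}
};

// Usage:
//   try { LoadResources(); }                         // hypothetical function
//   catch (const MissingResourceError& e) { /* specific recovery */ }
//   catch (const std::exception& e)       { /* generic: log e.what() */ }
```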
And, as a closing remark: error handling mechanisms were implemented differently in C. They used methods that always incurred a penalty on the running code, whether errors were present or not. C++, on the other hand, can use techniques that supposedly handle the issue without penalties in the absence of exceptions -- the so-called zero-cost exception handling model, or something like that. I think it uses some form of analysis of the call stack, but I am not knowledgeable on the matter. The theory is that performance suffers only when exceptions actually occur. But since exceptions are (watch for the pun) exceptional, the amortized cost should not be much.
Regards
P.S. I am still trying to understand exceptions myself. Side effects in general do not go well with input-output specification methods, so I still have an unresolved mental issue with this.
P.S. 2 There are certain error situations that cannot be detected without extreme effort from the called function -- for example, whether the vector argument passed in is sorted. Consequently, unless you are dealing with an unsecured entry point to the system, you can leave the behavior undefined. Usually, to counteract this, you can catch the error through its consequences in the surrounding code. Alternatively, you can have types that reflect the state of the vector, use the state design pattern, etc.
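As an illustration of that last idea (a hypothetical wrapper type): encode "this vector is sorted" in a type, so the precondition cannot be violated in the first place.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

class SortedVector {                    // hypothetical wrapper type
    std::vector<int> v_;
public:
    // The only way to construct one is to sort, so "the data is sorted"
    // becomes a property of the type rather than an unchecked precondition.
    explicit SortedVector(std::vector<int> v) : v_(std::move(v))
    {
        std::sort(v_.begin(), v_.end());
    }

    bool contains(int key) const
    {
        return std::binary_search(v_.begin(), v_.end(), key);
    }
};
```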
You cannot validate preconditions in general, for various reasons:
- It could be very slow, like testing whether an array is sorted before performing binary search, or whether some graph is acyclic or has no negatively weighted edges, etc. (see the sketch after this list).
- It is not always possible. For example, testing whether a pointer points at an existing object (not dangling), or at an object of the appropriate type when only the client knows the dynamic type and there is no RTTI. Also, testing whether the pointer points to dynamically allocated memory, or whether it was allocated on the heap or the free store.
- Sometimes the preconditions and postconditions concern entire systems.
- There are implicit requirements, like having enough resources or starting from a consistent state. The former can be considered an implementation-related issue, but it makes all the difference in some cases -- say, between quicksort and merge sort.
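To illustrate the first point: verifying the "sorted" precondition is a linear scan, which dwarfs the logarithmic search it guards, so at best it belongs in a debug-only check (a sketch using assert):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

bool checked_binary_search(const std::vector<int>& v, int key)
{
    // O(n) verification of the precondition vs. the O(log n) search itself;
    // assert() disappears in release builds, leaving the behavior undefined
    // for unsorted input, exactly as discussed above.
    assert(std::is_sorted(v.begin(), v.end()));
    return std::binary_search(v.begin(), v.end(), key);
}
```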
Even if you did check all those requirements, you still can't promise fulfill-or-die behavior, because not all assumptions are centered around the function's entry point. Do you check the result of every call you make?
In other words, you have to accept the fact that inconsistencies will travel through the system. Here are some things that counteract this:
- Encapsulation. It helps identify the source of a problem based on the type of the corrupted data. With OOP, you can also provide mandatory initialization.
- Use unit tests. Perform the expensive checks then.
- Enforce a programming style: set pointers to null after freeing the objects, and set variables that will be overwritten by functions to a (preferably) impossible output value before the call. None of this saves you entirely, but at least some more errors may be triggered (see the sketch after this list).
- Just-in-time checks -- for bounds, for null pointers, for division by zero, etc.
- Perform checks in the later stages of the algorithm. For example, when an array is not sorted, or a graph is not acyclic or has negative edges, this can show up naturally during the computation.
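A small sketch of two of those conventions (nulling a pointer after freeing it, plus a just-in-time null check at the point of use); the function names are made up for the example:

```cpp
#include <cassert>

void release(int*& p)
{
    delete p;
    p = nullptr;              // stale uses now hit a detectable null
}

int read_value(const int* p)
{
    assert(p != nullptr);     // just-in-time check for a null pointer
    return *p;
}
```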
Error discovery and handling must work together. There is no point in handling errors if you don't discover them, and there is no point in discovering them if you are going to ignore them. Error detection helps you: it allows graceful degradation (smaller financial losses and better customer satisfaction) and faster convergence (errors are discovered sooner).
@simeonz, you clearly need to work more on learning than rationalizing the world into order. If you don't know an answer, don't give one.
All properly functioning systems work by satisfying the appropriate requirements specifications. Whether they are informal (as in, "here's my hobbyist program") or formal (as in, "because I have a job doing this stuff"), failure to satisfy and validate the appropriate requirements means that your system doesn't do what it is supposed to do and it will fail when that happens.
Competent programming always puts checks in place to make sure bad things don't happen. Even within the same system different checks may be performed, and different requirements may apply.
This is actually quite normal.
@unregistered
What do you mean by "handle the error"?
Where the error occurs, where it is noticed, and where it is handled properly are all different things. Part of the design of moderate to large systems is how to incorporate error handling, and the methodology used may differ between projects.
For example, the STL's random access containers provide two indexing methods: operator[]() and at(). The first does not perform range checking while the second does. Use the method appropriate to your requirements. That is, make sure you do verify conditions, but the choice of when and how you do it is not a black and white thing (unless your functional requirements say it is, that is...).
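For completeness, a small example of the difference (assuming a C++11 compiler for the brace initialization):

```cpp
#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};

    int a = v[1];               // unchecked: out-of-range here is undefined behavior
    std::cout << a << '\n';

    try {
        int b = v.at(10);       // checked: throws std::out_of_range
        std::cout << b << '\n';
    } catch (const std::out_of_range& e) {
        std::cerr << "index error: " << e.what() << '\n';
    }
}
```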
you clearly need to work more on learning than rationalizing the world into order
Help me learn then. With what exactly do you disagree? I claim:
- that checking all preconditions is generally not feasible.
- that there are other checks, not related to preconditions, that verify the consistency of the system.
- that error propagation has certain complexities, like interactions and effects that go beyond the scope of the function's specification.
However, I want to make one thing clear: I did not advise against run-time checking of anything. I support the notion.
If you don't know an answer, don't give one.
You think that my opinion was not justified? Would you care to be specific?
All properly functioning systems work by satisfying the appropriate requirements specifications.
... Even within the same system different checks may be performed, and different requirements may apply.
I agree, more or less. Is that supposed to be an argument against something I said?