Tutorial: Which school of reporting function failures is better?



Question:

Very often you have a function which, for given arguments, can't generate a valid result or can't perform some task. Apart from exceptions, which are not so commonly used in the C/C++ world, there are basically two schools of reporting invalid results.

The first approach mixes valid returns with a value that does not belong to the codomain of the function (very often -1) and indicates an error:

    int foo(int arg) {
        if (everything fine)
            return some_value;
        return -1; //on failure
    }

The second approach is to return a function status and pass the result through a reference:

    bool foo(int arg, int & result) {
        if (everything fine) {
            result = some_value;
            return true;
        }
        return false;  //on failure
    }

Which way do you prefer, and why? Does the additional parameter in the second method bring notable performance overhead?


Solution:1

Don't ignore exceptions; they are there for exceptional and unexpected errors.

However, just answering your points, the question is ultimately subjective. The key issue is to consider what will be easier for your consumers to work with, whilst quietly nudging them to remember to check error conditions. In my opinion, this is nearly always the "return a status code, and put the value in a separate reference" style, but this is entirely one man's personal view. My arguments for doing this...

  1. If you choose to return a mixed value, then you've overloaded the concept of return to mean "Either a useful value or an error code". Overloading a single semantic concept can lead to confusion as to the right thing to do with it.
  2. You often cannot easily find values in the function's codomain to co-opt as error codes, and so need to mix and match the two styles of error reporting within a single API.
  3. There's almost no chance that, if they forget to check the error status, they'll use an error code as if it were actually a useful result. One can return an error code, and stick some null-like concept in the return reference that will explode easily when used. If one uses the error/value mixed return model, it's very easy to pass it into another function in which the error part of the codomain is valid input (but meaningless in the context).

Arguments for returning the mixed error code/value model might be simplicity - no extra variables floating around, for one. But to me, the dangers are worse than the limited gains - one can easily forget to check the error codes. This is one argument for exceptions - you literally can't forget to handle them (your program will flame out if you don't).
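
(For illustration, here is a minimal sketch of the exception-based alternative; foo, the "everything fine" test and the returned value are placeholders carried over from the question, not a real API.)

    #include <stdexcept>

    // Throws instead of returning a sentinel or a status flag, so a caller
    // cannot silently treat a failure as if it were a valid result.
    int foo(int arg) {
        if (arg >= 0) {                  // stand-in for "everything fine"
            return arg * 2;              // stand-in for some_value
        }
        throw std::runtime_error("foo: no valid result for this argument");
    }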


Solution:2

boost::optional is a brilliant technique. An example will assist.

Say you have a function that returns a double and you want to signify an error when the result cannot be calculated.

    double divide(double a, double b) {
        return a / b;
    }

What to do in the case where b is 0?

    boost::optional<double> divide(double a, double b) {
        if (b != 0) {
            return a / b;
        } else {
            return boost::none;
        }
    }

Use it like below:

    boost::optional<double> v = divide(a, b);
    if (v) {
        // Note the dereference operator
        cout << *v << endl;
    } else {
        cout << "divide by zero" << endl;
    }


Solution:3

The idea of special return values completely falls apart when you start using templates. Consider:

    template <typename T>
    T f( const T & t ) {
        if ( SomeFunc( t ) ) {
            return t;
        }
        else {
            // error path
            return ???;  // what can we return?
        }
    }

There is no obvious special value we can return in this case, so throwing an exception is really the only way. Returning boolean types which must be checked and passing the really interesting values back by reference leads to a horrendous coding style.
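
A minimal sketch of the exception-throwing version (SomeFunc is the placeholder predicate from the snippet above, declared here only so the example is self-contained):

    #include <stdexcept>

    // Placeholder for the SomeFunc predicate used in the snippet above.
    template <typename T>
    bool SomeFunc(const T &) { return true; }

    template <typename T>
    T f(const T & t) {
        if (SomeFunc(t)) {
            return t;
        }
        // There is no in-band "special value" for an arbitrary T,
        // so report the failure out of band.
        throw std::invalid_argument("f: argument rejected");
    }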


Solution:4

Quite a few books, etc., strongly advise the second, so you're not mixing roles and forcing the return value to carry two entirely unrelated pieces of information.

While I sympathize with that notion, I find that the first typically works out better in practice. For one obvious point, in the first case you can chain the assignment to an arbitrary number of recipients, but in the second if you need/want to assign the result to more than one recipient, you have to do the call, then separately do a second assignment. I.e.,

    account1.rate = account2.rate = current_rate();

vs.:

    set_current_rate(account1.rate);
    account2.rate = account1.rate;

or:

    set_current_rate(account1.rate);
    set_current_rate(account2.rate);

The proof of the pudding is in the eating thereof. Microsoft's COM functions (for one example) chose the latter form exclusively. IMO, it is due largely to this decision alone that essentially all code that uses the native COM API directly is ugly and nearly unreadable. The concepts involved aren't particularly difficult, but the style of the interface turns what should be simple code into an almost unreadable mess in virtually every case.

Exception handling is usually a better way to handle things than either one though. It has three specific effects, all of which are very good. First, it keeps the mainstream logic from being polluted with error handling, so the real intent of the code is much more clear. Second, it decouples error handling from error detection. Code that detects a problem is often in a poor position to handle that error very well. Third, unlike either form of returning an error, it is essentially impossible to simply ignore an exception being thrown. With return codes, there's a nearly constant temptation (to which programmers succumb all too often) to simply assume success, and make no attempt at even catching a problem -- especially since the programmer doesn't really know how to handle the error at that part of the code anyway, and is well aware that even if he catches it and returns an error code from his function, chances are good that it will be ignored anyway.
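
A rough sketch of that decoupling, reusing the rate example from above (Account, current_rate and the failure condition are all stand-ins, not a real API):

    #include <iostream>
    #include <stdexcept>

    struct Account { double rate = 0.0; };     // minimal stand-in type

    // Detection: the low-level code only reports that something went wrong.
    double current_rate() {
        bool rate_available = false;           // stand-in for a real lookup
        if (!rate_available) {
            throw std::runtime_error("current rate is unavailable");
        }
        return 4.2;
    }

    // Handling: a caller further up decides what to do about the problem,
    // and the chained assignment from the example above still reads naturally.
    void update_accounts(Account & account1, Account & account2) {
        try {
            account1.rate = account2.rate = current_rate();
        } catch (const std::exception & e) {
            std::cerr << "could not update rates: " << e.what() << '\n';
        }
    }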


Solution:5

In C, one of the more common techniques I have seen is that a function returns zero on success and non-zero (typically an error code) on error. If the function needs to pass data back to the caller, it does so through a pointer passed as a function argument. This can also make functions that return multiple pieces of data back to the user more straightforward to use (vs. returning some data through the return value and some through a pointer).

Another C technique I see is to return 0 on success; on error, -1 is returned and errno is set to indicate the error.
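
A minimal sketch of the first pattern (parse_port is a hypothetical function, not a standard API): zero means success, a non-zero error code describes the failure, and the actual result travels back through a pointer.

    #include <errno.h>
    #include <stdlib.h>

    /* Hypothetical function: returns 0 on success or a non-zero error code
       on failure; the parsed value is written through the out pointer. */
    int parse_port(const char *text, int *out_port) {
        char *end = NULL;
        long value;

        errno = 0;
        value = strtol(text, &end, 10);
        if (errno != 0 || end == text || *end != '\0') {
            return EINVAL;               /* not a valid number */
        }
        if (value < 1 || value > 65535) {
            return ERANGE;               /* out of range for a port */
        }
        *out_port = (int)value;
        return 0;                        /* success */
    }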

The techniques you presented each have pros and cons, so deciding which one is "best" will always be (at least partially) subjective. However, I can say this without reservations: the technique that is best is the technique that is consistent throughout your entire program. Using different styles of error reporting code in different parts of a program can quickly become a maintenance and debugging nightmare.


Solution:6

There shouldn't be much, if any, performance difference between the two. The choice depends on the particular use. You cannot use the first if there is no appropriate invalid value.

If using C++, there are many more possibilities than these two, including exceptions and using something like boost::optional as a return value.


Solution:7

C traditionally used the first approach of coding magic values in valid results - which is why you get fun stuff like strcmp() returning false (=0) on a match.

Newer safe versions of a lot of the standard library functions use the second approach - explicitly returning a status.

And no, exceptions aren't an alternative here. Exceptions are for exceptional circumstances which the code might not be able to deal with - you don't raise an exception for a string not matching in strcmp().
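
To make that concrete, here is a small sketch contrasting the two conventions; copy_string is a hypothetical illustration of the explicit-status style, not a specific library function.

    #include <stdio.h>
    #include <string.h>

    /* Explicit-status style: the return value only says whether the call
       succeeded; the result goes into the caller-supplied buffer. */
    int copy_string(char *dst, size_t dst_size, const char *src) {
        size_t needed = strlen(src) + 1;
        if (needed > dst_size) {
            return -1;              /* failure: destination too small */
        }
        memcpy(dst, src, needed);
        return 0;                   /* success */
    }

    int main(void) {
        /* Magic-value style: strcmp() returns 0 (i.e. "false") on a match. */
        if (strcmp("abc", "abc") == 0) {
            puts("strings match");
        }

        char buf[4];
        if (copy_string(buf, sizeof buf, "too long for buf") != 0) {
            puts("copy failed");
        }
        return 0;
    }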


Solution:8

It's not always possible, but regardless of which error-reporting method you use, the best practice is to design a function, whenever possible, so that it does not have failure cases, and when that's not possible, to minimize the possible error conditions. Some examples:

  • Instead of passing a filename deep down through many function calls, you could design your program so that the caller opens the file and passes the FILE * or file descriptor. This eliminates the need to check for "failed to open file" and report it to the caller at each step.

  • If there's an inexpensive way to check (or find an upper bound) for the amount of memory a function will need to allocate for the data structures it will build and return, provide a function to return that amount and have the caller allocate the memory (see the sketch after this list). In some cases this may allow the caller to simply use the stack, greatly reducing memory fragmentation and avoiding locks in malloc.

  • When a function is performing a task for which your implementation may require large working space, ask if there's an alternate (possibly slower) algorithm with O(1) space requirements. If performance is non-critical, simply use the O(1) space algorithm. Otherwise, implement a fallback case to use it if allocation fails.

These are just a few ideas, but applying the same sort of principle all over can really reduce the number of error conditions you have to deal with and propagate up through multiple call levels.
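
As a minimal sketch of the "query the size, let the caller allocate" idea from the second bullet (build_table and build_table_size are hypothetical names, not a real API):

    #include <stdlib.h>

    /* Hypothetical API: report the space needed up front ... */
    size_t build_table_size(size_t n_entries) {
        return n_entries * sizeof(int);
    }

    /* ... and fill a caller-supplied buffer, so this function itself
       has no allocation-failure path of its own. */
    void build_table(int *table, size_t n_entries) {
        for (size_t i = 0; i < n_entries; ++i)
            table[i] = (int)(i * i);         /* stand-in for real contents */
    }

    void caller(void) {
        int small[16];                       /* small case fits on the stack */
        build_table(small, 16);

        size_t big_n = 100000;
        int *big = (int *)malloc(build_table_size(big_n));
        if (big != NULL) {                   /* allocation failure stays in one place */
            build_table(big, big_n);
            free(big);
        }
    }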


Solution:9

For C++ I favour a templated solution that prevents the fugliness of out parameters and the fugliness of "magic numbers" in combined answers/return codes. I've expounded upon this while answering another question. Take a look.

For C, I find the fugly out parameters less offensive than fugly "magic numbers".
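
The linked answer isn't reproduced here, but a minimal sketch of one such templated result type (the names are illustrative, not the author's actual code, and it assumes T is default-constructible) might look like this:

    #include <utility>

    // Carries either a value or an error code, so neither out parameters
    // nor in-band magic numbers are needed.
    template <typename T>
    class Result {
    public:
        static Result success(T value) { return Result(true, std::move(value), 0); }
        static Result failure(int error) { return Result(false, T(), error); }

        bool ok() const { return ok_; }
        const T & value() const { return value_; }
        int error() const { return error_; }

    private:
        Result(bool ok, T value, int error)
            : ok_(ok), value_(std::move(value)), error_(error) {}

        bool ok_;
        T value_;
        int error_;
    };

    Result<int> parse_digit(char c) {
        if (c >= '0' && c <= '9')
            return Result<int>::success(c - '0');
        return Result<int>::failure(1);      // illustrative error code
    }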


Solution:10

You missed a method: Returning a failure indication and requiring an additional call to get the details of the error.

There's a lot to be said for this.

Example:

    int count;
    if (!TryParse("12x3", &count))
      DisplayError(GetLastError());

edit

This answer has generated quite a bit of controversy and downvoting. To be frank, I am entirely unconvinced by the dissenting arguments. Separating whether a call succeeded from why it failed has proven to be a really good idea. Combining the two forces you into the following pattern:

    HKEY key;
    long errcode = RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key);
    if (errcode != ERROR_SUCCESS)
      return DisplayError(errcode);

Contrast this with:

    HKEY key;
    if (!RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key))
      return DisplayError(GetLastError());

(The GetLastError version is consistent with how the Windows API generally works, but the version that returns the code directly is how it actually works, due to the registry API not following that standard.)

In any case, I would suggest that the error-returning pattern makes it all too easy to forget about why the function failed, leading to code such as:

    HKEY key;
    if (RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key) != ERROR_SUCCESS)
      return DisplayGenericError();

edit

Looking at R.'s request, I've found a scenario where it can actually be satisfied.

For a general-purpose C-style API, such as the Windows SDK functions I've used in my examples, there is no non-global context for error codes to rest in, so we have no good alternative to using a global TLV that can be checked after failure.

However, if we expand the topic to include methods on a class, the situation is different. It's perfectly reasonable, given a variable reg that is an instance of the RegistryKey class, for a call to reg.Open to return false, requiring us to then call reg.ErrorCode to retrieve the details.

I believe this satisfies R.'s request that the error code be part of a context, since the instance provides the context. If, instead of a RegistryKey instance, we called a static Open method on RegistryKeyHelper, then the retrieval of the error code on failure would likewise have to be static, which means it would have to be a TLV, albeit not an entirely global one. The class, as opposed to an instance, would be the context.
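
A rough sketch of the instance-as-context idea (RegistryKey here is an illustrative C++ class, not the real Windows registry API):

    #include <string>

    // Illustrative class: the instance stores the error code from the last
    // failed operation, so success/failure and the reason stay separate.
    class RegistryKey {
    public:
        bool Open(const std::string & path) {
            if (path.empty()) {          // stand-in for a real failure condition
                error_code_ = 2;         // e.g. "not found"
                return false;
            }
            error_code_ = 0;
            return true;
        }

        int ErrorCode() const { return error_code_; }

    private:
        int error_code_ = 0;
    };

    void example() {
        RegistryKey reg;
        if (!reg.Open("")) {
            int why = reg.ErrorCode();   // the instance (context) holds the details
            (void)why;
        }
    }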

In both of these cases, object orientation provides a natural context for storing error codes. Having said that, if there is no natural context, I would still insist on a global, as opposed to trying to force the caller to pass in an output parameter or some other artificial context, or returning the error code directly.


Solution:11

I think there is no right answer to this. It depends on your needs, on the overall application design etc. I personally use the first approach though.


Solution:12

I think a good compiler would generate almost the same code, with the same speed. It's a personal preference. I would go with the first.


Solution:13

If you have references and the bool type, you must be using C++. In which case, throw an exception. That's what they're for. For a general desktop environment, there's no reason to use error codes. I have seen arguments against exceptions in some environments, like dodgy language/process interop or tight embedded environment. Assuming neither of those, always, always throw an exception.


Solution:14

Well, the first one will compile in both C and C++, so it's fine for portable code. The second one, although more "human readable", leaves you never knowing truthfully which value the program is returning; specifying it as in the first case gives you more control. That's what I think.


Solution:15

I prefer using a return code for the type of error that occurred. This helps the caller of the API take appropriate error-handling steps.

Consider the GLib APIs, which most often return the error code and the error message along with the boolean return value.

Thus when you get a negative return from a function call, you can check the context from the GError variable.

A bare failure in the second approach you describe will not help the caller take the correct action. It's a different case when your documentation is very clear, but otherwise it will be a headache to figure out how to use the API call.
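
For instance, GLib's g_file_get_contents follows this pattern (a sketch assuming the standard GLib API; only the failure branch is fleshed out):

    #include <glib.h>
    #include <stdio.h>

    void read_config(const char *path) {
        gchar *contents = NULL;
        gsize length = 0;
        GError *error = NULL;

        /* Returns FALSE on failure and fills in 'error' with a code and message. */
        if (!g_file_get_contents(path, &contents, &length, &error)) {
            fprintf(stderr, "reading %s failed: %s\n", path, error->message);
            g_error_free(error);
            return;
        }

        /* ... use contents ... */
        g_free(contents);
    }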


Solution:16

For a "try" function, where some "normal" type of failure is reasonably expected, how about accepting either a default return value or a pointer to a function which accepts certain parameters related to the failure and returns such a value of the expected type?


Solution:17

Apart from doing it the correct way, which of these two stupid ways do you prefer?

I prefer to use exceptions when I'm using C++ and need to throw an error, and in general, when I don't want to force all calling functions to detect and handle the error. I prefer to use stupid special values when there is only one possible error condition, and that condition means there is no way the caller can proceed, and every conceivable caller will be able to handle it, which is rare. I prefer to use stupid out parameters when modifying old code and for some reason I can change the number of parameters but not change the return type or identify a special value or throw an exception, which so far has been never.

Does additional parameter in the second method bring notable performance overhead?

Yes! Additional parameters cause your 'puter to slow down by at least 0 nanoseconds. Best to use the "no-overhead" keyword on that parameter. It's a GCC extension __attribute__((no-overhead)), so YMMV.

