Exceptions Are For Exceptional Circumstances Is Not A Value-Add Answer
In the world of programming, there are a lot of "answers" that feel like they lack any substantial value-add. "It depends," for example, is a response that typically doesn't clarify anything. Another response that sticks in my craw is that "exceptions are for exceptional circumstances." This is a response often given to the question, "When should I throw an exception?" But it doesn't mean anything; it does nothing more than answer the question with another, unstated question, "What is exceptional?"
This morning, I came across an old blog post by Scott Hanselman that I believe provides real, actual, value-add thoughts about exception management. The one that adds the most clarity for me is:
If your functions are named well, using verbs (actions) and nouns (stuff to take action on) then throw an exception if your method can't do what it says it can... For example, SaveBook(). If it can't save the book - it can't do what it promised - then throw an exception. That might be for a number of reasons.
This is probably the most concrete advice I've ever seen given on the topic.
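Hanselman's SaveBook() rule might be sketched like this in JavaScript (the names `saveBook` and `BookSaveError` are hypothetical illustrations, not from his post): the method either does what its name promises, or it throws.

```javascript
// Hypothetical sketch: saveBook() either saves the book or throws.
// Callers never have to inspect a status flag to learn about failure.
class BookSaveError extends Error {
  constructor(message) {
    super(message);
    this.name = "BookSaveError";
  }
}

function saveBook(book, store) {
  if (!book || !book.title) {
    // The method can't do what its name says it can - so it throws.
    throw new BookSaveError("Cannot save a book without a title.");
  }
  store.set(book.title, book);
  return book;
}
```

The point is that "save" is a promise: a caller of `saveBook()` can assume success on a normal return, and handle the exceptional path separately.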
I know a lot of people will tell me that Exceptions are very costly and will hurt performance. But, honestly, this has never been a pain point for me. Maybe after I fix all the JavaScript memory leaks, browser repaint bottlenecks, database locking, thread exhaustion, and file IO contention problems in my application, I'll consider reducing the number of exceptions my application raises. Until then, I believe that the above advice is working out quite nicely.
Reader Comments
Some exceptions tend to scare end users. My end users are largely loan officers, clerks and data entry specialists (not techy at all). If I don't handle an exception from the database, for example, the nerdy column-does-not-allow-null and syntax-error-at-procedure-line-57 messages downright terrify my users. They're afraid they broke something and will be blamed for it.
So if I throw an exception, it's in plain English. If you put a little effort into your exception messages and details, it's better than a crash or silently doing nothing.
How you affect others is a good reason to throw an exception.
@WebManWalking,
Yeah, as much as possible, I'd like to try to shield the end-user from any "raw" error. When I throw errors, I try to either include a meaningful message; or, I try to catch it somewhere, based on Type, and then return a meaningful error message to the user.
I really like this definition!
One thing I notice a lot is methods that return a boolean for "success". I can understand how this might be needed in a REST API or something but the rest of the time I would rather have my methods return a meaningful value or nothing at all. I assume it will work. Otherwise it should raise an exception.
As you said, other code can handle exceptions in a way that the user isn't exposed to a raw error message.
If it fails, though, I want my system to get some information on the failure that I can use to make sure it doesn't fail again.
I had been trying to formulate a guideline in my head for this, and this wording really hits the nail on the head for me.
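The boolean-versus-exception contrast above can be sketched as follows (a hypothetical JavaScript example; the function names are made up for illustration):

```javascript
// Style 1: boolean "success" flag - failure is easy to silently drop,
// because nothing forces the caller to check the return value.
function tryDeleteUser(users, id) {
  if (!users.has(id)) {
    return false; // a caller may simply forget to check this
  }
  users.delete(id);
  return true;
}

// Style 2: exception - the happy path returns nothing meaningful,
// and failure is impossible to ignore.
function deleteUser(users, id) {
  if (!users.has(id)) {
    throw new Error(`No user with id ${id} to delete.`);
  }
  users.delete(id);
}
```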
@WebManWalking & @Ben: your exceptions should NEVER get to the UI anyhow. Exceptions are for developers. Error pages are for users. Indeed generally any developer-useful exception information would actually be a security consideration if bled to the user.
So if you're doing that: you're doing it wrong.
My chief use for exceptions is to mask side effects of unexpected behaviour in my code (my code's "fault"), and bubble back something useful to my user... where my user is *the developer* not the end user.
I would not bubble back a "division by zero" exception to my user if something was divided by zero because my code did it (say, an unexpected 0 was passed as an argument which is then used as a divisor). I'd catch that, and raise an InvalidArgumentException (perhaps with a message "argument x must be a positive integer").
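That translation might look like this as a JavaScript sketch (the `itemsPerPage` function is a hypothetical example of my own, not from the discussion):

```javascript
// Hypothetical sketch: translate a low-level failure into an exception
// that speaks the caller's language, instead of leaking "division by zero".
class InvalidArgumentException extends Error {
  constructor(message) {
    super(message);
    this.name = "InvalidArgumentException";
  }
}

function itemsPerPage(totalItems, pageCount) {
  if (!Number.isInteger(pageCount) || pageCount <= 0) {
    // Don't let the caller see the raw divide-by-zero side effect;
    // tell them what *they* did wrong.
    throw new InvalidArgumentException("pageCount must be a positive integer");
  }
  return Math.ceil(totalItems / pageCount);
}
```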
I use exceptions to *clarify* exceptional situations.
--
Adam
I am with Adam.
When I learnt programming back in the late 80's, that was the one thing that was drummed into my head. If your block of code can error, then you need to throw the right exception and handle it upstream.
Can you imagine clients connecting remotely, the application erroring, but the client never receiving the error and just hanging? The client needs to know that an error occurred, and the only way to do this is to throw an exception and provide the details the client needs to be satisfied.
Common sense dictates that you need to hide most exceptions in your code, which means using try/catch blocks more as well. Without that, as Adam states, you're not doing your job right.
@All,
Ben asked for a reason to throw an exception, and ameliorating a raw exception message by substituting a friendlier one is a reason. But here's an idea some of you haven't considered. Why does it have to be either/or?
You control your code. You set your own conventions that make for a more stable environment. Why not define your own response object that contains properties (or struct keys) for both plain-English and technical error messages? That way they travel together (and can be serialized together if you so choose). One message may ultimately find its way to the screen, the other to an email to the developers. That too could be part of your coding conventions.
If you know some stable categories of common errors and a wide customer base, maybe internationalize them to avoid scaring your Spanish-speaking audience too.
There's no reason the same error couldn't have multiple audiences.
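The dual-message convention might be sketched like this in JavaScript (`makeAppError` is a hypothetical name; the shape of the object is the point):

```javascript
// Hypothetical convention: one error object carries messages for both
// audiences, so they travel (and can be serialized) together.
function makeAppError(userMessage, technicalMessage) {
  return {
    userMessage,      // plain English, safe to show on screen
    technicalMessage, // detailed, emailed/logged for the developers
    timestamp: new Date().toISOString()
  };
}

const err = makeAppError(
  "We couldn't save your changes. Please try again.",
  "INSERT failed: column 'loan_id' does not allow NULL (procedure line 57)"
);
```

One property ultimately finds its way to the screen, the other to the developers' inbox, and both survive a round-trip through `JSON.stringify`.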
The easiest way not to let a specific error reach the end user is the way HTTP does it (500 Internal Server Error) or MSIE's generic "an error occurred" screen that helps no one. Do you *like* unhelpful emails that say "Your page doesn't work. Fix it!" with no details to go on? Then fine, at the time you actually know what you were doing that caused the error, don't propagate it. Just block it.
But if you want to encourage your users to report back something that gets you into the ballpark of where/what/why the error occurred, but NOT the raw message, you have to perform that translation yourself.
I have to live in my future. Part of that is having to support the code I write. So at original coding time, I like to put in some extra effort to make my future a happier place.
A well-designed app handles exceptions and errors to prevent app crashes.
For example
public function ReadAll( fileToRead )
{
    // Validate up front, outside the try, so this specific error
    // isn't swallowed and re-thrown by the generic catch below.
    if ( isNull( fileToRead ) || fileToRead == "" ) {
        Debug.Log( "You must pass a valid filename to use!" );
        throw( type = "myApplication.ReadAll", message = "Filename can't be null or empty" );
    }
    try {
        // Code block to read filestream
    } catch ( any e ) {
        Debug.Log( "Something happened reading file stream..." );
        throw( type = "myApplication.ReadAll", message = "Something interrupted the reading of the filestream" );
    }
    return something;
}
In this example, the user of the Application doesn't need to know a developer screwed up; the Application itself needs to know and handle it accordingly. I do this all the time with my Applications, although this is maybe not the best example of using exceptions. In a case like this, you could have central exception handling that logs or does what it needs to do. Provided that the code is allowed to bubble back up, not throwing exceptions and dealing with them is just bad practice in a lot of languages, period.
If done right, all the user of the Application will see is an error message that something went wrong, and they are still free to click on other aspects of the Application. If it's serious enough, you point blank log the frigging thing, but under all circumstances you must deal with exceptions and stop the Application from stopping on exceptions where possible.
Throwing a 500 server error in cases like this is stupid, to say the least.
It seems to me that we are all in agreement that exceptions shouldn't make it all the way to the user and that our applications shouldn't crash.
Yet the word "stupid" has made it into the conversation and the tone no longer seems friendly. I'm confused by how so much agreement seems to be generating so much heat.
I'll see myself out.
I think you're being a bit precious, Steve. If Andrew thinks something is stupid, why should he *not* say that? Why would you - indirectly - suggest he should not express his opinion?
Hardly any discussions which contain no dissenting opinions are worth actually having, IMO. They're just self-congratulatory circle-jerks.
Instead of shrinking away, why don't you weigh up what people have said, then offer your own conclusions? That'll help both you and the discussion. Don't be worried about disagreeing with people, and don't worry too much about people disagreeing with *you*. This is how we all fine-tune our understanding of things.
So... what do you think?
--
Adam
@Steve,
Tone!
I think throwing a 500 server error is stupid, but as the post was about the way we think about the use of exceptions and not whether we capture them or not, I think it is fair to say that we should not need any excuse or reason to throw an exception that we need to catch later.
I also gave a very good example, of when one should think about adding exceptions to their code.
And yes, throwing a 500 server error, is the worst thing you can do, which by definition is just being plain stupid.
@All,
One thing that I have been doing lately is that if an error does make it to the top of the application without being caught and dealt with more locally, I will log the error and then return a friendly error message with the newly-generated error ID. Such that the error messages are something like:
> Oops! For some reason, we couldn't process your request [ ID: {{ errorID }} ]
... where {{ errorID }} is the ID in the database.
This has been really helpful because when a user reports that something isn't working (usually with a screenshot), we can quickly look up the error based on the reported ID.
And, if the log never made it server-side, the ID is reported as "-1". In this way, we know that there is no server-side evidence of the error as it happened entirely on the client.
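The error-ID pattern above might be sketched like this in JavaScript (an in-memory array stands in for the database; `logError` and `friendlyMessage` are hypothetical names):

```javascript
// Hypothetical sketch of the error-ID pattern: persist the raw error
// server-side, then hand the user only a friendly message plus the ID.
const errorLog = []; // stand-in for the database table

function logError(error) {
  const errorID = errorLog.length + 1;
  errorLog.push({ errorID, message: error.message, stack: error.stack });
  return errorID;
}

function friendlyMessage(error) {
  const errorID = logError(error);
  return `Oops! For some reason, we couldn't process your request [ ID: ${errorID} ]`;
}
```

When a user reports the ID from their screenshot, the full message and stack trace are one lookup away.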
Ben,
That's the sort of thing that I do as well. Going back to my earlier example: there are times when we throw an exception deliberately, with specific values. One is to identify the function and the other is to identify the reason/severity. As in the example I posted earlier, the first thrown error would not really be logged, as this is really a developer being stupid. But for Unit Testing, when the error is thrown we can identify straight away what is going on.
But for logging and other general visual displaying, we use the other attributes as mentioned to say oh we deliberately threw this error and it's somehow escaped our traps further up the chain.
I guess there are many ways that work for everyone, but the one thing that I should reiterate is that we should not be scared to throw exceptions that we need our application to handle and take the correct action on, where the user doesn't even need to know about it. These are usually our warning type messages, or even decision exceptions for the application. Basically anything that is not meant for the user's eyes is caught, logged, and we move on with the show.
Anything that is meant for the user, is then logged and a user friendly message is sent back to the client that issued the request.
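That routing decision might look like this as a JavaScript sketch (the `WarningError` name and `handleError` shape are hypothetical, for illustration):

```javascript
// Hypothetical central handler: exceptions carry a type/name so one
// place can decide which are internal warnings and which reach the user.
function handleError(error) {
  if (error.name === "WarningError") {
    // Not meant for the user's eyes: log it and move on with the show.
    return { forUser: false, log: error.message };
  }
  // Meant for the user: log it AND send back a friendly message.
  return {
    forUser: true,
    log: error.message,
    userMessage: "Something went wrong. Please try again."
  };
}
```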
Sorry I guess I expressed myself badly. I wasn't trying to criticize the tone (though I do find it a bit frustrating) so much as I was trying to understand the nature of the disagreement.
It seems like we are all saying that it is helpful to have exceptions return enough information for us to debug the problem but that those exceptions shouldn't make it to the end user. Not so?
@Steve,
I think we are all basically on the same page here, from what I have read.
@All,
One of the things that I never quite felt "right" about, but it works for me is when I develop a component like a 3rd-party component, but then I also want to expose more error information so I can debug it later on down the road.
For example, if I have a Gateway that communicates with a 3rd-party remote API, often times, the HTTP request will never "fail" - it will just return an error message. But, since the Gateway is meant to abstract the communication, I'll turn around and throw an error, like:
if ( ! reFind( "2\d\d", apiResponse.statusCode ) ) { ... throw error ... }
But, if I do that, I lose the reason as to why the HTTP communication failed, which I'll want for subsequent debugging. As such, in these 3rd-party gateways, I tend to serialize the HTTP message as part of the error message:
throw(
    type = "SomeError",
    message = "Some error message.",
    extendedInfo = apiResponse.fileContent
);
... where the "extendedInfo" contains the underlying error response, which I can use to debug later.
Anyway, I never quite felt "right" about that, since it felt a bit "leaky". But, at the same time, it's been wonderful for debugging.
Ben,
Why do you think that feels "leaky"? It sounds like a good solution to me.
@Steve,
I think my emotions are based on the order in which it was built. Meaning, if I had built the error handling that way from the start, it would have seemed fine. But, the truth is, it kind of went in reverse. Meaning, when the gateway was launched, it didn't have the "extendedInfo" value. Then, requests were failing, and I had no idea why, since the error didn't have any valuable information. At that point, I went back to the gateway and updated the error to expose more data.
This made me feel a bit odd since I felt like I was retrofitting the gateway to expose more implementation details.
But, like I said, had I done that from the get-go, I probably would have felt fine :)