BHXSpecter wrote:
If something is designed with six things you have to do in order before using it or else it breaks when you try to use it. That isn't prone to error, that is human error for not understanding the order you had to turn the safety levers off before using it.
I could not disagree with you more.
Quite frankly, I'm not even sure I understand how you could possibly disagree with what I said regarding the connection between difficulty and the probability of human error. The correlation seems very clear to me: the more difficult a task is, the less likely it is to be accomplished successfully.
If the user makes a mistake... yes, it's the user's fault. I'm not saying programmers don't need to know what they're doing. What I'm saying is that a properly written lib will reduce the probability of human error.
This is the entire concept behind encapsulation. The inner workings of complex functionality remain hidden and inaccessible to the user. Instead... a simplified interface is given to the user.
Wouldn't the lib be more powerful if all the functionality was exposed and the user could manipulate it as they wanted? Of course it would... but that's a bad idea because it greatly increases the risk of misuse / user error.
Take a look at any established C/C++/Java/C#/<insert language here> library to see examples of this in action. They all encapsulate complex functionality on some level.
If you take the easier way out, you will still have to learn the hard way eventually
Again, I'm not claiming the programmer shouldn't have to know what he's doing.
But if he's given the option of doing something the easy way vs. the hard way... he'd be a fool to do it the hard way (unless there were very compelling reasons why he couldn't). (EDIT: or unless it's academic /EDIT)
A lot of programming is debugging. Companies spend millions and millions of dollars every year trying to fix human error. Any techniques/approaches that can be employed to reduce the frequency/risk of programmer error are to be embraced.