Signs of Improvement

So chrisname, you finally have a "taste" of fixing programs left by others. It is not exactly an easy task; sometimes it is even harder than writing a new program from scratch. And in employment, your superiors insist that you are NOT supposed to write a new program but to fix the existing one, even when you tell them that in the long run it is *cheaper* to write a new program.

Well, some programmers quit, and some, like me, stay while waiting for better opportunities outside. Some employers pay top dollar not to ask you to write new programs but to clean up the *shIt* left by others. This is something very prevalent in my country :P
closed account (z05DSL3A)
I'm a big believer in the "do one thing" and "fits in one screenful" kind of function design.

So am I, I bought a bigger screen and already see the improvements in my code. :0)
sohguanh wrote:
Some employers pay top dollar not to ask you to write new programs but to clean up the *shIt* left by others. This is something very prevalent in my country :P
This is a ubiquitous practice. Management is happy with failure, as long as it goes according to plan. It helps their risk assessment.

I am not even emphasizing the irony here. It is actually understandable. But Leonardo da Vinci would not start all his portraits by first sketching the Mona Lisa in the background of the canvas, would he? (And the Mona Lisa is not a failed painting.)
Speaking of fitting on one screen... I was going to rework some libraries of mine because they are a little (well, a lot) bloated as far as the fits-on-one-screen rule goes. There are lots of other things too, but I wanted to ask about classes as well: if you have large classes, is it better to split them into smaller ones too? I was going to do this, but wanted to know what people thought of it. I think it would be better, but maybe there are some issues I'm missing.
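Since the question is about splitting large classes, here is a minimal sketch (my own made-up example, not anyone's real library) of the usual approach: pull each responsibility out of the big class into a small class of its own, and let the original class become a thin coordinator that composes them.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Made-up example of splitting one bloated class: computing and
// formatting become two small "do one thing" classes...
struct Stats {
    double total = 0.0;
    std::size_t count = 0;
};

class StatsCalculator {
public:
    Stats compute(const std::vector<double>& samples) const {
        Stats s;
        for (double v : samples) {
            s.total += v;
            ++s.count;
        }
        return s;
    }
};

class StatsFormatter {
public:
    std::string format(const Stats& s) const {
        return "n=" + std::to_string(s.count) +
               " total=" + std::to_string(s.total);
    }
};

// ...and the former big class shrinks to a thin coordinator.
class Report {
public:
    std::string build(const std::vector<double>& samples) const {
        return formatter_.format(calculator_.compute(samples));
    }

private:
    StatsCalculator calculator_;
    StatsFormatter formatter_;
};
```

One caveat from my own experience: splitting purely by line count can scatter a single responsibility across several classes, so it is worth splitting along responsibilities rather than size alone.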

sohguanh wrote:
So chrisname, you finally have a "taste" of fixing programs left by others.

Well, not really, since I failed and gave up.
Often I come back to very old programs that I made in like 2007 and fix them up. It's fun to marvel at the crazy hacks that I thought were clever back then.
Yeah, it's fun to look at your old code and think "What the hell is this doing?".
That's my instinctive reaction when I look at any code.
I know I am improving because I have been attacking the weaknesses in my language knowledge. Each day I try to make sure there is less and less about C++ I don't understand. Reading forums keeps me humble, because I know I still don't always know the answer.

Although, I have noticed that many of the questions asked have very little to do with language rules and much more to do with the proper use of library functions.


It doesn't bother you at all to see Python or Ruby code that accomplishes in 10 lines what takes several pages in Java.


Replace Java with C++ and it is also true; what's more, C++ code is usually even longer than Java code.
Anyway, it doesn't really bother me, because Python and Ruby are dynamically typed. So maybe you can write code faster, but then you spend about 5x as much time debugging all the runtime failures. These are failures that would not be possible in C++ or Java, because they would be caught immediately at compile time. Thus, the productivity gain of these languages over Java is very debatable, except for very small projects. So I'd write:

It doesn't bother you at all to see Scala or Clojure code that accomplishes in 10 lines what takes several pages in Java.


Now it is much harder to find an excuse. ;)
Anyway, it doesn't really bother me, because Python and Ruby are dynamically typed. So maybe you can write code faster, but then you spend about 5x as much time debugging all the runtime failures.

In my own experience, that's just not true. Getting things done in Python is generally a lot faster than in C++. Using C++ or Java just because of the alleged protection of the type system is a mistake, IMHO. The vast majority of the runtime errors I get when writing in Python are logic errors and would not have been caught by a static type check anyway.

The vast majority of the runtime errors I get when writing in Python are logic errors and would not have been caught by a static type check anyway.


I seriously doubt that. I presume most of these errors you call "logic" errors are ordinary type errors, like going out of bounds of an array, dereferencing a null, incorrectly modifying shared state, etc. Any contract violation is a type error. It is just that the type systems of some commonly used languages, like C++ or Java, are too weak to detect all of them (the Haskell and Eiffel type systems are much better in this regard).
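To illustrate the "any contract violation is a type error" point with a concrete (made-up) C++ snippet: if a function takes std::chrono::milliseconds instead of a bare int, the units become part of the contract, and an unlabeled number simply does not compile.

```cpp
#include <chrono>

// Hypothetical API: taking std::chrono::milliseconds instead of int makes
// the units part of the contract, checked at compile time.
long long sleep_duration_ms(std::chrono::milliseconds d) {
    return d.count();  // a real version would sleep; here we just report
}

// sleep_duration_ms(std::chrono::milliseconds(500));  // OK: units explicit
// sleep_duration_ms(500);  // compile error: a bare int is not a duration
```

Note that std::chrono::seconds still converts implicitly, because that conversion is lossless; only the genuinely ambiguous call is rejected.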

I've seen large systems written in dynamically typed languages, and they were always hard to maintain and hard to understand (the lack of types means a lot of valuable information is not present in the code). This was one of the reasons Twitter started rewriting their Ruby code.


Getting things done in Python is generally a lot faster than in C++.

Agreed, but I compared Python to Java, not C++. Coding in Java is "slightly" faster than in C++.
@Rapidcoder, it is slightly faster to get the job done in Java, but your code will be slightly slower at run time. But that's the price we pay. Personally, I prefer C# over Java for most general-purpose applications; for web applications I prefer Java to C# (with the exception of using it purely for developing dynamic webpages); but when I favor speed of execution over speed of development, at a VERY focused level, I go with C/C++ or assembly, depending on how much of a speed boost I'm looking for.
I seriously doubt that. I presume most of these errors you call "logic" errors are ordinary type errors

No, they're just the kind of logic errors you get while trying to really understand the problem you're dealing with. I do get some type errors, but they're few and usually easy to fix. Python doesn't do static type checking, but it does enforce strong typing, so if you mess up your types you get a nice descriptive exception and a traceback indicating exactly where it happened. The fact that you don't get the error until you try to run the code is not much of a problem if you keep your unit tests up to date, which you should anyway.

Granted, I have never dealt with big systems and I can imagine their large scale brings out different problems which static typing helps minimize. But a lot of ordinary apps just don't need it. I'm very much in agreement with what Bruce Eckel says here: http://www.artima.com/intv/typing.html

No, they're just the kind of logic errors you get while trying to really understand the problem you're dealing with.


OK, agreed. Those errors are unavoidable. I was speaking of the "the code doesn't do what I think it does" kind of bugs. It is just nice to have some classes of bugs made impossible. Impossible is still better than infrequent. (However, I still claim that typing bugs are quite common - my compiler catches me quite often. Probably some of these bugs would be fine, but some of them would manifest themselves at the least expected moment.)


The fact that you don't get the error until you try to run the code is not much of a problem if you keep your unit tests up to date, which you should anyway.


This is nice in theory. But I have yet to see a project that still has up-to-date unit tests with good code coverage a year or two after its start. ;) Anyway, static types help not only with catching bugs. Some other reasons for static typing are:
1. IDE autocompletion support
2. Performance
3. Documentation
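As a small illustration of point 3 (my own example, not from the thread): a statically typed signature documents itself. The return type below says "this lookup can fail" without the caller having to read the body, and the parameter types are exactly what an IDE would autocomplete.

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical lookup: the types alone say the keys and values are
// strings and that the result may be absent - no comment needed.
std::optional<std::string> find_email(
        const std::map<std::string, std::string>& users,
        const std::string& user_id) {
    auto it = users.find(user_id);
    if (it == users.end()) {
        return std::nullopt;  // documented failure path, checked by the caller
    }
    return it->second;
}
```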

Don't get me wrong: I find Python a great scripting tool for administrative tasks or smaller projects; however, for larger things that have to scale with respect to the number of modules and programmers engaged, I prefer something statically typed. Scala and Clojure have almost all the advantages of Python (except fast startup): you write the code just as fast, it is just as short, but you additionally get all the benefits of static typing.


@Rapidcoder, it is slightly faster to get the job done in Java, but your code will be slightly slower at run time


Yes, but this usually doesn't matter as much as getting the job done faster. Also, getting the job done faster means more time and budget for code optimisation, like employing harder-to-write but better algorithms.

A simple example: the Berkeley SPICE simulator, written in C, builds a Jacobian matrix for the equations created from the circuit. Then it inverts this matrix for every sample, which costs about O(n^1.2). We work on a piece of software written in Java that analyses the circuit symbolically and builds a matrix only for the part that could not be symbolically reduced. This is of course a lot faster than SPICE, because the matrix is much smaller. Our matrix inversion code is probably slower than the C version on the same matrix, but it is given a matrix three times smaller, and it wins.
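A rough back-of-envelope check of that claim, taking the quoted O(n^1.2) cost model at face value: shrinking the matrix by a factor of 3 should cut the per-sample inversion cost by about 3^1.2 ≈ 3.7x, so the Java routine can be noticeably slower per element and still come out ahead.

```cpp
#include <cmath>

// Assuming (as the post does) that inverting the circuit matrix costs
// about O(n^1.2), this returns how much cheaper the inversion gets when
// the matrix shrinks by the given factor.
double cost_ratio(double shrink_factor, double exponent = 1.2) {
    return std::pow(shrink_factor, exponent);
}

// cost_ratio(3.0) is about 3.74: a 3x smaller matrix is ~3.7x cheaper.
```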

Back to the original topic: I think you can learn a lot from trying to optimise some larger piece of code without breaking its readability (no manual loop unrolling or things like that allowed). By doing this, you often come to the conclusion that the clearest design was also the easiest to make fast. At least, I have come to that conclusion many times...



