So you are saying every book, site, and instructor I've had is wrong. You are also saying Wikipedia, which says basically the same thing I've read and been told, is wrong?
Wikipedia (Syntax error) wrote:
For compiled languages syntax errors occur strictly at compile-time. A program will not compile until all syntax errors are corrected. For interpreted languages, however, not all syntax errors can be reliably detected until run-time, and it is not necessarily simple to differentiate a syntax error from a semantic error; many don't try at all.
Wikipedia (Logic error) wrote:
In computer programming, a logic error is a bug in a program that causes it to operate incorrectly, but not to terminate abnormally (or crash). A logic error produces unintended or undesired output or other behavior, although it may not immediately be recognized as such.
Logic errors occur in both compiled and interpreted languages. Unlike a program with a syntax error, a program with a logic error is a valid program in the language, though it does not behave as intended. The only clue to the existence of logic errors is the production of wrong solutions.
Neither of those definitions contradicts my statements.
The first one says that syntax errors occur only at compile time. That doesn't imply that syntax errors are the only type of errors that occur at compile time. E.g. link errors.
The second one says that a logic error causes a program to operate incorrectly.
There are two things a language can do with a type error at compile time: reject or accept the program. If the program is rejected, it can't operate or be caused to operate in any way, so the definition is irrelevant. If the program is accepted and the type error occurs at run time, the language can either crash the program, which is a logic error, or do a type conversion with weak-typing semantics, which may or may not cause a logic error down the line.
In other words, type errors become potential logic errors when you ignore them at compile time, which is what I said.
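To make the distinction concrete, here is a minimal Python sketch (function name `add_totals` is my own, purely for illustration): a type error the language catches becomes a run-time crash, while one that type coercion or duck typing lets through survives as a latent logic error.

```python
# Hypothetical example: what happens to a type error that was never
# caught at compile time, in a dynamically typed language.

def add_totals(a, b):
    return a + b

# Case 1: the mismatch is detected at run time -> crash (a TypeError).
try:
    add_totals("5", 1)
except TypeError as e:
    print("crashed:", e)

# Case 2: the operation is "valid" for the actual types, so the program
# runs -- but if a string slipped in where a number was intended, we get
# a wrong answer: a latent logic error.
print(add_totals("5", "1"))  # "51", not 6
```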
I wrote:
[Static compilation] sure does [find logic errors].
I mainly disagree because, for as long as I've been programming, I've always considered compile time to be the step of compiling (checking syntax). Then I considered running the linker to be a separate step from compile time, since you are no longer compiling the C/C++ file but rather just linking it to create the executable. This is how I've always viewed it, so I don't consider link errors to be compile errors.
If the fact that people disagree with you and try to prove you wrong offends you, perhaps you shouldn't engage in arguments. That you were offended doesn't imply that they actually tried to offend you, though.
Being wrong is no big deal, and not why I got offended. I'm married, and as the old saying goes, "Your wife is right; even when she is wrong, she is right." I'm offended because:
Is this program syntactically valid? Does it compile?
and
What about this one? What's wrong with it?
come across as condescending towards me, just to prove your point.
They were actual questions. I wanted to know what kind of errors you considered them to be, since it would let me elucidate your own definition of "syntax error".
A program is a function that takes some input and produces some output. The type of this function describes every valid output for every valid input (the set of valid inputs is restricted by the input type). Thus every logic error is technically a type error (see the Curry-Howard isomorphism).
The only problem is that type systems are too weak to fully describe and typecheck every possible function you could write in a Turing-complete language; creating such a type system would essentially mean solving the halting problem. Therefore, practical type systems typically degenerate to coarse-grained descriptions of the allowed input and output domains, which is much weaker. Some of the best type systems (Scala, Haskell, Coq) can do much more, e.g. dependent types, but this is still far from proving the absence of all logic bugs.
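A tiny sketch of why a coarse-grained type can't rule out logic errors (the function names here are invented for illustration): both functions below have the same type, int to int, so a checker that only sees input/output domains accepts either one, yet only one implements the intended behavior.

```python
# Two functions with the identical coarse type int -> int.
# A typechecker that only sees the domains cannot tell them apart.

def double(n: int) -> int:
    return n * 2       # the intended behavior

def double_buggy(n: int) -> int:
    return n + 2       # same type, wrong logic

print(double(5))        # 10
print(double_buggy(5))  # 7 -- type-correct, logically wrong
```

A dependent or refinement type expressing "the result is twice the input" could reject the second function, which is exactly the stronger guarantee the post says practical type systems mostly lack.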
It seems most people here think static typing is superior to dynamic typing. Personally I'm not sure, and I haven't seen any convincing arguments in this thread as to why the former should be better than the latter.
I haven't used languages such as Scala or Haskell, so I won't comment on them. I also haven't worked on very big projects - maybe static typing is a better option for them.
I'm pretty familiar with Ada, which is strongly (some say most strongly), statically typed. While it could possibly lead to better programs, in practice it is a pain to use, at least for me. When I program in Ada, I have the feeling that I overengineer everything: that is, I spend too much time defining types and adding unnecessary complexity. (It is much worse than in C/C++, which are rather weakly typed.) If you wanted, you could easily play with the type system indefinitely without ever accomplishing anything.
On the other hand, when I program in Python, I can get things done very fast. Not only do I not feel that I'm overengineering, I can even do the opposite, using lists and hashes for everything. It's no coincidence that Python is regarded as a great language for prototyping.
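The "lists and hashes for everything" style might look like this (the data and names are made up for illustration): no type declarations at all, just built-in containers.

```python
# Prototyping with plain built-in containers: no record types,
# no type declarations, just lists of dicts.

orders = [
    {"customer": "alice", "total": 30},
    {"customer": "bob", "total": 12},
    {"customer": "alice", "total": 8},
]

# Aggregate totals per customer with an ordinary dict.
totals = {}
for order in orders:
    totals[order["customer"]] = totals.get(order["customer"], 0) + order["total"]

print(totals)  # {'alice': 38, 'bob': 12}
```

In Ada the same sketch would start with record and container type definitions before any logic could be written, which is the overengineering feeling described above.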
As a side note, the example of dynamic typing being useless because in Scala you don't need to specify a type of a variable, is quite silly. Dynamic typing is much more than that. Not saying Scala is a bad language through.
So, personally I like to program in dynamically typed languages more. I don't know if they are better in every case - most likely not. But for what I do, they seem to be better. And anyway, most of what the compiler tells you in C or C++ will be an immediate exception in Python, leading you exactly to the problematic line of code. Static typing seems to be more trouble than it is worth, IMHO.
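A small sketch of that last claim (the `average` function is my own example): a call a C compiler would reject at compile time surfaces in Python as an immediate exception whose traceback points at the offending line.

```python
# A mistake a C compiler would flag at compile time -- passing a
# non-iterable where a sequence is expected -- becomes an immediate,
# precisely located run-time exception in Python.

def average(values):
    return sum(values) / len(values)

print(average([1, 2, 3]))  # the intended usage: 2.0

try:
    average(42)  # wrong argument type; fails on the first line executed
except TypeError as e:
    print("TypeError:", e)
```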
1. Fewer bugs - the ability to mathematically *prove* the absence of some classes of bugs. In Haskell, some people even say "if it compiles, it works".
2. Better performance - easier for the compiler to optimize; JS performance has improved greatly recently, but it is still typically about 2-10x slower than Java, and this is not going to change soon. A JIT for a dynamic language must be extremely smart, and if it is not, it does not help much. And being extremely smart here = recovering *static types* at runtime.
3. Better tooling. Things like smart autocomplete or error highlighting work much better for statically typed languages than dynamic ones.
4. Refactoring support. There is no safe way to refactor a program in a dynamic language unless you have millions of unit tests checking even the most trivial properties. And it is very hard to automate.
5. Better code readability. Types are part of the documentation. Often you can just look at a method's signature, without reading any docs, and immediately know what the function expects and what it returns. In a dynamic language you need much more detailed docs and must take care to keep them up to date, which is not checked by the compiler.
Things where dynamic typing is *not* superior to static typing, contrary to common belief:
6. Verbosity. While some old statically typed languages (C, C++, Java, C#) may look more verbose and less expressive than some popular dynamic ones (Ruby, Python), it is not caused by the type system being static. Take PHP (dynamic language) and Scala or Haskell (static ones). PHP is extremely verbose; Haskell is one of the most terse/expressive languages out there; Scala is only slightly more verbose than Haskell, but still far, far more expressive than PHP.
7. Faster code-compile-run loop. This is partially true - statically typed languages typically involve a compilation step which may take a non-negligible amount of time, but some statically typed languages also come with interactive shells: Haskell, Scala, Groovy. Also, due to static code checking, you don't have to compile and run your programs as often for manual testing as you would in a dynamically typed language. IDEs find bugs on the fly, as you type.
8. Not compiling valid programs / being too strict about type-correctness. This is not true, because most static languages offer some way to opt out of strict type-checking, e.g. type casts or dynamic type support. Therefore it is possible, e.g., to write valid Scala code which looks like JavaScript, i.e. has no type annotations and everything is dynamic. Also, in good type systems the need to use type casts is very, very rare.
1. Fewer bugs - the ability to mathematically *prove* the absence of some classes of bugs. In Haskell, some people even say "if it compiles, it works".
I doubt it. Can you provide any evidence? Oh, and those people must be nuts.
2. Better performance
I agree. But it only matters when you need that performance.
3. Better tooling.
4. Refactoring support.
Yeah, you're right. But they're less important for higher-level languages, where you have fewer developers and less code. They became popular because of Java, which is verbose and where you cannot avoid lots of boilerplate code.
5. Better code readability.
Depends on the developers, really. Dynamic languages are usually more succinct, which adds to readability.
8. Not compiling valid programs / being too strict about type-correctness. This is not true, because most of the static languages offer some ways to get out of strict type-checking.
They have to. Otherwise they would be unusable in some cases. That means every type system has its quirks.
You have some good points, so thanks. The problem is probably that I'm not familiar with any of the more advanced statically typed languages. I always felt they belonged to academia, not the real world. Compared to popular static languages, Python rocks.
I doubt it. Can you provide any evidence? Oh, and those people must be nuts.
E.g. all null-pointer/uninitialized object related bugs. Or bugs stemming from accidentally modifying something that should not be modifiable (const correctness / immutability).
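To make that bug class concrete, here is a sketch in Python (names invented for illustration): a function that can return None, and a caller that forgets to check. A type system with explicit optionals (Haskell's Maybe, Scala's Option) forces the missing case to be handled at compile time; here it crashes at run time.

```python
# The null-related bug class: a "may be missing" result that the
# caller silently forgets to handle.

def find_user(users, name):
    for user in users:
        if user["name"] == name:
            return user
    return None  # the "null" case the caller may forget

users = [{"name": "alice", "email": "a@example.com"}]

print(find_user(users, "alice")["email"])  # happy path works

try:
    print(find_user(users, "bob")["email"])  # missing None check
except TypeError as e:
    print("crashed:", e)  # 'NoneType' object is not subscriptable
```

With an Option/Maybe return type, the unchecked access simply would not compile, which is the point being made above.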
Dynamic languages are usually more succinct, which adds to readability
This has nothing to do with static type checking. OCaml, Haskell, Scala, Kotlin, F# can be at least as succinct as Python and Ruby, while being statically typed.
But they're less important for higher-level languages, where you have fewer developers and less code.
So this means dynamic languages don't scale to larger projects or larger teams. Which is another downside.
Compared to popular static languages, Python rocks.
I coded one small Python project for a bank and I have the totally opposite experience.
E.g. all null-pointer/uninitialized object related bugs. Or bugs stemming from accidentally modifying something that should not be modifiable (const correctness / immutability).
I asked for something like a study showing that programmers make fewer bugs in static languages.
So this means dynamic languages don't scale to larger projects or larger teams. Which is another downside.
Again, evidence? It may be true, but it may also be false. I just wanted to point out that with dynamic languages, you don't need to (as opposed to can't) write millions of lines of code.
I coded one small Python project for a bank and I have the totally opposite experience.
So this means dynamic languages don't scale to larger projects or larger teams.
Completely agree with that. In my experience Python is good at automating routine tasks (script-like), when you need to process some data right now, and anything behaving like standard Linux programs (text input → program → text output), solving one problem at a time.
Python rocks
It rocks so hard that after Civilization IV, Firaxis dropped it in favor of Lua to make the game, you know, playable.
I just wanted to point out that with dynamic languages, you don't need to (as opposed to can't) write millions of lines of code.
It rocks so hard that after Civilization IV, Firaxis dropped it in favor of Lua to make the game, you know, playable.
Great argument. For any language, I'm sure you will be able to find quite a few failures.
Why would you need that in statically typed languages?
Because they are verbose, because they don't allow you to create the abstractions you need, because they limit your expressiveness. It's no coincidence that those "millions of lines of code" projects are often associated with Java.
Those millions-of-lines-of-code projects solve much harder/bigger problems than those tiny Python scripts. Python is nice for automating a few things here and there - we use it a lot for our internal stuff as a glue language; however, it isn't very robust as an application development language.
because they don't allow you to create abstractions you need, because they limit what you can do
Which abstractions in particular do you have in mind?
The problem with dynamic languages isn't that you can't write large programs. It's that it becomes very difficult past a certain point. The first example that comes to mind: when an interface changes, nothing automatically checks that every existing usage of it is still correct.
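That hazard can be sketched in a few lines of Python (the class and method names are invented for illustration): after a method is renamed, a stale call site is only discovered when that exact line actually runs, whereas a static compiler would flag every stale usage at compile time.

```python
# Refactoring hazard in a dynamic language: a renamed method leaves
# stale call sites that survive until the moment they execute.

class Store:
    # was: def fetch(self, key): ...
    def get(self, key):
        return {"a": 1}.get(key)

store = Store()
print(store.get("a"))  # updated call site: fine

try:
    store.fetch("a")  # stale call site, missed during the rename
except AttributeError as e:
    print("crashed:", e)
```

If `store.fetch` sat on a rarely executed code path, the bug could ship; this is why the earlier posts argue refactoring in dynamic languages needs an extensive test suite as a substitute for the compiler.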
You all say that dynamic languages are not as scalable and are not appropriate for harder problems, without providing any evidence aside from your feelings. From what I've heard, "Common Lisp" is a name you often find next to the phrase "hardest problems", and it's not a static language. My guess is that static typing is overrated and dynamic languages are actually capable of much more than you think. But it's only a guess; I honestly don't know.