Earn money with programming

Bullshit list is bullshit. Even if I assume that the list is correct, how exactly does the platform affect the choice of language (except in special cases) more than the application? Why don't you make a list of preferred languages by age of programmer? Or by gender and sexual preference?
Microsoft Windows: C# and ASP.NET, C++ supported, but not promoted
I wonder why the primary choice when creating projects in Visual Studio is C++, while C# and VB.NET are grouped under "Other Languages"

how exactly does the platform affect the choice of language (except in special cases) more than the application?
+1 helios

Why don't you make a list of preferred languages by age of programmer? Or by gender and sexual preference?
+1 helios
closed account (EzwRko23)
http://abstractfactory.blogspot.com/2010/05/how-to-design-popular-programming.html


Fortran (scientific programming)
Ooh. I never heard of that operating system! Where can I get it?

C++ (MS Windows)
Bullshit. Except for MFC, all Windows APIs use C headers. If you can say anything about what the default Windows language is, it would have to be any one of the C family, as opposed to, say, Haskell.
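For anyone who wants to check, the "C headers" point is trivial to demonstrate. A minimal sketch, assuming a Windows toolchain (MessageBoxA is an ordinary C function from the Win32 headers; the identical call compiles as C or as C++):

#include <windows.h>

int main()
{
    // MessageBoxA is declared in the plain C Win32 headers; any C-family
    // language can call it directly, no C++-specific machinery involved.
    MessageBoxA(NULL, "Hello from the raw Win32 API", "Win32", MB_OK);
    return 0;
}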

JavaScript (web pages)
Let's get one thing straight: "application" and "platform" are not synonyms.

I didn't even bother to read anything after that.
closed account (EzwRko23)
Fortran was a scientific computing platform. "Platform" is a more general term than "OS": a platform is rather an OS plus a set of libraries, hardware, tools, compilers, etc. In those days you could program either in C (which was slow, error-prone and not always possible), LISP (the AI guys did this) or Fortran (serious numeric computing). It was the default in academia for numerical tasks; many time-sharing systems had only Fortran compilers back then.



Except for MFC, all Windows APIs use C headers

MFC was the default way to program Windows for a very long time and is still often used. You must be a masochist to write Windows apps using the pure WinAPI.


Let's get one thing straight: "application" and "platform" are not synonyms.


A web browser is a platform for client-side applications, and it is probably the most ubiquitous programming environment ever. So JavaScript is OK in this context.
There is a difference between the language the OS is written in and the language the applications are written in. The author of the blog states that the popularity of a language is driven by applications, not by the language that is under the hood.
Last edited on
Even if Fortran is a platform, that still doesn't change the fact that "scientific computation" is an application, not a platform. Okay, so you have a platform that is best suited for a particular type of application. That's just what I was saying! The application has much more influence on the choice of language than the platform. The platform's influence is only indirect because, for example, no one's going to write a scientific weather simulator for an iPhone, so a language designed specifically to do that will not be popular there. Correlation is not the same as causation.

A web browser is a platform for client-side applications, and it is probably the most ubiquitous programming environment ever. So JavaScript is OK in this context.
I'm aware of that, but that's only a half-truth.
The browser can be considered a platform (some are certainly complex enough), but what I was getting at was that the application -- what's really important in this context -- is "run client-side code embedded in HTML". That is what JavaScript has become the default for. It has become the default way of getting a browser to do something because a browser's only purpose is displaying web pages, so if it wants to display all those web pages that use JS, it has to support it. In this case, it's not the platform (the browser) that has become popular; it's the application (having dynamic content in web pages). The platform and the language have become popular as a consequence of the application becoming popular. If only the platform (a browser that supports JS) had become popular, but not the application, you would just have many browsers supporting a language that's never used, because nobody cares about dynamic content.
closed account (EzwRko23)
Ok, you are right. However this theory does not explain why we observe huge differences between programming languages used on different platforms for similar applications. You can for sure program iPhone apps in C++, yet it is Objective-C that is used there. Is Objective-C better than C++? In some aspects it is. In terms of performance it is not. But that doesn't matter, because Apple supports Objective-C and **not** C++. The same goes for Android. You can write apps in C, C++ or Java. People use Java because it has excellent support from Google, and the C/C++ APIs are just an add-on. It didn't even matter that the first versions of Dalvik were extremely slow.

To summarize: in the last decade many new applications and platforms have emerged (the whole web 2.0 revolution, the mobile revolution, multicores), and for none of these* is C++ the language of choice for writing applications. So, what is the future application / platform for C++? The Windows desktop market? Nokia Symbian? Both are rapidly shrinking.


*)
web 2.0 - C++ is non-existent as an app programming language
mobiles - C++ is absent from the two most important and most rapidly growing platforms these days
multicores - C++ is not well suited for these; even C++0x is years behind the capabilities of e.g. Erlang

Last edited on
However this theory does not explain why we observe huge differences between programming languages used on different platforms for similar applications.
Chalk it up to network effect.

So, you're saying C++ is unpopular in applications it wasn't designed for? *Gasp!*
C++, like C, was designed as a systems programming language. Its key features are efficiency and low abstraction (and don't bother arguing about it being inefficient, because I'm not in the mood). You don't use C++ to run client-side code, just like you don't write kernel modules in PHP.
As for concurrency, I don't know what kind of capabilities Erlang has, but C++ is exactly as concurrent as C, and whatever OS you're using right now, it was most likely written in C and can support more than one CPU, so...
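For reference, a minimal sketch of what that looks like today, assuming a C++0x-capable compiler with the new <thread> header; the threads underneath are the same OS threads a C program would get through pthreads or the Win32 API:

#include <cstdio>
#include <thread>
#include <vector>

void work(int id)
{
    std::printf("thread %d running\n", id);
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(work, i);   // one OS thread per task
    for (auto& t : pool)
        t.join();                     // wait for all of them to finish
}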

The Windows desktop market
Why just Windows? Just because C++ is unpopular in the Unices doesn't mean that they're not possible markets for applications written in C++.
closed account (z05DSL3A)
Has anyone seen my will to live? I seem to have lost it since I found out C++ was unpopular.


Oh, there it is, under the big pile of I don't give a crap how unpopular you think it is. This is the church of C++; begone, demon spawn.
helios wrote:
Just because C++ is unpopular in the Unices doesn't mean that they're not possible markets for applications written in C++.

I've seen quite a lot of C++ programs on *nix.
closed account (EzwRko23)
I didn't write that it IS unpopular; I wrote that it is GETTING less popular every year. That is a big difference. I've seen quite a lot of VB 6 apps in one company or another (probably more than there are C++ applications). Does that mean you should learn VB 6 because it is more popular in companies? That is not an argument. What you should look at are the trends.


C++, like C, was designed as a systems programming language


Ok, but it was said earlier that these days you are rather unlikely to earn money by creating operating systems. There are not many jobs in this market; most jobs are for application programmers. Also, if you want to program OSes, C knowledge is probably more important than C++ knowledge. You probably won't get a chance to use templates or Boost there.


Its key features are efficiency and low abstraction


And micro-efficiency is getting less and less important every year, while computers are getting faster and faster. For low abstraction we have C. What counts more is scalability, maintainability and the ability to deal with increasing software complexity. C++ doesn't address these issues as well as most modern languages do. This is the primary reason C++ was almost totally displaced by Java and C# in large-scale development. On the other hand, these aspects are almost unimportant in gamedev, while micro-efficiency matters a lot, so there C++ will probably stay for a long time.


As for concurrency, I don't know what kind of capabilities Erlang has, but C++ is exactly as concurrent as C, and whatever OS you're using right now, it was most likely written in C and can support more than one CPU, so...


Support of more than one CPU is not sufficient - and virtually every language has it. The problem is that there is no safe way to write scalable multithreaded programs in C++. Threads with shared mutable state is an error-prone, unscalable and inefficient programming model. What is the purpose of running an application on 100 cores, if most of the time they will wait for one another? If you ever debugged a multithreaded application in C/C++ you probably know that these kinds of bugs are the most evil ones. And you can never be sure that your application is correct and won't blow away your data after 3 weeks of successful operation.
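To make the point concrete, a C++0x-style sketch (assuming <thread> and <mutex>) of the simplest possible case, a shared counter; the fixed version is shown as well, and note that the fix serializes the threads:

#include <iostream>
#include <mutex>
#include <thread>

long counter = 0;            // shared mutable state
std::mutex counter_mutex;

void unsafe_add()            // data race: ++ is a read-modify-write
{
    for (int i = 0; i < 1000000; ++i)
        ++counter;
}

void safe_add()              // correct, but every thread now serializes here
{
    for (int i = 0; i < 1000000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);
        ++counter;
    }
}

int main()
{
    std::thread a(unsafe_add), b(unsafe_add);
    a.join();
    b.join();
    // Typically prints something well below 2000000, and a different
    // number on every run -- the bug is silent and non-deterministic.
    std::cout << counter << '\n';
}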

Erlang is used to write applications with 99.9999999% availability which can scale up to millions of concurrent processes (thanks to its lockless concurrency model). All the code can be analyzed sequentially and there is no interleaving of threads, so hard-to-find bugs are almost impossible. This is currently not possible in practice in C++ and probably will not be for the next 10 years. Of course, writing in Erlang style in C++ is probably doable*, but with as much effort as doing OOP in assembly or FP in Java, so no one sane would go for it.

*) I imagine someone will post a 10-liner and say "You see! It is possible!", but what really counts is whether it is a convenient and natural way of coding. In C++, almost everything works against this style - it is imperative to death.
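For concreteness, the kind of snippet that footnote anticipates might look roughly like this - a hand-rolled mailbox, assuming C++0x <thread>, <mutex> and <condition_variable>; all of it is plumbing that Erlang provides out of the box:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

// A minimal blocking mailbox: threads exchange messages by value
// instead of poking at shared state.
template <class T>
class mailbox {
    std::queue<T> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(T msg)
    {
        { std::lock_guard<std::mutex> lock(m); q.push(std::move(msg)); }
        cv.notify_one();
    }
    T receive()                       // blocks until a message arrives
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        T msg = std::move(q.front());
        q.pop();
        return msg;
    }
};

int main()
{
    mailbox<std::string> box;
    std::thread consumer([&box] { std::cout << box.receive() << '\n'; });
    box.send("hello from another thread");
    consumer.join();
}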
Last edited on
Threads with shared mutable state is an error-prone, unscalable and inefficient programming model.
Woah, woah, woah. You can call state whatever you want, except inefficient.
And in any case, the idea in concurrency is usually to avoid shared state precisely to reduce the likelihood of bugs. And obviously to avoid synchronization.

What is the purpose of running an application on 100 cores, if most of the time they will wait for one another?
If most of the time in such an application is spent on synchronization, there are probably some very serious design problems in the background aside from the choice of language. Either there's too much global state or the application isn't as concurrent as the designer thought. Or possibly something else altogether.

If you ever debugged a multithreaded application in C/C++ you probably know that these kinds of bugs are the most evil ones.
Which bugs? The previous sentence is talking about synchronization. This is the first time any type of bug is mentioned.
But no, the most evil ones are the ones that combine the malice of race conditions and the wickedness of memory corruption to produce a malevolent non-deterministic mess.
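A minimal sketch of that kind of mess, assuming C++0x <thread> (std::vector is not thread-safe, so this is a race and memory corruption rolled into one):

#include <thread>
#include <vector>

std::vector<int> shared_data;      // shared, mutable, unsynchronized

void fill()
{
    for (int i = 0; i < 100000; ++i)
        shared_data.push_back(i);  // concurrent push_back is undefined behavior:
                                   // a reallocation in one thread can pull the
                                   // buffer out from under the other
}

int main()
{
    std::thread a(fill), b(fill);
    a.join();
    b.join();
    // Possible outcomes: a crash, a corrupted heap, a wrong element count,
    // or -- worst of all -- nothing visible until much later.
}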

And you can never be sure that your application is correct and won't blow away your data after 3 weeks of successful operation.
You can never be sure of that. Unprovability of correctness and all.
closed account (EzwRko23)

Woah, woah, woah. You can call state whatever you want, except inefficient.


I meant inefficient programming model - that is, difficult and slow to program in, simply because you need to take care of synchronization, races, etc. 99% of the good coders I know can't write even a simple concurrent program without making serious, hard-to-find bugs, and when they do write such a program it takes ages, because it is hard to get the synchronization right. Many extremely experienced and talented programmers say that multithreading is one of the hardest parts of programming (e.g. http://weblogs.mozillazine.org/roadmap/archives/2007/02/threads_suck.html).

And as for efficiency - mutable state is not scalable. It might be efficient on one or two cores / machines. You can easily see this in how hard it is to scale database servers (stateful) and how easy it is to scale web servers / web services (stateless). The latter can scale without limit simply by adding more machines. For the former, lots of scientific papers have been written, but still no open-source database scales well - it is extremely hard to do.

Mutable state is also unavoidable in C++ programs - because of the lack of good immutable data structures in the standard library and the lack of language support for an FP style of programming.


But no, the most evil ones are the ones that combine the malice of race conditions and the wickedness of memory corruption to produce a malevolent non-deterministic mess.


Memory corruption problems were successfully solved a long time ago.


You can never be sure of that. Unprovability of correctness and all.


Correctness is unprovable only in the **general** case - you can't prove the correctness of an **arbitrary** program (especially a C++ program), but in most specific cases you can prove the correctness, or at least some properties, of the program. Most of my programs are easily provable (or at least very easy to analyze) - to the extent that I can write 100 loc without testing, run it, and it often works correctly the first time. Most multithreaded programs with shared state are unprovable or very hard to analyze. When you add manual memory management and pointer arithmetic to that, they become totally unprovable.

Additionally, there is a tradeoff between the efficiency and the ease of analysis of a multithreaded program. You can get some assurance of correctness by defensive locking, but that does not scale.


Last edited on
I meant inefficient programming model
[...]
And as for efficiency - mutable state is not scalable.
Well, which is it? Did you mean "inefficient" as in "hard" or as in "not scalable"? And why don't you say what you mean instead of assigning random semantics to words?
The problem is not that the resource is mutable, but that it's both mutable and shared. You can have tons of mutable resources, but as long as none of them are shared, you can continue adding threads. For example, take something like ray tracing. Each pixel on the image is independent of its surroundings. You can in theory render each pixel in its own thread to reach maximum concurrency.
In the case of databases, the problem is not just that the database can change. It's that it can change but also that it has to change in such a way that it remains consistent for the entire system; in other words, it's shared. If each server was allowed its own copy of the database, you could just keep adding servers.
Mutability isn't the problem. It's sharing, and sharing can be avoided in some applications.
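A minimal sketch of the ray-tracing idea, assuming C++0x threads; shade() is just a hypothetical stand-in for the per-pixel work:

#include <functional>   // std::ref
#include <thread>
#include <vector>

// Hypothetical stand-in for tracing one ray; the real work is irrelevant here.
int shade(int x, int y) { return (x ^ y) & 0xFF; }

// Each thread fills its own band of rows: mutable data, but none of it shared.
void render_rows(std::vector<int>& image, int width, int y0, int y1)
{
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = shade(x, y);
}

int main()
{
    const int width = 640, height = 480, nthreads = 4;
    std::vector<int> image(width * height);

    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back(render_rows, std::ref(image), width,
                          t * height / nthreads, (t + 1) * height / nthreads);
    for (auto& t : pool)
        t.join();   // no locks anywhere: the writes never overlap
}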

Memory corruption problems were successfully solved a long time ago.
Unless you're talking about managed environments -- which would make the sentence completely off-topic because they weren't part of the discussion at that point -- no, they haven't.

100 loc
I really hope you're joking with that figure, or that you forgot two or three orders of magnitude.
closed account (EzwRko23)

Well, which is it? Did you mean "inefficient" as in "hard" or as in "not scalable"


"Inefficient" and "scalable" can be used in many contexts. Inefficient to code = it takes lots of time to implement. Inefficient to run = it takes lots of CPU time to accomplish a task.
Scalability in context of coding = proportion between amount of resources like people, time, money and size of the programming task (functionality). Scalability in context of execution = proportion between amount of resources such as RAM/CPU and size of the problem being solved.

C++ is efficient only in the context of execution speed / memory (and actually that is not because of C++ itself, but because of the programmers - C++ just gives them lots of control; it doesn't magically make programs efficient, and C++ compilers are severely restricted in the optimizations they can perform). It is not efficient to code in, it doesn't scale well with the size of the system, and in many cases it also doesn't scale well in the sense of multithreading.


The problem is not that the resource is mutable, but that it's both mutable and shared


Agreed. But if you can't have immutable state in C++ (or at least it is difficult and unnatural), how would your threads communicate? Only a small portion of problems can be parallelized the way ray tracers work. Many algorithms require communication between concurrent entities. The default answer in C++ is shared mutable state.


I really hope you're joking with that figure, or that you forgot two or three orders of magnitude


I usually program with small steps, at the single function/class level. Most of the functions are not longer than 15 loc. I write a class and a set of tests. So, no, I never write 2000 loc and **then** debug it. That is not a good way of coding, and there is no need to do so if you follow the DRY, YAGNI, KISS and BDD rules. Anyway, 100 loc in a HLL is often equivalent to 1000 loc of C++ or more, so there goes one order of magnitude. :P

Just as an example, one of the current web mining systems I work on is just about 500 loc (including comments) and includes:
- retrieving documents from the web by following the links
- clearing content of unneeded rubbish
- splitting pages into smaller fragments
- eliminating stop words
- lemmatizing
- calculating tfidf coefficients
- full text indexing of the documents
- searching
- clustering the search results and extracting common content
All this with just the standard library plus one external library (Lucene) for indexing.
Last edited on
Inefficient to code = it takes lots of time to implement.
That's not "efficient". I don't know what it is, but it's definitely not that. And even if it was, you can't throw that term lightly in a C++ context.
Scalability in context of execution = proportion between amount of resources such as RAM/CPU and size of the problem being solved.
That's space complexity and time complexity, respectively.

The default answer in C++ is shared mutable state.
That's the only answer in anything. If you have something that needs to be modified in parallel, regardless of the language, you're going to have to control access to it. You can't magically remove the synchronization.

I usually program with small steps, at the single function/class level. Most of the functions are [...]
That's not what I meant. 100, even 500 LOC is still quite trivial. Sure, I can prove that a Hello World is correct. When you get into 10^4 LOC territory, that's when things start getting interesting.
closed account (EzwRko23)

That's not "efficient". I don't know what it is, but it's definitely not that


Ok, so let's call it "productive".


That's space complexity and time complexity, respectively.

Not quite - only on a single core are they the same. Two algorithms with the same complexity may scale differently as a function of the number of cores.


That's the only answer in anything. If you have something that needs to be modified in parallel, regardless of the language, you're going to have to control access to it. You can't magically remove the synchronization.


You can. Immutable state can be shared with no locks. But to do so, the language/library must provide some facilities for dealing with immutable state. Given that, you can write complex parallel data processing with no locks and no variables, and thus no deadlocks and no race conditions.
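The read-only half of this is already expressible in C++0x. A sketch: several threads read one frozen structure with no locks at all; what the standard library does not help with is updating such data without copying everything:

#include <memory>
#include <numeric>
#include <thread>
#include <vector>

typedef std::shared_ptr<const std::vector<int> > snapshot;

int main()
{
    std::vector<int> data(1000);
    for (int i = 0; i < 1000; ++i)
        data[i] = i;

    // Freeze the data behind a const pointer; from here on it is read-only.
    snapshot snap(new std::vector<int>(data));

    std::vector<long> sums(4);
    std::vector<std::thread> readers;
    for (int t = 0; t < 4; ++t)
        readers.emplace_back([snap, &sums, t] {
            // Many readers, one immutable structure, zero locks.
            // Each thread writes only its own slot of 'sums'.
            sums[t] = std::accumulate(snap->begin(), snap->end(), 0L);
        });
    for (auto& r : readers)
        r.join();
}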


When you get into 10^4 LOC territory, that's when things start getting interesting.


No, it really doesn't get any more interesting than at the level of 100 loc, if your app is correctly designed. The probability of creating a bug per line is constant. This holds for languages that are primarily designed for large-scale development - 10^6 loc or more, several years of work, tens of coders. I presume it does not hold for languages with manual memory management and shared mutable state - there the 10^4 LOC level is very different from the 10^2 level.
Last edited on
Fortran (scientific programming)


So, which science are you talking about? All scientific computing in Lie algebras (my area) is done in C++, or in interpreted languages on top of C/C++ (example: MAPLE).

Rational points on elliptic curves are computed in C++ or in interpreted languages on top of C/C++ (one of the main authors of the MAGMA calculator had his office down the corridor about two years ago; this same guy happens to be one of the original co-authors of the CLISP compiler).

The only two libraries on polyhedral computations that are useful for what I am doing are written in C++.

One last note: I started my math coding in Visual Basic and switched to C++. I would have switched to Java instead / would switch immediately if Java had
0) option to turn off garbage collection and use delete/free/whatever you call it in a deterministic fashion.
1) templates
2) operator overloading (it is so valuable for code readability that it's worth the performance penalty, which is not as small as one might think).

Conversely, I would greatly enjoy it if C++ had some of the syntax consistency of Java. I have also heard that input/output, file operations and concurrency are faster (and perhaps easier) to write in Java.
Last edited on
Two algorithms with the same complexity may scale differently as a function of the number of cores.
That has nothing to do with the amount of necessary resources in relation to the size of the problem. What you're describing there is something else entirely called "concurrency" (the property of being concurrent), or how easily the algorithm can be parallelized.

Immutable state
Well, you can have immutable state in C/++ without having the language hold your hand. All you need to do is not change it.
You can't always have immutable state, though.

No, it really doesn't get any more interesting than at the level of 100 loc, if your app is correctly designed.
Excuse me while I go laugh my ass off.
Yeah, no. Even if you wrote a 100 kLOC system 100 "correct" lines at a time, you can still develop bugs arising from the interactions between those thousand supposedly independent modules. More complexity translates to more probable bugs, regardless of how you arrived at that complexity.
closed account (EzwRko23)

That has nothing to do with the amount of necessary resources in relation to the size of the problem. What you're describing there is something else entirely called "concurrency" (the property of being concurrent), or how easily the algorithm can be parallelized.


Ok, so you obviously do not know what scalability is. Please read any basic book on algorithms or distributed systems before replying. For your convenience, a quote from Wikipedia (I cite it only because it is easy to link to, but you should really get the definition from a book):


In telecommunications and software engineering, scalability is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner or to be readily enlarged.[1] For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added.



Well, you can have immutable state in C/++ without having the language hold your hand. All you need to do is not change it.


Don't reply before you find the following in the standard C++ library:
1. persistent list implementation
2. persistent vector implementation
3. persistent tree map and hash map implementation
4. persistent queue
5. map, reduce, fold algorithms
6. messaging subsystem / actors implementation

Without these, programming with immutable state is almost impossible, unless you can afford the extreme overhead of copying all state every time. Additionally, because STL classes do not enforce immutability, you would get no guarantees about immutability. Happy debugging when your team mate mutates some objects against the rules, breaking **your** code in a totally unpredictable manner.
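For readers who haven't met the term: "persistent" here means that an update returns a new version while the old versions stay valid and share most of their structure. A hand-rolled sketch of the simplest case, a persistent list built on shared_ptr - nothing like it ships in the standard library, which is exactly the point:

#include <iostream>
#include <memory>

template <class T>
struct node {
    T value;
    std::shared_ptr<const node> next;
    node(const T& v, std::shared_ptr<const node> n) : value(v), next(n) {}
};

// "cons": builds a new list that shares its whole tail with the old one.
template <class T>
std::shared_ptr<const node<T> > cons(const T& v,
                                     std::shared_ptr<const node<T> > tail)
{
    return std::shared_ptr<const node<T> >(new node<T>(v, tail));
}

int main()
{
    std::shared_ptr<const node<int> > empty;
    std::shared_ptr<const node<int> > a = cons(2, cons(1, empty));
    std::shared_ptr<const node<int> > b = cons(3, a);   // 'a' is untouched and
                                                        // still usable: persistence
    for (std::shared_ptr<const node<int> > p = b; p; p = p->next)
        std::cout << p->value << ' ';                   // prints: 3 2 1
    std::cout << '\n';
}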


Yeah, no. Even if you wrote a 100 kLOC system 100 "correct" lines at a time, you can still develop bugs arising from the interactions between those thousand supposedly independent modules.


This would just mean that some of those 100-liners were not correct OR the design was not correct, which contradicts the assumption I made at the beginning. But generally you are right: design bugs also happen (bugs caused by, e.g., wrong interactions between modules), so it is somewhat more difficult to create a 100 kLOC system than a 100 LOC system. It is just that the design is much harder in the first case. But the coding is just the same - if your language is flexible enough, you can divide the work into separate, independent modules. If you have shared mutable state everywhere and hidden dependencies, then a 100 kLOC system can be a nightmare (I have worked on teams building such large systems, both as a programmer and as an architect, so I am entitled to such opinions).
Last edited on