most essential algorithms?

Hello guys, so a question: what algorithms do you think are the most useful to know off the top of your head? I know many if not most algorithms have already been implemented as functions in libraries such as <algorithm>, but which algorithms do you think are important to know and have a good understanding of?

When would you use your own algorithm over one that is already written in the standard library?

I know bubble sort off the top of my head as a sorting algorithm, and maybe finding the biggest or smallest numbers in an array, but that's about it; the rest I will use from <algorithm>.



Also, are there any advantages to using your own algorithms over ones that have already been written? I mean, why reinvent the wheel?

thanks

Well, *someone* has to understand this stuff to code it: when new languages or libraries come out, when your system does not support it (some embedded systems have limits, for example), when inventing new, better algorithms, and more.

There is generally little point in using your own over the provided ones, which are usually well tuned for high performance, fully debugged, and flexible.

But there are only a few dozen provided algorithms, in a world with thousands of useful algorithms. So when you need one that isn't provided, what do you do, and where did you get that knowledge? By studying the common ones, in part...

Let's talk sorting, since you mentioned sorting.
The standard sort is not multi-threaded. What happens if you take that tool, write your own merge routine, split a multi-billion-row chunk of data, sort it in chunks across 8 CPUs, and then put it back together with your merge algorithm? Can you do that faster? Is it better than using std::sort for the merge part? Is it worth doing? If you spent the time and effort, you could beat the standard sort for some specific large problem with a hand-rolled solution. How often do you really handle enough data, with a real-time requirement, for that to be worth a month of effort, though? It's not necessary for sorting 100 items; it's a waste of your time to go to all that trouble.
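To make the chunk-and-merge idea concrete, here is a minimal two-way sketch using only the standard library (the function name is mine; a real version would use more chunks and pick split sizes by profiling):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Sort each half on its own thread with std::sort, then stitch the two
// sorted halves together with std::inplace_merge, which is the hand-rolled
// "merge" step described above, done here with a standard building block.
void parallel_sort_2way(std::vector<int>& v) {
    auto mid = v.begin() + v.size() / 2;
    std::thread left([&] { std::sort(v.begin(), mid); });  // left half, worker thread
    std::sort(mid, v.end());                               // right half, this thread
    left.join();
    std::inplace_merge(v.begin(), mid, v.end());           // combine the halves
}
```

The interesting question in a real implementation is exactly the one above: at what data size does the thread overhead pay for itself?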

If you code on the bleeding edge of tech (the R&D areas, the very-big-data realm, or other fringe areas), you need to be creative and willing to build your own algorithms, and then have the background and drive to make them better and dig into the problem. It may not be where you end up for a career, but it's nice to be able to apply and say "yes, I can DO that".

Bucket sort, which is useful for several one-offs other than actual sorting; hashing, which is also useful for some one-offs; lookup-table strategies; modulus tricks... those are the ones that seem to crop up a lot for me. But I think I might be weird, and doing unusual things in my career.
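As one concrete instance of the lookup-table strategy (the function names here are made up for illustration): counting the set bits of a word byte by byte against a precomputed 256-entry table, instead of looping over all 32 bits.

```cpp
#include <array>
#include <cstdint>

// Build the popcount of every possible byte value once, up front.
// t[i] = popcount of low bits of i, plus the lowest bit of i itself.
inline std::array<std::uint8_t, 256> make_popcount_table() {
    std::array<std::uint8_t, 256> t{};
    for (int i = 1; i < 256; ++i)
        t[i] = static_cast<std::uint8_t>(t[i >> 1] + (i & 1));
    return t;
}

// Count set bits in a 32-bit word with four table lookups.
inline int popcount32(std::uint32_t x) {
    static const auto table = make_popcount_table();
    return table[x & 0xFF] + table[(x >> 8) & 0xFF] +
           table[(x >> 16) & 0xFF] + table[(x >> 24) & 0xFF];
}
```

The same trade (spend memory once, save work per query) shows up in CRC tables, trig tables on embedded targets, and plenty of other places.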
I know bubble sort of the top of my head as a sorting algorithm

I would never consider bubble sort an "essential" algorithm; I can't think of any reason to use it other than as a beginner learning exercise. Maybe if you know there are fewer than log(n) adjacent misordered elements, but even then you'd have to give me a pretty good reason.
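For reference, the textbook version is only a few lines; the early-exit check is the one thing that makes it tolerable in that narrow nearly-sorted case, since a pass with no swaps proves the data is already in order:

```cpp
#include <utility>
#include <vector>

// Textbook bubble sort with an early exit: repeatedly swap adjacent
// out-of-order pairs; if a full pass makes no swaps, stop. On input with
// only a few adjacent misordered elements it finishes in a pass or two.
void bubble_sort(std::vector<int>& v) {
    for (std::size_t n = v.size(); n > 1; --n) {
        bool swapped = false;
        for (std::size_t i = 1; i < n; ++i) {
            if (v[i - 1] > v[i]) {
                std::swap(v[i - 1], v[i]);
                swapped = true;
            }
        }
        if (!swapped) break;   // nothing moved: already sorted
    }
}
```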

and maybe finding biggest or smallest numbers in an array

<algorithm> can do that.
http://en.cppreference.com/w/cpp/algorithm/min_element
http://en.cppreference.com/w/cpp/algorithm/max_element
http://en.cppreference.com/w/cpp/algorithm/minmax_element
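For instance, std::minmax_element finds both extremes in a single pass and returns a pair of iterators; a small wrapper (the name is mine) might look like:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Return the smallest and largest value of a non-empty vector in one scan.
// std::minmax_element gives back a pair of iterators: first points at the
// minimum, second at the maximum.
std::pair<int, int> extremes(const std::vector<int>& v) {
    auto [lo, hi] = std::minmax_element(v.begin(), v.end());
    return {*lo, *hi};
}
```

Note it does both in one traversal, which beats calling min_element and max_element separately on large data.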

First thing I thought of when reading your post was trees. The C++ standard library uses trees in the implementation of some of its data structures, but there isn't any generic Tree<T> class for a user to reach for. This is probably for the best, because tree functionality often needs to be very specific and tailored to a particular problem. There are many different types of trees, and hundreds of ways to implement them. The more general form of a tree is a graph, and there are many algorithms associated with graphs, such as A* search, that you aren't going to find in any standard library.
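A* itself takes a page, but a minimal breadth-first search over an adjacency list shows the shape of graph code you typically end up writing yourself; A* is essentially this skeleton with a priority queue and a heuristic. The names here are made up for illustration:

```cpp
#include <queue>
#include <vector>

// Breadth-first search over an adjacency list. Returns the hop distance
// from `start` to every node, or -1 for nodes that are unreachable.
std::vector<int> bfs_distances(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<int> dist(adj.size(), -1);   // -1 marks "not visited"
    std::queue<int> q;
    dist[start] = 0;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u]) {
            if (dist[v] == -1) {             // first time we reach v
                dist[v] = dist[u] + 1;
                q.push(v);
            }
        }
    }
    return dist;
}
```

Swap the queue for a std::priority_queue ordered by distance-plus-heuristic and you are most of the way to A*.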

Other algorithms include NP-complete or NP-hard ones, like solving sudoku, packing problems, the travelling salesman problem, and integer factorization (it isn't even known whether that one is NP-complete), plus pure-math algorithms like finding prime numbers, or certain classes of prime numbers.
https://en.wikipedia.org/wiki/Bin_packing_problem
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics
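On the pure-math side, the sieve of Eratosthenes is the classic starting point for prime generation; a minimal sketch (the function name is mine):

```cpp
#include <vector>

// Sieve of Eratosthenes: for each prime p, cross off p*p, p*p+p, ...
// Everything never crossed off is prime. Returns all primes <= n.
std::vector<int> primes_up_to(int n) {
    std::vector<bool> composite(n + 1, false);
    std::vector<int> primes;
    for (int p = 2; p <= n; ++p) {
        if (composite[p]) continue;          // p was crossed off earlier
        primes.push_back(p);
        for (long long m = 1LL * p * p; m <= n; m += p)  // 1LL avoids overflow
            composite[static_cast<int>(m)] = true;
    }
    return primes;
}
```

For "certain classes of primes" (twin primes, Mersenne candidates, and so on) you would filter or replace this with something specialized, which is exactly the kind of thing no standard library ships.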

As you get into large codebases, organization and code maintenance become harder than the algorithms themselves, at times.

also is there any advantages of using your own algorithms over ones that have been already written? I mean why invent the wheel?

I assume here you're talking about algorithms from all available sources, not just the standard library. I would always recommend trying to find existing libraries to suit your needs before trying to write your own, but I don't really have a good answer to this. Sometimes, in a complicated codebase, it's easier to make your own implementation than to rely on code that a third party wrote. Other times, licensing issues can get in the way of using code in a product you're actually going to sell to someone. Other times, if you run into a bug, it can take an hour to rewrite the functionality instead of two days to understand and fix the existing code.

See: https://en.wikipedia.org/wiki/Not_invented_here


jonnin brought up good points about the bleeding edge of tech. Things like machine learning and related algorithms are not going to be found in most standard libraries. Advanced computer-imaging algorithms, for example, are often not available to the public, as companies like to keep those as secret as possible.

https://en.wikipedia.org/wiki/K-means_clustering
https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution
https://en.wikipedia.org/wiki/Deep_learning
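None of these fit in a forum post, but to give a flavor, here is a toy one-dimensional k-means (Lloyd's algorithm) with two clusters. The function name and seed centroids are made up for illustration; real clustering libraries add k-means++ seeding, convergence tests, and many dimensions.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Toy 1-D k-means with k = 2: alternately (1) assign each point to its
// nearest centroid, (2) move each centroid to the mean of its points.
// Returns the two final centroid positions.
std::pair<double, double> kmeans2_1d(const std::vector<double>& pts,
                                     double c0, double c1, int iters = 20) {
    for (int it = 0; it < iters; ++it) {
        double sum0 = 0, sum1 = 0;
        int n0 = 0, n1 = 0;
        for (double p : pts) {
            if (std::abs(p - c0) <= std::abs(p - c1)) { sum0 += p; ++n0; }
            else                                      { sum1 += p; ++n1; }
        }
        if (n0) c0 = sum0 / n0;   // guard against an empty cluster
        if (n1) c1 = sum1 / n1;
    }
    return {c0, c1};
}
```

Even this toy shows the iterate-until-stable shape that most ML fitting loops share.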

+1 for jonnin mentioning multi-threading. However, note that this complicates everything, and proper profiling should be done before you decide that multi-threading is what's needed for a task.
C++17 did bring in execution policies: https://en.cppreference.com/w/cpp/algorithm/execution_policy_tag_t
Therefore, some parallelism is creeping in.


I have an algorithm that solves a "biological problem". In order to do so, it resorts to algorithms that resort to math, like Fourier correlation or quaternions. Some of those perform their task by calling yet other algorithms for subtasks. The point is, the word "algorithm" may mean more than small, confined subtasks.
Thanks guys, very good info :)

Thanks for sharing. Yes, graphs and trees are something I need to learn, as they are not implemented in the standard library.

Multi-threading isn't something I have learned yet, but I think I am just about to hit that level. I would also like to do some GUI programming; the only thing that deters me is what type of GUI application I could write. I'm sure there are tons of ideas, but I couldn't do anything complicated like a text editor or even a media player. I've used Qt, and the interface is pretty straightforward and the library seems pretty extensive.

So yeah, I think next on my agenda is to learn multi-threaded programming and graphs.

Just one more thing to ask before closing the thread: do you think there is any point in me learning the algorithms already implemented in the STL? I know they may come into play if I were programming embedded systems where these libraries wouldn't be available to me, but apart from that, is there much benefit in knowing them?

thanks :)
Well, yes :)

One way to write new, interesting code that does new, interesting things is to take what has already been done, understand it, and make it better or make it do something new. If you learn the STL algorithms, you have some building blocks and a starting place to do something new, or to do something better. I mean, any kid with a little programming experience can solve a moderately challenging problem via brute force. That is not generally the best way to do stuff, though.

I don't really know how to make this point. Let me put it this way: my old algorithms book from college is as big as my college physics book, three times as thick as my first C++ book, as big as a GOOD dictionary!! That is stuff we already know how to do, and a lot of it is coded up and ready to use somewhere in cyberspace. But without those books, without the breakdown of how each algorithm works and what its big-O performance is, without some background in the topic... how would I know, when faced with a new problem, whether an algorithm exists for it? And if not, how do I know whether it can be solved in a more elegant way than brute force, and if it can, what the best way is? The knowledge of what has gone before helps there: how to analyze the problem and the algorithm, whether it looks like some other problem with a twist, and so on. That is how.

I don't recommend going off to become an algorithm expert or analyst. But I do recommend digging into algorithms a bit. Remember, for that, you are not re-inventing them. You are *studying* them (how they work, and why they are the best or not the best, etc.) so you can invent something else.

This is tied to a concept that I feel is critical to get as a programmer, and it often comes late, rarely in school. It's really simple: if you understand what something DOES, and ignore what it is CALLED or what you were told it was "designed to do", you can adapt it much more easily and use it for more stuff. Things like slope = tan(angle), or x / 2 being the same as x >> 1 for non-negative integers, but at an algorithmic level too. Things like bucket sort being used to count items or deduplicate them...
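As a sketch of that last idea (a hypothetical helper, assuming small non-negative integer keys): the same counting pass that underlies bucket sort also counts occurrences and hands back the distinct values, already in sorted order.

```cpp
#include <vector>

// Bucket sort's counting pass doing double duty: count each key, then walk
// the buckets in order. The walk yields the distinct keys, deduplicated and
// sorted, in O(n + k) time for keys in [0, k).
std::vector<int> sorted_unique(const std::vector<int>& keys, int k) {
    std::vector<int> count(k, 0);
    for (int key : keys) ++count[key];       // bucket / counting pass
    std::vector<int> out;
    for (int v = 0; v < k; ++v)
        if (count[v] > 0) out.push_back(v);  // dedup + sort come for free
    return out;
}
```

Nobody "designed" bucket sort to deduplicate, but understanding what the counting pass DOES makes uses like this obvious.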