I know bubble sort off the top of my head as a sorting algorithm |
I would never consider bubble sort to be an "essential" algorithm; I can't think of any reason to use it other than as a beginner learning exercise. Maybe if you know there are fewer than log(n) adjacent misordered elements, but even then you'd have to give me a pretty good reason.
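For reference, here's roughly what bubble sort with the usual early-exit looks like (my own quick sketch sorting a vector<int>; the names are made up, nothing official):

#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort sketch: repeatedly swap adjacent out-of-order elements;
// stop early once a full pass makes no swaps, so nearly-sorted input
// finishes in only a few passes.
void bubble_sort(std::vector<int>& v)
{
    bool swapped = true;
    for (std::size_t n = v.size(); swapped && n > 1; --n)
    {
        swapped = false;
        for (std::size_t i = 0; i + 1 < n; ++i)
        {
            if (v[i + 1] < v[i])
            {
                std::swap(v[i], v[i + 1]);
                swapped = true;
            }
        }
    }
}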
and maybe finding biggest or smallest numbers in an array |
<algorithm> can do that.
http://en.cppreference.com/w/cpp/algorithm/min_element
http://en.cppreference.com/w/cpp/algorithm/max_element
http://en.cppreference.com/w/cpp/algorithm/minmax_element
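Quick example of those three (just a throwaway sketch with made-up values):

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v { 4, -2, 17, 8, 0 };

    auto smallest = std::min_element(v.begin(), v.end());
    auto largest  = std::max_element(v.begin(), v.end());
    auto both     = std::minmax_element(v.begin(), v.end()); // pair of iterators

    std::cout << "min: " << *smallest << '\n'
              << "max: " << *largest  << '\n'
              << "min/max in one pass: " << *both.first
              << ' ' << *both.second << '\n';
}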
The first thing I thought of when reading your post was trees. The C++ standard library uses trees in the implementation of some of its data structures, but there isn't any generic Tree<T> class for a user to have. This is probably for the best, because tree functionality often needs to be very specific and tailored to a particular problem. There are many different types of trees, and hundreds of ways to implement them. The more general form of a tree is a graph, and there are many algorithms associated with graphs, such as A* search, that you aren't going to find in any standard library.
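To show why a one-size-fits-all Tree<T> doesn't really exist, here's about the most bare-bones node you could write (my own sketch; a real tree still has to pick ownership, balancing, traversal, and so on, which is exactly the problem-specific part):

#include <memory>
#include <utility>

// Bare-bones binary tree node; a real tree would add balancing,
// iterators, deletion strategy, etc., depending on the problem.
template <typename T>
struct TreeNode
{
    T value;
    std::unique_ptr<TreeNode> left;
    std::unique_ptr<TreeNode> right;

    explicit TreeNode(T v) : value(std::move(v)) {}
};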
Other examples are NP-complete or NP-hard problems, like solving sudoku, packing problems, and the travelling salesman problem; integer factorization (which isn't even known to be NP-complete); and pure-math algorithms like finding prime numbers, or certain classes of prime numbers (a quick prime-finding sketch follows the links below).
https://en.wikipedia.org/wiki/Bin_packing_problem
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics
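For the prime-number side of things, here's a quick sieve of Eratosthenes sketch (my own code; nothing like this ships in the standard library, so you end up writing it yourself):

#include <cstddef>
#include <vector>

// Sieve of Eratosthenes sketch: returns primality flags for 0..limit.
std::vector<bool> sieve(std::size_t limit)
{
    std::vector<bool> is_prime(limit + 1, true);
    is_prime[0] = false;
    if (limit >= 1) is_prime[1] = false;

    for (std::size_t i = 2; i * i <= limit; ++i)
        if (is_prime[i])
            for (std::size_t j = i * i; j <= limit; j += i)
                is_prime[j] = false;   // every multiple of a prime is composite

    return is_prime;
}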
As you get into large codebases, organization and code maintenance can become harder problems than the algorithms themselves.
also, are there any advantages to using your own algorithms over ones that have already been written? I mean, why reinvent the wheel? |
I assume here you're talking about algorithms from all available sources, and not just the standard library. I would always recommend trying to find an existing library to suit your needs before writing your own, but there's no single answer here. Sometimes, in a complicated codebase, it's easier to make your own implementation than to depend on code a third party wrote. Other times, licensing issues get in the way of using code in a product you're actually going to sell. And sometimes, if you run into a bug, it takes an hour to re-write the functionality yourself versus two days to understand and fix the existing code.
See:
https://en.wikipedia.org/wiki/Not_invented_here
jonnin brought up good points about the bleeding edge of tech. Things like machine learning and related algorithms are not going to be found in most standard libraries. Advanced computer-imaging algorithms, for example, are often not available to the public, as companies like to keep those as secret as possible (a rough k-means sketch follows the links below).
https://en.wikipedia.org/wiki/K-means_clustering
https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution
https://en.wikipedia.org/wiki/Deep_learning
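As promised, here's a rough k-means sketch (plain Lloyd's algorithm on 2D points with a fixed iteration count and made-up names; real libraries do much smarter initialization and convergence checks):

#include <array>
#include <cstddef>
#include <vector>

using Point = std::array<double, 2>;

double sq_dist(const Point& a, const Point& b)
{
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return dx * dx + dy * dy;
}

// Lloyd's algorithm: alternate assigning each point to its nearest
// centroid and moving each centroid to the mean of its assigned points.
std::vector<Point> kmeans(const std::vector<Point>& pts,
                          std::vector<Point> centroids,   // initial guesses
                          int iterations = 20)
{
    for (int it = 0; it < iterations; ++it)
    {
        std::vector<Point> sums(centroids.size(), Point{0.0, 0.0});
        std::vector<std::size_t> counts(centroids.size(), 0);

        for (const Point& p : pts)
        {
            std::size_t best = 0;
            for (std::size_t c = 1; c < centroids.size(); ++c)
                if (sq_dist(p, centroids[c]) < sq_dist(p, centroids[best]))
                    best = c;
            sums[best][0] += p[0];
            sums[best][1] += p[1];
            ++counts[best];
        }

        for (std::size_t c = 0; c < centroids.size(); ++c)
            if (counts[c] > 0)
                centroids[c] = Point{ sums[c][0] / counts[c],
                                      sums[c][1] / counts[c] };
    }
    return centroids;
}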
+1 for jonnin mentioning multi-threading. However, note that it complicates everything, and proper profiling should be done before you decide that multi-threading is actually what a task needs.
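As a trivial sketch of what that can look like, splitting a sum across two threads with std::async (whether this actually beats the single-threaded loop is exactly what profiling has to tell you):

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<long long> data(10'000'000, 1);
    auto mid = data.begin() + data.size() / 2;

    // Sum the first half on another thread while this thread does the rest.
    auto first_half = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), mid, 0LL);
    });
    long long second_half = std::accumulate(mid, data.end(), 0LL);

    std::cout << "total: " << first_half.get() + second_half << '\n';
}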