I thought about two concepts
that come from sorting files,
because both linked lists and files have sequential access.
Straight merge
Nodes from the original list are distributed alternately into two auxiliary lists.
We don't care about the existing order of nodes;
we just assume that initially each block has
size one and doubles after each merge pass.
If after distribution one of the auxiliary lists is empty, then we stop.
First we merge blocks (sorted subsequences), and then we merge the lists.
It looks like we will need to introduce counters,
which may cause an overflow error
(for example, older compilers had 16-bit integers;
current compilers have 32-bit or 64-bit integers).
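A straight (bottom-up) merge on a singly linked list might be sketched like this. The `Node` struct and the `straightMergeSort`, `split`, and `merge` names are my own illustration, not code from anyone in this thread:

```cpp
#include <cstddef>

struct Node {
    int value;
    Node* next;
};

// Detach the first `n` nodes of `head`; return whatever follows them.
static Node* split(Node* head, std::size_t n) {
    while (head != nullptr && --n > 0) head = head->next;
    if (head == nullptr) return nullptr;
    Node* rest = head->next;
    head->next = nullptr;
    return rest;
}

// Append the merge of two sorted sublists after `tail`; return the new tail.
static Node* merge(Node* a, Node* b, Node* tail) {
    while (a != nullptr && b != nullptr) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != nullptr) ? a : b;
    while (tail->next != nullptr) tail = tail->next;
    return tail;
}

// Straight merge: treat the list as runs of width 1, 2, 4, ...
// and merge adjacent runs until one run covers the whole list.
Node* straightMergeSort(Node* head) {
    std::size_t length = 0;
    for (Node* p = head; p != nullptr; p = p->next) ++length;

    Node dummy{0, head};
    for (std::size_t width = 1; width < length; width *= 2) {
        Node* tail = &dummy;
        Node* cur = dummy.next;
        while (cur != nullptr) {
            Node* a = cur;
            Node* b = split(a, width);
            cur = split(b, width);
            tail = merge(a, b, tail);
        }
    }
    return dummy.next;
}
```

Note that the only counter here is `width`, which never grows beyond the list length, so overflow is a non-issue as long as its type is wide enough to hold the length (`std::size_t` here).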
Natural merge
The main difference is that we check for existing sorted subsequences
in the distribution step.
Natural merge does not need counters, so an overflow error cannot occur.
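A natural merge variant could look like the sketch below: each pass detects the existing ascending runs and merges them pairwise, stopping when a pass finds only one run. Again, `Node`, `cutRun`, `merge`, and `naturalMergeSort` are illustrative names of my own:

```cpp
struct Node {
    int value;
    Node* next;
};

// Detach the leading ascending run; return whatever follows it.
static Node* cutRun(Node* head) {
    while (head->next != nullptr && head->value <= head->next->value)
        head = head->next;
    Node* rest = head->next;
    head->next = nullptr;
    return rest;
}

// Append the merge of two sorted runs after `tail`; return the new tail.
static Node* merge(Node* a, Node* b, Node* tail) {
    while (a != nullptr && b != nullptr) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != nullptr) ? a : b;
    while (tail->next != nullptr) tail = tail->next;
    return tail;
}

// Natural merge: repeatedly merge adjacent ascending runs until a
// whole pass finds only one run. No counters are needed.
Node* naturalMergeSort(Node* head) {
    if (head == nullptr) return nullptr;
    for (;;) {
        Node dummy{0, nullptr};
        Node* tail = &dummy;
        Node* cur = head;
        bool mergedAny = false;
        while (cur != nullptr) {
            Node* a = cur;
            Node* b = cutRun(a);
            if (b == nullptr) { tail->next = a; break; }  // lone run: carry it over
            cur = cutRun(b);
            tail = merge(a, b, tail);
            mergedAny = true;
        }
        head = dummy.next;
        if (!mergedAny) return head;  // one run covered everything: sorted
    }
}
```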
Any details on how to write working code for these functions?
Have you ever sorted data with insufficient RAM?
I would like to adapt this concept to linked lists, because both
linked lists and files have sequential access.
I have implemented external sort, yes.
I don't understand your description of "natural merge" and "straight merge". Actually, neither algorithm looks like the merge algorithm used in external sort.
I also don't understand why you'd want to use external sort to sort linked lists. External sort is designed to sort sequences using two types of memories with very different speeds. If your input linked list fits entirely in RAM then just use normal merge sort.
Sorting doesn't involve modifying the data, just changing its order. So there is no danger of overflow while sorting. If any overflow occurs, it will happen in whatever code creates the data that you sort. Let that code worry about it.
dhayden, you suggest keeping the recursion in merge sort and
not replacing it with iteration.
Will that be convenient if we want to sort points for a convex hull,
or whenever we need our own comparison function?
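For what it's worth, a custom comparison is easy to plug in either way. For example, `std::list::sort` accepts an arbitrary comparator, so sorting points lexicographically (the usual first step of a convex hull scan) needs no recursion on the caller's side; the `Point` struct and `sortPoints` name are just illustrative:

```cpp
#include <list>

struct Point { double x, y; };

// Sort points by x, breaking ties by y, via std::list's built-in
// (typically iterative, merge-based) sort with a custom comparator.
void sortPoints(std::list<Point>& pts) {
    pts.sort([](const Point& a, const Point& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
}
```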
In external sort we only create the initial runs in internal memory,
and if we skip this step we can adapt the algorithm to a linked list.
" then just use normal merge sort"
but i would like to avoid recursion and if we ignore creating initial runs
in external sort we can adapt it to linked list
If your goal is just to sort your data then use one of the many existing sort libraries. The authors have probably thought their code through carefully and you'll get your code working faster.
On the other hand, if your goal is to learn about sorting algorithms, then I think it would be helpful if you posted some actual code that we can comment on.
There are a number of problems for which rewriting non-recursively is a royal pain. This is that category of problem -- it's doable (last I studied, all recursion can be recreated with iteration, if you really want to go there), but why?
If you keep both lists sorted, there is the special-case merge of just peeling off the lower of the two head values until both lists are empty. That means either a relatively inefficient sorted-insert list or a different list-sorting algorithm.
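That special case is the classic iterative two-list merge, which might be sketched like this (the `Node` struct and `mergeSorted` name are illustrative):

```cpp
struct Node {
    int value;
    Node* next;
};

// Merge two already-sorted lists by repeatedly peeling off the
// smaller of the two head values. Purely iterative.
Node* mergeSorted(Node* a, Node* b) {
    Node dummy{0, nullptr};
    Node* tail = &dummy;
    while (a != nullptr && b != nullptr) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != nullptr) ? a : b;  // append the leftovers
    return dummy.next;
}
```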
And there is the slow version of the above where you scan each unsorted list for its least value and take the lesser of the two (N*N effort) as a combined 'merge via bubble' mess. That would be easy to do without recursion, but it's terrible. Doing it right without recursion takes much more hand-waving to get the loops right.
I generally operate on an 'if it needs to be sorted, a list is not the best pick' approach. That isn't always true, but I give it a long hard look before I justify a list for that data.
Well, you can sort a list in linearithmic time and linear space if you make an array of pointers to nodes, sort the array (taking the extra level of indirection into account), and then rebuild the list according to the array's order.
So sorting a list isn't necessarily a performance killer (asymptotically speaking), as long as you have enough memory.
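A sketch of that pointer-array trick, with an illustrative `Node` struct and `sortViaPointerArray` name:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Node {
    int value;
    Node* next;
};

// Copy the node pointers into a vector, sort the vector with one
// extra level of indirection, then relink the nodes in that order.
Node* sortViaPointerArray(Node* head) {
    std::vector<Node*> nodes;
    for (Node* p = head; p != nullptr; p = p->next) nodes.push_back(p);
    if (nodes.empty()) return nullptr;

    std::sort(nodes.begin(), nodes.end(),
              [](const Node* a, const Node* b) { return a->value < b->value; });

    for (std::size_t i = 0; i + 1 < nodes.size(); ++i)
        nodes[i]->next = nodes[i + 1];
    nodes.back()->next = nullptr;
    return nodes.front();
}
```

This also sidesteps the recursion question entirely, since `std::sort` handles the ordering and the relinking loop is plain iteration.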