Help me with these conceptual algorithm questions please

A. Given a graph, assume we computed the minimum-weight paths using Dijkstra's algorithm (maintaining a predecessor array). If we add a constant C to all edge weights and rerun the algorithm on the updated graph, the same edges will be selected when computing the shortest paths. T/F?

B. Kruskal's algorithm cannot compute a minimum spanning tree when a negative cycle is present in the graph. T/F?

C. No minimum weight path can be computed if a negative cycle is present in the graph. T/F?

D. Traversing an AVL tree using inorder traversal takes O(log n) time. T/F?

E. For a graph with |V| nodes, DFS will have a faster runtime if the graph is sparse than if the graph is dense. T/F?

F. All trees are graphs. T/F?

G. There is more than one possible minimum spanning tree for a graph if the edge weights are not all unique. T/F?

H. If we use build-heap to create a min-heap out of an array, we obtain a sorted list in ascending order. T/F?

I. The lower bound for sorting is Ω(n log n), which implies that in the worst case no sorting algorithm can have a better runtime than O(n log n). T/F?

J. For separate chaining in hashing, if each entry contained a pointer to the root node of an AVL tree (instead of the head of a linked list), we could improve the search times when many collisions occur. T/F?

K. Given a graph with unique edge weights, assume we selected the edges of the minimum spanning tree; call this set MST. If we add some constant C to every edge weight and run Kruskal's algorithm on the updated graph, all of the edges selected will be in the set MST. T/F?

So far I have:
A. False
B. False
C. False
D. True
E. True
F. False

Are these correct so far? What about the other ones? Just trying to understand these concepts more.
This is what I think, and it might be wrong or inaccurate; I've coded Dijkstra's algorithm plenty of times before:

A. True. If a constant C is applied to all the edge weights, then there's no reason for the edges selected to change. For example, let's say we had a path with these edge weights:

5 2 6 7

Assume this is the shortest path. If we added 10 to all weights, that specific path will still be the shortest path; simple algebra shows:

2 < 3
2 + 10 < 3 + 10
12 < 13
What about these?

part A:

Select all the correct responses from the list below that cause a find/insert operation in a hash data structure to slow down:

A. If separate chaining is used
B. A hash function that maps the keys to only a few elements in the table
C. High load factor
D. If we use double hashing to map keys only to 1 table

part B:

Choose all the advantages of double hashing over linear probing from the list below:

A. Uses less memory
B. Slower lookup time due to having a second hash function
C. Less likely to have many collisions
D. Always has a faster lookup time
E. Less clustering
If this were my worksheet, I'd have some pretty good guesses, and I'm sure many of these could be Googled.

I don't know the answers to most of these off the top of my head. Perhaps someone else on here can help you more.
Your answer to the original post is wrong.
If you add a positive constant to all edges, then the best path may well change, because the addition gives more relative advantage to paths with fewer edges.

Trivial example: 1+1 beats 3.
However, after adding 100 to each edge, 101+101 does not beat 103.
Yes, that makes sense. I was assuming the same number of edges for all paths for some reason.
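To see this concretely, here's a quick sketch of Dijkstra (with a predecessor map, like the question describes) run on a made-up three-node graph; the graph and helper names are just for illustration:

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra with a predecessor map; adj maps node -> list of (neighbor, weight)."""
    dist = {src: 0}
    pred = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter route was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pred[v] = u
                heapq.heappush(pq, (nd, v))
    # walk the predecessor map back from dst to recover the path
    path = [dst]
    while path[-1] != src:
        path.append(pred[path[-1]])
    return list(reversed(path)), dist[dst]

# Two-edge path A->B->C costs 1+1 = 2, beating the direct edge A->C of cost 3.
g = {"A": [("B", 1), ("C", 3)], "B": [("C", 1)]}
print(shortest_path(g, "A", "C"))   # (['A', 'B', 'C'], 2)

# Add 100 to every edge weight: now 101+101 = 202 loses to 103.
g2 = {u: [(v, w + 100) for v, w in es] for u, es in g.items()}
print(shortest_path(g2, "A", "C"))  # (['A', 'C'], 103)
```

The two-hop path wins before the shift and loses after it, so the selected edges change: statement A is false.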
Low-hanging fruit:

I is not detailed enough. It's technically false: there are O(N) sorting algorithms for certain limited scenarios (integers in a range that is small relative to the computer hardware). If you restrict it to general-purpose comparison-based algorithms, the N log N bound holds.
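The classic example of such an O(N) algorithm for a limited range is counting sort; here's a minimal sketch (names and the sample input are just for illustration):

```python
def counting_sort(values, max_value):
    """Sort non-negative integers in [0, max_value] in O(n + max_value) time.
    Not comparison-based, so the Omega(n log n) lower bound doesn't apply."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)  # emit each value as many times as it occurred
    return out

print(counting_sort([3, 1, 4, 1, 5, 2], 5))  # [1, 1, 2, 3, 4, 5]
```

It trades the comparisons for an auxiliary array of size max_value + 1, which is why it only pays off when the range is small.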

J .. linked list? Tree? If you have so many collisions that you need a container to deal with them, you have bigger problems than which data structure to use. But yes, a tree searches faster than a linked list.
