What's the time complexity of .size() when the implementation maintains a count as items are inserted/deleted?

I was tempted to say it's linear in the number of .insert() and .delete() calls, but that isn't correct: it doesn't take any longer to compute and return the size, no matter how many times insert/delete have been called.

The cost of "overhead work" is amortized over each call to insert() and delete(). How is that described in terms of Big-O analysis?

The idea of amortized work applies elsewhere [1] @9:20-9:51. There, it's used to let each AVL tree node maintain the "height of every subtree", i.e., the length of the longest path from a given node down to any leaf. Besides vector and AVL tree, I can see this being useful in a List implementation.
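As a rough sketch of that idea (my own illustration, not code from the lecture; the Node layout and the update_height name are assumptions), each AVL node can cache the height of its subtree and refresh it in O(1) whenever a child changes, so reading the height never requires walking down to the leaves:

   #include <algorithm>

   struct Node {
      int key;
      int height = 0;          // cached height of the subtree rooted at this node
      Node* left = nullptr;
      Node* right = nullptr;
   };

   // Height of a possibly-empty subtree; by convention an empty subtree has height -1.
   int height(Node* n) { return n ? n->height : -1; }

   // O(1) "overhead work": call this after a child of n changes (insert, delete, rotation).
   void update_height(Node* n) {
      n->height = 1 + std::max(height(n->left), height(n->right));
   }

The pattern is the same as the count member in the code below: pay a constant amount of extra work at each update so the query itself stays O(1).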

template <class T>
class vector {
   static const int MAX_BYTES = 1024; // fixed capacity (element count), defined so the example compiles
   T data[MAX_BYTES];
   int count = 0;
public:
   void insert(T item) {
      // ... Do stuff to insert item into data
      count++;  // Overhead work to reduce complexity of .size()
   }

   void remove(T item) {   // renamed from delete(), which is a C++ keyword
      // ... Do stuff to delete item from data
      count--;  // Overhead work to reduce complexity of .size()
   }

   int size() { return count; } // Is it considered constant time?
};


[1] YouTube: Lecture 6: AVL Trees, AVL Sort, https://youtu.be/FNeL18KsWPc
You're overthinking this. The time complexity of calling size() is constant (O(1)).
It just returns the value of the member variable count, which takes the same time for all vectors.
No matter what code you put inside the other member functions, insert() and delete(), it won't affect the time complexity of size().
To avoid this confusion, make sure you know which operation, or sequence of operations, you're analyzing before you start.

Here's an exercise where amortized analysis gives a surprising (and practically important) result:

The standard vector operates like your vector, except that if there is no more space in the data array, a larger block of memory is allocated and the old elements are copied from the old memory block to the new memory block.

The sizes of the memory blocks follow a geometric progression. For example, if the first memory block has size 2, the second memory block would be proportionally larger, of size a*2. If a = 2, the block sizes would go 2, 4, 8, 16, and so on.

a. What is the worst-case time complexity of a single call to std::vector::push_back?
b. What is the worst-case time complexity of a sequence of N calls to std::vector::push_back?
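To make the reallocation step concrete, here is a minimal sketch of a growth-by-doubling push_back (my own illustration of the scheme described above, not the real std::vector; the name growing_vector is made up). The copy loop is the part whose cost the amortized analysis has to account for:

   #include <cstddef>

   template <class T>
   class growing_vector {
      T* data = nullptr;
      std::size_t count = 0;     // number of elements currently stored
      std::size_t capacity = 0;  // size of the allocated block

   public:
      void push_back(const T& item) {
         if (count == capacity) {
            // No room left: allocate a block twice as large (growth factor a = 2)...
            std::size_t new_capacity = (capacity == 0) ? 2 : 2 * capacity;
            T* new_data = new T[new_capacity];
            // ...and copy every existing element across. This is the expensive step.
            for (std::size_t i = 0; i < count; ++i)
               new_data[i] = data[i];
            delete[] data;
            data = new_data;
            capacity = new_capacity;
         }
         data[count++] = item;   // the cheap part: always O(1)
      }

      std::size_t size() const { return count; }

      ~growing_vector() { delete[] data; }
   };

Most calls execute only the cheap last line of push_back; the expensive copy runs only on the calls where count has just reached the current capacity.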

Peter87 wrote:
The time complexity of calling size() is constant (O(1)).
It just returns the value of the member variable count, which takes the same time for all vectors.


I wasn't sure how to think about the extra work done in insert/delete to allow size() to be O(1). But it seems we've just added "constant overhead" to optimize what would otherwise be an O(N) operation. Since the run-time of size() doesn't grow with the number of elements inserted/deleted, it's O(1).

mbozzi wrote:

a. What is the worst-case time complexity of a single call to std::vector::push_back?
b. What is the worst-case time complexity of a sequence of N calls to std::vector::push_back?


I hinted at the wrong idea. Adding "constant overhead" has nothing to do with amortized analysis.
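For anyone who finds this thread later, the standard counting argument for mbozzi's exercise (assuming growth factor a = 2 and an initial block of size 2, as in the example above) goes roughly like this:

a. A single push_back is worst-case O(N): when the block is full, all N existing elements must be copied into the new, larger block.

b. Over a sequence of N push_backs, a copy of c elements happens only when a full block of size c is replaced, so the total number of element copies is at most

   2 + 4 + 8 + ... + c_last < 2N   (c_last is the last capacity that filled up, and c_last <= N)

Adding the N cheap writes, the whole sequence costs O(N), i.e., O(1) amortized per push_back, even though an individual call can be O(N).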