When should I break up math equations or pre-calculate values, especially when concerning loops?

Pages: 12
closed account (Ezyq4iN6)
@jonnin

Thanks for explaining that. Now that I have a better idea of what the processor has built in, I can plan accordingly when to actually use lookup tables. Your factorial example is pretty much how I imagined lookup tables: something that basically provides you with pre-calculated constants you can use to simplify calculations.

The loop example makes dynamic memory make more sense to me. Keep allocations out of loops if at all possible, and reuse them if you need them in loops. I take it fixed-size arrays or ordinary variables declared in loops/functions are okay, because the compiler has already set aside the memory to use each time they are called, right?
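For what it's worth, here is a quick sketch of the two patterns (the functions are made up, just to illustrate). A fixed-size local array costs nothing extra per iteration, because the stack frame is set up once per call; a dynamic buffer is best allocated once, outside the loop, and reused:

```cpp
#include <cstddef>
#include <vector>

// Cheap: a fixed-size local array lives in the function's stack frame,
// which is set up once per call -- entering the loop body allocates nothing.
int sum_stack(const int* src, std::size_t n)
{
    int total = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        int scratch[4] = { src[i], 0, 0, 0 };  // stack space, no allocation
        total += scratch[0];
    }
    return total;
}

// The pattern for dynamic buffers: allocate once, hoisted out of the loop,
// and reuse the same storage each iteration.
int sum_heap_reused(const int* src, std::size_t n)
{
    std::vector<int> scratch(4);   // one heap allocation, before the loop
    int total = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        scratch[0] = src[i];       // reuse, don't reallocate
        total += scratch[0];
    }
    return total;
}
```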

One of the more recent places I have seen the pointers with classes are when using SDL2.

  SDL_Renderer *renderer;
  SDL_Window* window;
  TTF_Font *font;

  // When they are no longer needed I do this.
  SDL_DestroyRenderer(renderer);
  renderer = NULL;
  SDL_DestroyWindow(window);
  window = NULL;
  TTF_CloseFont(font);
  font = NULL;


My confusion is why I would need to destroy the objects and set the pointers to null, especially in the destructor of a class or at the end of main. Wouldn't they automatically be destroyed and the memory reclaimed when the class or program ends? It just seems to me that making a plain object would make more sense, unless, as AbstractionAnon pointed out, it requires polymorphism.
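One way to see why the explicit destroy calls matter: when a raw pointer goes out of scope, only the pointer variable dies; whatever it points at is untouched. A common cure is to wrap the create/destroy pair in a small class whose destructor does the cleanup. Here is a minimal sketch, with a made-up create/destroy pair standing in for the real SDL functions:

```cpp
// Hypothetical C-style API standing in for SDL's create/destroy pairs.
// The library owns the resource; letting the pointer die frees nothing.
struct Resource { bool alive; };
static int live_resources = 0;

Resource* create_resource()        { ++live_resources; return new Resource{ true }; }
void destroy_resource(Resource* r) { if (r) { --live_resources; delete r; } }

// A minimal RAII wrapper: the destructor calls the destroy function for us,
// so cleanup happens automatically when the wrapper goes out of scope.
class ResourceHandle {
    Resource* ptr_;
public:
    ResourceHandle() : ptr_(create_resource()) {}
    ~ResourceHandle() { destroy_resource(ptr_); }
    ResourceHandle(const ResourceHandle&) = delete;            // no accidental copies
    ResourceHandle& operator=(const ResourceHandle&) = delete;
    Resource* get() const { return ptr_; }
};
```

With a wrapper like this in a class member or a local variable, you never write the destroy call by hand, and there is no pointer left over to set to NULL.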

So only one thread per core then? Does the programmer choose the number of threads to use, or do they simply say "use as many as you can" in multi-threading? Should all programs be multi-threaded then, or is it a waste of system resources for programs that don't really need an increase in performance?

Would the greyscale tactic still be faster if you had to do a search each game/movie frame, but the frames changed so you would need to compute a greyscale version of the image each time? The scaled version sounds interesting too and I guess also stands to save you a lot of search time. It really seems like the tactic would work great for analyzing non-changing images that you could index and then grab whatever you needed quickly.

Disclaimer: I haven’t read the entire thread... <me bad, I know>

You are unlikely to perform any kind of trigonometric function faster than your CPU can.
The reality is that the CPU itself uses lookup tables plus some other fancy optimizations to get your answer, and the hardware will always beat out software for something like this.
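Just to illustrate the idea, here is a toy software lookup table for sine -- far coarser than what the hardware does, and with no refinement steps, but it shows the trade: a cheap array index in exchange for reduced accuracy:

```cpp
#include <array>
#include <cmath>
#include <cstddef>

constexpr double      PI         = 3.14159265358979323846;
constexpr std::size_t TABLE_SIZE = 1024;   // table resolution: 2*pi / 1024 per entry

// Nearest-entry table lookup for sin(x); max error is roughly one table step.
double table_sin(double x)
{
    // Fill the table once, on first use.
    static const std::array<double, TABLE_SIZE> table = [] {
        std::array<double, TABLE_SIZE> t{};
        for (std::size_t i = 0; i < TABLE_SIZE; ++i)
            t[i] = std::sin(2.0 * PI * i / TABLE_SIZE);
        return t;
    }();

    double turns = x / (2.0 * PI);    // map x onto [0, 1) turns of the circle
    turns -= std::floor(turns);
    std::size_t i = static_cast<std::size_t>(turns * TABLE_SIZE) % TABLE_SIZE;
    return table[i];                  // no interpolation -- accuracy suffers
}
```

A real implementation would interpolate between entries and do argument reduction much more carefully; the hardware does all of that and more, which is why it is so hard to beat.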


The only difference between a pointer and a reference is from the point of view of the language (C++). Because of the way references are defined (by the language), it is possible for the compiler to optimize them significantly in specific circumstances.

But in general, you cannot expect any difference in speed when copying large objects: the copy itself will dwarf any necessary indirection.
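Put another way: a pointer parameter and a reference parameter both avoid the copy entirely; it is only pass-by-value that pays the O(n) cost. A sketch that counts copies (`Big` is a made-up type):

```cpp
#include <vector>

static int copy_count = 0;

// A deliberately heavyweight type: copying it copies 1000 ints.
struct Big {
    std::vector<int> payload = std::vector<int>(1000);
    Big() = default;
    Big(const Big& other) : payload(other.payload) { ++copy_count; }
};

void by_value(Big)            {}  // one full copy per call
void by_reference(const Big&) {}  // no copy -- just an address under the hood
```

The indirection a reference (or pointer) costs is a single address; the copy costs a thousand elements. That is the dwarfing.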


Sorry if I’m repeating stuff jonnin has already said.
closed account (Ezyq4iN6)
@Duthomhas

So is the optimization, even if slight, part of the reason references are preferred over pointers in C++?
No, the preference is for the convenient abstraction it provides. Optimization is a secondary concern (though a big one).
The way variables work in C++ is, AFAIK, fairly unique (and awesome).

All automatic objects are stack-allocated.
However, not all of an object exists on the stack.

Let’s take a vector for an example. The vector class, in a very simplified sense, defines itself like this:

template <typename T>
struct vector
{
  T*     data;      // the actual data -- there are 'capacity' T's available for use
  size_t size;      // the amount of data used
  size_t capacity;  // the amount of data allocated

  // useful stuff goes here, like operator [] to make the vector look like an array, etc. //
};

When you create a local vector, you get that object placed on the stack — a true local object. Very useful!

However, notice that the actual data that the vector manages is not on the stack: it is on the heap (global memory pool). In other words, it must be allocated and deallocated using new[] and delete[] (or malloc() and free())*.

Consequently, doing things like copying a vector is still an O(n), memory-bound operation.
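You can see the split between the stack object and its heap data directly: the vector object itself stays tiny no matter how many elements it manages, because it is just the handle (roughly the pointer plus two sizes above). A small sketch:

```cpp
#include <cstddef>
#include <vector>

// Returns the size of the vector *object* -- the stack-resident handle --
// which is independent of how many elements the vector holds on the heap.
std::size_t object_size(const std::vector<int>&)
{
    return sizeof(std::vector<int>);
}
```

A vector of a million ints and an empty vector report the same handle size; only the heap block they point at differs.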

template <typename T>
struct vector
{
  T*     data;      // the actual data -- there are 'capacity' T's available for use
  size_t size;      // the amount of data used
  size_t capacity;  // the amount of data allocated

  void realloc( size_t new_size )
  {
    if (new_size <= capacity)
    {
      // Nothing to do. Capacity does not change, but amount used shrinks.
      size = new_size;
    }
    else
    {
      // We want to use more than we have, so we need to get a bigger chunk of memory
      // and copy everything over, then get rid of the old chunk of memory.

      // Create a new location for the data
      T* new_data = new T[ new_size ];

      // Copy it from the old to the new location
      std::copy_n( data, size, new_data );

      // Get rid of the old data and adjust our object variables to point to the new stuff
      delete[] data;
      data     = new_data;
      capacity = new_size;
      size     = new_size;
    }
  }
};

Hence, managing the vector variable itself is easy. But managing the data it points to is hard.

Can you see a potential problem with the above code? What happens if I pass a copy of my vector to a function, and the function asks to resize it to something larger than the original capacity? Yep. Now I have two vector objects, but the memory belonging to one of them has been deallocated without its knowledge.
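Here is that hazard boiled down to a runnable sketch (a stripped-down stand-in for the struct above): the compiler-generated copy duplicates the three members, so both copies end up pointing at the same heap block, and growing one of them frees the block out from under the other:

```cpp
#include <cstddef>

// A naive vector: the compiler-generated copy just duplicates these three
// members, so a copy SHARES the original's heap block.
struct naive_vector {
    int*        data;
    std::size_t size;
    std::size_t capacity;
};

naive_vector make(std::size_t n)
{
    return { new int[n], n, n };
}

// The dangerous grow: frees the old block, which any shallow copy still
// points at -- that copy is now dangling without its knowledge.
void grow(naive_vector& v, std::size_t new_size)
{
    int* bigger = new int[new_size];
    delete[] v.data;       // pulls the rug out from under every copy!
    v.data = bigger;
    v.size = v.capacity = new_size;
}
```

After `naive_vector b = a; grow(a, 100);` the pointer `b.data` still holds the freed address: using it is undefined behavior, and letting both objects eventually free it is a double delete.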

We fix this in one of two ways: copying on assignment/copy construction, or reference counting (copy on write). For our simple example (and this is the way std::vector does it), we will just copy on assignment/copy operations:

template <typename T>
struct vector
{
  T*     data;      // the actual data -- there are 'capacity' T's available for use
  size_t size;      // the amount of data used
  size_t capacity;  // the amount of data allocated

  void realloc( size_t new_size )
  {
    ...
  }

  vector& operator = ( const vector& that )  // assignment operator
  {
    if (this == &that) return *this;  // guard against self-assignment
    delete[] this->data;              // release our old block first
    this->size     = that.size;
    this->capacity = that.size;       // allocate exactly what we copy
    this->data     = new T[ that.size ];
    std::copy_n( that.data, that.size, this->data );
    return *this;
  }

  vector( const vector& that )  // copy constructor
    : data(nullptr), size(0), capacity(0)
  {
    *this = that;
  }
};

Heh, it only gets messier the deeper you dig.

The whole point, though, is that for any large data structure**, you are bound to use a pointer to global memory. And you must maintain that data carefully.

Since it is such a pain, the Standard Library provides us with useful classes, such as std::vector, that keep track of all this properly in the background for you.

And, from your point of view, you can create and pass around a vector as if it were a local (stack) variable:

  std::vector <int> primes = { 2, 3, 5, 7, 11, 13, 17 };

  primes = make_N_more_primes( primes, 100 );
  no_primes_with_sevens( &primes );

  cout << "no primes with sevens allowed club: list of the first 100\n";
  for (auto prime : primes) 
    cout << prime << "\n";

If I were to try to maintain this as a dynamically-allocated array (or even a statically-allocated one) it would be a lot messier and harder to read and maintain***.

Hope this helps.

*The actual std::vector is a little more complex than this, giving you the power to provide a different memory manager, among other things. You’ll have to look at its implementation to learn more; that or buy a book.

**Almost.

***Ok, maybe not so much for this tiny example, but complex programs are not built with purely tiny examples.
closed account (Ezyq4iN6)
@Duthomhas

Wow, thanks for that long and detailed explanation about vectors and dynamic memory. I guess I can see how types like that really help us out by doing the complex things for us. I have a hard enough time just trying to figure out what my program needs to do, so I am thankful for anything that reduces the extra steps.
Yer welcome!

But the point was: Using Abstractions Makes Life Easy.