Redefinition example:

    int foo = 7;
    {
        int foo = 42;
    }
    cout << foo;

What does it print and why?
It may seem simple and irrelevant, but the main point is that the foo inside the braces is a variable entirely unrelated to the foo outside them. Within the inner scope the outer foo is masked (shadowed); once the inner scope ends, the outer foo is visible again, so the cout prints 7.
Now with classes:

    struct B {
        int foo() const { return 7; }
    };
    struct D : public B {
        int foo() const { return 42; }
    };
Looks good so far. Let's use them:

    D bar;
    B * gaz = &bar;
    cout << bar.foo();
    cout << gaz->foo();
Different foos get called: bar.foo() returns 42, but gaz->foo() returns 7, because without virtual the call is resolved by the static type of the pointer (B), not by what it actually points at.
Add that "needless" virtual:

    struct B {
        virtual int foo() const { return 7; }
    };
    struct D : public B {
        int foo() const { return 42; }
    };

Now, repeat the use test. You will see that both times D::foo() gets called: with virtual, the call through gaz is resolved at run time by the dynamic type of the object (D), not by the static type of the pointer. Classes that have virtual functions are entirely different under the hood.
That is the whole idea of polymorphism. You can have a lot of objects that all share the same interface, but some of them behave a bit differently. The interface (the base class) does not even need to provide implementations for the virtual functions; they can be declared pure virtual, and then it is impossible to instantiate an object of the base type at all: the base is abstract.
You can have your algorithm use the interface. Then you can add and instantiate new derived types without touching the algorithm. They fulfill the interface, so the algorithm still works, but the new type does something different.
For example, your base class can represent an SQL database. Then you have derived types for MySQL, PostgreSQL, SQLite, etc. The way you use the database object is the same no matter which type it really is. That absolutely needs the virtual.