for that, use one of the scale algorithms I pointed to if you insist on doing it yourself.
there are hundreds of free-to-use image libraries that can do this for you. I would just get one.
one cheesy way to do it is to just average neighboring pixels.
that is, for each channel, each new-image pixel is the average of a block of old-image pixels.
you do it in 2-d, so for cutting in half specifically, the first red value is the average of the 2x2 block:
(old row 1 col 1 + old row 1 col 2 + old row 2 col 1 + old row 2 col 2) / 4
(four distinct pixels, no double counting)
and then the next one along the row is the block at cols 3 and 4, and the next output row uses old rows 3 and 4... stepping by 2 each time.
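Done over proper 2x2 blocks, the halving-by-averaging looks roughly like this. A sketch, not any particular library: the `Image` struct is made up for illustration, it is single-channel for brevity, and it assumes even dimensions. A real RGBA image just runs the same loop once per channel:

```cpp
#include <cstdint>
#include <vector>

// Minimal grayscale image: one 8-bit channel, row-major.
// (Made-up type for illustration; for color, run the same
// loop on each of the R, G, B, A channels.)
struct Image {
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels; // width * height bytes
};

// Halve an image by averaging each 2x2 block of source pixels.
// Assumes even width and height for simplicity.
Image halveByAveraging(const Image& src) {
    Image dst;
    dst.width = src.width / 2;
    dst.height = src.height / 2;
    dst.pixels.resize(static_cast<std::size_t>(dst.width) * dst.height);
    for (int y = 0; y < dst.height; ++y) {
        for (int x = 0; x < dst.width; ++x) {
            const int sx = 2 * x, sy = 2 * y;
            // Sum the four source pixels covering this output pixel.
            const int sum =
                src.pixels[sy * src.width + sx] +
                src.pixels[sy * src.width + sx + 1] +
                src.pixels[(sy + 1) * src.width + sx] +
                src.pixels[(sy + 1) * src.width + sx + 1];
            dst.pixels[y * dst.width + x] =
                static_cast<std::uint8_t>(sum / 4);
        }
    }
    return dst;
}
```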
alpha will depend on how it's used: some formats treat it as a range, some as binary (fully on or off). if yours is not using a range, you will need to decide what to do with the averaged values.
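For the binary-alpha case, one simple choice is to threshold the averaged alpha at the midpoint. The 128 cutoff here is an arbitrary pick for illustration, not any standard:

```cpp
#include <cstdint>

// If the destination format treats alpha as binary (fully opaque or
// fully transparent), an averaged alpha value has to be snapped back
// to one of the two. The 128 cutoff is an arbitrary choice.
std::uint8_t binarizeAlpha(std::uint8_t averagedAlpha) {
    return averagedAlpha >= 128 ? 255 : 0;
}
```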
no, I am not going to search the web for you.
The right algorithm... this is for you to research a little. 15 minutes reading the wiki should tell you which one you want... it's really a choice between taking more time for better quality or accepting a bit lower quality for speed.
or, for the third-plus time, get a library that supports several of them, compare the output and time taken, and choose that way.
yes, many, many open-source image-processing programs (GIMP, etc.) and libraries.
If you want something easy to code, 'nearest neighbor' (just copy the closest source pixel, no averaging at all) works, but it can give ugly results, and it's awful for small images. For very large images, the damage is minimal.
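A minimal nearest-neighbor sketch, using the same kind of made-up single-channel image type as above. Note it does no filtering, which is exactly why shrinking with it looks rough:

```cpp
#include <cstdint>
#include <vector>

// Grayscale image, row-major; made-up minimal type for illustration.
struct Gray {
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels;
};

// Nearest-neighbor resize: every output pixel just copies whichever
// source pixel it maps back onto. No filtering, so shrinking throws
// most of the source pixels away; fast, but it aliases badly.
Gray resizeNearest(const Gray& src, int newW, int newH) {
    Gray dst;
    dst.width = newW;
    dst.height = newH;
    dst.pixels.resize(static_cast<std::size_t>(newW) * newH);
    for (int y = 0; y < newH; ++y) {
        const int sy = y * src.height / newH; // map back to source row
        for (int x = 0; x < newW; ++x) {
            const int sx = x * src.width / newW; // map back to source col
            dst.pixels[y * newW + x] = src.pixels[sy * src.width + sx];
        }
    }
    return dst;
}
```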
Again, the choice is about how good you need the result to be, what your original image is like, and how much effort you want to spend rewriting this stuff. Plain bicubic is the middle ground: it gives decent results, it's a low-to-medium amount of code effort, and it runs PDQ on modern machines for typically sized images. The advanced stuff with ANNs etc. would take months or even years to reinvent/rewrite.
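A sketch of that bicubic middle ground, assuming the Catmull-Rom flavor of cubic (one common choice; tools differ on the exact spline). Edge clamping and a single `double` channel keep it short; a real resizer calls the sample function once per output pixel at the mapped-back source coordinates:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One dimension of bicubic interpolation using the Catmull-Rom spline.
// p0..p3 are four consecutive samples; t in [0,1] is the position
// between p1 and p2. At t=0 this returns p1, at t=1 it returns p2.
double cubic(double p0, double p1, double p2, double p3, double t) {
    return p1 + 0.5 * t * (p2 - p0 +
           t * (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3 +
           t * (3.0 * (p1 - p2) + p3 - p0)));
}

// Sample a grayscale image at fractional position (fx, fy): cubic
// interpolation across x in four rows, then once across y.
// Edges are handled by clamping, the simplest choice.
double bicubicSample(const std::vector<double>& img, int w, int h,
                     double fx, double fy) {
    const int ix = static_cast<int>(std::floor(fx));
    const int iy = static_cast<int>(std::floor(fy));
    const double tx = fx - ix, ty = fy - iy;
    auto at = [&](int x, int y) {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return img[y * w + x];
    };
    double col[4];
    for (int j = -1; j <= 2; ++j) {
        col[j + 1] = cubic(at(ix - 1, iy + j), at(ix, iy + j),
                           at(ix + 1, iy + j), at(ix + 2, iy + j), tx);
    }
    return cubic(col[0], col[1], col[2], col[3], ty);
}
```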
-- I do not know how much trouble the FFT-based one is. The transforms are simple enough if you have any background in that sort of math (do you?), but if you have not seen this kind of work before it may not make much sense even if you copy the algorithms.
for all the common ones, there is *c++ code*. For the advanced ANN methods, I do not know, but you can run Google the same as anyone else. An algorithm is a series of steps to do a task; C++ can express any code-related algorithm (though it can't bake a cake without more hardware than your PC has). One of the links I gave you IS c++ code for one of them...
I am trying to be patient, but at some point it's on you to look some of this up for yourself. I am not an image-processing expert; in all truth I am not a PhD-level expert in any field. Also, note that I had to deal with all this stuff before Google existed, so my patience with people who can't look something up, given how easy it is to do so now, is limited. Back then, if you found code at all, it was in a book and you typed it in yourself from the book... often you did not even have that, just the basic algorithm in words or very cut-down pseudocode. Today, unless it is one-line trivial or cutting-edge / secret-sauce code, you can find it ready to go online.
it will work. It will be extremely fast, with a terrible-looking result. And as I said, the smaller the image, the harder it is to do this and have it look nice. A typical full-screen-resolution image has millions of pixels, and for that it isn't too bad, but 200x200 is like an icon. It's going to get fried in the destruction process.
Why not try it and see if you like what you get? On the bright side, that takes less time to code than it took to post here asking.