Someone apparently downloaded them, but it wasn't me.
Apparently that site deletes the file once it's downloaded.
Maybe try https://ufile.io/ which keeps it around for 30 days.
You have mentioned yuv files for the first time. What's up with that?
Yes, I could download those files.
What's up with the yuv file?
What is its format? (Image size, number of images, etc.)
What do you want to do with it?
I need to work only with TGA files. I need help creating a copy of the actual image, scaled down to half the size of the original TGA (it must stay uncompressed), and it must support both RGB and RGBA.
for that, use one of the scale algorithms I pointed to if you insist on doing it yourself.
there are hundreds of free to use image libraries that can do this for you. I would just get one.
one cheesy way to do it is to just average.
that is, in one dimension, the new red value is (old row 1 red + old row 2 red) / 2
but an image is 2-d, so the first new red value is the average of a 2x2 block: (old row 1 col 1 + old row 1 col 2 + old row 2 col 1 + old row 2 col 2) / 4
and then the next new pixel in that row uses old cols 3 and 4 (same two rows), the next new row starts at old rows 3 and 4, and so on... there is a rough sketch at the end of this post.
alpha will depend on how it's used; some files use a range and some treat it as binary. if it's not using a range, you will need to figure out what to do for that.
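Here is a rough sketch of that 2x2 averaging on a raw interleaved 8-bit buffer. The buffer layout, channel count, and function name are my own assumptions for illustration, not anything from your TGA code:

```cpp
#include <cstdint>
#include <vector>

// Downscale an interleaved 8-bit image (3 channels for RGB, 4 for RGBA)
// to half width / half height by averaging each 2x2 block of pixels.
// Assumes even width and height; odd edges would need special handling.
std::vector<uint8_t> halveByAveraging(const std::vector<uint8_t>& src,
                                      int width, int height, int channels)
{
    const int outW = width / 2;
    const int outH = height / 2;
    std::vector<uint8_t> dst(static_cast<size_t>(outW) * outH * channels);

    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            const int sy = 2 * y, sx = 2 * x;
            for (int c = 0; c < channels; ++c) {
                // the four source pixels that collapse into this destination pixel
                int sum = 0;
                sum += src[(static_cast<size_t>(sy)     * width + sx)     * channels + c];
                sum += src[(static_cast<size_t>(sy)     * width + sx + 1) * channels + c];
                sum += src[(static_cast<size_t>(sy + 1) * width + sx)     * channels + c];
                sum += src[(static_cast<size_t>(sy + 1) * width + sx + 1) * channels + c];
                dst[(static_cast<size_t>(y) * outW + x) * channels + c] =
                    static_cast<uint8_t>(sum / 4);
            }
        }
    }
    return dst;
}
```

Note that alpha is just averaged like any other channel here; if your file treats alpha as a binary mask, you would threshold it (say, >= 128 becomes 255, else 0) instead of averaging.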
Can you share any code for scale-down algorithms? Is there any open source code? I am not sure which algorithm is right for scaling down to half the original size. In my case it's a TGA file.
no, I am not going to search the web for you.
The right algorithm... this is for you to research a little. 15 min reading the wiki should tell you which one you want... it's really a choice between taking more time to get better quality and accepting a bit lower quality for speed.
or, for the third + time, get a library that supports several of them and compare the output/time taken and choose that way.
yes, many, many open source code image processing programs (gimp, etc) and libraries.
I plan to just write it in C++ to scale the image down to half the size, rather than using some 3rd-party library. Do you suggest scaling down by pixel averaging or by sampling?
if you want to do it yourself, the precise bicubic is a fair bit of work but doable by a beginner if you just follow what is online. The algorithm and even the code is out there for the taking.
If you want something easy to code, 'nearest neighbor' (or the simple averaging above) works, but the results can be ugly, and it's awful for small images. For very large images, the damage is minimal.
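For comparison, a half-size nearest-neighbor pass really is just "keep one pixel out of every 2x2 block". A minimal sketch, under the same assumed buffer layout as the averaging example above (names are mine):

```cpp
#include <cstdint>
#include <vector>

// Half-size by nearest neighbor: copy the top-left pixel of each 2x2 block.
// Very fast, no blending at all, which is why small images come out blocky.
std::vector<uint8_t> halveByNearest(const std::vector<uint8_t>& src,
                                    int width, int height, int channels)
{
    const int outW = width / 2;
    const int outH = height / 2;
    std::vector<uint8_t> dst(static_cast<size_t>(outW) * outH * channels);

    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            const size_t srcIdx = (static_cast<size_t>(2 * y) * width + 2 * x) * channels;
            const size_t dstIdx = (static_cast<size_t>(y) * outW + x) * channels;
            for (int c = 0; c < channels; ++c)
                dst[dstIdx + c] = src[srcIdx + c];
        }
    }
    return dst;
}
```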
Again, the choice is about how good you need the result to be, what your original image is like, and how much effort you want to spend rewriting this stuff. The precise bicubic is the middle ground: it gives decent image results, takes a low-to-medium amount of code effort, and runs PDQ on modern machines for typical-sized images. The advanced stuff with ANNs etc. is going to take months or even years to reinvent/rewrite.
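If you do go the bicubic route, the core of it is a 1-D cubic weight function applied in x and then y over a 4x4 neighborhood of source pixels. As a sketch, the standard Keys / Catmull-Rom kernel (a = -0.5) looks like this; it is only the weight function, not the full resampler:

```cpp
#include <cmath>

// Keys cubic convolution kernel (a = -0.5): the weight given to a source
// sample that sits at distance x (in pixels) from the point being resampled.
// A full bicubic resampler sums weight(dx) * weight(dy) over a 4x4 neighborhood.
double cubicWeight(double x, double a = -0.5)
{
    x = std::fabs(x);
    if (x <= 1.0)
        return (a + 2.0) * x * x * x - (a + 3.0) * x * x + 1.0;
    if (x < 2.0)
        return a * x * x * x - 5.0 * a * x * x + 8.0 * a * x - 4.0 * a;
    return 0.0;
}
```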
-- I do not know how much trouble the FFT based one is. the transforms are simple enough if you have any background in that sort of math (do you?) but if you have not seen this kind of work before it may not make any sense to you (even if you copy the algorithms).
for all the common ones, there is *c++ code*. For the advanced ANN methods, I do not know, but you can run google same as anyone else. An algorithm is a series of steps to do a task. C++ can express some algorithms (everything code related, but it can't bake a cake without more hardware than your PC has). One of the links I gave you IS c++ code for one of them...
I am trying to be patient, but at some point it's on you to look some of this up for yourself. I am not an image processing expert; in all truth I am not an expert in any field at the PhD level. Also, note that I had to deal with all this stuff before Google existed, so my patience with people who can't look something up, given how easy it is to do so now, is limited. Back then, if you found code at all, it was in a book and you typed it in yourself from the book... often you did not have even that, just the basic algorithm in words or very cut-down pseudocode. Today, unless it is 1-line trivial or cutting-edge / secret-sauce code, you can find it ready to go online.
it will work. That is an extremely fast, and terrible looking, result. And as I said, the smaller the image, the harder it is to do this and have it look nice. A typical full-screen-resolution image has millions of pixels, and for that it isn't too bad, but 200x200 is like an icon. It's going to get fried in the destruction process.
Why not try it to see if you like what you got? On the bright side, that takes less time to code than to post here asking.
I haven't used any 3rd-party libraries here. I need to read each and every byte from the original file and then write it to another file, thereby making a copy of the original image but at half its size. How can this be done?
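For the simplest case only, here is a rough sketch of that whole round trip: it assumes an uncompressed true-color TGA (image type 2) with no image ID, no color map, 24 or 32 bits per pixel, and even width and height, and it averages each 2x2 block. All names are mine, and a real tool would validate much more of the header:

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char* argv[])
{
    if (argc != 3) {
        std::cerr << "usage: " << argv[0] << " in.tga out.tga\n";
        return 1;
    }

    std::ifstream in(argv[1], std::ios::binary);
    if (!in) { std::cerr << "cannot open input\n"; return 1; }

    // 18-byte TGA header. Only the simplest layout is handled here:
    // no image ID, no color map, image type 2 (uncompressed true color).
    uint8_t header[18];
    in.read(reinterpret_cast<char*>(header), 18);
    if (!in || header[0] != 0 || header[1] != 0 || header[2] != 2) {
        std::cerr << "not a plain uncompressed true-color TGA\n";
        return 1;
    }

    const int width  = header[12] | (header[13] << 8);  // little-endian
    const int height = header[14] | (header[15] << 8);
    const int bpp    = header[16];                       // 24 (BGR) or 32 (BGRA)
    if (bpp != 24 && bpp != 32) { std::cerr << "unsupported pixel depth\n"; return 1; }
    const int channels = bpp / 8;

    std::vector<uint8_t> src(static_cast<size_t>(width) * height * channels);
    in.read(reinterpret_cast<char*>(src.data()), src.size());
    if (!in) { std::cerr << "truncated pixel data\n"; return 1; }

    // Average each 2x2 block into one output pixel (assumes even width/height).
    // Channel order (BGR vs BGRA) does not matter since every channel is
    // averaged the same way.
    const int outW = width / 2, outH = height / 2;
    std::vector<uint8_t> dst(static_cast<size_t>(outW) * outH * channels);
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
            for (int c = 0; c < channels; ++c) {
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        sum += src[(static_cast<size_t>(2 * y + dy) * width + 2 * x + dx) * channels + c];
                dst[(static_cast<size_t>(y) * outW + x) * channels + c] =
                    static_cast<uint8_t>(sum / 4);
            }

    // Patch the new dimensions back into the header and write it all out.
    header[12] = static_cast<uint8_t>(outW & 0xFF);
    header[13] = static_cast<uint8_t>(outW >> 8);
    header[14] = static_cast<uint8_t>(outH & 0xFF);
    header[15] = static_cast<uint8_t>(outH >> 8);

    std::ofstream out(argv[2], std::ios::binary);
    out.write(reinterpret_cast<char*>(header), 18);
    out.write(reinterpret_cast<char*>(dst.data()), dst.size());
    return out ? 0 : 1;
}
```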