windows.h vs fstream

> Using OS-specific base functions for file I/O will be faster than using C++ streams

The OS functions can give better performance when an appropriate caching hint flag is used.
More information:
https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilea#caching_behavior


Simple "stdio vs. Win32 API" benchmark results for a number of small writes:

stdio: 108
win32: 20208 (x187)

(results are time to complete all writes, in clock ticks, i.e. smaller is better)

Speaks for itself 😊

The difference between the various caching hint flags that you can pass to CreateFile() was negligible in my test.
...except that with FILE_FLAG_WRITE_THROUGH things are even slower, by an order of magnitude.

Source:
https://pastebin.com/PYe1Sb2u
That's because (I think) WriteFile is being used inefficiently here, writing only one byte at a time. My understanding is that the C/C++ FILE/stream functionality builds up a buffer before making API calls, so that it's not just one byte being written per call. That said, this still demonstrates the point that it takes fewer lines of code, and is easier, to use the C/C++ library functions correctly than to use the Win32 API directly.
Yes - but as the size of the objects to be written becomes larger, the OS functions out-perform.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <Windows.h>

#define LIMIT 20000U

#include <string>

std::string str(8000, 'a');

static void test1(void) {
    FILE* file = fopen("test1.out", "wb");
    if (!file) {
        abort();
    }

    for (size_t i = 0; i < LIMIT; ++i) {
        if (fwrite(str.data(), str.size(), 1U, file) != 1U)
            abort();
    }

    fclose(file);
}

static void test2(void) {
    HANDLE h = CreateFileA("test2.out", GENERIC_WRITE, 0U, NULL, CREATE_ALWAYS, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE)
        abort();

    for (size_t i = 0; i < LIMIT; ++i) {
        DWORD written;

        if (!WriteFile(h, str.data(), (DWORD)str.size(), &written, NULL) || written != str.size())
            abort();
    }

    CloseHandle(h);
}

int main() {
    for (int i = 0; i < 3; ++i) {
        const clock_t a = clock();
        test1();
        const clock_t b = clock();
        test2();
        const clock_t c = clock();

        printf("stdio: %ld\n", (long)(b - a));
        printf("win32: %ld (x%.3f)\n", (long)(c - b), (c - b) / ((double)(b - a)));
        puts("");
    }
}


which on my laptop gives:


stdio: 840
win32: 539 (x0.642)

stdio: 753
win32: 503 (x0.668)

stdio: 728
win32: 509 (x0.699)

The OP wants a cross-platform solution; using the WinAPI isn't it.

C++ was designed to work on any OS that has a compiler available. C++ inherited that philosophy from C.

If a GUI is also thrown into the cross-platform mix, doing native GUI system calls will be a nightmare if having the same code compile for multiple OSes is a consideration*.

There are 3rd party libraries that can be used in C/C++. Bjarne Stroustrup uses one, FLTK (Fast Light Toolkit), for his university programming classes.
https://www.fltk.org/

There are others in varying degrees of popularity. Two I know of off-hand are:

wxWidgets (https://www.wxwidgets.org/) which is free.

Qt (https://doc.qt.io/qt-5/qtgui-index.html) which has free bits but to use its full functionality has a price.

Doing cross-platform GUI has never been a consideration for me, I've tried FLTK and wxWidgets a bit. Qt I don't have any experience with.

*I haven't even really bothered with creating apps that run on different devices, desktop PCs and smart phones that use WinOS variants. I have really just dinked around with old school Desktop WinAPI code as done the Charles Petzold way back in 1998 with his "Programming Windows, 5th Edition" book. By desktops, for desktops.

I don't have a clue how Mac or Linux does its GUI magic. I doubt (without actual knowledge) they are as massively bloated as the WinAPI. Backwards compatibility can be a PITA.

I do (kinda) understand there is no monolithic GUI setup for Linux as there is for Mac or Windows.

MS decided to push modern WinAPI development away from C/C++ into C# and .NET. So an app written for WinOS can be used on a desktop PC or WinPhone and reasonably have a similar appearance based on screen size and layout.
Qt (https://doc.qt.io/qt-5/qtgui-index.html) which has free bits but to use its full functionality has a price.
No, all the documentation is written to make you think that. Qt is mostly LGPL, so you can use Qt free of charge in all your projects whether open source or closed source. If you're only using the LGPL modules, the only reason you would need to pay for a commercial license is if you need to make modifications to Qt itself and you can't distribute those modifications, or if you need to link Qt statically into your binary (e.g. to have a single-executable application).

There are a few GPL modules. If you used them in a project you intend to distribute publicly, you would need to open-source your application or pay for a commercial license.

MS decided to push modern WinAPI development away from C/C++ into C# and .NET.
That's not exactly right. For a good while now Microsoft has been offering pretty much all new APIs through COM, which is a language-agnostic interface. COM is usually less annoying to use from C#, because .NET has facilities to import class hierarchies from COM, so you can use the classes as if they were native .NET classes. Nothing prevents someone from writing a C++ class generator that does something similar, but no one has done it, for some reason.
By the way, calling COM from C++ is still usually faster, because there's no marshaling of parameters and results and no native-managed transitions.
That's because (I think) WriteFile is being used inefficiently here, only one byte at a time. My understanding is that the C/C++ FILE/stream functionality will build up a buffer before making API calls, so that it's not just one byte being written at a time.

Yeah, that is exactly the point. If you know for sure that you only ever need to write big chunks of data at once, you don't have to worry. But if your application frequently needs to write small quantities of data, you would have to implement your own application-level buffering to use the "low-level" I/O routines efficiently. And that's where you'd start re-inventing what std::fstream or FILE* streams already do out of the box.
On the other hand, streams are only allowed, not required, to do any buffering, so you shouldn't be relying on them to do it.
I think only the default buffer size is implementation-defined. The default buffer mode is _IOFBF – unless the stream is connected to a tty (e.g. terminal), in which case buffering would be inconvenient.

Also, you can always explicitly set the buffer size and mode, e.g. by calling the setvbuf() function:
https://www.cplusplus.com/reference/cstdio/setvbuf/

From the docs:
All files are opened with a default allocated buffer (fully buffered) if they are known to not refer to an interactive device. This function can be used to either redefine the buffer size or mode, to define a user-allocated buffer or to disable buffering for the stream.