Hello, I'm totally new to programming.
I have an issue with hard disk I/O performance on my Windows 2003 servers.
One of my vendors sent me the code below; they said it tests the write speed of the hard disk, but when I run it on a few of my Windows 2003 servers, the resulting file sizes are different.
On about 60% of them it creates a 55 MB file, but on the remaining 40% it creates a 5 MB file. The weird thing is that the runs that generate the 5 MB text file take about 2 minutes to finish, while the runs that generate the 55 MB file take less than 10 seconds.
All of the servers run Windows 2003 Standard on a RAID 1 configuration. I checked whether any Windows updates, Windows services, or drivers were causing this, but I can't find anything, so I'm hoping someone can give me advice on what's causing the different results.
Nothing in the code should contribute to the problem you're seeing.
This code simply opens and closes a file 100,000 times, each time appending a line of text to it. It then finishes by appending a string with the elapsed time at the bottom.
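From that description, the vendor's test presumably looks something like this (a reconstruction, not the actual code; I'm assuming dwPre is set from timeGetTime() just before the loop):
[code]
#include <cstdio>
#include <windows.h>                 // timeGetTime(); link with winmm.lib
#pragma comment(lib, "winmm.lib")

int main() {
    DWORD dwPre = timeGetTime();     // start of the timed run, in milliseconds
    for (int i = 0; i < 100000; ++i) {
        // open in append mode, write one line, close - every single iteration
        FILE* file = fopen("text.txt", "at");
        if (!file) return 1;         // bail out if the open failed
        fprintf(file, "1234567890feoiv jeoifjonfrjobejrgojbomerjgobogorgmborjbo\n");
        fclose(file);
    }
    FILE* file = fopen("text.txt", "at");
    fprintf(file, "VERSION:1:TIME:%lu\n", (unsigned long)(timeGetTime() - dwPre));
    fclose(file);
}
[/code]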
#include <cstdio>
#include <windows.h>                      // timeGetTime(); link with winmm.lib
#pragma comment(lib, "winmm.lib")

int main() {
    DWORD dwPre = timeGetTime();          // start time in ms (assumed; dwPre wasn't declared in the posted snippet)
    FILE* file = fopen("text.txt", "at"); // open the file ONCE
    if (!file) return 1;                  // bail out if the open failed
    for (int i = 0; i < 100000; ++i)      // ++ on the left side - doesn't make a temp object
        fprintf(file, "1234567890feoiv jeoifjonfrjobejrgojbomerjgobogorgmborjbo\n");
    fprintf(file, "VERSION:1:TIME:%lu\n", (unsigned long)(timeGetTime() - dwPre));
    fclose(file);                         // close the file ONCE
}
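Opening the file once means the 100,000 writes go through the C runtime's buffer instead of paying for an open/close round trip into the OS on every line, which is where nearly all of the original runtime goes.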
(BTW, use the [ code ] [ /code ] tags (without spaces) to format your code like this ^^^)
@raesoo80: Nope. The code is deliberately inefficient so that it puts a higher load on the system. Opening and closing the file 100,000 times is part of the test; having it do that only once isn't going to be a complete test.
As for my original reply: I see nothing in the code that would explain the difference, unless you were running it multiple times. Even variances in block sizes/file systems wouldn't account for such a huge gap. Unless the file system is compressed?
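For what it's worth, a quick back-of-the-envelope check (assuming the 56-character line from the snippet and the "\n"-to-CRLF expansion you get in text mode):
[code]
#include <cstdio>

int main() {
    // 56 payload chars + "\r\n" (text-mode "\n" expands to CRLF on Windows)
    const long bytesPerLine = 56 + 2;
    const long bytesPerRun  = 100000L * bytesPerLine;  // one full run of the test
    printf("%ld bytes appended per run\n", bytesPerRun);  // 5800000, ~5.5 MB
}
[/code]
That's about 5.5 MB per run, which would line up with your smaller files. And since "at" opens the file for append, every extra run adds the same amount to the same file, so a server that had run the test ten times would end up with a text.txt right around 55 MB.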