I have just started C++ programming, and my first project is a program that will read a text file, print it to the console, and then tell me the number of lines. Here is the code I have so far:
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

using std::cerr;
using std::cout;
using std::endl;
using std::ifstream;
using std::string;
int main(int argc, char *argv[])
{
    ifstream inFile;
    string url;

    inFile.open("links.txt");
    if (!inFile) {
        cerr << "Error: file could not be opened" << endl;
        system("PAUSE");
        return EXIT_FAILURE; // report failure, not success
    }

    int count = 0;
    while (inFile >> url) {
        cout << url << endl;
        if (url == '\n') // This is where I'm having problems.
            count++;
    }
    inFile.close();

    cout << "End-of-file reached..\n\n" << "links: " << count << "\n\n\n" << endl;
    system("PAUSE");
    return EXIT_SUCCESS;
}
The problem is that "url == '\n'" won't compile. It's probably really simple, and I'm really pushing the limits of what I can do with programming in any language with this, but I have a feeling I'm close. Please tell me where I'm going wrong.
When I run it (after changing the comparison so that it compiles), it only tells me that there are 0 lines. Maybe it is taking the "\n" literally? Maybe the >> operator has something to do with it, but I don't know what to put instead.
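The comparison fails because url is a std::string and '\n' is a char: there is no operator== overload that takes a std::string and a single char, so the compiler rejects it. And even a version that compiles can never match, because operator>> skips all whitespace (spaces, tabs, and newlines) before extracting a token, so the string it hands you never contains a newline. A minimal sketch of both points:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream inFile("links.txt");
    std::string url;
    int count = 0;
    while (inFile >> url) {
        // With '\n' instead of "\n" this would not even compile: there is
        // no operator== taking a std::string and a plain char.
        // This version compiles, but count++ never runs: operator>> skips
        // whitespace, so the extracted token can never be a newline.
        if (url == "\n")
            count++;
    }
    std::cout << count << std::endl; // always prints 0
    return 0;
}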
Anyway, I got it working and it counts lines just fine ;)
string line;
// Test the read itself; looping on "while (inFile)" would also count the
// final failed getline and report one line too many.
while (getline(inFile, line))
{
    count++;
}
And how would you make it read only lines with specific words?
You can't read only the lines that contain a specific word, because you don't know whether a line contains it until you have read it.
You can, however, read each line, check whether it contains the word, and act accordingly.
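Something along these lines, as a rough sketch; "example" is a placeholder of mine, not anything from your code:

#include <fstream>
#include <iostream>
#include <string>

using std::cout;
using std::endl;
using std::getline;
using std::ifstream;
using std::string;

int main()
{
    ifstream inFile("links.txt");
    string word = "example"; // placeholder: the word to look for
    string line;
    while (getline(inFile, line)) {
        // string::find returns string::npos when the word is absent
        if (line.find(word) != string::npos)
            cout << line << endl; // act on matching lines only
    }
    return 0;
}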
Can you give me an example of how you are trying to sort those URLs so I can be more specific?
Thanks. To be more specific, here is what I am thinking:
I have a small self-made Perl web crawler that gives me a long list of the URLs it has visited. I want to extract the links from a specific domain, so that instead of listing every single URL, as it does now, it will give me only the ones from the domain I ask for.
If that sounds too complicated, then maybe I should consider limiting the web crawler to a single domain instead, but I wanted to focus on C++, so I am making an attempt at it from this angle.
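Then the same find-based check should do it. Here is a minimal sketch, assuming the crawler writes one URL per line to links.txt and that the domain you want is example.com (both of those are my assumptions, so adjust them to your setup):

#include <fstream>
#include <iostream>
#include <string>

using std::cout;
using std::endl;
using std::getline;
using std::ifstream;
using std::string;

int main()
{
    ifstream inFile("links.txt");        // assumed: one URL per line
    const string domain = "example.com"; // placeholder domain to keep
    string line;
    int count = 0;
    while (getline(inFile, line)) {
        // keep only the URLs that mention the domain somewhere in the line
        if (line.find(domain) != string::npos) {
            cout << line << endl;
            count++;
        }
    }
    cout << "links from " << domain << ": " << count << endl;
    return 0;
}

Note that a plain substring match will also catch things like sub.example.com or notexample.com; if that matters, you would want to pull the host part out of each URL first and compare it exactly.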