Hello guys, first, here is my code below.
|
#include <string>
#include <vector>
#include <iostream>
#include <sstream>
#include <fstream>
using namespace std;

vector<string> split(const string &s, char delim);
bool searchWord(string line, string topic);

int main()
{
    string text;
    string filepath;
    vector<string> myvec;
    string line;
    string word;

    cout << "Enter file dest" << endl;
    cin >> filepath;

    cout << "Enter words" << endl;
    do {
        cin >> word;
        myvec.push_back(word);
    } while (myvec.size() < 4);

    ifstream file(filepath); // file that contains the data inputs
    while (std::getline(file, line))
    {
        text += line;
        text += ' '; // keep a space so words at line breaks don't merge
    }

    vector<string> paragraphs = split(text, '.');
    // was word.size(), which loops over the letters of the last word typed,
    // not over the list of search words
    for (unsigned j = 0; j < myvec.size(); j++)
    {
        // no i++ in the loop header: erase() shifts everything left,
        // so the index only advances when nothing was removed
        for (unsigned i = 0; i < paragraphs.size(); )
        {
            if (searchWord(paragraphs.at(i), myvec[j]))
            {
                std::cout << ' ' << paragraphs.at(i) << "\n\n";
                paragraphs.erase(paragraphs.begin() + i);
            }
            else
            {
                i++;
            }
        }
    }
}

bool searchWord(string line, string topic)
{
    return line.find(topic) != string::npos;
}

vector<string> split(const string &s, char delim)
{
    vector<string> elems;
    stringstream ss(s);
    string item;
    while (getline(ss, item, delim)) {
        elems.push_back(item);
    }
    return elems;
}
|
My code is supposed to be a kind of summarizer. The user inputs the path of the file that contains the block of text to be summarised, then types in his/her search words. The program then outputs the sentences where it finds these words.
The problem is that the results do not come out in the order they were in the original text. I know I could assign some kind of index to each result and then re-order them before output, but that's kind of complicated for me.
.......................................................................
Below is my input text and the result so you can understand what I mean.
Data migration is the process of transferring data between storage types, formats, or computer systems. It is a key consideration for any system implementation, upgrade, or consolidation. Data migration is usually performed programmatically to achieve an automated migration, freeing up human resources from tedious tasks. Data migration occurs for a variety of reasons, including: Server or storage equipment replacements or upgrades; Website consolidation; Server
To achieve an effective data migration procedure, data on the old system is mapped to the new system providing a design for data extraction and data loading. The design relates old data formats to the new system's formats and requirements. Programmatic data migration may involve many phases but it minimally includes data extraction where data is read from the old system and data loading where data is written to the new system.
If a decision has been made to provide a set input file specification for loading data onto the target system, this allows a pre-load 'data validation' step to be put in place, interrupting the standard E(T)L process. Such a data validation process can be designed to interrogate the data to be transferred, to ensure that it meets the predefined criteria of the target environment, and the input file specification. An alternative strategy is to have on-the-fly data validation occurring at the point of loading, which can be designed to report on load rejection errors as the load progresses. However, in the event that the extracted and transformed data elements are highly 'integrated' with one another, and the presence of all extracted data in the target system is essential to system functionality, this strategy can have detrimental, and not easily quantifiable effects.
After loading into the new system, results are subjected to data verification to determine whether data was accurately translated, is complete, and supports processes in the new system. During verification, there may be a need for a parallel run of both systems to identify areas of disparity and forestall erroneous data loss.
Automated and manual data cleaning is commonly performed in migration to improve data quality, eliminate redundant or obsolete information, and match the requirements of the new system.
Data migration phases (design, extraction, cleansing, load, verification) for applications of moderate to high complexity are commonly repeated several times before the new system is deployed.
.............................................................
The search words are:
disparity
interrogate
consolidation
complexity
........................................
The output is:
During verification, there may be a need for a parallel run of both systems to identify areas of disparity and forestall erroneous data loss.
Such a data validation process can be designed to interrogate the data to be transferred, to ensure that it meets the predefined criteria of the target environment, and the input file specification.
Data migration occurs for a variety of reasons, including: Server or storage equipment replacements or upgrades; Website consolidation; Server
To achieve an effective data migration procedure, data on the old system is mapped to the new system providing a design for data extraction and data loading. It is a key consideration for any system implementation, upgrade, or consolidation. Data migration is usually performed
..........................................
All I want to do now is to return the output in the same order the sentences originally appeared in the main text.
Thanks for reading.