Like coder777 suggested, instead of setting the char to ' ', you can set it to (char)0, which should not be confused with '0'. The former sets all the bits of that byte to 0, while the latter is the character code for the symbol '0', which is stored in memory as 48. See this link for char codes:
http://www.asciitable.com/
If you set one of the array's characters to 0, the array becomes a null-terminated string. That means it is a C string (not a C++ std::string): it starts at the memory address of your array and ends at the first character whose value is 0.
If you store a 0 when the user inputs '0', all of the following should print the same result.
for (int i = 0; i < SIZE; i++) {
    if (a[i] == 0) { // Notice we aren't comparing to '0', but 0 instead. '0' is character code 48
        break;
    }
    cout << a[i];
}

for (char* iter = a; *iter != 0; ++iter) { // Dereference the pointer to test the character
    cout << *iter;
}

cout << a;
Passing a char array to cout interprets it as a C string: characters are printed up to, but not including, the first character whose value is 0.
There is another case which might be of interest to you. In this example you are dealing with text data, which means that a 0 character terminates the string.
If you were working with binary data, a 0 byte would be perfectly legal.
The way to handle that is to track the length of your binary data yourself:
#include <iostream>

#define SIZE 256

int main()
{
    char a[SIZE];

    std::cout << "Adding characters together" << std::endl;
    std::cout << "Enter the characters you'd like to put together:" << std::endl;

    int dataSize = 0;
    while (dataSize < SIZE)
    {
        std::cin >> a[dataSize];
        if (a[dataSize] == '0') // '0' marks the end of input here
        {
            break;
        }
        else
        {
            dataSize++;
        }
    }
    std::cout << std::endl;

    // Print exactly dataSize characters; no terminator is needed
    for (int i = 0; i < dataSize; i++)
    {
        std::cout << a[i];
    }
    std::cout << std::endl;
    return 0;
}