New Safe C++ Proposal

I've done something vaguely like that (only safe-ish, with placement new) one time, when I needed to create a bunch of objects and had to allocate everything first and then construct it.


It's off topic, but WHY? I have yet to run into this one -- where you can't just allocate a container of the objects and let the ctors run. It's the same number of bytes, the same memory behavior, so ... is it because the ctors painted you into a corner, where you got the data to construct them late but needed it preallocated for performance, or something like that?

Most of my reshaping is stuff like binary files where each record has the same header, so you stuff it into the header class and peel off the size and type, then restuff it into the correct object. That can be handled other ways with modern style and tools, but back then it was the fastest way.
If everyone just wrote code perfectly, we wouldn't need any kind of checking at compile time.

Some mistakes are easier to make than others. I don't see how one can mistakenly fall into the pits of UB when safe code is usually what you'll find learning C++.

It doesn't matter. The compiler will put those initializations in the constructor, so if the constructor doesn't get called, they will not run.

Really? I remember that working. Either way, it's just a waste of time to rewrite all those variable names in the constructor when you could have just initialized them where you declared them.

Rust doesn't let you, due to the way the borrow checker works.

Rust is somewhat popular, but not too mainstream, I think. I've only seen code snippets, never actually used it. In most mainstream languages, I don't see race condition protection being enforced.
It's off topic, but WHY? I have yet to run into this one -- where you can't just allocate a container of the objects and let the ctors run. It's the same number of bytes, the same memory behavior, so ... is it because the ctors painted you into a corner, where you got the data to construct them late but needed it preallocated for performance, or something like that?
Multiple reasons. The main motivation was that if allocation was going to fail, it was preferable to fail before any of the constructors had run. It's not fool-proof, because the constructors could still allocate memory for members, but it's still an improvement.
Another important reason was that the objects had unpredictable relationships to each other, so when constructing them and setting their pointers, it was useful to have their destinations' addresses already fixed, even if they were still in an unknown state of construction. The alternative would have been to initialize everything except those pointers, and patch them up once all construction had finished. I think leaving construction half finished was a worse alternative. Some pointers still needed to be delayed if they involved a dynamic cast across a class hierarchy with diamond inheritance, but it was only those.
FYI, this was code that deserialized an arbitrary object graph with arbitrarily defined classes. Basically nothing could be assumed, except that the object being allocated was a Serializable, of a base type, or of any of a few standard classes.
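To illustrate the allocate-first, construct-later idea, here's a minimal sketch. The Node type and single-type layout are made up for illustration; the real code dealt with arbitrary Serializable subclasses.

#include <cstddef>
#include <new>

struct Node { Node *next = nullptr; };

int main() {
    const std::size_t n = 3;
    // One up-front allocation; if this throws, no constructors have run yet.
    void *raw = ::operator new(n * sizeof(Node));
    Node *nodes = static_cast<Node *>(raw);
    // Construct in place. Addresses were already fixed before construction,
    // so cross-references between objects can be set up safely.
    for (std::size_t i = 0; i < n; ++i)
        new (&nodes[i]) Node{};
    nodes[0].next = &nodes[1];
    // Destroy in reverse order, then release the raw storage.
    for (std::size_t i = n; i-- > 0;)
        nodes[i].~Node();
    ::operator delete(raw);
}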

I don't see how one can mistakenly fall into the pits of UB when safe code is usually what you'll find learning C++.
I mean, I don't know what to say. I guess go write more C++ until you understand why people are asking for memory safety features. Just a couple really trivial examples:
for (auto &x : xs) {
    if (foo(x))
        // Mutating xs here invalidates the iterators the range-for is using: UB.
        function_you_dont_realize_will_modify_its_parameter(xs);
}

void foo() {
    int x = 0;
    std::thread t([&]{ bar(x); });
    t.detach();   // foo() returns immediately, so the detached thread's reference to x dangles
}

Yes, the examples are obviously wrong because I made them obviously wrong. In real code where you get multiple people sticking their grubby little fingers over multiple months, it's easy for stuff like this to show up, but hidden across function or even module boundaries.

In most mainstream languages, I don't see race condition protection being enforced.
Oh, I should clarify that race conditions are still possible in Rust. In fact I got one a few weeks ago, when trying to dynamically allocate a TCP port for IPC. The mechanism I had used would check the port and then open it, non-atomically, which would hang the process non-deterministically if, IIRC, multiple instances of the program were run at nearly the same time.
What you won't get is data races, where two threads try to write to the same object, corrupting it, or where one thread reads while another is writing, and the first thread ends up making a decision on half-updated data.
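For a concrete (if contrived) picture, a minimal C++ sketch of the kind of data race I mean: two unsynchronized threads write the same counter, which is undefined behavior in C++, whereas safe Rust rejects the equivalent code at compile time.

#include <iostream>
#include <thread>

int main() {
    int counter = 0;   // shared, with no synchronization
    auto work = [&] { for (int i = 0; i < 100000; ++i) ++counter; };
    std::thread a(work), b(work);   // both threads write counter concurrently: a data race (UB)
    a.join();
    b.join();
    std::cout << counter << '\n';   // typically less than 200000, and formally undefined
}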
The main motivation was that if allocation was going to fail, it was preferable to fail before any of the constructors had run.

Would this: std::vector<foo> a(10); not meet your requirements? The vector will allocate the memory before calling the constructors, I believe.
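For what it's worth, a minimal sketch of what I mean, with foo as a stand-in type: a(10) does one allocation followed by ten default constructions, while reserve() gives you just the allocation.

#include <vector>

struct foo { int x = 0; };   // stand-in type

int main() {
    std::vector<foo> a(10);  // one allocation, then ten default constructions

    std::vector<foo> b;
    b.reserve(10);           // allocates capacity for ten, constructs nothing yet
    b.emplace_back();        // the first element is constructed later, in the reserved storage
}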

I guess go write more C++ until you understand why people are asking for memory safety features

I'm not arguing there shouldn't be memory-safe features, I'm arguing there are already plenty of memory-safe features. If you write "good" code, you won't use dangerous features. You have to search for the dangerous features to learn them.

In real code where you get multiple people sticking their grubby little fingers over multiple months, it's easy for stuff like this to show up

In the void foo() example, you have to learn about .detach() to use it! Learning .detach() will mean you've read that .join() is the preferred (safer) method of handling threads.

If you stick with the preferred safe methods, you're not likely to run into these issues.
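For instance, the earlier foo() example reworked with join() -- just a minimal sketch, with bar standing in for whatever the thread actually does. x outlives the thread, so the captured reference stays valid.

#include <iostream>
#include <thread>

void bar(int v) { std::cout << v << '\n'; }   // stand-in for the thread's work

void foo() {
    int x = 0;
    std::thread t([&] { bar(x); });
    t.join();   // block here until the thread finishes; x is alive the whole time
}

int main() { foo(); }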

C++ gives you safety, but it doesn't stop you from shooting yourself in the foot. Every gun has a safety, but it's up to you to use it. A dangerous mountain path has guard rails, but no one is stopping you from climbing over them and hopping around.
Would this: std::vector<foo> a(10); not meet your requirements?
No.
1. The types are unknowable and unrelated. The objects could be numbers, strings, vectors, maps, pointers, or instances of Serializable subclasses.
2. Ownership of the objects must be arbitrarily transferable. The objects in a vector are owned by the vector, and the ownership can only be transferred by moving the vector entirely.

.join() is the preferred (safer) method of handling threads.
They're for different things. You wouldn't use join() where you would've used detach(), and vice versa. Sometimes you just need to create a thread that will run on its own.
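When you do detach, the trick is to make sure the thread owns (or copies) everything it touches. A minimal sketch -- log_async and the sleep at the end are made up for illustration:

#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Fire-and-forget: the lambda captures its data by value, so nothing it uses
// depends on the caller's stack frame after detach().
void log_async(std::string msg) {
    std::thread([m = std::move(msg)] { std::cout << m << '\n'; }).detach();
}

int main() {
    log_async("job started");
    // Sketch only: give the detached thread a moment before main() returns.
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}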

If you write "good" code, you won't use dangerous features. You have to search for the dangerous features to learn them.
None of the features I used in my examples above is "dangerous" (as in, as dangerous as the features C++ inherited from C). The bugs arise when multiple features are combined and misused together. Just using safe features is not enough to produce code that won't corrupt memory.
Conversely, a Rust (or C#, or Java, etc.) program that doesn't contain any unsafe and doesn't call out to external functions will never, ever, ever corrupt memory.

A dangerous mountain path has guard rails, but no one is stopping you from climbing over them and hopping around.
Following this analogy, C++ has guardrails that at many points sink into the ground. You don't need to hop over them, you just need to follow them along. Eventually if you just keep walking normally without doing anything too crazy you'll walk past a spot that should've had a guardrail and you'll fall off the cliff. It's better than C, which has no guardrails and lets you do goofy shit all day long, but that's a low bar.

I'm not sure C++ has any non-dangerous features. As in, that cannot trigger UB in some way. Even just adding two numbers may be undefined if they're signed. You have to be constantly vigilant and aware of all possible pitfalls of everything you're using. There's a difference between not stopping you from shooting yourself in the foot and daring you to solve a minefield labyrinth.
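Even something this innocuous is already outside the rules -- a minimal sketch:

#include <limits>

int main() {
    int x = std::numeric_limits<int>::max();
    int y = x + 1;   // signed integer overflow: undefined behavior
    (void)y;
}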
They're for different things. You wouldn't use join() where you would've used detach()

Sure you can, it would just require a change in your program logic. All that matters is whether join() is logically complete.

Conversely, a Rust (or C#, or Java, etc.) program that doesn't contain any unsafe and doesn't call out to external functions will never, ever, ever corrupt memory.

It's not like I think this is a bad thing, but I wouldn't want C++ to be nerfed in this way. As an "extension pack", sure. But to mandate the language be safe is not something I'd want.

There are legitimate reasons to want to write dangerous code, or even code that outright corrupts memory (to check system security against such threats/attacks, to teach the inner workings of code, etc.). Yes you can use "unsafe" code in other languages, but I don't think/know if it's really the same.

I remember being sad that I couldn't use pointer logic to alter the value of a const variable, then happy again when I could do it using volatile. It wasn't useful in any way, but I love having the freedom to write terrible code, darn it!

Of course the first thing I tried was...

#include <iostream>

int main()
{
    const volatile int a = 3;
    int* ptr;
    ptr = (int*)(&a);   // casting away const; writing through ptr is UB
    *ptr = 5;

    int b[a];           // VLA attempt: not standard C++, and a isn't a compile-time constant
}


But I couldn't trick it into making a VLA (or at least see what would happen), since it won't accept a volatile variable.

C++ has guardrails that at many points sink into the ground

Maybe, I just don't feel that it's as easy to fall into some UB trap as it seems. Certainly those who try to just "pick up" C++ may have trouble, but not someone who knows what they're doing.

You just lack experience. You've not spent enough time debugging broken code.

to check system security against such threats/attacks, to teach the inner workings of code, etc.
Those are not legitimate reasons.

Yes you can use "unsafe" code in other languages, but I don't think/know if it's really the same.
I keep making the same point, but it really is the best analogy. Just like how C++ lets you get out of the type system through casts in order to do things that you know work but that the compiler can't check, unsafe code is there to let you do things that are correct but that the rules of the language forbid because it can't prove them correct. Then just like how in C++ you can try to pass an int to a function expecting an std::string and the compiler will catch you, in Rust you can try to reference a moved object and the compiler won't let you. You only need to pay close attention inside unsafe blocks, not everywhere at all times.

If you're insane, you can wrap your entire program in a giant unsafe block and do whatever you want. Making wrong choices is your prerogative.
You've not spent enough time debugging broken code.

I've spent plenty of time debugging broken code. Rarely has it been the case that the issue was UB.

I don't normally use C++ for super complex projects, since it usually requires some external library with a learning curve where C# can do it easily. But I don't think you can blame C++ for an external library's dangers.

Those are not legitimate reasons.

?? of course they are. Those are hardly all the reasons either. If there were no legitimate reasons for "dangerous" code, languages like Rust wouldn't have a need for unsafe blocks.
I've spent plenty of time debugging broken code.
"Plenty of time" and "enough time" are distinct conditions.

Rarely has it been the case that the issue was UB.
That just means the UB you've written hasn't exploded on you yet, or that you haven't written enough code to get a lot of UB yet.
Of course UB is a small proportion of all errors. The reason to want to eliminate it is that its destructive potential is unbounded.

?? of course they are.
They're not. You don't need to use unsafe code to test a system's reliability, and you don't learn how the abstract machine works by writing shitty code. I used to do that crap once, okay? There's a thread here somewhere where I "prove" to a guy that a reference is just a pointer by making it point to something else by overflowing a pointer. That was wrong; it doesn't prove anything about C++'s machine model. There's a CPU-compiler combination out there where our nonsense doesn't work.

If there were no legitimate reasons for "dangerous" code, languages like Rust wouldn't have a need for unsafe blocks.
I already explained why unsafe blocks are needed. There are safe operations that can be performed that the compiler can't check for validity. The same way this:
struct A {
    int x;
};

struct B {
    int x;
};

int main() {
    A a;
    a.x = 42;
    auto &b = *(B *)&a;   // reinterpret a as a B through a cast
    b.x++;
}
is safe, even though this:
 
B &b = a;
is illegal.
Unsafe blocks in Rust and casts in C++ are not there so you can write broken code on purpose, they're there because the language designers know the type system prohibits certain correct programs from being written. If it was possible to design a type system where only correct programs were valid, why would we want anything else? That doesn't exist, so instead we make the type systems more restrictive than absolutely necessary, and then add these escape hatches.
"Plenty of time" and "enough time" are distinct conditions.

In this case, I'd argue the complexity of the code is more important than how much time we've spent debugging. I would agree you've looked at more complex C++ code on average and have seen things I haven't.

Unsafe blocks in Rust and casts in C++ are not there so you can write broken code on purpose, they're there because the language designers know the type system prohibits certain correct programs from being written

That's exactly the point. These "dangerous" things we partake in are needed in many contexts, where they are used correctly to achieve a purpose.

It reminds me of some of those futuristic world anime/movies where cars drive themselves. Then what? Eventually the main character presses some buttons, pops out a steering wheel, and takes over.

I know me driving the car is "dangerous", especially compared to a sophisticated AI that would be able to do it safer, but I'd still rather drive myself! Again, not saying don't add safer options, just don't prevent me from shooting myself in the foot.

I wish I could argue safety was slow, but it does seem Rust is comparable to C++ in performance. Though I wonder if that means compile times are higher.

There's a thread here somewhere where I "prove" to a guy that a reference is just a pointer by making it point to something else by overflowing a pointer. That was wrong; it doesn't prove anything about C++'s machine model. There's a CPU-compiler combination out there where our nonsense doesn't work.

This is only true because C++ is defined by a standard, and anyone making a compiler is free to figure out how to implement that standard.

This doesn't mean you're wrong, it just means you're taking advantage of the CPU/compiler to prove your point. Just because it doesn't work on some other compiler doesn't mean the point you made was invalid.

For example, if I showed VLA not working on Visual Studio, that's evidence that C++ does not natively support them - even if some other compiler does allow VLA in C++.
*Sets up a comfy lawn chair in the corner, grabs a big bowl of popcorn and several nice and tasty adult beverages and sits down*

Ain't got no dog in this here tussle.....

I know I am very deficient at doing bug testing, so there.
I'd still rather drive myself!
Like I said, nothing prevents you from wrapping your entire Rust or C# program in an unsafe block and doing just whatever you want. It's a stupid thing to do, but you can do it. Likewise, you can eschew C++'s type system and write in C. A C compiler is barely a step above an assembler in terms of the sanity checks it performs on your code, if you really feel you're a bad enough dude to make your checks yourself.

If you ask me, no one should be writing in C anymore, unless they need extreme portability.

Though I wonder if that means compile times are higher.
Eh. It's a mixed bag from what I've seen. Build times very much depend on the codebase you're looking at. I'd say they're on the same order of magnitude, although Cargo is very good at parallelizing the building of dependencies.

This doesn't mean you're wrong, it just means you're taking advantage of the CPU/compiler to prove your point. Just because it doesn't work on some other compiler doesn't mean the point you made was invalid.
I disagree. Assuming the behavior of your current compiler is an inherently precarious position. Today's undefined behavior that works is tomorrow's bug. There was a Linux kernel bug a few years ago that was introduced because the developer added a dereference on a pointer before an existing null check on it. Since according to C's rules dereferencing a null pointer has undefined behavior, GCC was free to assume that the pointer was non-null, so it was able to optimize the if away. In kernel mode, that optimization did not preserve semantics when the pointer is in fact null, and it introduced an exploitable vulnerability into the system running that kernel.
https://lwn.net/Articles/342330/
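A hypothetical sketch of the pattern (not the actual kernel code): the dereference comes before the check, so the compiler may assume the pointer is non-null and drop the check.

struct sock { int type; };   // stand-in structure

int get_type(sock *sk) {
    int t = sk->type;   // dereference first: UB if sk is null, so the compiler assumes it isn't
    if (!sk)            // ...and may legally remove this branch entirely
        return -1;
    return t;
}

int main() {
    sock s{42};
    return get_type(&s);
}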

When writing in C and C++ you should program against the abstract machine as much as possible, not the real machine. Every time you break that rule you're making life harder for yourself down the line.
If you ask me, no one should be writing in C anymore, unless they need extreme portability.

C is no longer a car, C is the DIY engine with 4 wheels.

Like I said, nothing prevents you from wrapping your entire Rust or C# program in an unsafe block

That's the point though, right? I don't wanna have to fiddle with the car to pull out the steering wheel and take over. Once you have a safe AI car, taking over control is going to be "wrong". If you're using Rust and you use "unsafe" blocks, that is now the dangerous behavior that is frowned upon.

Since according to C's rules dereferencing a null pointer has undefined behavior, GCC was free to assume that the pointer was non-null, so it was able to optimize the if away.

That seems ridiculous if true. I never liked GCC/Clang. I knew from the moment I found out they allowed VLAs, when the C++ standard does not, that these compilers would cause me nothing but trouble.

Eh. It's a mixed bag from what I've seen.

I may have to take back what I said about their runtime speed being comparable. Rust seems to be only as fast as C++ when you're using C++'s safe features.

This code and the Rust equivalent of this code both clock in at 13-20 microseconds (well, C++ hit 13, Rust only hit 14).
#include <iostream>
#include <vector>
#include <algorithm>
#include <chrono>

int main() {
    std::vector<int> arr = { 45, 23, 1, 100, 42, 78, 22, 56, 87, 34, 99, 23, 48, 12, 76, 38, 84, 19, 6, 92, 67, 4, 27, 73, 29, 68, 21, 53, 81, 47,
                            33, 88, 91, 31, 24, 74, 55, 18, 61, 66, 94, 15, 41, 5, 16, 89, 77, 36, 63, 14, 32, 69, 7, 93, 20, 54, 82, 40,
                            71, 13, 28, 10, 59, 83, 60, 2, 95, 46, 39, 26, 9, 62, 3, 50, 75, 70, 17, 57, 37, 49, 52, 8, 35, 72, 25, 80,
                            65, 11, 51, 30, 58, 79, 64, 44, 85, 43, 90 };

    auto start = std::chrono::high_resolution_clock::now();

    std::sort(arr.begin(), arr.end());

    auto stop = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);

    std::cout << "Time taken to sort: " << duration.count() << " microseconds" << std::endl;

    for (const int& num : arr) {
        std::cout << num << " ";
    }
    return 0;
}



However, we all know vectors are a little slow. Change them to static arrays:

#include <iostream>
#include <algorithm>
#include <chrono>

int main() {
    int arr[100] = {
        45, 23, 1, 100, 42, 78, 22, 56, 87, 34, 99, 23, 48, 12, 76, 38, 84, 19, 6, 92, 67, 4, 27, 73, 29, 68, 21, 53, 81, 47,
        33, 88, 91, 31, 24, 74, 55, 18, 61, 66, 94, 15, 41, 5, 16, 89, 77, 36, 63, 14, 32, 69, 7, 93, 20, 54, 82, 40,
        71, 13, 28, 10, 59, 83, 60, 2, 95, 46, 39, 26, 9, 62, 3, 50, 75, 70, 17, 57, 37, 49, 52, 8, 35, 72, 25, 80,
        65, 11, 51, 30, 58, 79, 64, 44, 85, 43, 90
    };

    auto start = std::chrono::high_resolution_clock::now();
    
    std::sort(std::begin(arr), std::end(arr));
    
    auto stop = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    
    std::cout << "Time taken to sort: " << duration.count() << " microseconds" << std::endl;

    for (const int &num : arr) {
        std::cout << num << " ";
    }
    return 0;
}


And C++ is around 2x faster on average than the Rust equivalent:

use std::time::Instant;

fn main() {
    let mut arr = [
        45, 23, 1, 100, 42, 78, 22, 56, 87, 34, 99, 23, 48, 12, 76, 38, 84, 19, 6, 92, 67, 4, 27, 73, 29, 68, 21, 53, 81, 47,
        33, 88, 91, 31, 24, 74, 55, 18, 61, 66, 94, 15, 41, 5, 16, 89, 77, 36, 63, 14, 32, 69, 7, 93, 20, 54, 82, 40,
        71, 13, 28, 10, 59, 83, 60, 2, 95, 46, 39, 26, 9, 62, 3, 50, 75, 70, 17, 57, 37, 49, 52, 8, 35, 72, 25, 80,
        65, 11, 51, 30, 58, 79, 64, 44, 85, 43, 90
    ];

    let start = Instant::now();
    
    arr.sort();
    
    let duration = start.elapsed();
    
    println!("Time taken to sort: {:?}", duration);

    for &num in &arr {
        print!("{} ", num);
    }
}



Of course I don't use Rust, so I had AI generate this, but it seems equivalent to me. 7 microseconds with C++, 14 with Rust (no performance difference from using a dynamic array).

I ran all the code on programiz.com since they have Rust and C++ compilers.
That's the point though, right? I don't wanna have to fiddle with the car to pull out the steering wheel and take over. Once you have a safe AI car, taking over control is going to be "wrong". If you're using Rust and you use "unsafe" blocks, that is now the dangerous behavior that is frowned upon.
That's silly. So writing in C++ is safe, but writing unsafe code in Rust is not, even though they provide the same amount of memory safety? May I suggest that you learn Rust so you don't need to talk about things you don't understand?

That seems ridiculous if true. I never liked GCC/Clang. I knew from the moment I found out they allowed VLAs, when the C++ standard does not, that these compilers would cause me nothing but trouble.
Any compiler could do something similar. UB is UB.

I may have to take back what I said about their runtime speed being comparable.
Even though you have no idea which sort algorithms are being used?
So writing in C++ is safe, but writing unsafe code in Rust is not

That's not what I said. Clearly, what I said is meant to be interpreted as "writing it would be considered bad practice". With C++, new safer features make the old ones bad practice since they're more dangerous to use.

When you use Rust, the language is safe. Intentionally putting yourself in an unsafe context is simply bad practice. I don't need to learn Rust to know this is considered bad practice.

Any compiler could do something similar. UB is UB.

I haven't seen the code in question, so I can't really test the theory.

Even though you have no idea which sort algorithms are being used?

Rust is based off C++, I assumed they both used quick sort. I did ask AI to do it, but I can see they do use different sorting methods. I usually wouldn't have that problem, but the new Windows update changed how copilot works. It's really annoying.

Redoing the test gives the same results though:

#include <iostream>
#include <chrono>

void quicksort(int arr[], int low, int high);
int partition(int arr[], int low, int high);

void quicksort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);

        quicksort(arr, low, pi - 1);
        quicksort(arr, pi + 1, high);
    }
}

int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = (low - 1);

    for (int j = low; j <= high - 1; j++) {
        if (arr[j] < pivot) {
            i++;
            std::swap(arr[i], arr[j]);
        }
    }
    std::swap(arr[i + 1], arr[high]);
    return (i + 1);
}

int main() {
    int arr[100] = {
        29, 3, 72, 44, 89, 17, 39, 58, 93, 15, 
        24, 68, 36, 4, 50, 78, 66, 7, 81, 55, 
        20, 9, 64, 31, 95, 11, 48, 27, 90, 13, 
        21, 62, 79, 33, 8, 70, 43, 99, 16, 38, 
        85, 22, 54, 41, 2, 75, 19, 97, 30, 61, 
        67, 5, 83, 25, 14, 87, 35, 69, 92, 10, 
        6, 73, 46, 60, 12, 57, 40, 98, 18, 91, 
        32, 26, 82, 53, 1, 74, 28, 59, 84, 37, 
        65, 23, 100, 49, 42, 77, 56, 94, 34, 76, 
        71, 52, 63, 88, 45, 86, 80, 47, 51, 96 
    };

    auto start = std::chrono::high_resolution_clock::now();
    quicksort(arr, 0, 99);
    auto end = std::chrono::high_resolution_clock::now();

    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "\nElapsed time: " << elapsed.count() << " microseconds\n";

    return 0;
}

Elapsed time: 7 microseconds


use std::time::Instant;

fn quicksort(arr: &mut [i32]) {
    let len = arr.len();
    if len < 2 {
        return;
    }
    let pivot_index = partition(arr);
    quicksort(&mut arr[0..pivot_index]);
    quicksort(&mut arr[pivot_index + 1..len]);
}

fn partition(arr: &mut [i32]) -> usize {
    let len = arr.len();
    let pivot_index = len - 1;
    let pivot = arr[pivot_index];
    let mut i = 0;
    for j in 0..len - 1 {
        if arr[j] < pivot {
            arr.swap(i, j);
            i += 1;
        }
    }
    arr.swap(i, pivot_index);
    i
}

fn main() {
    let mut arr = [
        29, 3, 72, 44, 89, 17, 39, 58, 93, 15, 
        24, 68, 36, 4, 50, 78, 66, 7, 81, 55, 
        20, 9, 64, 31, 95, 11, 48, 27, 90, 13, 
        21, 62, 79, 33, 8, 70, 43, 99, 16, 38, 
        85, 22, 54, 41, 2, 75, 19, 97, 30, 61, 
        67, 5, 83, 25, 14, 87, 35, 69, 92, 10, 
        6, 73, 46, 60, 12, 57, 40, 98, 18, 91, 
        32, 26, 82, 53, 1, 74, 28, 59, 84, 37, 
        65, 23, 100, 49, 42, 77, 56, 94, 34, 76, 
        71, 52, 63, 88, 45, 86, 80, 47, 51, 96
    ];

    let start = Instant::now();
    quicksort(&mut arr);
    let duration = start.elapsed();

    println!("Elapsed time: {} microseconds", duration.as_micros());
}

Elapsed time: 13 microseconds


Notably, these were their best times, but C++ averaged around 7 microseconds while Rust would regularly hit 15-20 microseconds.


Using vectors gives 11 microseconds for C++ and 13 microseconds for Rust. Of course, there may be several other factors in play, but it does seem that the speed advantage of a regular (vs. the safer dynamic) array in C++ does not exist in Rust.

Again, I've never used Rust, so I can't be sure that this is a totally fair comparison, hence why I uploaded the code.
writing it would be considered bad practice
But it's not considered bad practice because of some pointless, pedantic reason. It's considered bad practice because the compiler can't inspect the code inside for memory safety. Objectively, from a memory safety perspective, there's no difference between unsafe Rust and C++. So asking that the language not be made safer because then you'd need to turn those features off is quite irrational. It sounds somewhat sovereign-citizen-ish. Like, yeah, it is certainly your right to write your software however you want, but why would you want the compiler to not check your work if it can? It's the same type of nonsense Python-heads spew about type annotations in static languages, not realizing type inference has existed for decades.

Rust is based off C++, I assumed they both used quick sort.
I think most C++ library implementations use a hybrid sort that's faster than quicksort when the input is nearly sorted.
So asking that the language not be made safer because then you'd need to turn those features off is quite irrational.

Again, I'm not saying C++ shouldn't be safer, I'm saying I like that it's dangerous as my own personal preference. The same way an AI car that will never crash is definitely safer, but eventually some people are gonna want to take out the steering wheel to take control.

Then those people will be frowned upon, "bad practice". Again, I'm not saying this is bad; obviously this is for the best. But, idk, something about seeing danger and concluding it must be made safer gnaws at me. Part of our illogical side likes treading into danger and having the skill to come out unscathed.

I'm not explaining it perfectly, but I guess it's not that important.

I think most C++ library implementations use a hybrid sort that's faster than quicksort when the input is nearly sorted.

Yea, the sort algorithm is optimized to hell and back. I was surprised how close the performance was between sort() and the quicksort implementation, but a simple array of 100 integers is probably not the best stress test for it.

Though it does seem that you get faster performance being unsafe than being safe.
You will not "come out unscathed", though. It's not a game of skill, it's a game you're bound to lose, like Russian roulette. It doesn't matter how many empty chambers there are, if you keep pulling the trigger you'll eventually lose. That is to say, you will get a day where you're feeling tired, or lazy, or distracted, and you will make a mistake. Because that's just how C++ is; it requires you to be on alert at all times, and when you're not, that's when it bites.
it requires you to be on alert at all times, and when you're not, that's when it bites.

Sure, but the stakes usually aren't very high. If they are, then you should be on alert, testing plenty before implementing. This is true whether or not you're using C++.

This is the first time I've actually run Rust code. It doesn't seem half bad reading it. However, my work right now wouldn't benefit much from using Rust, so I probably won't learn it anytime soon. Since I'm training AI, it's actually better to use C++: more chances for the AI to fail and learn.
testing plenty before implementing. This is true whether or not you're using C++.
Like I said, it's the type of nonsense dynamic language fanatics say. Formal checking is much stronger than testing. Testing should only enhance it, not replace it.