If you are talking about a very naive version of a mempool, then you are correct, but that's why I said a good implementation.
The whole point of a good mempool is that you malloc once, and only call free when you exit the program. The data structures for memory allocation will never get corrupted. And the memory pool will never release a chunk twice, because it keeps track of allocated chunks.
Use after free is mitigated in the same way. When you allocate, you get a struct back that contains a pointer to the data. When you release, that pointer is zeroed out.
> If you are talking about a very naive version of a mempool, then you are correct, but that's why I said a good implementation.
No true Scotsman.
> The whole point of a good mempool is that you malloc once, and only call free when you exit the program. The data structures for memory allocation will never get corrupted. And the memory pool will never release a chunk twice, because it keeps track of allocated chunks.
Then you've just moved the same problem one layer up - "use after returned to mempool" takes the place of "use after free" and causes the same kind of problems.
> When you allocate, you get a struct back that contains a pointer to the data. When you release, that pointer is zeroed out.
And the program - or, more likely, library code that it called - still has a copy of that pointer that it made when it was valid?
It's not about comparing implementations; it's about the fact that a correct mempool implementation solves the problem without the need for a complex borrow checker.
For example, in that implementation, you request memory from a mempool and it returns a chunk-struct with a pointer to the allocated memory, the size of the chunk, and optionally some convenience functions for safe access (making sure that the pointer is not incremented or decremented beyond the limits). The pool also keeps its own pointer to the chunk-struct, along with the chunk that was allocated for it. When you release the chunk, the pool zeros out the pointer in the chunk-struct. Now any access through it will cause a segfault.
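For concreteness, here is a minimal sketch of that design in C. All names are mine, not from any particular library; a real pool would add alignment, growth, and the bounds-checked accessors mentioned above:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define POOL_CHUNKS 16
#define CHUNK_SIZE  64

/* Handle returned to the caller: pointer + size. On release the
   pool zeroes `data`, so a stale access through the handle faults
   at address 0 instead of reading recycled memory. */
struct chunk {
    unsigned char *data;
    size_t size;
};

struct pool {
    unsigned char storage[POOL_CHUNKS][CHUNK_SIZE]; /* one upfront block */
    struct chunk *owners[POOL_CHUNKS];  /* which handle owns each slot */
};

/* Hand out the first free slot; fail with a zeroed handle if full. */
int pool_alloc(struct pool *p, struct chunk *c) {
    for (int i = 0; i < POOL_CHUNKS; i++) {
        if (p->owners[i] == NULL) {
            p->owners[i] = c;
            c->data = p->storage[i];
            c->size = CHUNK_SIZE;
            return 0;
        }
    }
    c->data = NULL;
    c->size = 0;
    return -1;
}

/* Release: clear the slot's owner and zero the handle's pointer.
   Releasing the same handle twice is a harmless no-op, so the pool
   never frees a slot twice. */
void pool_release(struct pool *p, struct chunk *c) {
    for (int i = 0; i < POOL_CHUNKS; i++) {
        if (p->owners[i] == c) {
            p->owners[i] = NULL;
            c->data = NULL;
            c->size = 0;
            return;
        }
    }
}
```

The linear scans keep the sketch short; a real implementation would use a free list.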
You can of course write code that bypasses all those checks, but in Rust that's equivalent to using unsafe when you want to be lazy. You could also argue that Rust is better because instead of segfaulting, the error is caught at compile time, which is true but only for fairly simple programs. Once you start using RefCells, you cannot guarantee everything at compile time.
> You can of course write code that bypasses all those checks, but in Rust that's equivalent to using unsafe when you want to be lazy.
The difference is that most of the Rust ecosystem is set up to allow you to not use unsafe. Whereas whenever you use a library in C, you need to pass it a pointer, so bypassing these checks has to be routine. (Note that the article claims as a key merit that it's possible to add annotations to existing libraries)
> When you release the chunk, it zeros out the pointer in the chunk-struct. Now any access to it will cause a segfault.
Only if you're very lucky. Null pointer dereference is undefined behaviour, so it may cause a different thread to segfault on a seemingly unrelated line, or your program may silently continue with subtly corrupted state in memory, or...
> You could also argue that Rust is better because instead of segfaulting, the error is caught at compile time, which is true but only for fairly simple programs. Once you start using RefCells, you cannot guarantee everything at compile time.
Using RefCells should be (and, idiomatically, is) the exception rather than the rule. And incorrect use of RefCell results in a safe panic rather than undefined behaviour.
Null pointer dereference in the vast majority of cases will segfault. In the cases where it doesn't, that's fully on you for running some obscure OS on some obscure hardware.
>Whereas whenever you use a library in C, you need to pass it a pointer,
When it comes to developing with Rust, any performance-oriented project is necessarily going to have lots of unsafe for interacting with C libraries and the Linux kernel, in the same way that C code does.
As for the comparison to fully safe Rust code outside the unsafe blocks, you can largely accomplish analogous behavior in C with a good mempool implementation. Or, if you don't need to pass around huge amounts of data, you can also do it by simply never mallocing and using stack variables. There are still some things you have to worry about (using safe length-bounded memory copy/move functions, using `type * const` pointer parameters to essentially make them act like references, some other small things).
The point is that Rust isn't the de facto standard for memory safety, and while it can exist as its own project, porting its semantics to other languages is not worth it.
> Null pointer dereference in the vast majority of cases will segfault.
Attempting access to a zero address will segfault on most hardware, but unfortunately common C compilers in common configurations will not reliably compile a null pointer dereference to an access to the zero address. Look up why the Linux kernel builds with -fno-delete-null-pointer-checks (sadly, most applications and libraries don't).
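For illustration, this is the classic shape of the problem (a hypothetical function; whether the check is actually deleted depends on the compiler and optimization level):

```c
#include <assert.h>
#include <stddef.h>

/* The dereference on the first line lets the optimizer assume
   `p != NULL`, so with -fdelete-null-pointer-checks (the default in
   GCC and Clang at -O2) the null check below may be removed entirely.
   Calling this with NULL is undefined behaviour either way: there is
   no guarantee of a clean segfault. */
int read_flags(int *p) {
    int value = *p;      /* UB if p == NULL */
    if (p == NULL)       /* candidate for deletion by the optimizer */
        return -1;
    return value;
}
```

The defined path (a valid pointer) behaves as expected; the point is that the `return -1` safety net may not exist in the compiled binary.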
> When it comes to developing with Rust, any performance oriented project is necessarily going to have lots of unsafe for interacting with C libraries in the linux kernel in the same way that C code does.
I'm not talking about performance-oriented projects. I'm talking about regular use of libraries, e.g. I need to talk to PostgreSQL so I'll call libpq, I need to uncompress some data so I'll use zlib, I need to make an HTTP call so I'll use libcurl...
> The point is that Rust isn't the de facto standard for memory safety
It absolutely is, though. It's got clear, easy-to-assess rules for whether a project is memory-safe or not, and a substantial ecosystem that follows them; so far it's essentially unique in that regard, unless you include GCed languages.
I mean you just proved your own point - compile with -fno-delete-null-pointer-checks.
And whatever criticism you have of that is outweighed by the fact that, for all regular software (i.e. software run on a server, laptop, or desktop) that it would be normal to write in either Rust or C, if it was written in C and a null pointer is dereferenced, it would absolutely crash (i.e. Rust is not really being used to develop embedded systems code in non-experimental workflows where the zero address is a valid memory address).
And whatever criticism you have of that is surpassed by the fact that if you can write Rust code with all the borrowing semantics, you can also write a quick macro for any dereference of a mempool region that checks if the pointer is null and use that everywhere in your code.
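A sketch of what such a macro might look like (the name and error handling are my own invention):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the "check every dereference" macro suggested above.
   Instead of invoking undefined behaviour when the mempool has
   already zeroed the pointer, it aborts loudly with a location.
   The comma expression keeps the whole thing a valid operand of
   `*`, so the macro can be used as an lvalue too. */
#define CHECKED_DEREF(p)                                          \
    (*((p) != NULL ? (p)                                          \
                   : (fprintf(stderr, "null deref at %s:%d\n",    \
                              __FILE__, __LINE__),                \
                      abort(), (p))))
```

Usage is `CHECKED_DEREF(ptr) = value;` or `x = CHECKED_DEREF(ptr);`, so the check travels with every access rather than relying on the hardware trapping address zero.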
So TLDR, not hard to write memory safe code. Rust is just one way to do it, not the only way. It's great for enterprise projects, much in the same way that Java came up because of its strictness, GC, and multi-platform capability. And just like Java today, eventually nobody is going to take it seriously; people who want to get shit done will be writing something that looks like Python except even higher level, with AI assistants that replace text, and then LLMs will translate that code into the most efficient machine code.
Most people don't though. Even if your code was compiled with it, libraries you use may not have been compiled that way. And even if you do, it doesn't cover all cases.
> And whatever criticism you have of that is outweighed by the fact that, for all regular software (i.e. software run on a server, laptop, or desktop) that it would be normal to write in either Rust or C, if it was written in C and a null pointer is dereferenced, it would absolutely crash
No it won't. Not reliably, not consistently. It's undefined behaviour, so a C compiler can do random other things with your code, and both GCC and Clang do.
> And whatever criticism you have of that is surpassed by the fact that if you can write Rust code with all the borrowing semantics, you can also write a quick macro for any dereference of a mempool region that checks if the pointer is null and use that everywhere in your code.
"Everywhere in your code" only if you're not using any libraries.
> So TLDR, not hard to write memory safe code.
If it's that easy why has no-one done it? Where can I find published C programs written this way? Like most claims of "safe C", this is vaporware.
>It's undefined behaviour, so a C compiler can do random other things with your code, and both GCC and Clang do.
Give me an example of a null pointer dereference in a program that one compiles with -fdelete-null-pointer-checks that doesn't crash when it's run on any smartphone, on x64 CPUs in modern laptops/desktops/servers, or on Apple Silicon.
> Give me an example of a null pointer dereference in a program that one compiles with -fdelete-null-pointer-checks that doesn't crash when it's run on any smartphone, on x64 CPUs in modern laptops/desktops/servers, or on Apple Silicon.
https://blog.llvm.org/2011/05/what-every-c-programmer-should... has an example under "Debugging Optimized Code May Not Make Any Sense" - in that case the release build fortuitously did what the programmer wanted, but the same behaviour could easily cause disaster (e.g. imagine you have two different global "init" functions and your code is set up to call one or other of them depending on some settings or something, and you forget to set one of your global function pointers in one of those init functions. Now instead of crashing, calls via that global function pointer will silently call the wrong version of the function).
> The whole point of a good mempool is that you malloc once, and only call free when you exit the program
So you're describing fork() and _exit(). That's my favorite memory manager. For example, chibicc never calls free() and instead just forks a process for each item of work in the compile pipeline. It makes the codebase infinitely simpler. Rui literally solved memory leaks! No idea what you're talking about.
One issue I see with this approach (the compiler leaking memory) is, for instance, if the requirements change and you need to use the compiler as a library or service.
For example, if the Cake source is used within a web browser, compiled with Emscripten, leaking memory with each compilation would lead to a continuous increase in memory usage.
Additionally, compilers often offer the option to compile multiple files, so we cannot afford to leak memory with each file compiled.
Initially I was planning a global allocator for the Cake source.
It had a lot of memory leaks that would be solved in the future.
When ownership checks were added, it was a perfect candidate for fixing the leaks.
(actually I also had this in mind)
True, but with some stuff you just ain't gonna need it. For example, chibicc forks a process for each input file. They're all ephemeral. So the fork/_exit model does work well for chibicc. You could compile a thousand files and its subprocesses would just clean things up. Now, needless to say, I have compiled some juicy files with chibicc. Memory does get a bit high. It's manageable though. I imagine it'd be more of an issue if it were a C++ compiler.
The difference between Cake ownership and C++ RAII is that with RAII, the destructor is unconditionally called at the end of scope.
So flow analysis is not required for RAII.
Cake requires flow analysis because the "destructor" is not unconditionally called.
When the compiler can see that the owner is not owning an object (because the pointer is null, for instance), then the "destructor" is not necessary.
To understand the difference:
With flow analysis (how it works today):
int main()
{
FILE *owner f = fopen("file.txt", "r");
if (f)
fclose(f);
}
Without flow analysis (or with a very simple one, where the destroy must be the last statement):
void fclose2(FILE * owner p) {
if (p) fclose(p);
}
int main()
{
FILE *owner f = fopen("file.txt", "r");
if (f){
}
fclose2(f);
}
The Cake implementation cannot be mapped to Rust. I am not a Rust specialist, but one difference, for instance, is that an owner pointer owns two resources at the same time: the memory and the object. In Rust it is one concept.
Owner pointers take on the responsibility of owning the pointed object and its associated memory, treating them as distinct entities. A common practice is to implement a delete function to release both resources, as illustrated in Listing 7:
Listing 7 - Implementing the delete function
#include <ownership.h>
#include <stdlib.h>
struct X {
char *owner text;
};
void x_delete(struct X *owner p) {
if (p) {
/*releasing the object*/
free(p->text);
/*releasing the memory*/
free(p);
}
}
int main() {
struct X * owner pX = calloc(1, sizeof *pX);
if (pX) {
/*...*/;
x_delete( pX);
}
}
let mut p: Box<X> = Box::new(X { ... });
let x2 = X { ... };
// Moves x2 into the same memory as the first X.
// The first X is automatically dropped as part of this assignment.
// Also consumes x2 so x2 is not available any more.
*p = x2;
// Drops the X that was originally assigned to x2 and then moved into p.
drop(p);
// No need, nor is it possible, to destroy x2.
Thanks for the Rust sample. It looks very similar.
Can the allocator be customized?
As I said, I am not a Rust specialist.
Also, my understanding is that in Rust, sometimes dynamic state is created when the object may or may not be moved.
In Cake ownership this needs to be explicit (and the destructor is not generated).
I also had a look at Rust's lifetime annotations.
This concept may be necessary, but I am avoiding it.
Consider this sample.
struct X {
struct Y * pY;
};
struct Y {
char * owner name;
};
An object Y pointed to by pY must live longer than the object X.
(Cake is not checking this scenario yet.)
Also (classic Rust sample)
#include <stdio.h>

int * max(int * p1, int * p2) {
return *p1 > *p2 ? p1 : p2;
}
int main(){
int * p = NULL;
int a = 1;
{
int b = 2;
p = max(&a, &b);
}
printf("%d", *p);
}
This is not implemented yet, but I want to make the lifetime of p the smallest scope (this is to avoid lifetime annotations).
int * p = NULL;
int a = 1;
{
int b = 2;
p = max(&a, &b);
} //p cannot be used beyond this point
`Box<T>` is the type of an owning pointer that uses the default global allocator, and `Box<T, A>` is the type of an owning pointer that uses an allocator of type `A`. The latter is unstable, i.e. it can only be used in nightly Rust.
(Also the fact that the latter changes the type means a large part of existing third-party code as well as a bunch of code in libstd itself becomes unusable if you want to use a type-level custom allocator because they only work with `Box<T>`. But that's a different discussion...)
> An object Y pointed to by pY must live longer than the object X.
Yes, the py field in Rust would use a reference type instead of a pointer, and the reference would need to have a lifetime annotation, and the compiler would work to prevent the situation you describe:
struct X<'a> { py: &'a Y }
let y = Y { ... };
let x = X { py: &y };
drop(y); // error: y is borrowed by x so it cannot be moved.
But to be clear, the `'a` lifetime syntax is not what's making this work. What's making this work is that the compiler tracks the lifetimes of references in general. This works in the same way even though there are no lifetime annotations:
let y: String = "a".to_owned();
let x = &y;
drop(y); // error: y is borrowed by x so it cannot be moved.
do_something_with(x);
The explicit lifetime annotations are just for a) readability, and b) because sometimes you want to name them to be able to express relationships between them. Eg if two lifetimes 'a and 'b are in play and you want to express that 'a is at least as long as 'b, then you have to write a `'a: 'b` bound. In many cases they can be omitted and the compiler infers them automatically.
(question about rust.. this is not implemented in cake yet)
Let's say I have two objects on the heap,
A and B. A has a "view" of B.
Then we prompt the user (or some other dynamic condition):
"Which object do you want to delete first, A or B?"
The user selects B.
How can this be checked at compile time?
The code path that drops B will not compile unless that code path drops A first. It doesn't matter if that code path is in response to user input or not. Again, as I said, the point is that the compiler tracks the lifetime of all references. In this case A contains a reference to B, so any code that drops B without dropping A will not compile.
> Also, in my understanding is that in Rust, sometimes a dynamic state is created when the object may or may not be moved.
Yes?
if cond {
drop(p)
}
// p may or may not be dropped here
or
let p;
if cond {
p = something();
}
// p may or may not be set here
These trigger dynamic drop semantics, in which case the stack frame has a hidden set of drop flags going alongside any variable with dynamic drop semantics, to know whether they do or don't need to be dropped. The flags are automatically updated when the corresponding variables are set or moved from.
In Cake there is no temporal hole; we cannot reuse the deleted object.
This prevents double free and use after free.
int main() {
   struct X * owner p = malloc(sizeof(struct X));
   p->text = malloc(10);
   free(p->text); //object text destroyed
   //p->text is in an uninitialized state
   //and cannot be used (except for assignment)
   struct X x2 = {0};
   *p = x2; //x2 MOVED TO *p
   x_delete(p);
}
Currently Cake uses the existing MSVC and GCC headers.
These headers do not have any owner qualifiers.
The temporary solution is to re-declare malloc etc. when compiling with Cake, and to not complain when function signatures differ only by owner qualifiers.
If this ownership were standard, then the GCC and MSVC headers would have the qualifiers there, enabled or not, but they would be there.
It works both ways.
We can just tell the compiler to ignore some function.
We can also be creative, writing code that is good and at the same time makes the static analysis happy.
A good sample is linked lists.
Ownership works for non-pointers too. We can have integers (handles) that are owners. This allows custom allocators, for instance.
The concept of an owner is some value that works as a reference to an object and manages its lifetime.
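As a sketch of the handle-as-owner idea (my own illustration, not Cake code): generation-tagged integer handles let a custom allocator reject stale handles at runtime, which is what makes an integer usable as an owning reference:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOTS 8

/* Handle = slot index (low 16 bits) + generation (high 16 bits).
   The generation is bumped on every release, so a stale handle from
   a previous allocation no longer matches its slot and is rejected
   instead of silently aliasing new data. */
typedef uint32_t handle;
#define BAD_HANDLE UINT32_MAX

struct arena {
    int value[SLOTS];
    uint16_t gen[SLOTS];
    uint8_t live[SLOTS];
};

handle arena_alloc(struct arena *a) {
    for (uint32_t i = 0; i < SLOTS; i++) {
        if (!a->live[i]) {
            a->live[i] = 1;
            return ((uint32_t)a->gen[i] << 16) | i;
        }
    }
    return BAD_HANDLE; /* out of slots */
}

/* Returns NULL for stale or invalid handles. */
int *arena_get(struct arena *a, handle h) {
    uint32_t i = h & 0xFFFF;
    if (h == BAD_HANDLE || i >= SLOTS || !a->live[i]) return NULL;
    if ((h >> 16) != a->gen[i]) return NULL; /* stale generation */
    return &a->value[i];
}

void arena_release(struct arena *a, handle h) {
    if (arena_get(a, h) == NULL) return; /* double release: no-op */
    uint32_t i = h & 0xFFFF;
    a->live[i] = 0;
    a->gen[i]++; /* invalidate any outstanding copies of the handle */
}
```

This is the same trick the integer-handle crowd recommends for cyclic structures: the handle owns the slot, and staleness is detectable rather than undefined.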
Static analysis has a significant advantage over runtime checks for memory leaks, especially in code that is almost never executed, because bugs can remain hidden until they appear in production. The code where I found the bug last year was executed only occasionally, and to create a unit test it was necessary to integrate with another server, so it wasn't easy to check at runtime.
Static analysis, on the other hand, will catch the error at the first compilation, even in those rarely executed paths.
Not sure I agree with that premise, as the Cake source would have been written in a way compatible with ownership annotations from the get-go, vs retrofitting an existing codebase. Help me understand how something like this composes:
Now open_file callers would need to know that ownership is being returned, which means that local variables would need to have the owner annotation propagated. That's what I mean when I say it's not composable - the ownership has to propagate fully throughout the codebase for a specific resource. Of course, maybe you know better, as this is just an initial glimpse on my part.
Not sure if I understood. The usage of old and new (checked and unchecked) code is a challenge. We may have the same headers used in both.
The other challenge is that the same source may compile on compilers with or without support.
Ownership Feature Strategy (Inspired by stdbool.h)
If the compiler supports ownership checks and qualifiers such as _Owner, _View, _Obj_view, etc., it must define __STDC_OWNERSHIP__.
However, even if the compiler implements ownership, it is not active by default. The objective is to have a smooth transition, allowing some files to go without checks - for instance, third-party code inside your project.
For instance, when compiling this file, even if the compiler supports ownership, we don't get errors or warnings, because the checks are not enabled by default.
#include <stdlib.h>
int main() {
void * p = malloc(1);
}
A second define, __OWNERSHIP_H__, is used to enable ownership. This define is set when we include <ownership.h> at the beginning.
#include <ownership.h>
#include <stdlib.h>
int main() {
void * p = malloc(1); //error: missing owner qualifier
}
The other advantage of having an <ownership.h> is that owner is a macro that can be defined as empty when the compiler does not support ownership, allowing the same code to be compiled by compilers without ownership support.
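A sketch of how such a shim might be laid out (my guess at the structure, not Cake's actual <ownership.h>; the `struct buf` demo is mine):

```c
/* Sketch of an <ownership.h>-style shim. On a compiler with
   ownership support the convenience names map onto the real
   qualifiers; everywhere else they compile away to nothing. */
#ifdef __STDC_OWNERSHIP__
  #define owner    _Owner
  #define view     _View
  #define obj_view _Obj_view
#else
  /* plain GCC/Clang/MSVC: the qualifiers vanish */
  #define owner
  #define view
  #define obj_view
#endif

#include <assert.h>
#include <stdlib.h>

/* Annotated code still builds as ordinary C when the macros are
   empty, so one source tree serves both kinds of compiler. */
struct buf { char *owner data; };

struct buf buf_make(size_t n) {
    struct buf b = { malloc(n) };
    return b;
}

void buf_delete(struct buf *b) {
    free(b->data);
    b->data = NULL;
}
```

With a checking compiler, the same translation unit would additionally get the ownership diagnostics; with any other compiler it is just plain C.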
To me, ownership composability means you can express ownership locally without it infecting anything outside of that local scope. However, ownership is not always tied to lexical scope, and in those cases it doesn't compose. In other words, you can add all the annotations you want locally and a) the code will still be incorrect, b) the code may not compile, as you show in your other snippet, because now the function signature needs to contain the ownership sigil, which then results in all callers needing the ownership sigil. Disabling it partially only emulates composition if the callers are in external files compiled with alternate options / skipping ownership validation. If you have local calls to the newly annotated function, you'll be back to needing to fix the entire file's annotations.
Unless I misunderstood what OP meant about ownership composition.
If all you give me is half measures, then I’ll either just use plain old C/C++ or I’ll switch to a totally different language. Maybe one with a GC so I don’t have to please some ownership thingy.
I think it's a common misconception that ownership is there to make you suffer compiler shenanigans, when in my experience it changes the way you model programs. It turns out that structuring your program in a way where it's clear who owns what makes for programs that are easier to understand and debug. It's a bit analogous to static typing: saying "I'll use a language like Python without type hints because it'll spare me compiler errors" is a bit short-sighted when you plan on developing the piece of code over a longer time.
If you want to keep programming in a way you are already familiar with, and are not willing to change your way of thinking about programs, then yes, it's a bad fit. If you want to write reliable programs, there is evidence that changing the way we think about and express programming problems can have substantial effects on reliability.
You don't need a borrow checker to write reliable programs. If anything, the Rust obsession with memory safety has been harmful, since it detracts from general safety. But don't take my word for it; maybe consider what the co-author of The Rust Programming Language, 2nd edition, has to say [1].
If you really care about writing robust programs then focus on improving your testing methodology rather than fixating on the programming language.
> I think it's a common misconception that ownership is there to make you suffer compiler shenanigans.
I don't think it's a misconception. When I tried Rust, I attempted to implement a cyclic data structure but couldn't, because there is no clear "owner" in a cyclic data structure. The "safe" solution recommended by the Rustaceans was to use integer handles. So instead of juggling pointers I was juggling integers, which made it harder to debug and find logic errors. At least when I was troubleshooting C I could rely on the debugger to break on a bad pointer, but finding where an integer became "bad" was more time-consuming.
> When in my experience it changes the way you model programs.
Having to rearchitect my code to please the borrow checker gave me flashbacks to Java, where I'd be forced to architect my code as a hierarchy of objects. In both languages you need design patterns and other workarounds, since both force a specific model onto your code whether it makes sense or not.
You seem to have a preset opinion, and I'm not sure you are interested in re-evaluating it. So this is not written to change your mind.
I've developed production code in C, C++, Rust, and several other languages. And while, like with pretty much everything, there are situations where it's not a good fit, I find that the solutions tend to be the most robust and require the least post-release debugging in Rust. That's my personal experience; it's not hard data. And yes, occasionally it's annoying to please the compiler, and if there were no trait constraints or borrow rules, those instances would be easier. But far more often, in my experience, the compiler complained because my initial solution had problems I didn't realize before. So for me, these situations have been about building it the way I wanted to -> compiler tells me I didn't consider an edge case -> changing the implementation and/or design to account for that edge case. Also, taking one example where Rust is notoriously hard and/or un-ergonomic to use and dismissing the entire language seems premature to me. For those who insist on learning Rust by implementing a linked list, there is https://rust-unofficial.github.io/too-many-lists/.
The GP commented once, politely disagreeing and describing their own experience. Looking over their past comments, I also don't see hostility to the ideals of memory safety or using Rust.
Seems like you made a passive-aggressive presumption.
I don't think it's very compelling to convert C code to a thing that gives you a safety half-measure. You'll still have security bugs, so it'll just feel like theatre.
huh? There are also security bugs in Rust, so it is theatre as well?
Pointer ownership could eliminate a class of bugs. And such an approach can be combined with run-time checks for bounds and signed overflow, and then you have a memory-safe C, more or less (some minor pieces are still missing, but nothing essential).
While I don't personally like Rust, I believe Rust achieves this. In Rust, if you don't use the unsafe escape hatch, then your bugs are at worst logic bugs. There won't be any kind of weirdness where you got some math wrong in an array access and now, all of a sudden, an attacker can make your program execute arbitrary code.
On the other hand, this Cake thing just adds some ownership, and when folks say it's problematic the first answer is "oh, just tell it to ignore your function". That doesn't sound like memory safety to me. It's nowhere near Rust in that regard.
Rust does the same thing, though? If you are having trouble pleasing the compiler, you can use unsafe to get around it. Of course, the Rust people are a lot more active at telling you that what you wanted was actually wrong and bad, but it's essentially the same position.
Code that's full of memory bugs is likely full of other bugs too. Improving testing methodology, perhaps establishing official guidelines, would address ownership issues and more. The goal should be to write robust software, because robustness implies memory safety but the reverse is not true.
Converting code can be challenging. The Cake code has been successfully converted. Null checks are not ready yet, and something similar already happened with C#.
The experience is similar to changing a header file to use a const argument where previously the argument was non-const. That change propagates everywhere.
I also think a similar experience is converting JavaScript to typescript.
The type system will complain before it stabilizes.
The ownership checks are new in Cake - less than one year old. So the answer is no, only Cake itself is using them. And there is a lot of work still to do, in flow analysis etc.
The Cake source itself is moving to another experimental feature, which is nullable types.