The problem with fixes on things this low-level is that they carry the potential to break lots of code. Since broken code has to be fixed, you then get into the "why not just rewrite it in <insert new hotness here>?" argument, which is headed off by just not fixing it.
C/C++ maintainers knew this and didn't want to see their lives' work made less significant. Now the issue's been forced by (among other things) one of the world's most influential software customers, the US Federal Government, implying that contract tenders for software written in languages like Rust will have an advantage over those written in languages that don't take memory safety as seriously.
CHERI claims that the number of changes required is exceedingly small.
Fil-C is getting there.
So, C has a path to survival.
> The problem with fixes on things this low-level is that they carry the potential to break lots of code. Since broken code has to be fixed, you then get into the "why not just rewrite it in <insert new hotness here>?" argument, which is headed off by just not fixing it.
“Lots” is maybe an overstatement.
Also, if there were a way to make C++ code safe with fewer changes than rewriting in a different language, that would be amazing.
The main shortcoming of CHERI is that it requires new HW. But maybe that HW will now become more widely demanded and so more available.
The main shortcoming of Fil-C is that it’s a personal spare time project I started on Thanksgiving of last year so yeah
> CHERI claims that the number of changes required is exceedingly small.
Oh, man. Yes, they do. Many people have been claiming that for decades.
When can we expect one of them to claim it's done?
(To be fair, the number of changes required has been diminishing over those decades.)
I think the hardest part about CHERI is just that it's new HW. That's a tough sell no matter how seamless they make it.
CHERI has hardware in the form of Arm Morello and CHERI RISC-V running FreeBSD, making it easy to check their claims.
CHERI is effectively a mix of options a and b in my categorization, necessitating hardware changes, ABI changes, and a limited amount of software change. I'm not familiar with the other options in particular, but they likely rely on a mix of ABI changes and/or software changes, given the general history of such "let's fix C" proposals.
ABI breaks are not a real solution to the problem. When you talk about changing the ABI of a basic pointer type, this requires a flag day change of literally all the software on the computer at once, which has not been feasible for decades. This isn't an excuse; it's the cold hard reality of C/C++ development.
There is no solution that doesn't require some amount of software change. And the C committee is looking at fixing it! That's why C23 makes support for variably-modified types mandatory--it's the first step towards getting working compiler-generated bounds checks without changing the ABI and with relatively minimal software change (just tweak the function prototype a little bit).
Wouldn’t you have to recompile all your dependencies or run into ABI issues? For example, let’s say I allocate some memory & hand it over to a library that isn’t compiled with fat pointers. The API contract of the library is that it hands back that pointer later through a callback (e.g. to free or do more processing on). Won’t the pointer coming back be thin & lose the bounds check?
Compile everything memory safely and then no problem.
Fil-C sounds like an amazing project!
Do you have any guesses on whether it could easily target WebAssembly? I'd imagine many people would like to run C code in the browser but don't want to bring memory unsafety there.
link: https://github.com/pizlonator/llvm-project-deluge/blob/delug...
How much code out there does stuff to the effect of
And what would happen to such code if pointers are suddenly fat?
CHERI handles that by dynamically dropping the capability when you switch to accessing memory as int.
Fil-C currently has issues with that, but seldom - maybe I've found 3 such unions while porting OpenSSL, maybe 1 when porting curl, and zero when porting OpenSSH (my numbers may be off slightly but it's in that ballpark).