It seems to me that the part about unsafe operations is pretty much still unspecified. It's currently just a short paragraph saying they may cause undefined behaviour, plus a list of high-level descriptions of those unsafe operations. But what are the exact semantics of those operations? When exactly is the behaviour undefined?
Part of the reason is that this is the most difficult part to define, and part is that this is very much an area of active research and improvement.
For example, what would a language specification say about the behavior and consequences of calling an unsafe function? It can only point to the function's documentation, since the function could do anything. It might be reasonably well-behaved: `Vec::get_unchecked(index)` returns the item at `index` if `index` is a valid index, and otherwise does whatever your platform, chosen allocator, and overzealous LLVM optimizations happen to do on an invalid pointer access. A different function might be complete chaos, since an unsafe function can contain arbitrary code.
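To make the well-behaved case concrete, here's a minimal sketch (standard library only) of how that contract plays out in practice:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // The caller promises the index is in bounds; the compiler can't check it.
    let x = unsafe { *v.get_unchecked(1) };
    assert_eq!(x, 20);

    // With an out-of-bounds index the call would be undefined behaviour:
    // it might read garbage, crash, or get "optimized" into anything at all.
    // let y = unsafe { *v.get_unchecked(99) }; // UB, never do this
}
```

Note that the bad case isn't guaranteed to crash, which is exactly why a specification can say so little about it.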
"Dereferencing a value of a raw pointer type" would be easier to define, but then you go down the whole pointer-provenance rabbit hole. Saying "well, that might do anything" isn't an unreasonable stance for a specification, as long as the pointer-provenance-aware route is properly specified once it has been stabilized. Documentation, on the other hand, should be more helpful than that (and sadly often isn't): it should tell you when dereferencing a pointer does what you expect, when it doesn't, and what the pitfalls are.
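The defined case is at least easy to demonstrate; a sketch of the happy path, where the pointer is valid, aligned, and derived from a live allocation:

```rust
fn main() {
    let x: u32 = 42;
    let p: *const u32 = &x;

    // Defined: p is non-null, aligned, and derived from a live borrow of x.
    let val = unsafe { *p };
    assert_eq!(val, 42);

    // Once x goes out of scope, dereferencing p would be undefined behaviour,
    // and with provenance in the picture even pointer arithmetic gets subtle.
}
```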
> if a process were to expose a mechanism for other processes to essentially proxy keychain queries through it, that can undermine the security of the whole system.
> A capability-based design should be able to systematically prevent this kind of problems.
I think Entitlements could be considered a type of capability? If so, then you're right on this point, as the solution was to require an entitlement to talk to the daemon itself.
I think that as long as the code sticks to the discipline of never actually doing I/O, only manipulating functions that perform it, it would basically be doing the same thing as the IO monad in Haskell.
So `print(s)` returns a function that, when called, prints `s`. Then there needs to be a function that joins those functions, so that `print(a); print(b)` evaluates to a function that, once called, prints `a` and then `b`.
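A minimal sketch of that discipline, using Rust closures as the deferred actions (`Io`, `print_s`, and `then` are names I'm making up for illustration):

```rust
// A "program" is a deferred action: nothing happens until it is run.
type Io = Box<dyn Fn()>;

// print_s(s) doesn't print; it returns a program that prints s when run.
fn print_s(s: String) -> Io {
    Box::new(move || println!("{}", s))
}

// Joins two programs into one that runs the first, then the second.
fn then(a: Io, b: Io) -> Io {
    Box::new(move || {
        a();
        b();
    })
}

fn main() {
    // Pure so far: building this value has no side effects.
    let prog = then(print_s("a".into()), print_s("b".into()));
    // Only now do the effects happen, in order.
    prog();
}
```

This is roughly `IO ()` and `>>` from Haskell, minus the type-system enforcement: nothing here stops you from calling `println!` directly, which is exactly the "discipline" part.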
What makes Haskell special in my opinion is that it 1) generalises this way of achieving "stateful" functions, 2) enforces the discipline for you, making sure that calling a function never produces side effects, and 3) provides some syntactic sugar (`do`, `<-`, etc.) to make this kind of code easier to write.
Also note that the above example only does output, which is the easier case. For code with input, there will suddenly be values that are unavailable until some side effects have been performed, so the returned functions will also need to encode how those values are to be obtained and used, which complicates things further.
Perhaps these:
1. Stress relief
2. Makes boring work a bit more interesting
3. Rubber duck debugging
4. A small amount of distraction might actually boost productivity by allowing us to jump out of a local optimum?
It likely doesn't have good enough performance for production use. It doesn't look like there's a JIT, so it's all instruction-by-instruction interpretation.
For text editors or IDEs, basically contexts where we expect a lot of typing, I get why sticking to the keyboard is desirable. But I don't get how it's better to browse the web without using the mouse. Most of the time on the web I just read, not type.
>Most of the time on the web I just read, not type.
This is kind of why it works. In vim you spend most of your time in normal mode, reading, moving around, making quick edits. It takes good advantage of the whole keyboard even when you're not writing new text. I use qutebrowser, which has default vim-style bindings for everything and an emphasis on the keyboard. It works very well.
I think an important difference between how I use vim and how I use a web browser is that in vim I need to type frequently. Though I may spend most of my time in normal mode, I still switch to insert mode to make edits very frequently. With my normal web browsing that's not the case, so I'm more comfortable with a mouse there. But I realise this depends largely on the specific use case: reading blog posts is quite different from reading documentation, where one needs to search and backtrack frequently, although both happen in a web browser. The latter involves the keyboard more, so it would benefit more from a keyboard-centric interface.
I wonder how much of that is how and when one comes up in the industry. I learned a little vi at uni only by necessity after years of DOS and Windows 3/9x. It felt so awkward. Decades later I only use vi on servers, but slowly its macro ('normal') mode has grown on me.
For me, I’m often using the web as a reference while programming. It’s nice to be able to flip over to it (using the keyboard) and use the same familiar key bindings I’m using while in my workflow.
That's indeed a nice use case. In general keeping to the keyboard can be helpful in the context of typing, but if I'm just reading news or blog posts or watching videos I'd go with a mouse.
An interesting idea! Sending signals to all processes sounds expensive as hell though. All that work just to make a system call again to flush a buffer. Maybe a mechanism could be added to make the kernel aware of the user-space buffer, so it can fetch data from there directly when it's idle? Is it kind of like io_uring? https://man7.org/linux/man-pages/man3/io_uring_register_buff...
> Sending signals to all processes sounds expensive as hell though.
You'd only send signals to processes that have run any code since the CPU was last idle (if a process hasn't executed, it can't have buffered anything).
There could also be some kind of "I have buffered something" flag a process could set on itself, for example at a well-known memory address.
My hypothesis is that the PDF format bears much of the blame here. The results might be
very different if the material were in a reflowable format like an EPUB or a web page.
If you think about it, the notion of a page exists only because of the physical
limitations of the paper medium, and PDF is page-centric only because it was designed
to represent printable materials. There's no reason to stick to pages anymore if
the material is to be consumed on a screen, and PDF is therefore also the wrong
choice for this.