That's also an option available to you: Mutex::new() is const, so there's no trouble putting one in a static variable. If the inner value can't be const-constructed, you'll need a way to prevent access before it's initialized; for that, you can use a OnceLock or just Box::leak() the Mutex once it's constructed and pass around a simple reference.
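A minimal sketch of both patterns (the names here are illustrative, not from any particular codebase):

```rust
use std::sync::{Mutex, OnceLock};

// Mutex::new is const, so a const-constructible value can live in a static directly.
static COUNTER: Mutex<u64> = Mutex::new(0);

// For values without a const constructor, OnceLock gates access until initialization.
static CONFIG: OnceLock<Mutex<String>> = OnceLock::new();

fn config() -> &'static Mutex<String> {
    CONFIG.get_or_init(|| Mutex::new(String::from("default")))
}

fn increment() -> u64 {
    let mut guard = COUNTER.lock().unwrap();
    *guard += 1;
    *guard
}
```

Callers just use `COUNTER.lock()` or `config().lock()`; there's no `Arc` anywhere.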
What would be the disadvantages of this approach? I don't like Arcs myself either; I'd like to see what the best alternative is for them in multi-threaded apps.
Different commenter but yes, that's exactly what they mean.
The main disadvantage is strong coupling. Shared state inside a static makes your code harder to test. There are ways to reduce this coupling, the most obvious being to push as much logic as possible into functions that accept a reference rather than use the static directly. That minimizes the amount of code that is difficult to test, which you then cover with end-to-end testing.
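A hedged sketch of that decoupling (the registry here is hypothetical): the logic is written against a plain `&mut Vec<String>`, and only a thin wrapper touches the static, so unit tests never need the global at all.

```rust
use std::sync::Mutex;

static REGISTRY: Mutex<Vec<String>> = Mutex::new(Vec::new());

// Core logic takes a plain reference; tests can pass in any Vec they like.
fn register(names: &mut Vec<String>, name: &str) -> bool {
    if names.iter().any(|n| n == name) {
        false
    } else {
        names.push(name.to_string());
        true
    }
}

// The thin wrapper is the only code that knows about the global.
fn register_global(name: &str) -> bool {
    register(&mut REGISTRY.lock().unwrap(), name)
}
```

Only `register_global` needs end-to-end coverage; `register` is ordinary, fast unit-test territory.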
The OnceLock does impose 1 atomic operation of overhead, whereas an Arc imposes 2 (in this case, it can be more if weak references are used). However neither are likely to be important when you consider the overhead of the Mutex; the Mutex is likely to involve system calls and so will dominate the overhead. (If there's little to no contention then the Mutex is a lot like an Arc.)
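To make the refcount traffic concrete: each `Arc` clone and drop is an atomic counter update, observable through `Arc::strong_count` (a sketch, not a benchmark):

```rust
use std::sync::{Arc, Mutex};

// Returns the strong count while a clone is alive, and again after it's dropped.
fn refcounts() -> (usize, usize) {
    let shared = Arc::new(Mutex::new(0u32));
    let clone = Arc::clone(&shared); // atomic increment
    let during = Arc::strong_count(&shared);
    drop(clone);                     // atomic decrement
    (during, Arc::strong_count(&shared))
}
```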
Your alternative would be to use `Box::leak()` as the parent comment describes, which would force you to pass a reference around (resulting in better coupling) and eliminate overhead from OnceLock. However, it's unlikely to result in any material performance benefit, and you could reduce coupling without departing from your current approach, so I don't think either approach is clearly or overwhelmingly better.
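For completeness, the `Box::leak()` variant looks like this (function names are made up for the example):

```rust
use std::sync::Mutex;

// Box::leak trades the allocation's eventual deallocation for a &'static
// reference -- fine for state that lives as long as the program does.
fn make_shared(initial: u32) -> &'static Mutex<u32> {
    Box::leak(Box::new(Mutex::new(initial)))
}

fn bump(counter: &Mutex<u32>) -> u32 {
    let mut guard = counter.lock().unwrap();
    *guard += 1;
    *guard
}
```

Because `make_shared` returns a plain `&'static Mutex<u32>`, everything downstream takes an ordinary reference, which is the coupling benefit described above.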
There's some precedent for this: back in the '40s, the movie studios were forced to sell their stakes in theaters due to antitrust issues around exclusivity. Streaming services owning studios feels like essentially the same situation.
`Send`, `Sync`, and `Unpin` are special because they're so-called 'auto traits': The compiler automatically implements them for all compound types whose fields also implement those traits. That turns out to be a double-edged sword: The automatic implementation makes them pervasive in a way that `Clone` or `Debug` could never be, but it also means that changes which might be otherwise private can have unintended far-reaching effects.
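A small illustration (the types here are made up): swapping one private field from `Arc` to `Rc` silently removes `Send` from the containing type, which is exactly the far-reaching effect described.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

struct UsesArc { shared: Arc<u32> }  // automatically Send + Sync: every field is
struct UsesRc  { _shared: Rc<u32> }  // automatically !Send: Rc's refcount isn't atomic

fn send_across_thread(v: UsesArc) -> u32 {
    // Compiles only because UsesArc is Send.
    thread::spawn(move || *v.shared).join().unwrap()
}

// The same function written for UsesRc would be rejected:
// fn send_rc(v: UsesRc) -> u32 {
//     thread::spawn(move || *v._shared).join().unwrap() // error: Rc<u32> is not Send
// }
```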
In your case, what happens is that async code conceptually generates an `enum` with one variant per await point which contains the locals held across that point, and it's this enum that actually gets returned from the async method/block and implements the `Future` trait.
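A hand-written sketch of that desugaring (greatly simplified: the awaited sub-futures are replaced by immediately-ready values, and `Waker::noop` requires a recent Rust):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, Waker};

// Roughly what the compiler generates for:
//   async { let x = 10; first().await; x + second().await }
// One variant per await point; `x` is live across the first await,
// so it is stored in that variant.
enum Sketch {
    Start,
    AfterFirst { x: u32 },
    Done,
}

impl Future for Sketch {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        loop {
            match *self {
                Sketch::Start => {
                    let x = 10;
                    // first().await completed; carry `x` into the next state.
                    *self = Sketch::AfterFirst { x };
                }
                Sketch::AfterFirst { x } => {
                    let second = 32; // second().await completed
                    *self = Sketch::Done;
                    return Poll::Ready(x + second);
                }
                Sketch::Done => panic!("polled after completion"),
            }
        }
    }
}

// Drive the future to completion with a no-op waker (it never returns Pending here).
fn run(mut fut: Sketch) -> u32 {
    let mut cx = Context::from_waker(Waker::noop());
    match Pin::new(&mut fut).poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!(),
    }
}
```

This is why a `!Send` local held across an await point makes the whole future `!Send`: it becomes a field of one of these variants.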
> There’s no difference with a password, except that the sign-in process can be streamlined when everything works
There is one other major difference behind the scenes: With passkeys, the service you’re logging into never has enough information to authenticate as you, so leaks of the server-side credential info are almost (hopefully completely) useless to an attacker.
It’s an attack that lets the malicious actor hijack the passkey registration flow to insert a key that they know, so that they can later log in as the victim.
If the computer where registration happens is not trusted, no authentication protocol will help. Compare this attack ("malicious computer substitutes passkey at registration time") with a password one ("malicious computer substitutes password at registration time").
But unlike a compromised password, a compromised passkey can be detected much more easily, since the "real" key will end up not working, unless the attacker also adds it to the victim's account.
That should be very noticeable to the victim though, right?
Their own key would not work (unless the attacker persistently MITMs them and swaps their own credential in for every subsequent authentication) or they'd see multiple credentials being present in their account.
It's also a good idea to send out an email for every new credential added.
A quick read-through of their anonymization process seems to indicate that they didn't scan the message contents for PII (other than usernames).
If true, that seems like a huge oversight. I also wonder what would happen if someone finds their information in the dataset and requests it to be removed per GDPR or other privacy legislation.
I understand wanting to be careful, but didn't they only grab messages from servers that are already very public? Are Twitter message datasets anonymized?
That's not how GDPR works, and in this case the data is clearly not anonymised, despite the authors' claims. Among other requirements, there need to be mechanisms for users to delete their data, whether or not it was at some point public.
The authors can presumably update the dataset on the site; however, I think past versions remain. Beyond that, the GDPR is at odds with the reality that public posts and data almost never go away. I don't think that reality can be legislated away, try as politicians might.
In all honesty, for the sake of practicality, it's better to reserve that kind of enforcement for private, personal data.
I use Anki more as a serendipity engine than for memorization: Whenever[1] I have an interesting observation or thought, I'll write a couple of sentences about it and file two copies: one in Obsidian, with links to any adjacent/relevant notes (if any), and another in Anki as a cloze deletion.
Anki is set up with a long review cycle (1 day, 1 week, 1 month, then automated) and I sit down to do my reviews about once a week. In that process, I usually end up having new ideas to make notes about based on either the randomized order the notes show up in or spotting a connection between the review note and something I've been working on lately.
[1] In practice, I let many/most of these go unrecorded - I probably average about one new note per day, but in bursts.
From my understanding, it's more that the explosion phase of a detonation is more fuel-efficient than the burning phase, but can't last very long under normal circumstances because the flame front moves so fast. This is a trick to keep fueling the explosion so it can be sustained indefinitely.
Think about it in terms of an old-fashioned gunpowder line fuse: If you lay it out in a ring and have some kind of mechanism to continuously place down new gunpowder on the ring in front of the flame, you can keep it going until you run out of fuel.
What I've never understood is how detonation can be more efficient than deflagration. What does that mean? Both types of engine take a mix of fuel and oxidiser and turn it into hot combustion products. The hot combustion products then expand through a nozzle to produce thrust. How is that process different between the two? Does a deflagration engine leave some fuel unburnt that a detonation engine burns? Does the combustion of the same fuel somehow produce more heat? Or less heat but more pressure? Is it something about the expansion?
To put it another way, if you set up a deflagration engine and a detonation engine next to each other, and fed them fuel and oxidiser at the same rate, how would the streams of exhaust gas coming out of them look different? What other external differences would you see?
This is mentioned in the introduction here [1]. The version I'm looking at is an image, so I cannot easily copy-paste the relevant passage, but it seems to say that the efficiency gain comes from detonation causing the combustion to occur at a higher pressure than in the case of deflagration. Generally speaking, higher peak pressures increase the efficiency of heat engines, as this allows for greater expansion of the working fluid, and thus more work being done for the same fuel consumption.
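As a hedged aside, the standard ideal-cycle result points the same way: for an ideal Brayton cycle (the textbook idealization of steady deflagration-based combustion), thermal efficiency grows with the pressure ratio:

```latex
\eta_{\text{Brayton}} = 1 - r_p^{-(\gamma - 1)/\gamma},
\qquad r_p = \frac{p_{\text{combustion}}}{p_{\text{ambient}}}
```

So anything that raises the effective combustion pressure, as detonation does, raises the ideal efficiency for the same fuel flow.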
With regard to your comparison, I guess this means that the detonation engine can have a higher pressure in the combustion chamber, together with a larger bell, a faster-moving exhaust, or some combination.
The energy produced by the combustion is the same but different fractions of it are converted into work. Just like efficiency differences in other types of heat engine.
Carnot efficiency, η = 1 − T_c/T_h, is the theoretical limit on a cycle's ability to extract work from thermal energy, where T_c is the exhaust (coldest) temperature after expansion and T_h the hottest temperature before expansion. Temperatures are in kelvin, so you'd get 100% only if T_c reached absolute zero.
Think of it as similar to a high-compression automobile engine (race car) being more efficient than a low-compression engine (tractor). Not exactly correct, but a way to start thinking about it.
Explosions release more energy in a shorter time, which leads to a higher peak temperature and thus higher kinetic energy in the products. This directly results in more pressure/thrust.
This is one of the annoying constructions in English that has two common meanings which are the opposite of each other. It can either be referring to the worst possible/conceivable setback (as here) or to the worst encountered setback-- you have to use other clues like overall tone and the surrounding context to figure out which was meant.