> I don't think WAL can ever really have a durable transaction; it is essentially based on leaving a transaction open until it gets "check-pointed" or actually committed to the database file
why not? if you use synchronous=FULL, then WAL does provide durable transactions, no?
I wrote the first article, and I thought the documentation was clear, but then I saw a comment by Hipp which confused me:
> If you switch to WAL mode, the default behavior is that transactions are durable across application crashes (or SIGKILL or similar) but are not necessarily durable across OS crashes or power failures. Transactions are atomic across OS crashes and power failures. But if you commit a transaction in WAL mode and take a power loss shortly thereafter, the transaction might be rolled back after power is restored.
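For concreteness, here's the knob in question: durability of WAL commits hinges on the synchronous pragma, which is per-connection (journal_mode=WAL, by contrast, is persisted in the database file). A minimal Python sketch:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# WAL mode is persistent: once set, it is recorded in the database header.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# synchronous is per-connection. FULL (2) syncs the WAL on every commit,
# making commits durable across power loss, at the cost of an fsync per
# transaction; NORMAL (1) only syncs at checkpoints.
conn.execute("PRAGMA synchronous=FULL")
sync = conn.execute("PRAGMA synchronous").fetchone()[0]  # 0=OFF, 1=NORMAL, 2=FULL, 3=EXTRA

conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()  # with synchronous=FULL, this commit is synced to the WAL

print(mode, sync)  # → wal 2
```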
The one avinassh shows is macOS's bundled SQLite at /usr/bin/sqlite3. It generally has some other weird settings too, like missing the concat() function, last I checked.
Another oddity: it mysteriously reserves 12 bytes per page for whatever reason, making databases created with it forever incompatible with the checksum VFS.
Another: it has 3 different layers of fsync to avoid ever actually doing an F_FULLFSYNC, even when you explicitly ask for a fullfsync (read up on F_BARRIERFSYNC).
You also can't load extensions with `.load` (presumably security but a pain in the arse.)
user ~ $ echo | /opt/homebrew/opt/sqlite3/bin/sqlite3 '.load'
[2025-08-25T09:27:54Z INFO sqlite_zstd::create_extension] [sqlite-zstd] initialized
user ~ $ echo | /usr/bin/sqlite3 '.load'
Error: unknown command or invalid arguments: "load". Enter ".help" for help
I don’t want to use JavaScript/TS, but I want to make games for the web. Are there any nice frameworks that let me write in Rust and compile to WASM for the web?
I’m not using Obsidian atm, but my approach is simply to store them in normal notes. So if I have e.g. a topic.md file I’d make a section called `# Links` there which I can click on. That makes it easier to rediscover links in the right context.
(I’m currently using Org-mode, but the approach is the same.)
You could use Obsidian Web Clipper to save pages from your browser and then make a base that filters based on the folder or tag you use for your bookmarks.
> If a system has distributed-consensus mechanisms, many different forms of event-driven communication, CQRS, and other clever tricks, I wonder if there’s some fundamental bad decision that’s being compensated for (or if the system is just straightforwardly over-designed).
then later down in the article:
> Send as many read queries as you can to database replicas. A typical database setup will have one write node and a bunch of read-replicas. The more you can avoid reading from the write node, the better - that write node is already busy enough doing all the writes.
Somewhat, though CQRS might advocate for separate read and write models. It might be something like: the writer publishes an event that is consumed by something that updates the read model, and queries go directly to the read model. Whether or not that is overengineering depends on the problem and the load.
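As a toy sketch of that shape (all names are made up, not from any framework): a command mutates the write model and publishes an event; a projector consumes the event to maintain a denormalized read model, and queries hit only that read model.

```python
from dataclasses import dataclass

# --- write side: commands mutate the authoritative model and emit events ---
@dataclass
class OrderPlaced:
    order_id: int
    total_cents: int

class WriteModel:
    def __init__(self, bus):
        self.orders = {}  # authoritative state, shaped for writes
        self.bus = bus

    def place_order(self, order_id, total_cents):  # a "command"
        self.orders[order_id] = total_cents
        self.bus.publish(OrderPlaced(order_id, total_cents))

# --- read side: a projector keeps a query-optimized view up to date ---
class ReadModel:
    def __init__(self):
        self.totals_by_order = {}
        self.revenue_cents = 0  # precomputed aggregate, so queries are cheap

    def on_order_placed(self, event):
        self.totals_by_order[event.order_id] = event.total_cents
        self.revenue_cents += event.total_cents

class Bus:
    """Stand-in for a message broker; delivery here is synchronous,
    whereas a real system would be eventually consistent."""
    def __init__(self):
        self.handlers = []

    def publish(self, event):
        for handler in self.handlers:
            handler(event)

bus = Bus()
read_model = ReadModel()
bus.handlers.append(read_model.on_order_placed)
write_model = WriteModel(bus)

write_model.place_order(1, 999)
write_model.place_order(2, 500)
print(read_model.revenue_cents)  # queries never touch the write model
```

The same split maps onto the write-node/read-replica setup from the article: the "event" is the replication stream, and the read model is the replica.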
I thought CQRS was a code pattern where you segregate query models from command models, rather than something that specifies where the data is read from or written to.
Yes, exactly. That difference came in with '12 VSR's view change (deterministic, round-robin), which upgraded '88 VSR's view change.
'14 Raft missed this upgrade, unfortunately (despite citing the '12 paper), thus preserving Oki's '88 "pick the candidate with longest log".
I find the history of consensus fascinating, and was fortunate to interview both Brian Oki and James Cowling (so many anecdotes): https://www.youtube.com/watch?v=ps106zjmjhw
The Raft authors didn't "miss" it, they just decided that randomized was simpler--and simplicity was a deliberate design goal. They found randomized was easier to reason about compared to trying to figure out how to explain and prove correctness when the next "determined" leader has died or been partitioned.
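To make the two styles concrete (a toy sketch, not actual protocol code; all names and timeout values are made up): VSR '12 derives the primary deterministically from the view number, so handling a dead designated primary means advancing the view, while Raft has no designated successor and lets whichever candidate's randomized election timeout fires first campaign for votes.

```python
import random

replicas = ["A", "B", "C"]

# VSR '12 style: the primary is a pure function of the view number
# (round-robin), so every replica agrees on who leads view v without voting.
def vsr_primary(view: int) -> str:
    return replicas[view % len(replicas)]

# If the designated primary is dead, the group must keep advancing the
# view until it lands on a live node -- the case argued above to be
# harder to explain and prove correct.
def vsr_next_live_primary(view: int, alive: set) -> tuple:
    while vsr_primary(view) not in alive:
        view += 1
    return view, vsr_primary(view)

# Raft style: no designated successor; each node draws a randomized
# election timeout, and the first one to expire starts an election.
def raft_first_candidate(alive, rng):
    timeouts = {node: rng.uniform(150, 300) for node in alive}  # ms
    return min(timeouts, key=timeouts.get)

# View 1's round-robin primary is "B"; with B down, we skip to view 2 ("C").
print(vsr_next_live_primary(1, alive={"A", "C"}))
print(raft_first_candidate({"A", "B", "C"}, random.Random(0)))
```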
Lacking formal methods, the original VSR work seems less rigorous/proven. Maybe rigorous proof was just not possible at the time; after all, Lamport had to work really hard to prove Paxos correct.
That is, the "new view" may be missing information from a leader who had a higher term/view number but is temporarily offline. Does the original VSR provide the same fix that Raft does for that scenario? Does it address other possible scenarios? Mechanical proofs seem conspicuously absent, and even human proofs aren't in ready evidence.
:) I'm only using "seem" to indicate the limits of my knowledge. I was just hoping someone would chime in with a link to a stronger/formal proof for VSR. Are you aware of any?

So, yes, to my limited knowledge, I've not found any existing formal proofs for VSR.
The 2012 "VSR Revisited" paper clearly labels its arguments informal; in section 8: "In this section we provide an informal discussion of the correctness of the protocol." I'd be delighted to learn of any formal / machine-checked proofs of VSR, equivalent to the Verdi project for Raft.

These are elaborate and subtle protocols, easy to get wrong. In particular when it comes to things like reconfiguration: even Raft had the famous 2016 bug in the simpler of its two reconfiguration protocols. Note that Verdi did not attempt to verify the reconfiguration protocol; apparently it was too difficult.
There was an attempt earlier this year to give a proven reconfiguration protocol for Raft called ReCraft[1], based on an earlier paper called Adore[2]. They discuss why reconfiguration is so difficult to prove. It has to do with circularity.

[1] "ReCraft: Self-Contained Split, Merge, and Membership Change of Raft Protocol" by Kezhi Xiong, Soonwon Moon, Joshua Kang, Bryant Curto, Jieung Kim, Ji-Yong Shin. Last revised 28 Apr 2025, v2.
I'm not completely convinced yet that ReCraft works; at one point I thought they assumed away certain scenarios -- but I need to revisit it with a close reading. At a minimum, reading the Adore paper's discussion of how much subtlety is involved is pretty compelling.
My conclusion is that formal proof is an absolute necessity to have a fighting chance at a correct implementation--especially when it comes to reconfiguration.
I don't mean to trap you, my anonymous friend :) but does the formal proof for Raft in Coq via Verdi not apply—at least in spirit—to the essential view change and SMR protocol for Viewstamped Replication? And, similarly, would you say that Viewstamped Replication's core view change and SMR protocol is really that different from Multi-Paxos as a superset—such that that proof also wouldn’t carry over?
I agree that proofs are sensitive to modeling choices. But the reason I ask is that the literature generally treats the core of these protocols (view change + SMR—reconfiguration aside for now) as essentially equivalent.
For example, I'm sure you're aware of Heidi Howard's work here, which unifies consensus under one framework, the main differences being election style (i.e. Raft does random, VSR does round-robin) and terminology, not fundamental mechanics. The upside being that optimizations and sub-protocols, such as reconfiguration, can then be shared across protocols.
To your point about reconfiguration, reconfiguration sub-protocols are a field in themselves, and here it’s common to mix and match. To be clear, I'm not aware of a proof for the reconfiguration sub-protocol in '12 VRR (and I've found a bug in its client crash recovery sub-protocol—with Dan Ports finding another in its recovery sub-protocol), but again, as Howard notes, since the SMR cores are equivalent, you can adopt a reconfiguration sub-protocol or session sub-protocol that has been proven—at least this is common practice in production systems.
I hope the spirit of the argument is clear. And trust that none of this changes the OP point: that VSR pioneered the field and that Raft (in the authors' own words) is "most notably" similar to Viewstamped Replication.
(Let's not get into the subject of actual implementation correctness, which is orders of magnitude harder than formal design proofs, or the fact that the formal proofs in question still lack for a storage fault model—for example, many Raft implementations violate the findings of “Protocol-Aware Recovery for Consensus-Based Storage”)
However, SQLite, by default, always truncates the WAL file when the last connection closes:
> When the last connection to a database closes, that connection does one last checkpoint and then deletes the WAL and its associated shared-memory file, to clean up the disk.
Yes, but you can't rely on an orderly database close when it comes to correctness. Non-clean shutdowns are much more common than actual low-level data corruption in many SQLite applications.