It's not a "deep dive" into the power adapter - it's part of a "deep dive" series on the laptop model, with a separate article about each individual component.
Ignoring every other benefit and concern, verification in the way proposed is a bad idea because, in most cases, part of the verification process is verifying that the service can send you messages that actually get delivered. If you do it that way and the first "log in link" email you send gets held up as spam or as putatively malicious - because some server has the temerity not to be located inside the US, doesn't have a DKIM signature, etc. - you haven't really verified everything you need to know. Of course, those things can change at the drop of a hat anyway, but I'd rather have verified them 1 time than 0 times.
If all you do is use it for login and you will never need to send a message, then fair enough: the email is essentially just a random string you can prove ownership of, and your ability to send messages that get delivered to the corresponding mailbox is incidental. But that conclusion isn't general enough to explain why "we" (in all cases) should do it that way.
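To make the "random string you can prove ownership of" model concrete, here's a minimal magic-link sketch in Python (all names and the URL are hypothetical, not any particular service's implementation): redeeming the link proves control of the mailbox - and, incidentally, that the first email actually got delivered.

```python
import hashlib
import secrets
import time

def issue_login_link(email, store, send_email, ttl=900):
    # Generate an unguessable token; store only its hash, like a password.
    token = secrets.token_urlsafe(32)
    key = hashlib.sha256(token.encode()).hexdigest()
    store[key] = (email, time.time() + ttl)
    send_email(email, f"https://example.test/login?token={token}")
    return token  # returned here only so the sketch is easy to exercise

def redeem(token, store):
    # A successful redemption proves the address can receive our mail.
    key = hashlib.sha256(token.encode()).hexdigest()
    entry = store.pop(key, None)  # single-use: pop, don't get
    if entry and entry[1] > time.time():
        return entry[0]
    return None
```

Note that the only "verification" here is deliverability: if that first email is silently junked, the user can simply never log in.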
That's fair, but in a lot of cases you verify the email for your account and there is activity associated with that account. There are many reasons other than sending unwanted marketing emails for the service to need to get in touch with you, including the many cases where you're entering the email address precisely because you want to be notified of something.
Grammar is not the most important thing, but it's hard to exemplify tone and voice without having a common language to talk about them, and grammar is that language. I'm guessing the article starts with it because tone and voice are the most universal aspect, not because it's the most important aspect of technical writing. The rest are very hands-on details, but those might not apply to everyone.
The two-axis diagram from Diátaxis is a great way to categorize documentation: the quadrants have different needs and indeed call for different voices. I have seen a number of projects where the only documentation produced is tutorials or itch-scratching posts for one particular thing, and then when a feature changes somewhere you have a mountain of out-of-date information, because everything was hard-wired to that one combination of dependencies, versions and circumstances (whereas a manual could have been updated quickly and that would be that). And on the flip side, plenty of projects where there is a dense compendium of every detail, and it's hard, amid all the implementation details of the clustering gossip protocol versioning, to find the little list of items you need to set the thing up to begin with - linked to the particular sections to read more about installation, configuration, backup, etc.
JavaScriptCore is a constituent part of WebKit, which is pretty much available on washer-dryers these days. But the macOS/iOS/etcOS framework part of JavaScriptCore that adds Objective-C/Swift layers is only available for those platforms, yes: https://developer.apple.com/documentation/javascriptcore
The strongest statement the README makes is "The LumoSQL and SQLite projects are cooperating", which is closer than any other effort I've seen, and welcome if the SQLite project ever wants to swap out the underlying storage engine, but it doesn't really mean that the SQLite team "works on LumoSQL" or vice versa. Certainly it looks like LumoSQL has put significant work into the cooperation, by using Fossil and by "not forking", which may have made the cooperation palatable.
Also, the SQLite project has been consistent about wanting to write all the code for SQLite themselves and not merge in patches (https://sqlite.org/copyright.html). Working on another exploratory project would be a way for them to absorb those changes back into SQLite without becoming incompatible with that policy, but it would have to be the same team making the changes for that to be consistent.
Fossil is an SCM system - https://fossil-scm.org/. Are you suggesting that every piece of software ever kept in, say, a git repository is therefore maintained by the same team?
I understand the pedagogical example as a stark contrast, but this choice immediately turns me off the product. Startups, and the companies they grow into, have spent decades helping themselves to data to track and identify people, and writing clauses into their policies and terms of use that let them stockpile behavioral data - for a potential future "pivot" if nothing else.
We are living in a world where it is more grounded in fact to assume that companies will help themselves to this kind of information under a variety of pretenses than that they won't. The only reason people will assume differently is if they trust the company, the product and the people behind it. The policy seems to be doing its part, but trust needs to be built in every conversation with a potential future customer. If the question that pops up is "how can I be anonymous if I pay you each month", the resulting argument should not be construable as "we're sorry, but we're not going to be your burner phone" - which the comment I'm replying to comes dangerously close to.
People search for many things, and it could reveal their interests, their current location, their financial troubles. The commitment needs to be "we will never, never, never, ever, do anything like this", not just "it doesn't make sense for us to do this". Because if someone wanted to build in that sort of tracking, it could "make sense" from a business standpoint, in that the data, if collected, has a lot of value on the market.
Even if the incentives are indeed aligned to keep the paying customer pleased, we also need to know that if we do walk away, nothing will possibly be left behind as an artifact that a future buyer could do anything with. This assurance is most effective if it's rooted in trust and values rather than in practical concerns; the practical concerns are valid, but only if they are restraints you have applied yourself in pursuit of an objective, rather than "it is not currently in our business plan". (As an extreme example, consider a bank saying "we'd never pick items out of our customers' safe deposit boxes and sell them; it would simply be too much work".)
Basically, the facts seem reasonable enough. But you need to work on not coming off like a parent saying "well, I'm sorry you want to hide things from me", which is what bringing up the covering up of criminal activity, unprompted, in response to questions of anonymity and privacy does. I understand that those concerns do come up when thinking responsibly about a product like this from all angles, but making them part of a discussion with a potential customer enormously disrespects and denigrates their fundamental needs - the same needs that would attract them to the product in the first place.
I understand I could have communicated my message better, and thanks for bringing it up.
My point is that none of "us" require anonymity in the normal circumstances of using a search engine. But we all absolutely require that our privacy be respected (and many people still conflate privacy with anonymity).
Kagi is 100% privacy-respecting by default, and the examples I gave illustrate not only that, but that a certain (almost total) level of anonymity can be obtained with Kagi too (because searches are not logged, so they cannot be logged with an identity). In a paid service environment where you need to authenticate users, help them restore their accounts, etc., it is very difficult, I dare say impossible, to guarantee total anonymity from a technical standpoint, and I want to be transparent about it.
I think a better sort of guarantee is exactly the one that I gave - our business model does not incentivize any sort of privacy-invasive behavior.
> The commitment needs to be "we will never, never, never, ever, do anything like this"
Perhaps you have not read our privacy policy, but this is exactly what it says there, and I took it for granted. This is probably my mistake: I should have assumed that people submitting inquiries will not have read it.
In my mind, "there is no need for us to do it" is a much more powerful driving force than simply stating we won't. Stating something is easy but not substantial, as witnessed with many big tech companies. The reality is that their business model forces them to twist reality away from privacy statements like those. Our business model does exactly the opposite, and has a positive feedback loop with the customer (we cheat you -> you take your wallet elsewhere).
Rereading this very answer, I have a feeling it will again not be the most satisfying, so I hope that with your help we can come to an answer that will satisfy your inquiry.
Many designs do this implicitly. At least one, the original iPhone keyboard autocomplete/suggestion algorithm, did it explicitly.
Ken Kocienda's book Creative Selection has a very good chapter on the algorithm being built piece by piece; considering words formed by the surrounding keys was part of collecting all the candidates.
They even used this in marketing. One of the pre-original-iPhone-launch videos was focused just on the on-screen keyboard (probably because almost everyone thought it was a really kooky idea at the time), and used the example of explicitly pressing "O-U-Z-Z-A" but still getting "pizza" as the autocomplete because it was the closest recommendation.
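The candidate-collection idea can be sketched roughly like this (a toy illustration, not Apple's actual algorithm; the neighbor map covers only the keys needed for the example):

```python
# Toy sketch of neighbor-key candidate matching (not Apple's actual code).
# Each typed key may stand for itself or for a physically adjacent key.
QWERTY_NEIGHBORS = {
    "o": "ipkl",
    "u": "yihj",
    "z": "xas",
    "a": "qwsz",
}

def could_mean(typed, word, neighbors=QWERTY_NEIGHBORS):
    # A word is a candidate if every letter matches the typed key
    # exactly or sits next to it on the keyboard.
    if len(typed) != len(word):
        return False
    return all(t == w or w in neighbors.get(t, "")
               for t, w in zip(typed, word))

DICTIONARY = {"pizza", "pasta", "olive"}
candidates = sorted(w for w in DICTIONARY if could_mean("ouzza", w))
# "ouzza" matches "pizza": o~p, u~i, z=z, z=z, a=a
```

A real implementation would weight candidates by touch distance and word frequency rather than doing a binary match, but the collection step is the same idea.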
One of the iOS versions a few years ago became incredibly fond of including the space bar: slightly-off key presses near it were treated as candidate spaces, splitting the input into two or more words. When you're using a language with a lot of compound words, like Swedish, this yielded some almost postmodern corrections, with one or more of the resulting words often completely different (but close on the keyboard, of course). I don't know if this was a tweak to the hand-tuned algorithm going off the rails or an AI version that wasn't quite tuned well yet.