rqlite author here. The way I think about it is that both systems add reliability to SQLite, but rqlite also offers high availability. Another important difference is that Litestream does not require you to change how your application interacts with the SQLite database, while rqlite does.
Another way I think about it (I'm sure Ben may have other ideas!) is that if you want to add a layer of reliability to a SQLite-based application, Litestream will work very well and is quite elegant. But if you have a set of data that you absolutely must have access to at all times, and you want to store that data in a SQLite database, rqlite could meet your needs.
Litestream author here. I agree with Philip. Litestream relaxes some guarantees about durability and availability in order to be simpler from an operational perspective. I would say the two projects generally don't overlap in the applications they would be used for. If your application is OK with the relaxed guarantees of Litestream, it's probably what you want. If you need stronger guarantees, then use rqlite.
Agreed, they generally solve different problems. It's important to understand that rqlite's goal is not to replicate SQLite per se. Its primary goal is to be the world's "easiest to operate, highly-available, distributed relational database". :-) It's trivial to deploy and very simple to run, and as part of meeting that goal of simplicity it uses SQLite as its database engine.
I was thinking about it as well. It gives off a carefree, holiday sort of feeling, like the old postcards. Very much like the West Indies cricket board logo.
Being a backend developer, I've always assumed CSS to be magical.
Recently I needed to learn CSS to make my website viewable, so I started with w3schools. But the pace was too slow, which made me search for alternatives. A good number of SEO-friendly "best CSS books" lists suggested `CSS In Depth` by Keith J. Grant. With low expectations, I decided to give it a try for a day, and it did not disappoint. As a CSS noob, I liked the way the `Layout` concepts were presented: the book starts with `float` and ends with `grid`, with a couple of chapters in between dedicated to `flex` and other techniques, then picks up responsive design. The concepts were presented in such a way that I felt obliged to replicate whatever the author had done in each chapter. It was a satisfying experience.
Afterward, it presents transition- and animation-related concepts, which I read but have since forgotten because I don't use them in day-to-day work.
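To give a flavor of the layout progression I mentioned, here is a minimal sketch of my own (not taken from the book) showing the same two-column layout done the `float` way and the `flex` way:

```css
/* Approach 1: classic float layout (use one approach per stylesheet) */
.container::after { content: ""; display: block; clear: both; } /* clearfix */
.sidebar { float: left; width: 30%; }
.main    { margin-left: 30%; }

/* Approach 2: the same layout with flexbox; no clearfix needed */
.container { display: flex; }
.sidebar   { flex: 0 0 30%; }
.main      { flex: 1; }
```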
Hmm... I see chapter 3 is "Mastering the Box Model", which is the part my co-workers really don't understand, and it seems to be the whole reason they treat CSS as magical. So that definitely sounds like a plus for that book (especially since one of the headers mentions border-box), although I've never read it myself.
From the headers it's not clear whether it touches on block vs inline vs inline-block, though, nor how margin, border, and padding relate to each other (though that could just be mixed in with the border-box section). Is it complete there?
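For context, the border-box behavior mentioned above works roughly like this (a minimal sketch with made-up class names):

```css
/* content-box (the default): padding and border are added ON TOP of width,
   so this box actually renders 200 + 2*20 + 2*2 = 244px wide */
.default-box {
  box-sizing: content-box;
  width: 200px;
  padding: 20px;
  border: 2px solid;
}

/* border-box: padding and border are included IN the width,
   so this box renders exactly 200px wide */
.predictable-box {
  box-sizing: border-box;
  width: 200px;
  padding: 20px;
  border: 2px solid;
}
```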
I can second this, and also mention that if you buy it through Manning it is DRM-free! They also have a "livebook" version that lets you try code right inline with the book. Highly recommend.
This site, which was posted here a while back, is great: it's up to date and clearly explains the concepts CSS is based on, with examples using CodePen, so you get to see the HTML and CSS as well as the result, and you can open them in CodePen to edit them yourself. I've been doing CSS for close to a decade and learnt quite a few things from reading through it :)
I've created a Go-based tool named Chaakoo for creating tmux windows and panes from a configured layout. The idea is inspired by CSS grid template areas.
The aim is to save ~5 minutes every day by offloading startup-related tmux work to Chaakoo.
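For anyone who hasn't used the CSS feature in question, grid-template-areas lets you name regions of a grid with an ASCII picture, and a name spanning adjacent cells makes that region span them. A minimal sketch in plain CSS (this is the inspiration, not Chaakoo's own config syntax):

```css
.workspace {
  display: grid;
  grid-template-columns: 1fr 2fr;
  grid-template-rows: 1fr 1fr;
  /* "editor" spans both rows of the first column */
  grid-template-areas:
    "editor logs"
    "editor shell";
}
.editor { grid-area: editor; }
.logs   { grid-area: logs; }
.shell  { grid-area: shell; }
```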
Robert Fripp released a new playlist on YouTube[1] named Music For Quiet Moments. It is an instrumental series; quoting from the description:
> Robert Fripp's "Music for Quiet Moments" series. We will be releasing an ambient instrumental soundscape online every week for 50 weeks. Something to nourish us, and help us through these Uncertain Times.
I consider this (posting on YouTube) a huge change from his God Save The Queen days. He's still obsessed with control, but offering free content is something he wouldn't have done then.
Adding for context: after GStQ shows, he'd sit on the edge of the stage and talk to the audience. He definitely connected with his fans. Yeah, he's (openly) a control freak, but he's no 'tone deaf' elitist.
edit: It occurs to me that Discipline could be a product of that controlling nature.
If I can just instruct the browser to delete sessions, cookies, localStorage, etc. after I close a Google search tab, would we still need to self-host Whoogle? This assumes that I'll never log in to Google using that browser and that third-party cookies are disabled.
They can, through browser fingerprinting, yes. Canvas, WebGL, fonts, IP, accelerometer... basically every other web API.
> If I can...
You can! Install Firefox, Multi-Account Containers (an add-on by Mozilla), and Temporary Containers (a third-party add-on). You can configure Temporary Containers to spawn a new container for every tab, or every Google tab, etc. Each container is like a fresh browser session, and the data of a closed container can be cleared after 15 minutes.
What kind of settings? If you want per-site settings, there is (was) uMatrix, which allows for extremely granular configuration. Sadly it is discontinued now, but it still works.
Let's say, for example, I'd like one container to have JavaScript disabled and only the cookie auto-delete add-on enabled, and another with JavaScript enabled plus some specific add-ons (like a password manager). It would also be nice to be able to control cookie settings per container.
One question: let's assume that I've added Firefox containers and instructed it to open every tab in a new container. If I open a link from a Google search result in a new tab (inside a new container), can Google still trace the flow? The opened links point to Google rather than to the actual result, and may contain tracking info.
1) Install ClearURLs. This add-on strips tracking identifiers from URLs. If you hover over a Google search link without this add-on, you'll see it doesn't direct you to website.com; it directs you to google.com, which then forwards you to the site you clicked.
2) Configure Temporary Containers to make a new container for every different subdomain or domain. This way, if you click a link from Google search, regardless of ClearURLs, a new container spawns for any domain/subdomain that does not match (e.g., click netflix.com from Google and Temporary Containers identifies this and spawns a second tab for netflix). This makes some things impossible, like SSO, so configuring it properly can be tricky. You might be able to configure it such that only links clicked from google.com spawn a new container while those that redirect to SSO sites don't, but I haven't done this. You can always open a private window where the context is shared (temporary containers don't work in private browsing) if you need SSO.
Obviously there's more you have to do to be even safer: with pings on by default and JS enabled on Google, they can still see you clicked a link. Also, with Google Analytics (GA), they can infer that someone searching "x" and then "another user" on the same IP fetching "x"'s GA tracking scripts a second later are the same person. The list goes on, and Google really likes tracking people, so it's very difficult to mitigate. The first and most important thing you can do is GET OFF CHROME/EDGE!
Whether it's evil or not depends on how it's used. Suppose that the top result for some common search is poor, but the second one is better, and this is visible to most users from the search result page. Everyone clicks link #2, hardly anyone clicks #1. That is valuable feedback and the search engine developer then knows that there's something wrong with the first result, and this can be determined without keeping any information on the original user. Often this happens when some clever SEO has caused the search engine to give a high rank to some stupid site.
Yes, Google can still recognize "you", for some variation of "you". Anecdote: the other day my wife searched for an address on her phone while on WiFi. I searched for the same address just one minute later on a different computer (on the same WiFi), and Google auto-completed the address before I had typed enough of it to make it unambiguous.
(Consider living in a neighborhood where all the streets around you start with "Fl". Then you go to search for "Flanders Drive", which you have never searched for before, and it gets auto-completed, even though you would have expected "Fl" to expand to "Florence Road", since that's the thing you commonly search for. That's what happened here.)
I did not understand the last sentence. If I solve the captchas, get the search results, and then do the cleanup, it'll just keep asking for captchas, and that's the only added pain, no? Will they be able to conclude that all the requests are coming from the same user?
Yes. But it's going to keep asking you captchas until the user changes behavior (sample of 1) :) (and of course the IP address matters too, like the other poster mentioned; you can try it by searching for insurance, or anything else with good money in it, on your phone and then switching to desktop, assuming both are connected through the same router).
I once had to solve thousands of captchas for an archiving project, and Buster helped me with about a quarter of them (https://github.com/dessant/buster).
Instead of instructing your browser to delete cookies after you leave, why not instruct it not to accept cookies from such sites in the first place? I currently have google.com and youtube.com set to "Never allow cookies from this site" in my browser.