

There's a hidden setting when configuring geo on campaigns.

By default it's set to "People -interested- in your target location". You need to change it to "People who are -in- your target location".

This setting is hidden under a toggle, so it's very easy to miss. Definitely a dark pattern and results in a lot of garbage clicks if you overlook that setting.

This is just one of many dark patterns that make Google Ads effective only for people willing to spend the time tuning and tweaking every single setting.

A big part of the problem is Google themselves - they say always use "broad" keyword matches (and of course it's the default). Broad matches are really not good for most campaigns unless you have an extremely large budget, yet if you read their documentation they heavily encourage it.

While we're at it...

1) Never enable the "auto apply recommendations" setting. If you do, it gives Google free rein to modify your campaigns (in my experience this has always resulted in worse performance and more spend)

2) Never listen to a Google Ads rep if they call. Once you're spending enough, they'll call you every week trying to convince you to change various settings. 95% of the time their advice is just plain bad. The quality of the advice does increase once you're spending enough to get assigned more senior reps, but even the senior reps are there to get you to spend more money. Their job is not to make your campaigns more effective; "ad specialists" are simply salespeople in disguise.


Very much with you on this one. Even with 50k+ LoC a 'make clean;make' takes less than a second (with help from ccache and a RAM disk). It effectively gets the compiler out of the way and keeps the thought processes continuous.

One trick I like is to have a 2nd terminal watching the file(s) I'm editing, triggering a make/run on each save; building and running is then a single keypress away:-

  while true; do
    inotifywait -e move_self foo.c
    cc foo.c -o foo && ./foo   # or 'make && ./foo'
  done

> It's almost like some tiny extremist faction has gained control of Windows

This has been the case for a while. I worked on the Windows Desktop Experience Team from Win7-Win10. Starting around Win8, the designers had full control, and most crucially, essentially none of the designers use Windows.

I spent far too many years of my career sitting in conference rooms explaining to the newest designer (because they seem to rotate every 6-18 months) with a shiny Macbook why various ideas had been tried and failed in usability studies because our users want X, Y, and Z.

Sometimes, the "well, if you really want this it will take N dev-years" approach staved things off for a while, but just as often we were explicitly overruled. I fought passionately against things like the all-white title bars that made it impossible to tell active and inactive windows apart (was that Win10 or Win8? Either way user feedback was so strong that it got reverted in the very next update), the Edge title bar having no empty space on top so if your window hung off the right side and you opened too many tabs you could not move it, and so on. Others on my team fought battles against removing the Start button in Win8, trying to get section labels added to the Win8 Start Screen so it was obvious that you could scroll between them, and so on. In the end, the designers get what they want, the engineers who say "yes we can do that" get promoted, and those of us who argued most strongly for the users burnt out, retired, or left the team.

I probably still know a number of people on that team, I consider them friends and smart people, but after trying out Win11 in a VM I really have an urge to sit down with some of them and ask what the heck happened. For now, this is the first consumer Windows release since ME that I haven't switched to right at release, and until they give me back my side taskbar I'm not switching.


> These days you need to have your kernel driver signed by Microsoft

However, this is required only for a “proper” kernel driver specifically; kernel code execution can still be accomplished without any signing at all using /dev/kmem-like mechanisms, which Microsoft explicitly does not consider a bug[1].

> or edit your boot config options to put the machine in an insecure state mostly useful for testing.

Or fiddle with undocumented registry settings (used, among other things, to support upgrades from Windows 7 installations with unsigned drivers) and suppress signing checks for your driver even outside of testing mode[2].

> To get it signed you need to pass a basic test suite which MS provides [...].

You also need to register a business entity and cough up upwards of 300 USD/yr for a Microsoft-approved EV code signing cert[3] before that, which is the biggest hurdle for me at least.

I have to say, even if this new Microsoft is not the same as old Microsoft, it sure looks very similar from some angles.

[1] https://github.com/ionescu007/r0ak#is-this-a-bugvulnerabilit...

[2] https://geoffchappell.com/notes/security/whqlsettings/index....

[3] https://docs.microsoft.com/en-us/windows-hardware/drivers/da...


My personal favorite PHP wtf is this one:

  if (md5('240610708') == md5('QNKCDZO')) {
    echo "true\n";
  }
Though I do use php, and love many things about it.
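For the curious, the "wtf" is type juggling; a sketch of what's going on, using coreutils md5sum just to show the digests:

```shell
# Both digests are "0e" followed by only digits -- valid scientific
# notation for zero, so PHP's loose == compares them as floats,
# and 0.0 == 0.0 is true.
printf %s '240610708' | md5sum   # 0e462097431906509019562988736854
printf %s 'QNKCDZO'   | md5sum   # 0e830400451993494058024219903391

# The fix is strict comparison, which compares the strings byte-for-byte:
#   md5('240610708') === md5('QNKCDZO')   // false
```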

I'd personally recommend pgBackRest as a wal-g replacement. We (Covalent) started with wal-g ourselves, but pgBackRest does full backup and restore so much faster. Besides the parallelism (which is great), pgBackRest's backups are manifests, symbolically mapping to the individual objects in storage that may have come from previous backups. Which means that a differential or incremental backup doesn't need to be "replayed after" a full backup, but instead is just like a git commit, pointing to some newer and some older objects.

Also, auto-expiry of no-longer-needed WAL segments (that we use due to our reliance on async hot standbys) along with previous backups is pretty great.

And we haven't even started taking advantage of pgBackRest's ability to do incremental restore — i.e. to converge a dataset already on disk, that may have fallen out of sync, with as few updates as possible. We're thinking we could use this to allow data science use-cases that would involve writing to a replica, by promoting the replica, allowing the writes, and then converging it back to an up-to-date replica after the fact.
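For a feel of the workflow described above, here is a sketch of day-to-day pgBackRest use (the stanza name "main" is made up; repository paths and retention settings live in pgbackrest.conf):

```shell
# One-time setup for the cluster's stanza:
pgbackrest --stanza=main stanza-create

# Full, then differential backups; the differential's manifest simply
# points at objects already in the repository instead of re-copying them:
pgbackrest --stanza=main --type=full backup
pgbackrest --stanza=main --type=diff backup

# Inspect the chain of backups and WAL coverage:
pgbackrest --stanza=main info
```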


It is rather easy to keep an inkjet printer from gunking up if you buy a set of refillable cartridges, fill them with a 50/50 mix of demineralized water and inkjet cleaning fluid, install them, and run a cleaning cycle after you finish a printing session. No ink in the nozzle means no gunk. This trick also works well for cleaning nozzles, since the cleaning fluid is a surfactant and it breaks up clumps of dried pigment better than just a stream of ink.

The command you want is "git pull --rebase". There are configuration settings to make "git pull" rebase by default, and I'd recommend always turning that on (which may be why the person above omitted it from his pull command).

I actually thought "git pull" did "git pull --rebase" by default (maybe that's what you get from an unmodified configuration?), but maybe I've just been configuring it that way. You can achieve this in your global Git configuration by setting "pull.rebase" to "true".

I don't think it's sane behavior for "git pull" to do anything else besides rebase the local change onto the upstream branch, so I'm surprised it's not the command's default behavior. Has the project not changed the CLI for compatibility reasons or something?

When do you ever want a "git pull" that's not a rebase? That generates a merge commit saying "Merge master into origin/master" (or something similar), which is stupid. If you really want to use actual branches for some reason, that's fine, but "merge master into master" commits are an anti-pattern; if I ever see one in a Git repository I'm working on or responsible for, it results in a conversation with the author about how to use Git correctly.
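Concretely, the rebase-by-default setting discussed above:

```shell
# Make every `git pull` rebase instead of merge, for your user account:
git config --global pull.rebase true

# Check what it's set to:
git config --global pull.rebase   # prints "true" once set
```

The same key can also be set per-repository by dropping `--global` inside a checkout.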


I used to self-host transfer.sh, but nowadays I just use WebDAV.

1. it's easy to get WebDAV running; most HTTP servers support it through modules, or you could run it with rclone, on a non-standard port or behind a reverse proxy

2. I don't share upload access with others; it's only for my own use. This way I don't need to deal with huge uploads or illegal content.

3. I can also just curl it, and the uploaded content will have proper MIME types. It's convenient for sharing pictures and videos this way on Telegram because it generates previews for me, and it's easy to copy-paste a link to send to more than one person.

  curl https://user:password@domain.tld:port/path/to/file.png -T file.png
4. I can use WebDAV for various other cases, such as a KeeWeb instance, Orgzly (Android) note syncing, storing a KeePass database, etc.
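For point 1, a minimal sketch of the rclone route (the folder, port, user, and password are placeholders):

```shell
# Serve a local folder over WebDAV on a non-standard port:
rclone serve webdav "$HOME/webdav" --addr :8080 --user user --pass password

# Then uploads work exactly like point 3:
#   curl -T file.png http://user:password@localhost:8080/file.png
```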

You could write a simple shell script to include random characters in the URL, copy the URL to the clipboard, etc., such as this one I wrote [1]

btw for plain text, I prefer fiche [2], a simple netcat-based pastebin. I have a public fiche instance [3] which allows people to upload with bash/zsh/netcat and shows a webpage with syntax highlighting. Text usually consumes very little storage and isn't as "sensitive" as photos and videos can be, so it's less troublesome to provide a public service for that. I've also written a Telegram bot to easily upload to my pastebin.

The lifespan of my WebDAV uploads and pastes is one month; it's very easy to clean them up with crontab

  @daily find ~/webdav/tmp -mindepth 1 -maxdepth 1 -type f -ctime +30 -delete
[1] https://ezup.dev/git/dotfiles/file/.local/bin/eztmp.html

[2] https://github.com/solusipse/fiche

[3] https://ezup.dev/p/


> If I could be so unproductive...

It goes much further than that...

When one views a fossil-hosted forum post from the main fossil site or sqlite's site, they are looking at...

- A forum post rendered by software Richard wrote.

- Piped out to you via an HTTP server he wrote.

- Served from an SCM he wrote.

- Stored on a database package he wrote.

- All coded in a text editor he wrote.

Complete vertical integration. He's written, or had a majority hand in, every part of the chain except for the browser. We're all awaiting his announcement about his browser project. It's only a matter of time.


I would choose it.

Bottom line, organizing and maintaining large codebases on which lots of developers are collaborating is going to be painful no matter what your stack is. There is no technical fix for overcoming all the dependency and coordination problems created by large, complex software.

As nothing is going to remove that cost for you, the best you can do is transform one set of pain points into a different set of pain points. The most dangerous choice is then the one where the pain points are not well understood, even to the point where you think they aren't there. Trust me - they are there, lurking - waiting for you to start tripping over them.

At least with Spring there is a well understood approach with a large pool of developers and some accumulated wisdom. That's better than most alternatives for real world use.

OTOH, if you are building a small project with a small team, it doesn't matter too much which framework you use, just use whatever your team members are most comfortable with. If your intention is to grow into a massive project, then finding devs who have experience in your stack will matter more down the road.


My favorite Fish plugins:

- jorgebucaran/fisher - Plugin manager

- IlanCosman/tide - Nice prompt with git status

- jorgebucaran/humantime.fish - Turn milliseconds into a human-readable string in Fish.

- franciscolourenco/done - Automatically receive notifications when long processes finish.

- laughedelic/pisces - Helps you to work with paired symbols like () and '' in the command line.

- jethrokuan/fzf - To integrate fzf (junegunn/fzf)


Author here. Absolutely. I used to love Turbo Pascal and Delphi when I was younger.

If Free Pascal uses GNU LD.BFD or LLVM LLD when it links programs, then all you'd need to do is configure it to use cosmopolitan.a when linking system call functions like read(), write(), etc. See https://github.com/jart/cosmopolitan

Another option is for Free Pascal to write all the system call support from scratch; doing that now is going to be a whole lot easier, since the Cosmopolitan codebase does a really good job documenting all the magic numbers you'll need. See files like https://github.com/jart/cosmopolitan/blob/master/libc/sysv/s... and https://github.com/jart/cosmopolitan/blob/master/libc/sysv/c...

I've been working on a tiny C11 compiler called chibicc which has most GNU extensions, and I managed to get it working as an actually portable executable with an integrated assembler: https://github.com/jart/cosmopolitan/blob/master/third_party... I also got Antirez's KILO text editor working as an APE binary: https://github.com/jart/cosmopolitan/blob/master/examples/ki...

If we can build a linker too, then we can get a freestanding single-file toolchain + IDE that's able to be a modern version of Turbo C.

Reminder about alternatives:

For the less extreme case, when you want mostly one distro but also a few packages from another one, there is “alien” [0], which converts between different package formats. It sometimes needs help with system integration or dependencies, but occasionally “just works”.

For a more extreme case, there is a schroot (or a Docker container) with /home, audio, and X mapped in. Works surprisingly well for desktop apps, like those embedded IDEs which require a specific Linux distribution you don’t want to run. You do have to manage desktop integration (menu items, file associations) yourself though.

Those solutions are harder to set up than Bedrock, but they also do not have special filesystem magic, so I bet they are much easier to debug.

[0] https://manpages.debian.org/unstable/alien/alien.1p.en.html


Here's a better solution.

https://www.atlassian.com/git/tutorials/dotfiles

TL;DR:

  git init --bare "$HOME/src/dotfiles"
  alias dotfiles='/usr/bin/git --git-dir=$HOME/src/dotfiles --work-tree=$HOME'
  dotfiles config --local status.showUntrackedFiles no
  dotfiles add ~/.bashrc
  dotfiles commit -m "Add .bashrc"
  ...

Worth mentioning is the insane and disruptive technology making TSMC's 5nm possible. ASML and its suppliers have built a machine[1] that has a sci-fi feel to it. It took decades to get it all right, from the continuous laser source to the projection optics for the extreme ultraviolet light[2]. This allowed photolithography to jump from 193nm light to 13.5nm, very close to X-rays. The CO2 laser powering the EUV light source is made by Trumpf[3].

Edit: More hands-on video from Engadget about EUV at Intel's Oregon facility[4]

[1] https://www.asml.com/en/products/euv-lithography-systems/twi...

[2] https://youtu.be/f0gMdGrVteI

[3] https://www.trumpf.com/en_US/solutions/applications/euv-lith...

[4] https://youtu.be/oIiqVrKDtLc


I have been using Firejail for a few years now and absolutely love it. It is now a central part of my setup and workflows. Here are two features I use regularly:

- The "virtual home" specified with --private=/path/to/folder runs the app with the specified folder as a home folder. I use this for all the apps I sandbox to make sure my real home does not get polluted by tens of config files, cache, etc. Removing all traces of an app is now as easy as deleting /path/to/folder; I find this pretty neat to keep my home folder organized (each app gets its own home in ~/.sandboxes/<app name>).

- Starting an app with --private (without any argument) will run it into a temporary/disposable home folder which will be cleaned up when the app is stopped. I use it to run some apps I don't really trust and don't need persistence for (e.g. I start Chrome with this option so that I get a fresh home, hence profile, every time I need it, same for Zoom when I need to join a meeting---not very often).
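The two modes above, as commands (the folder and app names are just illustrative):

```shell
# Persistent per-app home: config/cache land in the sandbox folder,
# so deleting that folder removes every trace of the app.
mkdir -p "$HOME/.sandboxes/chrome"
firejail --private="$HOME/.sandboxes/chrome" google-chrome

# Disposable home: discarded when the app exits.
firejail --private zoom
```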

And of course all the profiles that are built-in to customize the sandboxing to most popular apps is great!

I'm really thankful for the work being done on this project.


On a similar topic related to the display CSS property, you can attach a CSS to any XML document and have it displayed in a web-browser.

  <?xml-stylesheet type="text/css" href="style.css"?>
Bonus points if you mix that with XSLT.
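A minimal sketch (the element names and file names are made up): the processing instruction links a stylesheet, and the CSS then styles the XML elements directly, using `display` just as it would for HTML:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="style.css"?>
<notes>
  <note>
    <title>Hello</title>
    <body>Rendered by the browser, styled by style.css.</body>
  </note>
</notes>
```

```css
note  { display: block; margin: 1em; border: 1px solid #999; }
title { display: block; font-weight: bold; }
body  { display: block; }  /* the XML <body>, not HTML's */
```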

Author here. I think the Rust vs. Go question is interesting. I actually originally wrote esbuild in Rust and Go, and Go was the clear winner.

The parser written in Go was both faster to compile and faster to execute than the parser in Rust. The Go version compiled something like 100x faster than Rust and ran something like 10% faster (I forget the exact numbers, sorry). Based on a profile, it looked like the Go version was faster because GC happened on another thread while Rust had to run destructors on the same thread.

The Rust version also had other problems. Many places in my code had switch statements that branched over all AST nodes and in Rust that compiles to code which uses stack space proportional to the total stack space used by all branches instead of just the maximum stack space used by any one branch: https://github.com/rust-lang/rust/issues/34283. I believe the issue still isn't fixed. That meant that the Rust version quickly overflowed the stack if you had many nested JavaScript syntax constructs, which was easy to hit in large JavaScript files. There were also random other issues such as Rust's floating-point number parser not actually working in all cases: https://github.com/rust-lang/rust/issues/31407. I also had to spend a lot of time getting multi-threading to work in Rust with all of the lifetime stuff. Go had none of these issues.

The Rust version probably could be made to work at an equivalent speed with enough effort. But at a high-level, Go was much more enjoyable to work with. This is a side project and it has to be fun for me to work on it. The Rust version was actively un-fun for me, both because of all of the workarounds that got in the way and because of the extremely slow compile times. Obviously you can tell from the nature of this project that I value fast build times :)


I’ve used Raindrop for a year and sadly it’s been a horror story. To begin with, it’s a complete maze just to subscribe to it. Second, a serious bug was released that replaced all my bookmarks with some random spam video. Third, the ultimate deal breaker, which happened a few days ago: the iPad app is so buggy that it deleted all my bookmarks when I tried to simply move a folder into a different folder; the navigation is so broken I was left with no choice. A year’s worth of bookmarks went “poof!” in a second; words can’t express my rage.

My advice: drop Raindrop before it drops your bookmarks.


For anyone who needs to/prefers to use bash:

> One nice feature for fish history is the ability to filter through the history based on what is typed into the shell.

Add this to your .inputrc

  ## arrow up
  "\e[A":history-search-backward
  ## arrow down
  "\e[B":history-search-forward

> Fish does not enter any command that begins with a space into its history file, essentially treating it like an incognito command.

Add "HISTCONTROL=ignorespace" to your .bashrc

> A new addition in fish 3.0 is the --private flag that can be used to start fish in private mode; it stops all subsequent commands from being logged in the history file.

Run "unset HISTFILE" in that session.


I wish I could like Traefik, but it really isn't easy.

The use case in our Hackerspace was to dispatch different Docker containers through our wild-card subdomains. Traefik is supposed to also automatically create TLS certificates. I had numerous problems with the Let's Encrypt functionality.

Debugging information is quite cryptic, the documentation seems all over to me, which is even more problematic given the number of breaking changes between 1.x and 2.x versions. The way you automatically configure things through Docker labels means that a simple typo can render your configuration ignored.
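To illustrate the label-driven configuration (the container name, network, and domain here are made up; a typo in any label key silently disables the router):

```shell
# Traefik v2 style: routing is declared as Docker labels on the target
# container, which Traefik's Docker provider picks up at runtime.
docker run -d --name whoami --network traefik_net \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.org`)' \
  --label 'traefik.http.routers.whoami.tls.certresolver=letsencrypt' \
  traefik/whoami
```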

Also, plugging in Traefik to complex docker-compose projects such as Sentry or Gitlab is next to impossible, because of networking: whatever I tried, Traefik just couldn't pick up containers and forward to them unless I changed the definition of every single container in the docker-compose to include an extra network. I don't feel this should be this complex.

Sometimes I just feel that we should get back to using Nginx and write our rules manually. While the concept of Traefik is awesome, the way one uses it is extremely cumbersome.


I consider shellcheck absolutely essential if you're writing even a single line of Bash. I also start all my scripts with this "unofficial bash strict mode" and DIR= shortcut:

    #!/usr/bin/env bash
    
    ### Bash Environment Setup
    # http://redsymbol.net/articles/unofficial-bash-strict-mode/
    # https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
    # set -o xtrace
    set -o errexit
    set -o errtrace
    set -o nounset
    set -o pipefail
    IFS=$'\n'

    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
I have more tips/tricks here: https://github.com/pirate/bash-utils/blob/master/util/base.s...

For those who like htop-style system monitoring tools, you should also keep these commands handy:

- atop (great for finding out what's causing system-wide slowness when you're not sure whether it's CPU/disk/network/temperature/etc.)

- iotop/iftop/sar (top equivalents for disk IO, network traffic, and sysstat counters)

- glances/nmon/dstat/iptraf-ng (pretty monitoring CLI-GUI utils with more colors)

- systemd-analyze blame (track down cause of slow boot times)

- docker stats (htop equivalent for docker containers)

- zpool iostat -v <poolname> 1 (iotop equivalent for ZFS pools)


The prevailing consensus, especially in the JS world, is that one tool should do one thing. This is fine under the Unix philosophy, but challenges arise due to the combinatorial explosion of config needs, bad errors, and overhead from crossing module boundaries. There are a number of attempts at challenging this status quo:

- ESbuild (100x faster than webpack in part due to focus on doing a single parse (but also shipping a Go binary))

- Deno

- Rome

As we consolidate on the jobs to be done we expect out of modern tooling, it makes sense to do all these in a single pass with coherent tooling. It will not make sense for a large swathe of legacy setups, but once these tools are battle tested, they would be my clear choice for greenfield projects.

recommended related reads:

- https://medium.com/@Rich_Harris/small-modules-it-s-not-quite...

- (mine) https://www.swyx.io/writing/js-third-age/


Speaking of fancy alternative shells, I tried http://xon.sh/ for a while, whose selling point is that its language is a superset of Python.

While it was neat for interactive use to have stuff like proper string manipulation and list comprehensions, writing scripts with it (.xsh files) was horrible, because the sorta-but-not-really-Python-ness meant that no tooling worked. Syntax highlighting was messed up, code autoformatters barfed, linters barfed, autocompletion didn't work.

I actually find writing scripts in Bash, aided by the amazing linter https://www.shellcheck.net/ to feel almost like it's now a compiled language. It goes from being a fraught endeavor, to being kinda fun and educational. It's like pair programming with someone who has an encyclopedic knowledge of the language quirks. Hook it up to automatically run on-save in your editor!
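A tiny sketch of that pair-programming feeling (the script and path are made up; SC2086 is shellcheck's real warning code for unquoted expansions):

```shell
# Write a script with a classic bug: the unquoted $f splits on spaces,
# so a filename like "my notes.txt" would delete the wrong things.
cat > /tmp/demo.sh <<'EOF'
#!/bin/bash
f=$1
rm $f
EOF

# shellcheck flags the unquoted expansion as SC2086 and suggests "$f":
shellcheck /tmp/demo.sh
```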


XWiki:

* CKEditor: https://extensions.xwiki.org/xwiki/bin/view/Extension/CKEdit...

* Syntax: https://www.xwiki.org/xwiki/bin/view/Documentation/UserGuide...

Actually any wiki page can define a Class and how to display this class and/or instances of a Class:

* https://www.xwiki.org/xwiki/bin/view/Documentation/DevGuide/...

* https://extensions.xwiki.org/xwiki/bin/view/Extension/App%20...

The Script Macro is useful to make some dashboards ( https://extensions.xwiki.org/xwiki/bin/view/Extension/Script... )

I've deployed this for the internal documentation inside a company I worked for (MediaWiki was a no-go even with a visual editor).

For each new feature, I developed inside a clean new wiki, then exported the changes once I was sure everything was okay. That makes it much easier to upgrade to new XWiki versions.


Slightly related PSA:

Everyone should consider running a wiki locally just for yourself. It's like being able to organize your brain. I just got into it two days ago and basically spent the whole weekend dumping things into it in a way I can actually browse and revisit, like the short stories I'd written, spread out across Notes.app and random folders.

You don't need to run WAMP, MySQL, Apache, phpmyadmin or anything. Here are the steps for someone, like me, who hadn't checked in a while:

0. `$ brew install php` (or equiv for your OS)

1. Download the wiki folder and `cd` into it

2. `$ php -S localhost:3000`

3. Visit http://localhost:3000/install.php in your browser

I tried DokuWiki at first (has flat file db which is cool). It's simpler, but I ended up going with MediaWiki which is more powerful, and aside from Wikipedia using it, I noticed most big wikis I use also use it (https://en.uesp.net/wiki/Main_Page). MediaWiki lets you choose Sqlite as an option, so I have one big wiki/ folder sitting in my Dropbox folder symlinked into my iCloud folder and local fs.

Really changing my life right now. The problem with most apps is that they just become append-only dumping grounds where your only organizational power is to, what, create yet another tag?

My advice is to just look for the text files scattered around your computer and note-taking apps and move them into wiki pages. As you make progress, you will notice natural categories/namespaces emerging.

I just wish I started 10 years ago.


I have a similar feeling: I don't want a bunch of paper around, but writing on paper is just SUPERIOR to anything digital, as it's so much easier to freeform write, add sketches, etc. That's why I bought a Rocketbook (several sizes now). Paperish, can write/draw, easy to digitize, and reusable.
