> Our discussion here suggests that the reasons for the increase in nonparticipation are not strongly related to a weak economy or insufficient labor demand, but rather due to a change in individuals’ desire to supply labor in the market.

The most interesting tidbit from the article is that the data supports the very popular idea that people are withdrawing from the labor market simply because they don’t want to work (for whatever reason, e.g. low pay, hazardous conditions, etc.).


I agree wholly with this. I only very recently started using RSS, and almost every site/blog that produces quality content has a feed. For example, the St. Louis Fed has a few high-quality blogs, and when I couldn’t find the RSS feed I sent them an email; lo and behold, there was a feed for each one (just on a different page that I had missed). It seems to be a rarity for good content feeds not to have RSS.


huh, maybe if you want to make a crawler that competes with Google, you should penalize content that does not have an RSS feed.


I’m not on Ubuntu, but I use Wayland on NixOS. Fractional scaling does not look or work well at all in my experience. I settled on 1.0 scaling and pushed it all over to font scaling (around 1.5). This works pretty well for most apps, though it is a little small sometimes. The biggest issue is having to manually scale up some apps that don’t keep my preferences.


Gnome's accessibility feature "larger text" works a bit better than font scaling, in my experience. With font scaling, alignment and spacing often end up looking odd. And you can toggle "larger text" with one click if you make the accessibility menu permanent.


This is what I do as well. I find it works better than fractional scaling.


When I was doing some research into building an app that encrypts data the way these cloud password managers do, I came across OPAQUE[1], which seems to be the ideal way to perform authentication while securing a master encryption key. It is an asymmetric PAKE that also has a built-in step for applying a salt (via an OPRF). This removes the need for what LastPass does, treating the first hash as a password (rough sketch of that below the links). There is a great article from Cloudflare on how it works[2], and a working implementation of the spec in Rust[3].

[1]: https://github.com/cfrg/draft-irtf-cfrg-opaque

[2]: https://blog.cloudflare.com/opaque-oblivious-passwords/

[3]: https://github.com/novifinancial/opaque-ke
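
To make the LastPass-style trick concrete, here is roughly that derivation in Python (a minimal sketch; the iteration count and salting details are from memory, treat them as illustrative):

    import hashlib

    def derive_keys(email: str, password: str, iterations: int = 100_100):
        """LastPass-style split: one slow KDF output encrypts the vault
        locally; a cheap re-hash of it is what gets sent as the 'password'."""
        # vault encryption key -- derived client-side, never leaves the client
        vault_key = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), email.lower().encode(), iterations)
        # the "first hash treated as a password": one more PBKDF2 round over
        # the vault key, sent to the server as the authentication credential
        auth_hash = hashlib.pbkdf2_hmac(
            "sha256", vault_key, password.encode(), 1)
        return vault_key, auth_hash

With OPAQUE, nothing password-derived has to be handed to the server at all; the salt is mixed in through the OPRF, so neither side has to reveal its secret to the other.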


As someone who has learned Linux/Unix just in the past few months, on a systemd-based distro, I find it a breeze to use. Great documentation, FOSS, and really easy to pick up. I just don’t understand why there is so much dislike thrown around, when that energy could be focused elsewhere, on truly bad behavior/software. And from my understanding, there are distros out there that don’t use systemd, for those who dislike it that much. It’s really perplexing to me as someone newer to the community.


I'm an old school unixhead who's nonetheless made his peace with systemd because, honestly, it makes lots of things work way better than most previously available solutions did.

Thing is, it also forcibly changed a bunch of things by introducing defaults that were not at all what people expected. They might have been better overall, but it still caused some nasty surprises.

Let me try and explain a bit more concretely -

An easy example (not a hypothetical one, btw): if you have a physical server with multiple hard drives and one of them doesn't start, old school unix would boot everything it -could- boot anyway. SystemD changed that behaviour to "if a filesystem doesn't come up, and you don't set an option to tell systemd it's ok if that doesn't happen, don't finish booting".

Now, imagine you're somebody who has a physical server somewhere that has a big cache disk that you use a cheap drive for, because if it fails, well, you've lost some cache space. Now imagine your server used to run a pre-systemd setup, and when you upgraded it to a systemd-using version of the same distro you left all your settings as-is because everything seemed to be working fine. Now ... imagine that drive dies during a reboot ... and so the server doesn't finish booting, and so you can't even ssh into it to figure out what's going on, and the only way to figure it out is to get physically in front of the console.

If that server is four hours' drive away, you might not be very impressed.

I still, overall, think systemd is pretty nice. But while some of the people annoyed with it are annoyed with it on a purely aesthetic/principle sort of basis, there are definitely some decisions the authors made that caused -very- surprising results for people who had been running servers for years already, and some of the dislike being thrown around is very much understandable.
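
(For completeness: the "option to tell systemd it's ok" mentioned above is the nofail mount flag, one line in /etc/fstab, something like this:)

    # nofail: don't block boot if this device is missing;
    # device-timeout shortens the default 90s wait for it to show up
    /dev/sdb1  /var/cache  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2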


That's a fantastic explanation, thanks! Yep, I can definitely understand the dislike of software coming with defaults that break commonly understood precedent.


ILOM/iDRAC/etc. largely solved that sort of thing, but there was always a chance that the grub config became corrupted, e.g. when upgrading kernels, leaving you with a 4-hour drive. PID 1 is critical, and it's ridiculous that it took until systemd for it to stop being something that was just hacked together, rather than a well-supported (if surprising at times) software package. It was a real "cobbler's son has no shoes" moment when I first realized, back in the RHEL 4 days, that the /etc/init.d/ scripts I was looking at were the actual implementation and not some abstraction.


> SystemD changed that behaviour to "if a filesystem doesn't come up, and you don't set an option to tell systemd it's ok if that doesn't happen, don't finish booting".

I have yet to find a way to let it digest sshfs filesystems that may hang at boot without stopping the whole boot process.


Systemd by itself is alright; the problem for me is the larger movement from "Linux is about choice" to tightly integrated, opinionated tools. Examples:

- Gnome started to depend on systemd features. Now non-systemd systems become second class citizens.

- You used to be able to freely swap out window managers, taskbars, and so on, because everything was working with common standards. In the modern world, everything is integrated in your wayland compositor, and if you want to change the way window decorations look, you will also have to change everything else. Systemd is similar, it wants to take over so many responsibilities (task scheduling, DNS, session management), and if you want to swap out one part it is going to be difficult.

- Pulseaudio, NetworkManager, Polkit: written in the same spirit, in part by the same people. They work great if everything works as expected; if they break, it is a nightmare to figure out why. But code depends on them, so they become the de facto standard on desktop Linux.


Linux has never been about choice. There was choice because no two people could agree on a set of protocols and standards. And slowly we're converging on a small set of decent (not perfect!) ideas volunteers can focus their efforts on. I honestly hope the choice meme dies because it is so toxic.

My Linux workstation and servers are not something I want to tinker with, I want solidity and stability and a fragmented user space ecosystem is not the solution.


"linux" (as in: Unixoid distributions for desktop systems) has always been about choice. Debian's update-alternatives, the whole FreeDesktop project, ICCCM standards. The theming community, gnome-look.org, kde-look.org, and so on.

If I want the stability and flexibility of Linux (the kernel) and the GNU command line tools, plus the flexibility to choose a UX paradigm I prefer and visuals I like, I go to desktop Linux. In that sense, choice is the USP of linux (as in: desktop distributions). If I lose that flexibility, I go back to Windows or Mac. Opinionated ("not fragmented") systems like Gnome only work if you're fully on board with their choices. If not, then there is really no benefit for me over just sticking with Windows.

Now, on a server the situation is of course different, and stability is paramount. I like for example that services are declarative and not imperative in systemd. But on a desktop, I don't really care.


> "linux" (as in: Unixoid distributions for desktop systems) has always been about choice.

Again, no. You call it choice; I call it chaos. It was anarchy, and that's the default state for any loosely coupled system. There is no manifesto, no underlying philosophy that decided that, yes, Linux is about choice. Don't conflate lack of organisation with choice.

Linux has always been about creating a libre UNIX-like kernel for x86 processors. And converging toward the same set of libre technologies isn't against the Linux spirit.

I still don't understand why people keep harping on about choice. I've used Linux full time since 2001, and the anarchy has just kept the libre desktop a utopia, because everyone wants every piece of software to support every single option, with no understanding that the number of bugs scales exponentially with the number of choices.


> I still don't understand why people keep harping on about choice.

Because they don't want the choices other people made for them.


Fragmentation is not a skippable problem, though. Especially when you depend on lower-level services, choice without a standard interface creates an exponential explosion of should-be-supported APIs, which is terrible for an already thin community.


> Linux has never been about choice. There was choice because no two people could agree on a set of protocols and standards. And slowly we're converging on a small set of decent (not perfect!) ideas volunteers can focus their efforts on. I honestly hope the choice meme dies because it is so toxic.

So everyone who uses or develops alternative init systems like runit and OpenRC, alternative DEs like KDE, and package managers like apk and pacman should just stop, and instead work on and use systemd, GNOME, and Flatpak, because that is the One True Way of using Linux on the desktop?

Quite a power-trip attitude, I'd say. I can't imagine why I would use Linux if choices I don't agree with were shoved down my throat.


Wayland doesn’t mandate anything; it’s a goddamn protocol for putting a rectangular window’s content on the screen. It is also freely extensible and has very sane feature discovery built in, so basically any additional functionality can be added through a simple API (and many such extensions exist already).


But can you run a remote application from another machine on your own machine? This is my biggest obstacle with Wayland. Xorg did it perfectly.


Yes, you can, with proper compression and the usual streaming. X is no longer network transparent anyway (everyone uses proper GPU acceleration with a litany of bitmaps, so it’s not as simple as serializing drawing commands).
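
For example, waypipe wraps ssh and does exactly this (assuming it's installed on both ends):

    waypipe ssh user@remotehost evolution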


So if I SSH into my remote machine, with the appropriate forwarding agents and Wayland in place, I can run Evolution?

Ok I will give that a go.

That reminds me: when will Wayland be able to support disparate displays, say one big 4K display with 30-bit colour and two others at QHD with 24-bit colour?


Disclaimer: I _still_ like systemd.

My only real big pet peeve with systemd is that the documentation is somewhat lacking... it's all there in the manual, but it's laid out poorly, which makes it hard to understand. The interaction between unit files and dependencies is also quite convoluted once you start digging into how things are ordered. There are a ton of unit file options that enable weird conditional behavior. The moment you need to start comprehending the difference between Wants= vs. Requires=, or hooking into something other than multi-user.target being reached (the most common situation), you are in for a world of trial-and-error testing.
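
To illustrate, a minimal (hypothetical) unit showing the two directives side by side:

    # foo.service (hypothetical)
    [Unit]
    # Wants=: also start bar.service, but keep going if it fails
    Wants=bar.service
    # Requires=: foo would instead be skipped/stopped if bar fails
    #Requires=bar.service
    # neither one implies ordering! without After=, both start in parallel
    After=bar.service

    [Service]
    ExecStart=/usr/local/bin/foo

    [Install]
    WantedBy=multi-user.target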

Systemd is great for starting simple background services in simple desktop and server environments. It becomes a real pain in the ass once you get beyond that.


> The moment you need to start comprehending the difference between Wants= vs. Requires=, or hooking into something other than multi-user.target being reached (the most common situation), you are in for a world of trial-and-error testing.

And the worst part is that the tooling to test this is very limited (basically non-existent). I had a problem with conflicting/circular dependencies (my service had to be launched before networking but after dbus, which is just impossible, because dbus needs networking itself), which worked 9 times out of 10. It was nearly impossible to debug; I had to use the graph tool that shows you visually how much time each service took to start, and completely by chance I noticed that dbus started after networking, which led me to look into it.
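
For anyone else stuck there, these are the commands I ended up leaning on (mything.service is a stand-in for your own unit):

    # the SVG boot timeline that finally exposed the dbus/networking order
    systemd-analyze plot > boot.svg
    # slowest chain of blocking dependencies for a single unit
    systemd-analyze critical-chain mything.service
    # static sanity check of a unit file; catches some unit-file mistakes
    systemd-analyze verify /etc/systemd/system/mything.service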


If forcibly "updating" my system so that it doesn't boot doesn't count as "truly bad behavior/software", I don't know what does. The difference between systemd and outright ransomware is only a matter of degree (frankly I'd prefer paying a bit of cash to having to learn a new set of config files and admin commands because someone decided they didn't like the old ones).

(I don't particularly blame him for the specific non-booting bug, but I do blame him for the arrogance. Everyone makes mistakes, but most people learn from them and develop a bit of humility. Poettering continues to break people's computers every couple of years.)


I use Redirector, which lets me set rules (as complex as you want) to redirect links. For example, I have one that redirects all youtube.com/* links to piped.com/*.

See (also on Chrome under the same name): https://addons.mozilla.org/en-US/firefox/addon/redirector/
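
The rule from my example looks roughly like this in Redirector's UI (from memory; $1 is whatever the wildcard matched):

    Include pattern:  https://www.youtube.com/*
    Redirect to:      https://piped.com/$1
    Pattern type:     Wildcard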


The paper the article is based on: https://www.nature.com/articles/s43017-021-00219-y


Yep, I posted this hoping to raise awareness, but the reaction was not what I expected. In the US, even a meritless legal threat will require hiring a lawyer to ensure you are in the clear, which requires a significant amount of money, in addition to the stress. Researchers should never put anyone in that position.


Here's a thread with three different attorneys who were consulted over this email:

https://twitter.com/DanielleVEsq/status/1472105731474137094


Just going to tag along on my comment above. I can't help but notice the difference in tone between the responses to this article and those to the one yesterday (https://news.ycombinator.com/item?id=29599553), which I had not seen. It seems those who participated in yesterday's thread had quite a different opinion.


As the author, I wonder how much me being open, though not an asshole, about being nonbinary, and having furry stickers in the article, makes people start hitting the vitriol button.


As a person who has watched this kind of thing happen a lot: I think a lot. I had a bunch of friends on tumblr who wrote and exchanged very similar fiction and art. The ones who were known to be trans, female, enbies, or non-white got harassed a lot. The one who was understood to be a white cis male got left alone. And this is on tumblr, where the ostensible position of a large part of the user base is that white cis males are Bad People by default...

So yeah, pretty sure it's that. I almost never get harassed by people who think I'm cis male; the bulk of the harassment came from people who thought I was transmasc.

Probably the furry stuff too, which is honestly sort of terrifying; do these people not know how much infrastructure relies on stressed and overworked furries?


You obviously don't need anyone to tell you this, but for passersby who haven't had this on their radar before: this is almost certainly true. HN commenters will on occasion be superior jerks towards anybody, but this is some loud posters' instantly adopted position when they see a name that doesn't code as male--for another example, see many `rachelbythebay.com` posts, where you have randos getting sniffy and assuming she's incompetent or junior for some Real Interesting Reasons.

By my estimation it is real; the proprietorship of this community (whom I like and appreciate personally, and who I think are operating in good faith) has often made noises about how it should be better. It's not, and it should be.


I guess it's the wording. "Without my consent" is used to imply a strong violation of personality rights, at least in some circles [0]. It's also pretty much a repeat of yesterday's post. At least to me, this makes the issue feel a bit overblown.

[0] The irony that the whole problem is based on wording that implies a lawsuit is not lost on me.


Getting “informed consent” is one of the big guiding principles for research done on humans. My guess is the author deliberately used that language of the scientific community to make clear that they did not agree to be part of the research.


Yeah, but the idea of “informed consent” is broadly misunderstood. There is no constitutional right to informed consent. Not all human subjects research requires informed consent — or even consent. There are other institutional ethical lapses that are much more dangerous — and there are also ethical attitudes that rest on the researcher, not the institution.

Righteous indignation over something like this is dangerous, not least because it can make science much harder (more bureaucratic and more expensive) for all scientists. “More oversight” and “dissolve the IRB” both put too much responsibility on the institution.

Sometimes, we should just blame the people who did something rude and stupid not the institution.


> Not all human subjects research requires informed consent — or even consent.

There have been multiple examples of this going horribly, horribly wrong. (Naturally, the worst examples were government-funded and ran during the Cold War.)

As a society, we have since concluded that at bare minimum, people should know they are being experimented on--and even that isn't enough to stop things from going badly.

This is why the IRB exists in the first place. A major part of its purpose is to prevent this sort of thing from causing undue harm, e.g., by forcing people with limited incomes to seek legal counsel because they believe they're about to be sued into the ground. One of the generally agreed-upon rules for this is that experiments with human test subjects must inform those humans up-front about what they're getting into.

To say that the response to this "can lead to science being much harder" is an ethically wrong defense. We know it makes certain kinds of research harder; that's the point. There are certain kinds of research that directly harm their subjects, and we don't do that to people. More than that, people have a right to decide whether they want to be involved in a study, as they may personally feel endangered by it (e.g., someone who has a PTSD response to being sued may not want to deal with being fake-sued).

To say that calls to dissolve the IRB "put too much responsibility on the institution" is flat-out false. This IS the IRB's responsibility--they approve or reject studies like this specifically to avoid ethical problems like this one. To claim that this isn't the IRB's responsibility is like claiming that it's not the responsibility of the law to revoke a driver's license when someone has been driving drunk, or that it's not the responsibility of the Food and Drug Administration to reject approval for foods that contain dangerous contaminants.


I'd recommend reading up on why not all human subjects research requires informed consent. For instance, there are exceptions for human subjects research on normal educational practices. This carve-out was made because of the difficulty of getting unanimous consent from all parents during normal classroom education. With greater oversight and full informed consent, a lot of educational research simply wouldn't happen.

So, to reframe this: "can you think of scenarios where institutional oversight could cause negative harms to society?" There are tradeoffs in ethical domains, and usually a lot of work has been done to find a middle ground.


The example you gave is not relevant. The researchers in this study are already directly contacting everyone they need consent from, and if an individual declines, the rest of the sample can still be studied (unlike in a classroom setting, where everyone's physically in the same room and it's impossible to study any of them in isolation).

Further, the study would still have worked if the researchers had simply asked for the information as researchers, instead of lying about their identity and making thinly-veiled threats of legal action if the subject doesn't comply.


I’m not defending their study and I feel like you are trying to argue “sides” rather than trying to understand my point about nuance.


I'm primarily concerned with the study, since A) that's the topic at hand, B) it involves technology and privacy, which I care about, and C) people keep comparing this to pentesting, just like last time, and I also care about security research.

I apologize for assuming that you were defending the study; I figured that was the topic of conversation, after all.

If there is nuance, but it does not apply to this situation, then it is worth saying that this nuance does not apply to this situation--so that I'm less likely to misinterpret what you're saying.


Heh, you should check out who you're replying to. He runs some kind of educational research platform.


Wait, your motivating example is keeping parents out of the loop... do you give this example to show that it usually goes really badly when informed consent is missing? Or do you mean to argue that it's okay to experiment on kids without consent, because the end justifies the means? Or is there a third option that I'm currently overlooking?


Yes, that’s correct. Because conducting experiments on things like “does this approach to teaching fractions work better” is important for society. The ends are good and the means are reasonable. We aren’t injecting kids with chemicals—we can only experiment with “normal” educational practice. It shows why nuance is important—and why requiring informed consent isn’t always the most ethical choice for society.

Ethical action involves nuance! It is very comforting to think that the world is black and white, good and bad. But it isn’t. Why is this so difficult to communicate?


It sounds like this type of research - which could alter the subjects' actions and cause harm (costs) - should have included consent.


And the researchers shouldn’t have phrased things like an arse just to increase their response rate!


> In the US, even a meritless legal threat will require hiring a lawyer to ensure you are in the clear, which requires a significant amount of money, in addition to the stress.

So what happens when a site receives a CCPA inquiry from an actual person concerned about privacy instead of a researcher under a fake identity? The site still needs to determine if the law applies to them and if so what they must do to satisfy their obligations, so a real inquiry should be as costly and as stressful as a research inquiry.

Does this suggest that privacy laws such as CCPA (and GDPR) which create obligations for sites to deal directly with users on privacy matters are a bad idea? Should such laws instead require users to go through some state agency as an intermediary which would then only contact the site on behalf of the person if the agency determines that the user's data at the site is covered?


It would have been possible to make the requests without a threat of suit. The thinly veiled threat came from a portion of the email that quoted a specific section of the law and used legal verbiage to get people to respond within a certain time frame (as required by the law). This was taken as a legal threat. The request would have been just as valid without it.


Though due to the nature of the requests, they were not actually subject to that specific section of the law, and thus the demand for a response within 45 days had no genuine legal foundation.


Which brings us back to the comment about meritless legal action still being costly for those it is used against.


Is that intended to either justify or excuse the inclusion of that language? If so, I strongly disagree.


> So what happens when a site receives a CCPA inquiry from an actual person concerned about privacy instead of a researcher under a fake identity?

They conclude that it's a Princeton research study and throw it in the trash.

This is part of the harm this study has done; because the researchers were not upfront about who they were and what they were doing, they have introduced uncertainty about the CCPA process.

> Should such laws instead require users to go through some state agency as an intermediary which would then only contact the site on behalf of the person if the agency determines that the user's data at the site is covered?

That could be a good idea. It would depend, of course, on that agency being well-staffed and well-trained (both of which are separate from being well-funded, which can help). There's a giant pile of messy problems that can crop up due to negative influences from, say, corporations that want to sell more data.

That said, it would be nice to have an org that can do the minimum legal work necessary to figure out if the claimant has a leg to stand on. That would not only minimize the harm of this sort of ill-advised study, but also make it harder to use threats of legal force to coerce smaller site owners.


This is just FUD. I've yet to meet a lawyer who won't do a cursory evaluation of your case for free. It's in their interest to know if you're bringing them an easy win.


> I've yet to meet a lawyer who won't do a cursory evaluation of your case for free.

You don't get out much, do you?

I've known tons of lawyers that won't look at their watch and tell you the time unless they get a tenner for it. To be fair, they are used to folks trying to extract highly valuable services from them for free, so it's sort of a defense mechanism.

I have (and have had) many friends that are lawyers. A few will help me out with quick consults for free. I even have one chap that has gone beyond that, and I'm grateful. I'm quite aware of the value of their services, and always offer (and am willing) to pay; even if they decline to invoice me.


I guess it depends on the lawyer. Not long ago I was shocked when a lawyer (not somebody we know, just a random phone book lawyer) stayed on a call with my girlfriend for a half hour talking about her father's estate and charged nothing for it.


It's also worth noting that under US law, a lawyer who gives you legal advice can be held liable if that advice causes trouble down the line.

This creates even more incentive to have a paywall--one, it keeps people from bugging you for free legal advice that could bite both them and you in the ass later, and two, it ensures that the people who do get advice from you have followed your procedures for setting up an account with you.


I see the issue as being that the email was formulated in a way that is easily perceived as a legal threat. I don't see how it is ethical for a study to send out legal threats to see how someone responds. Now, if the goal is to understand how websites respond to the CCPA/GDPR, it shouldn't be too difficult to ask how they would respond to a request and note that it is part of a voluntary study.


I have been part of the beta test for the Kagi search engine (kagi.com), and my favorite feature is that you can prefer/mute websites. So, for example, I prefer MDN, and the site is ranked higher whenever it shows up in the search results. Note: I'm unaffiliated with them, but the product/team is great, so I'm rooting for them to succeed.


I'm using Kagi at the moment and not only does it have nice features like this but it is also a fantastic search engine.

Maybe the best way to explain how fantastic it is: I think it has a bang operator like DDG's, but I have never tested it, because whenever I look at the results they are obviously better than DDG's or Google's.


I love the preference idea... I'm going to implement that in my personal search engine.
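
Roughly the shape I have in mind, as a minimal sketch (all names hypothetical):

    from urllib.parse import urlparse

    # per-user preferences: >1.0 boosts a domain, 0.0 mutes it entirely
    PREFS = {"developer.mozilla.org": 1.5, "www.w3schools.com": 0.0}

    def rerank(results, prefs=PREFS):
        """results: list of (url, score) pairs from the base ranker."""
        kept = []
        for url, score in results:
            weight = prefs.get(urlparse(url).netloc, 1.0)
            if weight > 0.0:  # weight 0 means muted: drop the result
                kept.append((url, score * weight))
        return sorted(kept, key=lambda r: r[1], reverse=True)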

