
I tend to think of the following as "states-in-waiting" -- centrally organized groups that can fill political vacuums.

- Several large religions (Catholics, Mormons, certain Baptist conventions)

- Mexican drug cartels

- Several multinational corporations (though the internal bureaucracies of many in the F500 are stagnant and wouldn't respond quickly to a power vacuum)

- Certain sub-divisions of governments (in the US: states, large municipalities, military bases, land managers)

- Certain volunteer organizations, like the Civil Air Patrol, Girl/Boy Scouts

These organizations have strong hierarchy, clear incentive constructs to align to their mission, and can typically respond quickly to external opportunities.

Books like Dies the Fire by S.M. Stirling walk through sort of an uber-case of this kind of scenario from a scifi/fantasy perspective. And another organization filling a void is not crazy -- we've seen it in history (the Catholic Church after the Roman empire, the post-American-revolution period, the KKK and fraternal orders like the Masons/Elks after the US civil war, several places in the Philippines, France, and elsewhere in response to and following WWII occupation, and so forth).

Power is slippery. There is no divine right to it. People do well when it is held stably and respected for what it is. People do poorly when it is mismanaged, and will tend to support the strongest claim when there is a vacuum.


Real wages have been flat since the 1970s (https://www.pewresearch.org/fact-tank/2018/08/07/for-most-us...). "Okay, so people's living standards are about the same since 1970?" Not even close. The problem with inflation is that its calculation is fraught with all kinds of selective weighting, bias, and politics. Take a look at the relative differences between the things which have decreased and increased in cost in real terms (https://static.seekingalpha.com/uploads/2017/5/29/saupload_i...). Okay, so TVs and cellphones are cheaper, but necessities like college, childcare, and healthcare are up significantly.

And when you drill in more, there are other issues. For example, the housing metric appears relatively flat in real terms, right? Except that doesn't tell the story for the majority of Americans. Housing costs are WAY up in cities as Americans migrate to where the jobs are, while housing costs are down in all the areas where jobs are disappearing. In practice, housing costs are significantly up for most Americans, but because the CPI is calculated across the entire nation, the government can claim inflation is low. This allows them to keep the fed rate low, which keeps stock valuations high. This LOOKS great and generates great headlines, but the reality is that most Americans are significantly worse off than in the 1970s.

And then we come to the other elephant in the room: why are real wages flat? Given the incredible performance of the American economy since 1970, shouldn't real wages be significantly UP? This is yet another reminder that wealth is being squeezed from the bottom and funnelled straight up. As a non-American, it's heartbreaking to see efforts to unify the underclass completely derailed by this ridiculous race war. Poor people in America have FAR more in common with each other than with the rich, no matter their race. I really hope you all manage to set aside the obvious red herring of race and unite in tackling your staggering wealth inequality.


Many years ago I stood at the window of my comfortable apartment, watching wind and cold rain rage outside.

I thought about my caveman ancestors who, if they needed water during such a storm, would have had to go out and get it, getting themselves soaked.

If I wanted water, the tap in the kitchen would give it to me, in a nice controlled fashion. If I did feel like having water rain down upon me, my shower would do that, again in a controlled fashion, and I could select the water temperature.

If they wanted the cave to be warmer, they had to burn something and deal with the smoke. And they might have to work hard to obtain whatever it is they burn.

If I wanted my apartment warmer, I just had to turn the knob on the thermostat.

They were at the mercy of their environment. My environment is mine to command. I was feeling pretty superior to my cave man ancestors.

Then I realized that I don't know how to build the systems that I was relying on for my supposed superiority, or even how some of them work.

I'm really just a cave man that found a nicer cave.


Wow. Didn't expect this :) I was the author of this tiny extension back in the day (extension #201, I still remember it), together with a friend who recently passed away, Federico Parodi.

We hacked the first version together when we still didn't know much about JS, but it immediately exploded (500k daily users). Donations were super helpful at that time; I was 22 years old with a newborn.

After a couple of years (iirc) Nils Maier joined the team and helped us a lot. Years later he managed to transition the codebase to the new web extension tooling that was introduced by Firefox a few years ago.

This little extension really shaped my professional career. The realization that something I made could have such an impact is what gave me the drive to keep creating small indie products, which I'm still doing nowadays with http://www.datocms.com

Thanks for making my day! :)


>I tried to work with Mozilla to get a couple minor changes made to enable some of the missing features, but that went nowhere fast.

Yeah, unless you actually write the code, there is no way. And writing code for the Mozilla code base can be quite intimidating, with its mix of C++ and JS and XPCOM and not-XPCOM and WebIDL, etc. I once submitted a patch (unrelated to my extensions) that bounced around the who's-who of Mozilla rockstar devs at the time, and nobody really wanted to even look at it, because it changed something so deep within the XPCOM-JavaScript bridge that nobody really remembered how it even worked. I finally got my r+ from some brave soul who just said "I don't really know the code it touches either, but somebody has to do the review".

Even if you do the work, it can be an uphill battle to get code in, especially if you're trying to add new features and not just fix existing bugs.

I spoke to people within mozilla back in the day - I was part of the community after all and knew a lot of folks - and they weren't exactly happy, but weren't in a position to make things better, either.

DownThemAll! was big enough that they eventually "officially" reached out and asked me what I needed, and then essentially said they couldn't really do any of it, "sorry", and that they knew "that sucks" (refreshingly honest, at least; then again, I wasn't talking to upper management but to a developer-turned-developer-relations person). The person who contacted me, one could tell, was given a mission to appease developers by showing Mozilla cared, but wasn't actually given any resources to really help or support people. All that person could do was apologize and suggest reading the docs on how to propose and implement new APIs - but at the time I had already proposed some new APIs that in my opinion would have benefited not just DownThemAll! but all kinds of add-ons dealing with downloads, and they had already been struck down as "not generally useful to a lot of add-ons, sorry, we do not want to maintain such an API".

What I said almost 5 years ago still holds true in that regard: they tried, to a certain degree, to accommodate some of the really popular add-ons, with some success too, while the smaller add-ons were left in the dust. Not because of any ill will on Mozilla's part, but simply because they lacked the resources to do anything more.


The idea here can be generalized to a primitive programming construct: The magic function "wish", which can do anything you want, just give it a string description.

For example:

    wish('copy these files to this host', files, host)
You can augment this further by allowing the program to specify a return type from the wish, and magically the wish will produce the type you want. So for example:

    data = wish(Path, 'the path of the data files I need')
Of course, the wish is actually implemented by sending a request to some human user. In my "wish" library for Python http://rsyscall.org/wish/ you can have a stack of handlers for wishes, just like exception handlers. If later you want to automate some wish, you can specify a handler which intercepts certain wishes and forwards the other ones on (by wishing again, just like an exception handler can re-raise).

The question in the first paragraph, "But I wonder if it’s ethical to play anything from [company], these days?", is probably the most discussion-worthy topic of this article.

I'd argue that the answer to that question is "yes", no matter if it's Blizzard or some other company. It's completely ethically valid to boycott a shitty company with shitty leadership and shitty culture (as OP puts it); however, it's also completely ethically valid to not boycott such a company and simply pick the best product. There is no ethical duty to participate in every fight; it's ethically good (and better) to "fight for good", but it's also completely acceptable to not put effort into any one of the near-infinite number of those fights - it's only unethical if you're actually harming others or doing evil, and just living your life is not that.

Guilt by distant association is an abhorrent concept that should not be validated - using someone's products does not imply supporting the company's policies or its leaders'/owners' personal behavior or political actions. If you want to make a point of ostracising someone, sure, go right ahead, but there's a very high bar to pass to state that it would be an ethical requirement for everyone else to participate in that ostracism - IMHO it might be debatable whether it would be an ethical requirement even in the case of clearly convicted murderers, let alone in cases which are (a) less severe than murder and (b) where the particular culprits are less clearly proven guilty.


I've put some thought into this topic, and I think the driving force behind the Internet seeming so small nowadays is a combination of changes to how search engines work and a move from forums to big social media, which has meant a shift from organic community discovery to being drip-fed whatever content an algorithm thinks will be engaging.

The Internet, as you remember it, still very much exists. Some forums have shut down, but there are still small personal websites, blogs, all that stuff. They're just really hard to find with Google and facebook/reddit/twitter.

Here are some cool and creative things I've discovered recently. I have no affiliation with these projects, I just thought they were cool:

http://www.lileks.com/

http://dreamcult.xyz/

http://sod.jodi.org/index.html

http://godxiliary.com/

https://www.floppyswop.co.uk/

https://www.dedware.com/


> Where I get uneasy is how quickly SmarterEveryDay seems to have switched from a YouTube channel about explaining things into a YouTube channel about stoking "Big Tech" fears as a way to sell apps. The latest video is title "Is Your Privacy An Illusion? (Taking on Big Tech) - Smarter Every Day 263".

This has been a trend with YouTube science communicators and educators for a while now. They're getting hired by companies to basically produce an ad in the style of their regular videos.

Physics Girl did a video in cooperation with Toyota praising Toyota's hydrogen-powered test car, telling her audience that hydrogen as a passenger car fuel has a future, while ignoring the fact that every car manufacturer, including Toyota, has given up on it in favor of lithium battery packs instead. https://www.youtube.com/watch?v=hghIckc7nrY It was an odd move from her to hype hydrogen cars, but her audience caught on to it quickly in the comments, which prompted her to make a follow-up video: https://www.youtube.com/watch?v=dWAO3vUn7nw where she tries to save what's left to save for the hydrogen cause.

Veritasium did it too: Derek Muller created nothing short of a propaganda piece for driverless cars, and not even just an ad for the Waymo service in particular. https://www.youtube.com/watch?v=yjztvddhZmI This gives me the same vibe as those doctors promoting cigarette smoking against coughs in the 1950s and 1960s: a professional, often someone highly regarded as objective and impartial, leaving out crucial information to push a product or an idea onto their audience that is either outright harmful or doesn't hold up to its promises.

In both these cases they were promoting a technology that isn't working out quite as intended. Green hydrogen in commuter cars is way too expensive and impractical to compete with lithium-ion battery-powered cars that can simply be plugged in at home, where you charge them with your own solar power or with energy from a renewable energy company. Green hydrogen fuel, which is nearly impossible to come by right now, would be far more expensive than the green electricity you can already get at home. Self-driving taxis hardly work at all in most commute situations, and where they do, it's only in very restricted ways: within confined neighborhoods, and prone to failure from unexpected situations and practical jokes by pedestrians. So much so that Google and other companies have all given up on these kinds of services for the time being. These are two examples of products that either do not exist (and never will in the form they are being promoted) or are a horrible choice for most of their viewers. So why promote them? The answer is that the companies pay them to. That's the real reason these influencers lie about why this stuff is cool and why it's the future.

But they're simply bullshitting us by leaving out the issues and shortcomings, and by not talking about the bigger picture of the market situation for these second- or third-tier technologies; they are trying to help them survive against more established and more promising competitors. The key issue is promoting products that are not wise for YOU to buy or vote for right now. They say it's about choice, but they try to trick their viewers into choosing or voting for something that, considering all the facts, you'd not want to choose. ("Why You Should Want Driverless Cars On Roads Now" - wait, really? Is my neighborhood and commuting route suited for driverless cars right now?) When the companies themselves do it, that's fair; they're struggling for survival. But these are content creators who built their brand on being scientific and trustworthy. So that's the issue: they gamble their credibility, and millions of followers trust them to say the right thing. Then again, their viewers might be vigilant and educated enough to call bullshit, so there's hope this won't become a regular thing.

Tom Nicholas did a video about this trend: https://www.youtube.com/watch?v=CM0aohBfUTc


> There is no system to keep content alive so links will still die.

Torrent trackers solved this in a very interesting way. They created an economic system where bandwidth was the currency, incentivizing the permanent seeding of content. It was illegal to take more than you gave. I've even seen an academic paper studying their system!

Bandwidth as a currency eventually proved to be a failure. It enabled the rise of seedboxes, dedicated servers featuring terabytes of storage and connections to high capacity network links. Just like the IPFS centralized gateways you mentioned. They would eventually monopolize all seeding, removing any normal person's ability to gain currency. In some trackers, if you wanted to consume content, your only options were renting one of these seedboxes or uploading new content to the tracker. You always stood to gain at least as much bandwidth as the size of the content you uploaded. The seedboxes would monitor recent uploads and instantly download your new content from you so that they could undercut you. I suppose it was a form of market speculation.

They also failed to realize that there is no uploading without downloading. By penalizing leechers economically, they disincentivized downloading. This led to users being choosier: instead of downloading what they like, they'd download more popular stuff that's likely to provide higher bandwidth returns on their investment. Obscure content seeders would not see much business, so to speak, due to the low demand for the data. Users would stock up on popular and freeleech content so they could get any spare change they could. The more users did this, the less each individual user would get. Then seedboxes came and left them with nearly nothing.

This was eventually solved by incentivizing what was truly important: redundancy. Trackers created "bonus points" awarded to seeders of content every hour they spent seeding, regardless of how much data they actually uploaded to other users. These points can be traded for bandwidth. This incentivized users to keep data available at all times, increasing the number of redundant copies in the swarm. People will seed even the most obscure content for years and years. In some trackers, these rewards were inversely proportional to the amount of seeders: you made more when there were fewer seeders. This encouraged people to actively find these poorly seeded torrents and provide redundancy for them.
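
As a toy model of that bonus-point rule (the base rate and the exact inverse formula here are invented for illustration; real trackers each use their own curve):

```python
# Bonus points accrue per hour of seeding, regardless of bytes uploaded,
# and scale inversely with the number of seeders in the swarm -- so the
# most poorly seeded torrents pay the most. Numbers are made up.

def bonus_points_per_hour(num_seeders, base_rate=10.0):
    if num_seeders < 1:
        raise ValueError("a seeder earning points is itself in the swarm")
    return base_rate / num_seeders

# Being the sole seeder of an obscure torrent earns 10x what seeding a
# 10-seeder torrent does, nudging users toward providing rare redundancy.
```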

We can learn from this. People should be compensated somehow for providing data redundancy: keeping data stored on their disks, and allowing the software to copy it over the network to anyone who needs it. The data could even be encrypted, there's no reason people even need to know what it is. Perhaps a cryptocurrency could find decent application here. Isn't there a filecoin? Not sure how it works.


The trouble with ham radio is that it isn't that useful anymore. We have amazing cell coverage, wireless broadband just about everywhere, and the ability to socialize online.

Ham used to be incredibly useful for global communications, rural areas with no phone service, and a way for people to stay socially connected in remote (or not remote) locations. This isn't needed in the vast majority of cases now.

That has led to most hams active on the airwaves being an older demographic. And the way they try to recruit others into the hobby is obsolete because the need just isn't there. It's a 100% academic exercise, rather than a way to connect with anyone around the world.

There is a newer aspect to ham which is modern digital radio, but most old timers have no knowledge or interest in that. There's a subculture that exists that does amazing work in that space, but it's disconnected from the old-timers that represent the face of ham radio. You'll find this subculture at DEF CON and similar events.

Even the things about ham that are still useful, like the marine net on 14.300 MHz which we used in the 90s for transoceanic sailing, are about to become obsolete, with affordable internet in the mid-Pacific or mid-Atlantic becoming a reality. And polar bases are about to get polar-orbit Starlink satellites for broadband.

So the hobby really needs to transform if it's going to survive. But the old guard and the new world are so disconnected that destruction and recreation may be a better way to think about it.

My wife and I are both ham Extras and we run a cybersecurity company. She's KF1J and I'm WT1J. We're passionate about the hobby, but both see the writing on the wall. Ham right now is a solution looking for a problem.

73.


Heh. I worked on a type of photonic PoW in 2019 (but it wasn't silicon photonics).

- It's faster and lower power for the hash rate. It's worth it. But not as fast and low power as hoped.

- It's a great advantage for the early adopter who can mine that bit faster than everyone else for a short time. That was the motivation for the project.

- Contrary to the title of the article, making hashing faster for less energy doesn't "help with the high energy consumption of the crypto currency mining activities" once the technology becomes available to enough miners.

It's the same as the introduction of ASICs before it: only a temporary advantage to those few who have it first. Of course that's only a few miners, so it doesn't affect energy consumption of the whole blockchain much. As soon as a new technology becomes widespread enough to have a significant effect on the whole blockchain, blockchain proof-of-work returns to exactly the same amount of energy consumption as whatever technology came before it.

Because of how the mining-price system works, proof-of-work energy consumption is independent of whatever technology is in wide use. In other words, you can't solve the proof-of-work energy consumption problem with a new proof-of-work hashing technology.
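
A back-of-the-envelope model makes this concrete: difficulty retargets to keep block pace fixed, so rational miners keep adding power until electricity cost eats the block reward. At that equilibrium, network-wide power draw depends only on the reward's value and the price of electricity, never on joules-per-hash. All numbers below are illustrative, not real network figures.

```python
# Toy equilibrium: miners add power until marginal electricity cost
# equals marginal reward, so network power draw (kW) is just
# reward value per hour divided by the electricity price.

def equilibrium_power_kw(reward_usd_per_hour, electricity_usd_per_kwh):
    """Network-wide power draw (kW) at the zero-marginal-profit point."""
    return reward_usd_per_hour / electricity_usd_per_kwh

# Note that hash rate (and thus hardware efficiency) never appears:
asic_era     = equilibrium_power_kw(900_000, 0.05)  # conventional ASICs
photonic_era = equilibrium_power_kw(900_000, 0.05)  # everyone on photonics
assert asic_era == photonic_era  # same energy spend once widely adopted
```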

So whenever you see a headline or article promoting a new proof-of-work solution on the grounds that it will reduce energy consumption of the blockchain - that's false. Snake oil, even.

(On the other hand, other consensus methods such as proof-of-stake do reduce blockchain energy consumption for real, in exchange for different problems, perhaps.)

But I think the Eindhoven people have understood this, and it's the journalism that misleads in this respect. From the article:

> From our perspective, the ideal proof of work algorithm would be an algorithm that spends no energy, but time (i.e. pure delay). This kind of proof of work is hardly possible in the real world. So instead, we would love to see some sort of a “computational delay” when only a small fraction of energy is spent on proof-of-work computation, while this computation itself introduces a time-delay. We believe that photonics can help us to achieve such a “computational delay”.

I'm surprised Verifiable Delay Functions (VDFs) are not mentioned. Since about 2018 there has been an emerging field of cryptography focused on delays enforced mathematically, called Verifiable Delay Functions. Rather like the technology race of proof-of-work, VDFs rely on the assumption that the underlying technology (logic gates, arithmetic units (like in analogue photonics), etc.) is somewhat evenly distributed; that nobody has access to a technology that can perform the calculations with much lower delay than everyone else.

VDFs still won't reduce energy consumption if they are used as just another form of proof-of-work. For example, if running many delay functions in parallel gives a mining advantage, many will be built, and together they will use the same amount of energy as proof-of-work does today. To provide a real energy consumption benefit, the blockchain consensus method needs to change as well. Think "proof-of-verifiable-delay". As far as I know, no such method has been worked out yet.
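
For intuition, the cheapest possible "delay function" is plain iterated hashing: it is inherently sequential (each step needs the previous output, so it can't be parallelized), but verifying it means redoing all the work. That verification gap is exactly what real VDF constructions (e.g. Wesolowski's or Pietrzak's, using repeated squaring plus a succinct proof) close. The sketch below is therefore a delay function, not a true VDF:

```python
import hashlib

# Naive sequential delay: iterate SHA-256 N times. No known shortcut
# lets you skip ahead in the chain, so wall-clock delay scales with N.
# The catch: verification below is O(N) too, which is the limitation
# that verifiable delay functions were invented to remove.

def delay(seed: bytes, iterations: int) -> bytes:
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

def verify(seed: bytes, iterations: int, claimed: bytes) -> bool:
    # Re-derive the chain; a real VDF would check a short proof instead.
    return delay(seed, iterations) == claimed
```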


Nuclear industry here.

In the fanciest systems, checklists live in a computerized procedure system tied into the plant process computer, so the plant state and procedures can be kept in sync and mistakes can be avoided when the software can see if you didn’t actually do the step you were supposed to.

A more conventional approach is a document management system and controlled binders in the control room with the latest procedures, often laminated so they can be marked up and wiped off.

When working procedures on paper, we always use a circle-slash system for place-keeping: circle the step number when starting it, and slash through the circle when completed.

Finally, key procedures should have a separate document recording the bases of the procedure: why key values were chosen, and which other documents they were taken from or depend on. That bases document becomes the key artifact in change management: if a dependency changes, or you want to change the procedure, you can use it to ensure side effects are considered.

And procedures still have regularly scheduled reviews.


Bolting on to this comment with some other neat things, as a fellow HASS enthusiast:

* Every single light in my house moves from dim/orange -> bright/bluer -> very dim/orange throughout the day (basically a whole-home f.lux or Night Shift). Sadly still third party, but it's easy to use: search for the Circadian Lighting component and you'll find it

* I have "night" modes in all my rooms. This is usually a single bulb in lowest-brightness full red. You can barely see it during the day, but at night it makes bathroom trips a non-blinding affair

* I have 6+ speaker zones, on Raspberry Pis mostly. Snapcast runs the audio stream, but turning on/off a room mutes the speakers in it (walking into a room turns on the lights via motion detectors and music continues to follow, which is neat)

* I pipe a lot of text-to-speech messages. Some rooms won't play them if the room is off (outer stuff like my garage), but others always do (so I hear them). This is more custom now, and I even duck the playing music stream for the TTS portion. It can take in text, so I do things like have my automation say a bunch of things every morning (my age in days, some web-scraped snippets, etc)

* A $10 power sensor is enough to know when your washer is finished. Power draw for a while -> running state; no power for a while in running state -> finished. This goes right into the text-to-speech system

* Every room has a 10-button remote (the very, very cheap zap remote kind). Most of the layout is the same--room on, room off, start music (or skip track if playing), stop music, full-bright lights, night lights. This still leaves a few for custom-to-the-room buttons, which I use

* Contact sensors on all openings to the house. I let my cats into the backyard during the day. If cat access in any configuration is still open and the sun is below X degrees: text to speech

* Most of my logic is Node-RED (another comment here about that), which gives me a lot of flexibility. I have a global "house" mode, which I can set to guests or party to suppress most of my assumptions that I'm home alone

* Example of one of those: My setup knows if I'm using one of my two desk computers. If I am, and house is in "home" (alone) mode, I turn off all the other rooms in the house
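
The washer trick above is just a two-state machine over power readings; here's a sketch with hypothetical thresholds (a 5 W cutoff and a 3-sample debounce - real setups tune these per appliance):

```python
# Toy washer-finished detector: transitions idle -> running on power
# draw, and back to idle (emitting "finished") after several quiet
# readings in a row. Thresholds are made up for illustration.

IDLE, RUNNING = "idle", "running"

class WasherMonitor:
    def __init__(self, on_threshold_w=5.0, off_samples=3):
        self.state = IDLE
        self.on_threshold_w = on_threshold_w
        self.off_samples = off_samples
        self._quiet = 0  # consecutive below-threshold readings

    def update(self, watts):
        """Feed one power reading; returns 'finished' once per cycle."""
        if self.state == IDLE and watts > self.on_threshold_w:
            self.state = RUNNING
            self._quiet = 0
        elif self.state == RUNNING:
            if watts <= self.on_threshold_w:
                self._quiet += 1
                if self._quiet >= self.off_samples:
                    self.state = IDLE
                    return "finished"  # hand off to text-to-speech
            else:
                self._quiet = 0
        return None
```

The debounce matters because washers pause mid-cycle; a single low reading shouldn't count as done.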

I could go on--I went pretty deep when I first set up Home Assistant, but that was years ago now. Every now and then I do a major update or add functionality to smooth over something that's been bugging me

Once you hit some tipping point of soooo many things available as sensors or services in HASS, adding completely new functionality is a very incremental change


I think it's worse than the patent office rubber stamping everything. They actually don't rubber stamp things that your average, small inventor might try to patent. They seem to always initially reject applications, requiring multiple submissions with small modifications over and over until they relent and grant.

This serves to filter out everyone except those with deep pockets and highly paid patent attorneys.

When I filed for a patent a few years ago, we got back prior art from the patent office that was very far fetched. We paid $3500 to the attorney for the initial application and then each time it would come back, we would pay a few hundred more.

Our patent attorney estimated that we would end up paying around $10k, but he was fairly certain that we would get through. He said most of his applications followed this same pattern.

We decided to cut our losses at around $5k and move on. A few years later, some large company patented very similar technology. We submitted prior art but they just had to pay for a few more iterations to get around it.


Most of the dissonance comes from the fact that a job is much more than the core activity.

When we speak of pottery as a hobby, we think of handling clay, sitting in front of a pottery wheel, choosing a fitting glaze, then admiring our finished work. The focus is on exploring and enjoying the process. You have complete control over the process and you can do as much or as little as you like because there is no delivery pressure. Sounds fun!

But a pottery business is a different beast. Now, the act of making the pottery is a much smaller part of the overall workload. You have to deal with marketing, sales, payments, inventory, and communications. Your pottery time is no longer about exploring, but about replenishing inventory or handling custom orders in time to avoid bad reviews. You can't just make a piece and be done with it; you have to carefully pack it, handle shipping, drop it off, and hope that nothing goes wrong. If it does go wrong, you now have to deal with the fallout and handle angry customers. Most customers are fine, but it only takes one irrationally outraged customer to ruin your day.

Hobbies, by definition, can't be a job. You can try to make a job or business that includes your hobby, but it's a superset of the hobby with numerous other activities that you may or may not enjoy.

I do actually know a lot of people who enjoy running businesses. You might say that businesses are their hobby and delivering results to customers is their passion. Not surprisingly, they're doing quite well in both business and the mental health department. Lucky.


The funniest demonstration that I watched was at the computer museum at the University of Stuttgart (it's just a single room, but it contains a lot of history!). The guide took an old, butchered radio, reduced to a coil attached to a speaker, and put it on top of the front panel of a PDP-8. Then he started a Fortran compiler, which would take several seconds to complete. During that time, the radio made kind of hideous digital beeping noises from the CPU's electromagnetic emissions getting picked up by the coil inside. You could easily learn to distinguish different compiler phases and tell whether the program was making progress. The guide explained that this was a common way for operators back in the day to keep track of the jobs they were running while taking care of other tasks: were they still running? Did they get stuck? Did the job complete, and is it time for the next one? Some inventive guys figured out that when you wrote certain instruction sequences, the emitted noise would become tonal, and the pitch could even be tuned to some extent. That got them to write programs which would compute nonsense, but when you picked up their electromagnetic emissions, you would hear music! The museum guide ran a few of these programs, to our great amusement :).

I've yet to see this mentioned - or demonstrated - anywhere else.


The tech industry in Australia has for some time existed in an impossibly hostile climate, where onerous demands to betray their own users can be handed to individual employees at any time, with virtually no oversight. [1]

This has all happened under the auspices of the most overtly corrupt and authoritarian government we’ve had in my lifetime, within four years.

If I had the option, I’d pack up and move, but having written this and other anti-government sentiment, it’s entirely possible that they stop me at the border. [2]

Combining this with the political apathy of nearly everyone here that I know, the current vilification of protestors (not entirely wrongly given the circumstances), and the utter spinelessness and complicity of the “opposition”, I don’t see a future for my country that doesn’t involve violent revolution (we’ve tried it before [3]) or tyranny.

But it gets better: the tech industry is not only under attack, they’re complicit. Beyond even the ordinary run-of-the-mill scumbaggery we see from tech companies, a number have recently formed the Tech Council of Australia [4], aiming to make Australia a startup capital of the world. Scroll down that page and you’ll see a list of member corporations, several of which sent opposition to the AABill when consultation was open. Now, apparently, they’ve decided that it’s too hard to beat ‘em, so they’ll join ’em.

[1] https://parlinfo.aph.gov.au/parlInfo/download/legislation/bi...

[2] https://covid19.homeaffairs.gov.au/leaving-australia

[3] https://en.m.wikipedia.org/wiki/Eureka_Rebellion

[4] https://techcouncil.com.au


What worked in the 1960s does not necessarily work in the 2020s. In the '60s, the University of California, Berkeley started the Free Speech Movement. But more recently the same UC Berkeley was in the news for banning the infamous Ann Coulter from speaking on campus.

This is not because young people today are less woke than young people in the 60s. Societal attitudes towards free speech have changed a lot. This is related to technological advancement and the rise of social media. The advent of social media has made it too easy to spread dangerous levels of hate and false information online. Malicious individuals and groups now have the power to reach hundreds of millions instantly, at no cost to themselves. It started off innocently enough, with cat videos uploaded to YouTube, but soon extremists were taking advantage of social media for radicalization purposes, adversarial nations were spreading fake news to influence who gets elected, and others were even live-streaming mass murders.

This has caused an upheaval in attitudes towards free speech. Enough is enough! There need to be limits. Communities started imposing limits on free speech. Society — as opposed to governments — has decided that some censorship is in order. Some censorship by private parties such as Twitter, as opposed to absolute free speech, will be the new normal. We live in a new world; the old norms no longer apply.


Google search has progressively deteriorated in quality over the last 10 years, to the point where I see it becoming useless in the relatively near future. And it's mainly not even their fault.

I've been using Google search for all kinds of research for 15 years. There used to be a time when you could find the answer to pretty much anything. I could find leaked source code on public FTP servers, links to pirated software and keygens, detailed instructions for a variety of useful things. That was the golden age of the web.

These days, the "interesting" data on the Internet lives inside closed Telegram chats, Facebook groups, and Discords, or on the rare public website here and there that Google doesn't want to index (like Sci-Hub, or other piracy sites).

The data that remains on SERPs is now also heavily censored for arbitrary reasons. "For your health", "For your protection". Google search is done.


The ESP32 doesn’t impress me for the reasons you listed. The most impressive thing about the ESP32 is ESP-IDF and the amount of effort Espressif has put into making it accessible: developed in public, with amazing progress over the years. Compare that to TI or ST, which seem stuck in a rut of management issues. It boggles my mind that HALs are still developed in isolation and not open-sourced. WTF are these companies thinking? Microchip’s PIC platform is another dumpster fire.

Yes, exactly. Video ID is just a base64'ed DES-encrypted primary int64 video key from MySQL. It used to be sequentially incremented until at some point they switched to randomly generated primary keys. Any (ex-) engineer who snapped a copy of the encryption key (it used to sit right in the code for anyone to see) can enumerate all videos from YT until that moment, including unlisted - which are only protected by secrecy of that one key. If the key leaks, then also anyone in the world can. That's what they are afraid of here. Source: worked for YT.
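The length arithmetic behind that scheme is easy to sketch. The following is an illustrative toy, not YouTube's actual code: Python's standard library has no DES, so a plain XOR with a hypothetical key stands in for the block cipher, but the byte counts are identical either way — an 8-byte ciphertext base64url-encodes to 12 characters, one of which is `=` padding, leaving the familiar 11-character video ID.

```python
import base64
import struct

# Hypothetical stand-in for the secret key described in the comment.
SECRET_KEY = 0x0123456789ABCDEF

def encode_video_id(row_id: int) -> str:
    """Turn a 64-bit database key into an 11-char URL-safe ID.

    XOR stands in for DES here (stdlib has no DES); either way the
    "ciphertext" is exactly 8 bytes, which drives the ID length.
    """
    block = struct.pack(">Q", row_id ^ SECRET_KEY)   # 8-byte block
    # 8 bytes -> 12 base64 chars incl. one '=' pad; strip it -> 11 chars.
    return base64.urlsafe_b64encode(block).rstrip(b"=").decode()

def decode_video_id(video_id: str) -> int:
    """Invert encode_video_id, recovering the original row key."""
    block = base64.urlsafe_b64decode(video_id + "=")
    return struct.unpack(">Q", block)[0] ^ SECRET_KEY

vid = encode_video_id(42)
print(len(vid), decode_video_id(vid))  # prints: 11 42
```

This also shows why the scheme stands or falls with the secrecy of the one key: anyone holding it can walk `row_id = 0, 1, 2, ...` and enumerate every ID ever issued.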

Getting sufficient oxygen is a key limiting factor in metabolism. The more oxygen an animal can get, the bigger an animal can afford to get.

And it's not just insects. For example, the ancestors of dinosaurs evolved a highly efficient respiratory system towards the end of the Permian period, when oxygen concentrations were low, which likely helped them survive the deadliest mass extinction ever. Afterwards, oxygen levels skyrocketed, and the dinosaurs, which still had their efficient lungs, could take advantage of it and became the giants we know and love. The same system is also key to the ability of modern birds to fly.

Insects and other arthropods breathe through a network of tubes (tracheae) opening at the body surface, so the amount of oxygen they can take in is limited by their surface area, whereas the amount of oxygen they need is determined by their volume. Thus, for any given oxygen concentration, there is some minimum surface-area-to-volume ratio insects must maintain, and in turn a size limit for any given body plan. This is why, for example, the largest species of tarantulas across multiple continents, despite evolving separately, are all roughly the same size - being big has a lot of perks, but if they got any bigger they'd suffocate.
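The scaling argument can be made concrete with a deliberately simplified model: treat the animal as a sphere of radius r, so oxygen supply scales with surface area and oxygen demand with volume.

```python
import math

def surface_to_volume(r: float) -> float:
    """Surface-area-to-volume ratio of a sphere of radius r."""
    area = 4 * math.pi * r ** 2          # oxygen supply scales with this
    volume = (4 / 3) * math.pi * r ** 3  # oxygen demand scales with this
    return area / volume                 # simplifies to 3 / r

for r in (1.0, 2.0, 4.0):
    print(f"r={r}: SA/V = {surface_to_volume(r):.2f}")
# Doubling the radius halves the ratio, so a fixed ambient oxygen level
# imposes a hard ceiling on body size for surface-limited breathers.
```

The same 3/r relationship is why raising ambient oxygen (as in the Carboniferous) raises the ceiling: the minimum viable ratio drops, so the maximum viable r grows.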


I think Firefox news is what I comment on most here. I don't know why. Well, I guess I do. Web browsers, JavaScript, etc. are front and center here - which in a way means the digital world. All my friends use Chrome, and think I'm weird. I think _they_ are. They care about human rights, at least I think they do. But having Google, an ad company, dictate what our window to the digital world looks like is now _way worse_ than anything Microsoft ever did with IE6. They had their share, but nothing compared to this. At least Microsoft was just a software company.

If a class of people on a forum such as this, with so many brilliant minds, cannot even be bothered with the values of open source and how they pertain to democracy and human values, I really don't see how any other part of our species could. "Chromium scrolls 5% better on my machine, so who cares about Firefox?" I see these comments all the time here. Even in this thread.

Firefox. Linux. PostgreSQL. Wikipedia. XMMS! The WORLD WIDE WEB! Imagine if Google had invented hypertext and not CERN (funny how Google made its initial fortune). AMP would be the least of your worries. Imagine if Amazon had "invented" Wikipedia. All of these open source projects are just mindblowingly awesome. They help people. Me. You.

But who cares, right? Certainly not the lot with money in the '80s/'90s, who didn't understand how or why any of this would be important - which is exactly why all these amazing things could happen. Now, of course, they do understand. So no more of this.

Which is why I think humanity is doomed to succumb to our newfound overlords. The Big Four. Five? Six? Seven? Who cares really. A handful.

I wish people, especially the kind this place represents, would stand up more (including me!). Teach people, politicians, your parents, your siblings, your kids. We understand tech and its implications. We understand how big tech is now stifling everything. Good luck training a neural network that competes with 50 billion uploaded photos. Facebook. Amazon. Google. Microsoft.

But no one will - "it doesn't matter". But make no mistake: it does matter. And humanity is just sitting here. It's like we're heading for this High-Class Great Filter.

Good thing I'll be dead before all this comes to full fruition.

(Sorry for the rant - hope it was still relevant and on point, and I really encourage discussion here!)


This part of the global message kind of paints a specific picture for me

> It's a new genesis for a new era. Thank you for using freenode, and Hello World, from the future. freenode is IRC. freenode is FOSS. freenode is freedom.

Here's my theory. Dude's got some sort of "spectacle complex". You know that feeling when you leave a movie theatre after watching something like "I, Robot" and you have that kind of inspired feeling that you need to revolutionize the world through robots or whatever? In a normal person it lasts about as long as it takes to walk out of the movie theatre, then you're reminded you're back in real life.

It's this vision of creating something great, a new hope, a futurism-fetish sort of thing. He's the kind of "visionary" who always talks about how he's going to revolutionize something without any clear steps for getting there, or even a clear idea of what the "problem" is.

It's meant purely as sensationalism, and as an appeal to some weird emotion that I still don't really have words for.

Anyway, he wants to be the leader of this grandiose movement towards... something; I don't think even he knows what. Almost as though he's doing people a favor by doing this.

Completely misguided, and I doubt he'll learn anything from it. Clearly he lives in his own world in his head.


I currently have 10 fully remote engineering jobs. The bar is so low, oversight is non-existent, and everyone is so forgiving of underperformance that I can coast about 4-8 weeks before a given job fires me. Currently on a $1.5M run-rate for comp this year. And the interviewing process is so much faster today, and companies so desperate, that it takes me 2-3 hours of total effort to land a new job, with thousands to choose from.

I wonder if they'll make it unverifiable (like Keybase[1]), make it merely opportunistic (like WhatsApp[2]), require confirmation in every call so nobody practically uses it (like Telegram[3]), hide the key verification screen so only 0.001% of users ever find it (like Signal[4]), use long-lived keys that can decrypt all historical traffic (like PGP), make encrypted mode a second-rate mode that removes most of the features you need (like Telegram's encrypted chats; I see that the announcement already mentions some unavailable features when you turn on encryption), or whether they'll have a novel trick to break the encryption at will. At a minimum, I expect it'll be an obfuscated binary with auto-updates and no published independent code reviews.

Let's hope Microsoft surprises us all! (as I see most of the comments being skeptical)

[1] Try to verify which encryption keys your mobile keybase chat client is using or if the server injected its own keys. Bonus points if you spot the "this chat is end-to-end encrypted" banner while you're doing this. Details: https://security.stackexchange.com/questions/222055/how-can-...

[2] You have to go into settings to enable even just seeing key changes, let alone that the client stops you from sending messages when a previously verified key changes.

[3] Telegram calls have encryption, but every time you have to verify the emojis again and again instead of it just storing the other party's key.

[4] Threema shows the verification status for every chat. My mother-in-law, of her own volition, wanted to verify keys with me. She later tried Signal to compare, since she needs to use something encrypted for her work (medical data), but didn't get that she had to verify keys on Signal, too. I find that even techies often don't know where to find that in Signal, or even realize that it's required for the security claims to apply.


It was an interesting experience.

I went in assuming someone's in charge, but honestly, most of the mistakes Google makes are in the category "nobody's in charge." They operate at a scale where everyone tries to use them to do everything. That's everything good and everything bad. They've been both a force for normalizing LGBTQ identity and a force against it, a mass communication tool and a mass oppression tool, a platform to help people and a platform to stalk people. They actively manage, observe, maintain, and regulate only a subset of the space of uses their tools allow.

This is explanation, not excuse. I'm not there anymore because I believe they should take responsibility commensurate with their size and impact, and I lost faith that the leadership agreed.

In this specific example, my assumption from personal priors is that they let this ad in because nobody is in charge of negative-filtering ads like this until complaints come in, and in the absence of policy the default policy is "allow." They have categories to catch ads for illegal substances, various forms of illegal activity, and so on, but "a state-level actor will use our ad platform to paint a false picture of the status of a political prisoner" is a new one for them.


I love stories like this. I have one of my own, actually.

When I was first getting into IT I started sending out CVs. Mine was terrible. I had been working in call centres for years at this point and all my "experience" was basically self-taught, so not really experience at all. As a result my CV was void of any actual content a hiring manager in IT would want to read, thus it was binned a lot.

I applied for a job at a nearby network hardware repair place. They needed someone to look after their Cisco kit and about 30 Debian Linux systems. I was attracted to the mix of responsibilities, so I applied, sending in my not-so-good CV. After waiting about a week to hear back, I was eventually asked to come in for a chat.

At the end of the interview, Bob (let's call him), said I was more knowledgeable than the RHCEs that were coming through his door. This was nice to hear, but then he said something that really made me smile...

Apparently my CV was worse than I thought. It was so bad, that Bob literally put it in the bin under his desk. About four days later, Bob was reading through a local Linux User Group (LUG) mailing list and he saw a name he recognised: mine. So he opens the email and reads the thread in which I helped another LUG member compile a sound driver for their kernel. The instructions I gave worked.

Bob was impressed but couldn't quite remember where he had seen the name. At this point the business owner, John (heh...), was standing beside Bob's desk and noticed my CV in the bin. He pulls it out and reads my name across the top. The penny drops for Bob and I get the call to come in and have a chat.

I got the job.


Yes.

Do not connect with agent forwarding, as doing so would allow the server operator to authenticate to other hosts as you. Do not forward environment information, though the typical ssh default is not to. You will likely leak your username. If you connect from an internet-reachable host, and you made the mistake of ignoring the first item in this list, they could easily connect back to you, no zero-days required. Other, probably lower-ROI attacks might include downgrading you to extremely poor protocol versions or crypto options, resulting in potential information exposure if you remained online long enough to push a relevant sample of traffic. I would pin the client to a very tight set of allowed protocols and cipher suites.
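One way to apply that advice is a per-host entry in your client config. A minimal sketch follows; the host name is a placeholder, the option names are standard ssh_config(5) keywords, and the exact algorithm lists should match your own policy and OpenSSH version:

```shell
# ~/.ssh/config -- hardened entry for a server you don't trust.
# "untrusted.example.com" is a placeholder host alias.
Host untrusted.example.com
    ForwardAgent no            # never expose your agent to the server
    ForwardX11 no
    ClearAllForwardings yes    # drop any forwardings inherited from elsewhere
    Compression no             # keeps zlib out of the code path entirely
    # Pin a tight, modern algorithm set so the server cannot negotiate
    # you down to weaker crypto:
    KexAlgorithms curve25519-sha256
    Ciphers chacha20-poly1305@openssh.com
    MACs hmac-sha2-256-etm@openssh.com
    HostKeyAlgorithms ssh-ed25519
```

Note that `Compression no` also addresses the zlib concern discussed below: with compression disabled, the client never feeds server-controlled data through inflate at all.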

Your terminal emulator program should ideally be sandboxed; iTerm, xterm, rxvt, etc. have all had bugs found, and most aren't regularly fuzzed.

Similarly, having spent plenty of time in the ssh code base, I'm not really sure I would wholly trust the standard openssh(1) client post-auth against a malicious server. It's highly macro-conditioned C with subtle semantics and invariants spread all over the place, extremely large functions, in-line parsing, and in-house crypto. It does some things well, like trying to clear keys from memory early, but it's not written in a safe language, nor is it written in a safe way. As far as I know, the client is not fuzzed (though I'd be happy to find out I'm wrong).

Depending on configuration, it also calls out to other libraries with unfortunate histories, zlib in particular: there hasn't been a known recent issue, but there have been serious ones in the past. Depending on how it was sourced, there may be other issues too. If you look in the OpenBSD repository, for example, you'll find the libz it links against is from zlib 1.2.3, so a good 10 years older than the last relatively serious zlib exploit, which is itself about 5 years old. The zlib changelog in OpenBSD does not seem to include the patch for CVE-2016-9841.

This doesn't prove anything that significant; it only points out that this stuff doesn't get as many eyeballs as it really should. I just went diving for 10 minutes and this is what I found. In case you're wondering, the function in question is called from inflate, which is called from ssh_packet_read_poll2 (one of the aforementioned extremely long and macro-conditioned ssh functions) in both the server and client dispatch code.

Using a modern web browser is a much safer way to go about this, in the end.

