Hacker News | dwattttt's comments

No? I have locks on my house and car that I have the keys for. That's an argument _for_ secure boot.

It is absolutely not.

It's a decent one for "locks on an apartment building that someone else owns."

But no, purchasing a house ought not include by default "a set of locks that you must work around, permission-wise."


Funnily enough, when you buy a house, the first task is to change all the locks.

Y’know, for security.


Sure. Now, of the people who buy houses -- how many of them would find this a difficult or onerous task?

And then, do computers.

Apples and oranges here, for this point.


Sorry dwattttt, I’m unable to verify your identity and your keys are disabled. If you have an issue, please fax a copy of your DUNS number.

You don't have the ability to revoke my keys on this machine, that's the point. Not even MS could do that, because these are _my_ keys. The alternative proposed here is no keys at all.

It would work just as well if the instructions instead told you to enrol your own key and sign the cracks. Those instructions just aren't as popular.

I've seen the "success" a non-software company has been having, trying to integrate AI into their processes. A hypothetical competitor who chose not to do so would absolutely be coming out ahead right now.

I can't say whether this trend would continue, but the answer to your question today is "yes".


> Someone will fulfil the need as there is a high incentive to

And those uses which fall short of the new threshold, e.g. hobbyist SBCs, slowly fall away.


In reality, were they going to survive anyway? I would wager likely not.

Raspberry Pi is the de facto standard for SBCs. Almost all the other SBCs had significant problems, usually around software support and also third-party support, e.g. HATs, cases, etc.


I'm happy to be against both the White House's 3rd party telemetry and other apps. I can multitask.

Reading the link provided by https://news.ycombinator.com/item?id=47511778, I believe "atomically acquire multiple objects". The link states they try to emulate it by performing a poll then a read, but the gap between those results in a race, which is a terrible thing to have in a synchronisation primitive.

There was also something about needing to back out if any of the reads fails to acquire, which also sounds nasty.


Great post.

Ah, interesting, so wfm does both the wait and the acquire!

When using eventfd it is indeed annoying having to both poll and later read to disarm the object (there are epoll tricks that can be used but are not generalizable).

The signal+wait is also a primitive that is hard to implement atomically on POSIX.


Video streams are not known for their low bandwidth needs, let alone adding in RTT latency for inputs.


That's true, I'm not saying it comes without trade-offs. But in return you get a perfectly consistent and physically accurate simulation. It would mostly be expensive, I think, but it's technically feasible (services like Shadow or GeForce Now already demonstrate that).


Which one of your friends can host a physics-heavy multiplayer game with a number of low-latency, high-resolution video streams? I would estimate the average answer to be zero.


Perhaps the solution could be to have all players stream the game from a centralized instance, rather than all clients streaming from the host’s instance.

That would have a number of advantages, come to think of it. For starters, install size could be much lower, piracy would be a non-issue, and there would be no need to worry about cross-platform development concerns.


We don't have to theorize about this. We've had cloud gaming for years, and the companies have immense motivations to turn us all into renters in the cloud so they've poured a lot of effort into it and we can see half-a-dozen highly-resourced results now. We can just look at it and we can see that it... almost... works. If you don't care much about latency it definitely works.

However, Teardown is in the set of games where it just barely works and only if all the stars and the moon align. I'd characterize it as something like, cloud gaming spends 100% of the margin, so if anything, anything goes wrong, it doesn't work very well.

(Plus, as excited as the companies are about locking us into subscriptions rather than purchases that we own, when it comes time to actually pay for the service they are delivering they sure do like to skimp, because it turns out it's kind of expensive to dedicate the equivalent of a high-end gaming console per person. Most stuff that lives in the cloud, a single user averages using a vanishing fraction of a machine over time, not consuming entire servers at a time. Which doesn't pair well with "you spent 100% of the margin just getting cloud gaming to work at all".)


For the record, my comment was a joke. I was quoting from Stadia’s marketing. :-)


Running several raytracers on a single video card isn't free either. Syncing the world changes as they happen is the least intensive for the server, and the least bandwidth. It's probably optimal in all ways.


Most consumer GPUs have a limit on the number of video streams their hardware encoder can handle at once, and in some cases the limit is as low as 2.


Okay, I didn't know that.


We should note, this is entirely an Nvidia software-imposed limitation; the hardware can easily handle more. The purpose is to upsell to non-gaming cards.

With that being said, it's trivial to bypass with tools like: https://github.com/keylase/nvidia-patch

Regardless, this does throw a spanner in the works for a video based game streaming solution for P2P multiplayer.


If the comments impact correctness (which inlining doesn't, but I believe there are other directives that do), saying it's "an implementation detail" waves away "it's an implementation detail that everyone needs" aka part of the spec.

The reason it feels like a kludge is that "comments" are normally understood to be non-impactful. Is a source transformation that removes all comments valid? If comments have no impact per the spec, yes. But that's not the case here.

In practice, comments in Go are defined to be able to carry semantic meaning extensibly. Whether they're safe to ignore depends on what meaning is given to the directives, e.g. conditional compilation directives.


There are directives and packages that affect correctness. E.g. the embed package allows you to initialize a variable using a directive: //go:embed foo.json followed by var jsonFile string initializes the jsonFile variable with the contents of the foo.json file. A compiler or tooling that doesn't support this results in broken code.


There's nothing unique to Go about this kind of tooling. It exists in C, Java, Rust, TypeScript, and probably dozens of other settings as well. It's the standard way of implementing "after-market" opt-in directives.


Are we referring to 'go fix' as after-market tooling?

It's certainly done in many places. JSDoc is the biggest example I can think of. But they're all walking the line of "this doesn't have an impact, except when it does".

It being done by the language owners just makes them the ones walking the line.


That's exactly how this works: it doesn't have an impact, except when you ask it to. This is an idiomatic approach to this problem.


The part I object to is overloading comments, which aren't meant to be load bearing. A tool developed outside the language has no choice but to take something that has no meaning and overload it, but language owners weren't forced to take this route; they could've made directives not comments.

In practice, the Go language developers carved syntax out of comments, so that a comment is "anything that starts with //, unless the next characters are go:"


Actually, in practice the rule is "The delimiter for a comment is '// '", which, to be clear, is slash-slash-space. Slash-slash-(not a space) is in practice a directive. There are several basic tools in the community that work on this principle, including one of the basic linters.

If it would make you happier you can imagine this is part of the spec. It wouldn't change much if it was.


The book says "Comments begin with //." and "// +build" is a special comment; it was only replaced with "//go:build" in 1.17 (2021). So your statement incorrectly implied that this strict syntax distinction between directives and comments has always existed. What happened was that people started doing this slowly over time, eventually noticed the disconnect, and changed "// +build", which they could do because all that stuff was implementation-defined behavior. Right now gofmt handles "// +build", "//+build", and "//go:build" by moving them to the top, adding "//go:build" if it doesn't exist, and adding a space to "//+build", which already breaks setups that add a build comment.

Why would millions of programs becoming out of date with the spec make me happy? There is value in the language maintainers and Go programmers talking about the same object. I don't disagree that '// ' is standard Go style (and more readable), but it would break all the code that uses //comments, ///, or ////.

I DO agree that it wouldn't change much, if by 'it' you mean the Go language and its tooling; a proper spec does prevent arbitrary change. But it should have been added at least 5 years ago.


"So your statement incorrectly implied that this strict syntax distinction between directives and comments has always existed,"

There was no such statement. "In practice" clearly indicates the contrary about it being "strict" and certainly encompasses the possibility that it only developed over time.


So how many angels can you fit on the head of a pin?


In the listed examples, the compiler will emit a diagnostic upon encountering those comments:

https://go.dev/blog/inliner#example-fixing-api-design-flaws

So these comments carry more weight than comment annotations consumed by optional tools in other languages.

For most of the listed examples, I think the corresponding C annotation would have been "[[deprecated]]", which has been added to the syntax as of C23.


It does not exist in Java. Comments in Java do not change code.


It doesn't exist in Go either. https://go.dev/ref/spec

That's why you find it in the comments. That is where tools have found a place to add their own syntax without breaking the Go code.

Absolutely you can do the same in Java. It exists to the exact same degree as it does in Go. I expect it isn't done as often in the Java world because it is much harder to parse Java code so the tools don't get built.


This also does not change the code. It is an advertisement to a linter-like tool to take some action on the source code. It's most similar to linter directives, which usually are comments.


We're talking about the "//go" comments in general I think here.

Things like "//go:embed" and "//go:build" very much do change the semantics of source code.

The comments above 'import "C"' containing C function definitions and imports change the compilation of go source code.

The "//go" comments contain a mix of ones that must be respected to compile, to being optional, to being entirely ignorable (like generate and fix).


> Things like "//go:embed" and "//go:build" very much do change the semantics of source code.

When you take the source code as a whole they do, but the Go parts are unchanged. The Go spec says nothing about //go:embed, or //go:build. These are not Go features. What you are looking at is another programming language that has been added by a particular tool, using Go comments as the gateway to supporting non-standard syntax inline.

> The "//go" comments contain a mix of ones that must be respected to compile

That is true, but in using it you've accepted that you are no longer writing Go. In fact, with respect to your talk of C, "Cgo is not Go" is one of the most oft-repeated lines seen in the Go community. It's not just a silly catchphrase. It is the technical reality. Cgo is literally a different programming language, much like how Objective-C is not C despite supporting everything C does.


> What you are looking at is another programming language that has been added by a particular tool

So the go stdlib also isn't "the go programming language"? The 'golang.org/x' repositories aren't standard go?

I feel like you're no-true-Scotsmanning this into a meaningless definition where the spec defines Go, but all Go code in existence is not actually Go code; it's a different programming language which everyone happens to call Go and think of as Go.

Like, sure, redefine Go like that if you want, but I don't think anyone else in practice thinks "Ah, yes, the go stdlib isn't actually written in Go", so you'll be off by yourself with that definition.


> So the go stdlib also isn't "the go programming language"?

Which standard library? There isn't just one. tinygo's standard library, for example, isn't the same as gc's. They are, in most cases, API compatible, but there are some packages not shared and they are completely different codebases. A standard library is a distribution concern, not a language concern.

Maybe you're caught up thinking in some other kind of language that only has one distribution and trying to apply that misaligned logic to programming at large?

> I feel like you're no-true-scottsmanning this into a meaningless definition

It's not, though. It is actually something you have to think about in the real world. A 100% spec-compliant Go compiler can be completely unable to use gc's standard library code, as it contains additional language constructs that are bound to gc specifically.

> I don't think anyone else in practice thinks "Ah, yes, the go stdlib isn't actually written in Go"

Again, it's not clear which standard library you are referring to, but if we assume gc's, it is mostly written in Go, but it contains extensions that are not Go. If you say that it was written in Go casually, people will know what you mean, but technically, no, it isn't written in Go. At least no more than an Objective-C program is written in C.

But, unlike Objective-C, the extensions are added in such a way that they remain valid Go syntactically. That means you can use, say, a Go language server on gc's standard library without it blowing up on the extensions. It gives the advantage of a new language, but without having to build new language tools.


There are no comment-based directives in Rust, are there?


It provides the feature to use. It’s possible nobody has yet.


Eh, you're right, they have a structured attribute system.


> The reason it feels like a kludge is that "comments" are normally understood to be non-impactful. Is a source transformation that removes all comments valid? If comments have no impact per the spec, yes. But that's not the case here.

This is not inlining in the compiler. It's a directive to a source transformation (refactoring) tool. So yes, this has no impact on the code. It will do things if you run `go fix` on your codebase, otherwise it won't.


And yet it still breaks "comments aren't semantic". That transformation I described is still invalid.


I don’t understand why that wouldn’t be valid. As far as I understand if you compile code with these go:fix comments, they will be ignored. But if instead of compiling the code you run ‘go fix’, the source code will be modified to inline the function call. Only after the source code has been modified in this way would compiling reflect the inlining. Do you have a different understanding?


I mean that directives other than inlining impact correctness. If you have a source file that only builds for one OS, stripping the build tag will break your build.



And if you grabbed the knife and went on a violent spree, I'd absolutely expect the knife manufacturer to refuse to sell to you anymore.

The knife manufacturer isn't obligated to sell to you in either case; I'd expect them not to cut ties with you in the self-defence scenario. But it is their choice.


The knife manufacturer would be more than happy to continue to sell to you, except for that minor little detail that you're in jail.


Any knife vendor who

1. Found out you used their knives to go murdering

2. Sells knives in a fashion where it's possible for them to prevent you from buying their knives (i.e. direct to consumer sales)

Would almost certainly not "be more than happy to continue to sell to you". Even if we ignore the fact that most people are simply against assisting in murders (which by itself is a sufficient justification in most companies), the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.


Meh. Not sure why knife dealers would be assumed to be more moral than firearms dealers. See, e.g. Delana v. CED Sales (Missouri)

> the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.

That... Doesn't happen.

Boycotts by people who weren't going to buy your product anyway are immaterial to business. The inevitable lawsuits are costly, but are generally thought of as good publicity, because they keep the business name in the news.


People who buy luxury kitchen knives are exactly the type of people who would choose not to buy a product because it is associated with crime.

People who buy (and make) firearms are... pretty close to the exact opposite.


So now it's "luxury" kitchen knives?

Goalposts moved.


Direct to consumer sales of kitchen knives are entirely luxury products... the goalposts are exactly where they've always been.


Ahhh, direct to consumer.

Where either it's a computer program (website) that knows nothing about you, or Cutco.

If you think you wouldn't find a cutco representative to sell to you, you're on some good reality-altering drugs.


sotto voce the knives are a metaphor


Doesn't matter.

There will always be some company willing to sell to even the worst person, in any product category.

The response that companies have to boycotts, and the results of the boycotts themselves, are fractally chaotic at best.

But even most nominally socially-aware companies are reactive, rather than proactive.


Since the knife vendors were metaphors for AI vendors, is the comparison you want to make "AI vendors & weapons manufacturers"? That's the standard we should judge them by?


It's not about the standard we should judge them by, which is equivalent to how we think they should act.

It's about how we think they will act.

Especially when it comes to sales to the US military, I have no expectations about how companies will act.

Hell, just look at how many companies willingly helped China with their Great Firewall.


> Not sure why knife dealers would be assumed to be more moral than firearms dealers

What I mean is that you _did_ judge them by a standard used for weapons manufacturers. How you react to their actions _is_ your judgement.

But perhaps that is the standard we should use. Weapons manufacturing is a well regulated industry after all. Export controls, dual-use technology restrictions, if it has applications for warfare it should be appropriately restricted.


> is that you _did_ judge them by a standard used for weapons manufacturers.

I think any of these companies will attempt to get away with whatever the fuck they can.

That has fuckall to do with your rhetorical question of:

> That's the standard we should judge them by?

