Hacker News | formerly_proven's comments

IIRC, sending HID feature reports doesn't need admin rights on Windows.

Kyber/ML-KEM-only is exactly the Suite B successor (CNSA 2.0) recommendation.

I used my 1080 Ti for about eight years. The successor GPU is in some ways way faster (raytracing, AI features, etc.), but in others really quite stagnant considering the huge stretch of time that passed between them. ~10 years for 2-3x performance in GPUs, at higher nominal and real price points, shows how slow silicon advances have been compared to the 90s and 2000s. The same period from 2000 to 2010 would've seen 1000x performance, if not more. The difference between a 1080 Ti and a more expensive RTX 50 card is that the RTX can render ideally triple the frames in synthetic benchmarks, double the frames in some rasterizing games (most games won't see gains that high), and do a few relatively tame raytracing tricks at performance which is still not really good. At the same throughput it consumes maybe half the power or a bit less. The difference between a GeForce 2 and e.g. a Radeon HD 4000 is several planes of existence.

My 1080 Ti is still working away in my kid's PC. If you connect a 1080p monitor, it will still hit 60fps in almost everything.

The only thing that holds this card back now is a handful of titles that will not run unless ray-tracing support is present on the card - Indiana Jones and the Great Circle springs to mind.

I am very likely going to get a decade of use out of it across three different builds, one of the best technology investments I've ever made.


It really is an impressive bit of hardware. I finally pulled it out of my last system a year ago, but it was definitely holding its own up until that point.

Well, the 5090 is significantly faster than a 1080 Ti. It has 92B vs 12B transistors. That's the 10-year difference you mention. 10 years before the 1080 Ti we had the 8800 Ultra with 600M transistors. So yeah, you are a bit right. But stacked transistors might become reality in the future and enable transistor-count increases again.

A 5090 is more than twice as expensive as a 1080 Ti in real MSRP terms and way more than that in actual real terms, since the 1080 Ti was available for some time below MSRP, while the 5090 realistically never was and usually goes for 50-100% above MSRP. So I don't think these can be compared. Basically a similar story with the 5080, it's significantly more expensive in real terms (and about ~2x in nominal terms).

The 5070 Ti would be in the same spot.

If you compare these, the RTX 50 card has a bit higher TDP (which it will usually not reach due to clock limits), a roughly 100mm² smaller die with around 4x the transistors, and about 3x the compute (since much more of the chip is disabled compared to the 1080 Ti's chip). It has 5 GB more memory (11->16) and a lot more bandwidth.


The G200 mattered to some degree for a long time, because most x86 servers up until a few years ago would ship a G200 implementation or at least something pretending to be a G200 card as part of their BMC for network KVM.

Like virtualized NICs pretending to be an NE2000? That's interesting, do you know why they'd use a G200 and not something like an older ATI chip?

Probably started out as a real G200 chip which might’ve been the cheapest and easiest to integrate in the 2000s? Or it had the needed I/O features to support KVM (since this would’ve involved reading the framebuffer from the BMC side), or Matrox was amenable to adding that.

The ATi Rage 128 was used in everything short of toasters for a long time too. I assume that the drivers are part of what made it obsolete.

I remember having a ton of servers with cut-down Mach64 chips. They were so bad that you would get horizontal lines flickering across the screen while text was scrolling in an 80x25 text console. I don't know why server manufacturers go to so much effort to make the console as terrible as possible. Are they nostalgic for the 8-bit ISA graphics of the original 5150? They seem offended at the idea that someone might hook a crash cart directly up to their precious hardware.

They were probably forced to update when they dropped older buses. Without a PCI or AGP bus on there, they have to find something that can hang off a PCIe lane.

Drivers, probably.

Even current Dell servers less than a year old ship with G200 graphics. If it works, why change it? A 1998 ASIC can be put in the corner of a modern chipset for pennies or less.

Some 120 million americans currently approve of the Trump II administration :)

Too bad they didn’t vote

WinForms is a Win32 API wrapper, so on the same level as MFC, not a separate UI framework.

It is a wrapper, but it's not quite on the same level as MFC. MFC really is a thin wrapper, almost 1:1 in most places. WinForms is more like VCL or VB6 in that it uses Win32 where it can but doesn't design around it, so in practice it's more high-level.

Doesn't Anthropic claim that Claude Code is 100% written by Claude, which would obviously mean that it is not copyrightable code, that the DMCA therefore does not apply, and logically that these DMCA claims are invalid?

IBM has two architectures which are de facto only used by them, s390x and ppc64le. They have poured a lot of resources into having open source software support those targets, and this announcement might mean they find it easier/cheaper going forward to virtualize ARM instead and maybe even migrate slowly to ARM.

AIX is still ppc64be. That and s390x are the only big-endian CPUs I can think of which aren't end-of-life, which I think is going to be an increasing maintenance burden over time for IBM alone.

I think they see customers wanting to have the flexibility to move to ARM and this is the fastest way to say they support ARM workloads. Maybe this is a path for IBM to eventually use ARM chips down the road, but I see this as being more about meeting customers where they think the demand is today rather than an explicit guess for tomorrow.

ppc64le has other machines. Raptor, off the top of my head, but there's also that weird notebook project that seems to be talked about once every few years and probably won't ever happen, and some pretty cool stuff in the Amiga space (I don't know if that's strictly LE, but POWER is supposed to be bi-endian).

The PCB design for a small desktop computer (which is a step toward the notebook project) was finished two weeks ago, and they are trying to get funding to actually manufacture a few prototypes right now [0].

[0]: https://www.powerpc-notebook.org/2026/03/we-are-ready-for-pr...


I'm not saying there hasn't been progress, rather, it doesn't seem like it's ever going to release

Is Raptor still relevant? Looking at their website (https://www.raptorcs.com/), they announce such novel technologies as PCIe 4.0 and DDR4, and POWER9, which was released in 2017, while IBM is already on Power11.

This article claims that these are somewhat open questions, but they're not and have not been for a long time.

#1: You sign a blob and you don't touch it before verifying the signature (aka "The Cryptographic Doom Principle").

#2: Signatures are bound to a context which is _not_ transmitted, but is used for deriving the key or mixed into the MAC or what have you. This is called the Horton Principle. It ensures that signer and verifier must cryptographically agree on which context the message is intended for. You essentially cannot implement this incorrectly, because if you do, all signatures will fail to verify.

The article actually proposes to violate principle #2 (by embedding some magic numbers into the protocol headers and presuming that someone will check them), which is an incorrect design and will result in bad things if history is any indication.

Principles #1 and #2 are well-established cryptographic design principles for just a handful of decades each.


Maybe I'm misunderstanding the article but I'm fairly sure the magic number is not transmitted.

It's used exactly as you say: a shared context used as input for the signature that is not transmitted.


You’re right, but I think the commenter you’re replying to is also right.

The OP is using unreadable hex strings in a way that obscures what’s actually going on. If you turn those strings into functionally equivalent text, then the signatures are computed over:

    (serialized object, “This is a TreeRoot”)
and the verifier calls the API:

    func Verify(key Key, sig []byte, obj VerifiableObjecter) error
(I assume they meant Object not Objector.)

This API is wrong, full stop. Do not use this design. Sure, it might catch one specific screwup, but it will not catch subtler errors like confusing a TreeRoot that the signer trusts with a TreeRoot that means something else entirely. And it requires canonical encodings, which serves no purpose here. And it forces the verifier to deserialize unverified data, which is a big mistake.

The right solution is to have the sender sign a message, where:

(a) At the time of verification, the message is just bytes, and

(b) The message is structured such that it contains all the information needed to interpret it correctly.

So the message might be a serialization of a union where one element is “I trust this TreeRoot” and another is “I revoke this key”, etc. and the verification API verifies bytes.

If you want to get fancy and make domain separation and forward-and-backward-compatibility easier, then build a mini deserializer into the verifier that deserializes tuples of bytes, or at most UUIDs or similar. So you could sign (UUID indicating protocol v1 message type Foo, serialization of a Foo). And you make that explicit to the caller. And the verifier (a) takes bytes as input and (b) does not even try to parse them into a tuple until after verifying the signature.

P.S. Any protocol that uses the OP’s design must be quite tortured. How exactly is there a sensible protocol where you receive a message, read enough of it to figure out what type (in the protobuf sense) it contains such that there is more than one possible choice, then verify the data of that type? Are they expecting that you have a message containing a oneof and you sign only the oneof instead of the entire message? Why?


I think in practice it doesn't work to deserialize only verified data. Snowpack has a mechanism for this but I found it impractical to require all use cases fit this form.

I'm not sure exactly what system you're describing, but I hope it doesn't involve hand-marshaling and unmarshaling of data structures. Your requirement (b) seems at odds with forward-compatibility. Lack of forward-compatibility complicates upgrades, especially in federated systems when you cannot expect all nodes to upgrade at once.

I might be biased, but it's been possible to write a sufficiently complex system (https://github.com/foks-proj/go-foks) without feeling "tortured." It's actually quite the opposite: the cleanest system I've programmed in for these use cases, and I've tried many over the last 25 years. Never am I guessing how to verify an object; I'm not sure how that follows.

I also think it's worth noting that the same mechanism works for MAC'ing, Encryption and prefixed hashing. Just today I came across this IDL code:

  struct ChunkNoncePayload @0xadba174b7e8dcc08 {
    id @0 : lib.FileID;
    offset @1 : lib.Offset;
    final @2 : Bool;
  }
And the following Go code for making a nonce for encrypting a chunk of a large file:

  pld := lcl.ChunkNoncePayload{
    Id:     fid,
    Offset: offset,
    Final:  isFinal,
  }
  hsh, err := core.PrefixedHash(&pld)
  if err != nil {
    return nil, err
  }
  var ret [24]byte
  copy(ret[:], (*hsh)[0:24])
  return &ret, nil
As in signing, it's nice to know this nonce won't conflict with some other nonce for a different data type, and the domain separation in the IDL guarantees that.

No, I'm pretty sure they are saying you need to transmit it

No, they propose just concatenating it with the data received from the network

> it makes a concatenation of the domain separator (@0x92880d38b74de9fb) and the serialization of the object, and then feeds the byte stream into the signing primitive. Similarly, verification of an object verifies this same reconstructed concatenation against the supplied signature.

> Note that the domain separator does not appear in the eventual serialization (which would waste bytes), since both signer and receiver agree on it via this shared protocol specification. Encrypt, HMAC, and hash work the same way


You are, of course, right. And this distinction is important for this chain of comments.

Though, in fairness, that is /kind of/ like transmitting it, in the sense that it impacts the message that is returned. It's more akin to sending a checksum of the magic number rather than the magic number itself. But conceptually, that is just an optimization. The desire is for the client to ensure the server is using the same magic number; we just so happen to be able to overload the signature to encode this data without increasing the message size.


Oh, it's just in the hash input. So if you don't use the right ID when you check the hash, it fails.

I think not:

> Note that the domain separator does not appear in the eventual serialization (which would waste bytes), since both signer and receiver agree on it via this shared protocol specification.

But saying it's about wasting bytes is a little confusing, as you observe that isn't really the point.


It is definitely not transmitted.

Domain separation happens in the input to the hash function, not on the wire. Because what arrives off the wire is UNTRUSTED input.
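This is easy to demonstrate: mixing a separator into the hash input changes the digest even when the attacker-controlled payload is identical. A tiny sketch (domain strings are made up for illustration; real designs typically fix or length-prefix the separator):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// domainHash mixes an untransmitted domain separator into the hash
// input. The attacker controls the payload bytes off the wire, but
// never the separator, which is compiled into each endpoint.
func domainHash(domain, payload []byte) [sha256.Size]byte {
	h := sha256.New()
	h.Write(domain)
	h.Write(payload)
	var out [sha256.Size]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	p := []byte("untrusted bytes off the wire")
	a := domainHash([]byte("v1/TreeRoot"), p)
	b := domainHash([]byte("v1/KeyRevoke"), p)
	fmt.Println(a == b) // false: same payload, different domains
}
```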


Hmmmm. I agree that an ad-hoc implementation with protobufs can go wrong. But presumably, 1 canonical encoding for the private key constitutes the Horton principle?

It seems like Horton Principle just says "all messages have ≤1 meaning". If a message signed by key X must be parsed using the canonical encoding, then aren't we done?

There is still room for danger. e.g., You send `GetUserPermissionLevel(user:"Alice")` and server responds with `UserNicknameIs(user:"Alice", value:"admin")`. If you fail to check the message type, you might get tricked.

Maybe it would be nice if it were mathematically impossible to validate the signature without first providing your assumptions. e.g., the subroutine to validate message `UserNicknameIs(user:"Alice", value:"admin")` requires `ServerKey × ExpectedMessageType`. But "ExpectedMessageType" isn't the only assumption being made, is it?

You might get back `UserPermissionLevel(user:"Bob", value:"admin")` or `UserPermissionLevel(user:"Alice", value:"admin", timestamp:"<3d old>")`. Will we expect the MAC to somehow accept a "user" value? And then what do we do about "timestamp"?

Maybe we implement `ClientMessage(msgUuid: UUID, requestData:...)` and `ServerResponse(clientMsgUuid: UUID, responseData:...)`, but now the UUID is a secret, vulnerable to MITM attack unless data is encrypted.

It seems like you simply must write validation code to ensure that you don't misinterpret the message that is signed. There simply isn't any magic bullet. Having multiple interpretations for a sequence of bytes is a non-starter (addressed in the post). But once you have a single interpretation for a sequence of bytes, isn't it up to the developer to define a schema + validation logic that supports their use case? Maybe there are good off-the-shelf patterns, but--again--no magic bullets?


Are keys that expensive to generate? You could have a unique signature key for each data type.

The article proposes a way to agree on context out of band and enforce it with an IDL. This seems to be an implementation of the principle you mention.

No, it’s completely wrong. It’s a very minor refinement of a terrible yet sadly common design that merely mitigates one specific way that the terrible design can fail.

See my other comment here. By the time you call the OP’s proposed verify API you have already screwed up as a precondition of calling the API.


What if (and this is perhaps too big an if) you only ever serialize and deserialize with code generated from the IDL, which always checks the magic numbers (returning a typed object)?

It's a big if because the threat model normally includes "bad guys can forge messages". Which means that the input is untrusted and you want to generate your own domain separation bytes for the hash function, not let your attacker choose them.

That's just the FSF editing the license text without updating the version number or date.

https://web.archive.org/web/20160114094744/https://www.gnu.o...

