Hacker News | maxtaco's comments

I think in practice it doesn't work to deserialize only verified data. Snowpack has a mechanism for this but I found it impractical to require all use cases fit this form.

I'm not sure exactly what system you're describing, but I hope it doesn't involve hand-marshaling and unmarshaling of data structures. Your requirement (b) seems at odds with forward-compatibility. Lack of forward-compatibility complicates upgrades, especially in federated systems where you cannot expect all nodes to upgrade at once.

I might be biased, but it's been possible to write a sufficiently complex system (https://github.com/foks-proj/go-foks) without feeling "tortured." It's actually quite the opposite: the cleanest system I've programmed in for these use cases, and I've tried many over the last 25 years. I'm never guessing how to verify an object, so I'm not sure how that follows.

I also think it's worth noting that the same mechanism works for MACing, encryption, and prefixed hashing. Just today I came across this IDL code:

  struct ChunkNoncePayload @0xadba174b7e8dcc08 {
    id @0 : lib.FileID;
    offset @1 : lib.Offset;
    final @2 : Bool;
  }
And the following Go code for making a nonce for encrypting a chunk of a large file:

  pld := lcl.ChunkNoncePayload{
    Id:     fid,
    Offset: offset,
    Final:  isFinal,
  }
  hsh, err := core.PrefixedHash(&pld)
  if err != nil {
    return nil, err
  }
  var ret [24]byte
  copy(ret[:], (*hsh)[0:24])
  return &ret, nil
As in signing, it's nice to know this nonce won't conflict with some other nonce for a different data type, and the domain separation in the IDL guarantees that.
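To make the mechanism concrete, here is a minimal, hypothetical sketch of what a PrefixedHash-style helper could look like. The real internals of core.PrefixedHash (its hash function and payload encoding) aren't shown in the thread, so SHA-512 and a raw byte payload are assumptions here:

```go
package main

import (
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// prefixedHash sketches domain-separated hashing: the 64-bit type ID
// declared in the IDL (e.g. @0xadba174b7e8dcc08) is fed to the hash
// ahead of the encoded payload, so two different types can never
// produce the same hash, even for identical serialized bodies.
func prefixedHash(typeID uint64, encodedPayload []byte) [64]byte {
	h := sha512.New()
	var pre [8]byte
	binary.BigEndian.PutUint64(pre[:], typeID)
	h.Write(pre[:])         // domain separator first...
	h.Write(encodedPayload) // ...then the serialized struct
	var out [64]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	// Same payload bytes, different type IDs: the hashes (and hence
	// any nonces truncated from them) cannot collide.
	a := prefixedHash(0xadba174b7e8dcc08, []byte("same-bytes"))
	b := prefixedHash(0xdeadbeefcafef00d, []byte("same-bytes"))
	fmt.Println(a != b) // prints "true"
}
```

Truncating the first 24 bytes of such a hash, as in the Go snippet above, then yields a nonce that is bound to its type.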

Bingo!

Hi, post author here. Agree that the idea isn't tricky, but it seems like many systems still get it wrong, and there wasn't an available system that had all the necessary features. I've tried many of them over the years -- XDR, JSON, Msgpack, Protobufs.

When I sat down to write FOKS using Protobufs, I found myself writing down "Context Strings" in a separate text file. There was no place for them to go in the IDL. I had worked on other systems where the same strategy was employed. I got to thinking: whenever you need to write down important program details in something that isn't compiled into the program (in this case, the list of "context strings"), you are inviting potentially serious bugs due to the code and documentation drifting apart, and it means the libraries or tools are inadequate.

I think this system is nice because it gives you compile-time guarantees that you can't sign without a domain separator, and you can't reuse a domain separator by accident. Also, I like the idea of generating these things randomly, since it's faster and scales better than any other alternative I could think of. And it even scales into some world where lots of different projects are using this system and sharing the same private keys (not a very likely world, I grant you).
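A sketch of the random-generation idea, under the assumption (hypothetical; the actual FOKS tooling may differ) that the IDL tooling mints a type ID by drawing 64 bits from a CSPRNG and freezing the result into the schema:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// newTypeID mints a fresh domain separator: 64 random bits, recorded
// once in the IDL (as in `@0xadba174b7e8dcc08` above) and never
// changed thereafter. Randomness makes collisions between projects
// vanishingly unlikely with zero coordination.
func newTypeID() (uint64, error) {
	var buf [8]byte
	if _, err := rand.Read(buf[:]); err != nil {
		return 0, err
	}
	return binary.BigEndian.Uint64(buf[:]), nil
}

func main() {
	id, err := newTypeID()
	if err != nil {
		panic(err)
	}
	fmt.Printf("@0x%016x\n", id)
}
```

With 64 bits, a birthday collision needs on the order of 2^32 types, which is why no registry is required.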


It should be possible to change the name of a type, and this happens often in practice. But type renames shouldn't break preexisting signatures. In this scheme you are free to change the type name, and preexisting signatures still verify with new code -- as long as you never change the domain separator, which you should never do. You'd also need to worry about two different projects reusing the same type name. Lastly, the transmission size in this scheme is unaffected, since the domain separators do not appear in the serialized data; both sides agree on them via the protocol specification.

That’s easily addressed. We just need a global immutable registry of types, their names, an alias list and revocation list. ;-)

We can let one be managed by ICANN and the others various competing offerings on ETH.


I would say there are two problems with the ASN.1 approach: (1) giving the OIDs semantic meaning seems like too much cognitive overhead, and it invites accidental reuse; it matters far more that the OIDs are unique, which randomness gets you without much effort; and (2) the OIDs aren't always serialized first -- they are allowed to appear inside the message -- and failures have resulted (https://nvd.nist.gov/vuln/detail/cve-2022-24771, https://nvd.nist.gov/vuln/detail/CVE-2025-12816)

(edit on where the OIDs can be, and added another CVE)


OIDs have to be unique just enough to not fall into the wrong parsing/validating path in the same system, which isn't that hard.

>Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure

That's on brand for the javascript world, yes.

With asn1 being a can of worms, at least it's a can of worms with a reputation, unlike this nice magic trick.

Disclaimer: there exists a PR filed under my name against an asn.1 parser that fixes a bug; it has not been merged since October 2022.


Those CVEs seem a little more subtle than OID serialization issues. In the first example there are actually two distinct problems in concert that lead to the vulnerability, one of which is when a "low public exponent" is used.

https://github.com/digitalbazaar/forge/commit/3f0b49a0573ef1...


It seems like in that PR, the fact that the OID wasn't checked is part of the problem. I think a better system wouldn't compile or would always fail to verify if the OID (domain separator) is wrong, and I think you'd get that behavior in the posted system.

This is Bleichenbacher's rump-session e=3 RSA attack. It's pretty straightforward, and is in Cryptopals if anyone wants to try it. If you don't check all the RSA padding, and you use e=3, you can just take an integer cube root.
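A hedged sketch of the attack in Go, assuming a verifier that checks only the prefix of the recovered block. The names forge, lenientVerify, and iCbrtCeil are hypothetical; the padding is simplified (real PKCS#1 v1.5 has a long 0xFF run and an ASN.1 DigestInfo), and the mod-n reduction is a no-op because the cube stays below a 3072-bit modulus:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"math/big"
)

// forge builds block = 00 01 FF 00 || hash(msg) || <garbage> as an
// integer, then takes the integer cube root (rounded up). Cubing the
// result reproduces the prefix; the rounding error lands entirely in
// the garbage region. No private key is involved.
func forge(msg []byte, modBits int) []byte {
	h := sha256.Sum256(msg)
	prefix := append([]byte{0x00, 0x01, 0xff, 0x00}, h[:]...)
	garbageBits := modBits - len(prefix)*8
	target := new(big.Int).SetBytes(prefix)
	target.Lsh(target, uint(garbageBits)) // leave room for garbage
	return iCbrtCeil(target).Bytes()      // the forged "signature"
}

// lenientVerify models the buggy check: cube the signature (e=3; the
// cube is below the modulus, so reduction changes nothing) and test
// only the prefix of the block, ignoring trailing bytes.
func lenientVerify(sig, msg []byte, modBits int) bool {
	h := sha256.Sum256(msg)
	want := append([]byte{0x00, 0x01, 0xff, 0x00}, h[:]...)
	cube := new(big.Int).Exp(new(big.Int).SetBytes(sig), big.NewInt(3), nil)
	block := cube.FillBytes(make([]byte, modBits/8))
	return bytes.HasPrefix(block, want)
}

// iCbrtCeil returns the smallest integer r with r^3 >= n (binary search).
func iCbrtCeil(n *big.Int) *big.Int {
	lo := big.NewInt(1)
	hi := new(big.Int).Lsh(big.NewInt(1), uint(n.BitLen()/3+2))
	three := big.NewInt(3)
	one := big.NewInt(1)
	for lo.Cmp(hi) < 0 {
		mid := new(big.Int).Add(lo, hi)
		mid.Rsh(mid, 1)
		if new(big.Int).Exp(mid, three, nil).Cmp(n) < 0 {
			lo.Add(mid, one)
		} else {
			hi.Set(mid)
		}
	}
	return lo
}

func main() {
	msg := []byte("attack at dawn")
	sig := forge(msg, 3072)
	fmt.Println(lenientVerify(sig, msg, 3072)) // prints "true": forgery accepted
}
```

A strict verifier would instead regenerate the full padded block and compare every byte, which leaves no garbage region for the cube root to absorb.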

Also on the front page of HN right now is a job posting for Optery (YC W22). Seems like they are growing really fast.


An intended use case for FOKS (https://foks.pub) is to allow long-lived durable shared secrets between users and teams with key rotation when needed.


I think one could build something nice on top of FOKS (https://foks.pub).


Thanks for these great questions!

- limiting users to delete/push/force; this is possible but I don't see how to cryptographically guarantee it. The server can't really help since it doesn't know what's a pack, index data block or ref. The clients can enforce this policy, but then it would be possible to make an evil client that skirts the policy. How much protection do you think you need?

- the server repo right now is implemented 100% as a PostgreSQL DB, so yes, I think that means it's incremental-backup-friendly? [1]

- e2ee git has trouble being as efficient as regular git since the server can't tell you which blocks it has; however, there are pretty good optimizations made using indices and packfiles, the white paper has more details, and I hope to write a blog on it soon.

- I'm not sure about the sqlite question. Is there a good way to back up sqlite incrementally over standard git? If not, then maybe the KV-store is a better fit for this application.

I agree that git over E2EE is the best storage, even for things like PDFs and photos. Yeah, FOKS should be hostable on a very thin VPS. Storage needs scale as O(n log n) in the number of users due to the Merkle Tree, but for small n, this is likely fine!

[1] https://www.postgresql.org/docs/17/continuous-archiving.html...


Can we fetch incremental changes from KV-store? I want to read the KV store docs. Could you share that?

Why is server repo specifically Postgres?

Thanks for the answers!


No incremental fetch right now beyond what PostgreSQL provides by default. If you're hosting a FOKS server, there is important metadata to back up too.

The best docs for the KV store are in the white paper, Section 5.1. The white paper is linked from foks.pub

Postgres seemed to check all the boxes for me. I wanted to keep things very simple at first in terms of setup and ongoing management, so didn't introduce other storage backends for other parts of the system.


This would be a great application for us! We are not exactly there yet, for reasons of privacy. Right now, there is no way for alice@host to allow unauthenticated users to view her profile. But we can definitely allow this on a host-by-host basis. With this small change, I think your application fits very naturally.

I wonder, what sort of interface is right for you? A library to compile against or a CLI app to shell out to? If a library, which languages?


Interesting! We're at a very early stage of the implementation and develop in Rust. We aim to provide multi-sig capabilities, as defined in a JSON file where the public keys of the signers can be found. If a signer loses a key, we want this 'signers' file to be updatable with the new key. We decided that signers can be humans or processes, so the keys are not an identity of a person, which might be an important detail. Currently, to update a signers file, other members of the multi-sig must sign the update. This works fine, but we are early enough in the project's implementation to explore other approaches, hence my question.

We'd rather not shell out to a cli, and would preferably go with a lib or rest interface.


An attack that might be of concern with this configuration is the server suppressing updates to this JSON file, or showing different versions of the JSON file to different clients. What you're describing is pretty close to what FOKS is getting at with signature chains and Merkle Trees, but maybe it's overkill for this particular application.
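A minimal sketch of the chaining idea, stripped of signatures and Merkle trees (which FOKS layers on top; the `link` type here is purely illustrative, not FOKS's actual structure). Each new version of the signers file commits to the hash of the previous version, so a server that drops or forks an update produces divergent heads that clients can detect by comparing notes:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// link is one entry in a hash chain of signers-file versions. prev is
// the hash of the previous link (all zeroes for the first version).
type link struct {
	prev    [32]byte
	payload []byte // the new signers-file contents
}

// hash binds this version to its entire history: changing any earlier
// payload changes every later head.
func (l link) hash() [32]byte {
	h := sha256.New()
	h.Write(l.prev[:])
	h.Write(l.payload)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	v1 := link{payload: []byte(`{"signers":["alice","bob"]}`)}
	v2 := link{prev: v1.hash(), payload: []byte(`{"signers":["alice","carol"]}`)}
	// A server that shows v1 to one client and v2 to another has
	// created a fork; the two heads differ and out-of-band comparison
	// (or a transparency log / Merkle tree) exposes it.
	fmt.Printf("%x\n", v2.hash())
}
```

The remaining gap, as noted above, is that clients must compare heads somehow; that's what pinning chain heads into a global Merkle tree buys you.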

I wonder if the policy you describe could be implemented as a world-visible team with world-visible users. Others have commented on the need for something like this, so I think it should be pursued with high priority. What's slightly fuzzy to me is how these totally world-viewable teams and users would interact with more closed-down users on other servers.


Although I didn't consider this attack possibility (thanks for raising it!), I think we are reasonably immune to it, or at least able to detect it, given the way we manipulate and store the JSON (though completely avoiding it seems unattainable; at some point the client has to trust the response it gets from a server, am I right? Otherwise I'm very interested in pointers to learn more!)

World-visible teams and users might be a way to define our multi-sig members. But we would still need a JSON file for other characteristics of the multi-sig. I'll keep an eye on FOKS as, if it becomes a good fit, it might let us concentrate on our service and not on key-management intricacies. My email is on my HN profile, in case you want to notify me of advancements fitting our use case.

