I'm working on a multisig file authentication solution based on minisign. Does anyone know the developer's response to minisign's listed vulnerabilities? If I'm not mistaken, the authors' responses are not included in the vulnerability descriptions.
Because the authors found out about it by chance on Hacker News.
That said, these issues are not a big deal.
The first one concerns someone manually reading a signature with cat (which is completely untrusted at that stage, since nothing has been verified), then using the actual tool meant to parse it, and ignoring that tool’s output. cat is a different tool from minisign.
If you manually cat a file, it can contain arbitrary characters, not just in the specific location this report focuses on, but anywhere in the file.
The second issue is about trusting an untrusted signer who could include control characters in a comment.
In that case, a malicious signer could just make the signed file itself malicious as well, so you shouldn’t trust them in the first place.
Still, it’s worth fixing. In the Zig implementation of minisign, these characters are escaped when printed. In the C implementation, invalid strings are now rejected at load time.
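The two mitigations described above can be sketched like this (illustrative only, not minisign's actual code; the function names are hypothetical):

```python
# Illustrative sketch -- not minisign's actual code.
# Two ways to defuse control characters in an untrusted comment:
# escape them when printing (the Zig approach described above), or
# reject the string at load time (the C approach described above).

def escape_comment(comment: str) -> str:
    # Replace every non-printable character with a visible \xNN escape.
    return "".join(c if c.isprintable() else f"\\x{ord(c):02x}" for c in comment)

def load_comment(comment: str) -> str:
    # Refuse to load a comment containing non-printable characters.
    if not comment.isprintable():
        raise ValueError("comment contains control characters")
    return comment

# A terminal escape sequence embedded in a comment comes out harmless:
print(escape_comment("trusted comment: ok\x1b[2Jcleared"))
```

Either way, a malicious comment can no longer smuggle terminal escape sequences past a user who prints it.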
I've finished the library providing all the features of a multisig file signing scheme. With that, it was easy to develop a CLI tool, and now I'm looking at developing the server component.
Looking forward to sharing a complete solution! Git-backed, decentralized, no account creation needed (auth by key pair), open source and self-hostable!
>with swarm and traefik, I can define url rewrite rules as container labels. Is something equivalent available?
Yep, you define the mapping between the domain name and the internal container port as `x-ports: app.example.com:8000/https` in the compose file. Or you can specify a custom Caddy config for the service as `x-caddy: Caddyfile`, which lets you customise it however you like. See https://uncloud.run/docs/concepts/ingress/publishing-service...
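In context, that could look something like this (the service name and image are placeholders; the `x-ports` value follows the `domain:port/protocol` form above):

```yaml
# Hypothetical compose service using the x-ports mapping described above.
services:
  app:
    image: ghcr.io/example/app:latest       # placeholder image
    x-ports: app.example.com:8000/https     # container port 8000, served at this domain over HTTPS
```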
>if I deploy 2 compose 'stacks', do all containers have access to all other containers, even in the other stack?
Yes, there is no network isolation between containers from different services/stacks at the moment. Here is an open discussion on stack/namespace/environment/project concepts and isolation: https://github.com/psviderski/uncloud/discussions/94.
What's your use case and how would you want this to behave?
I like that I can put the containers to be exposed on the traefik-public network and keep others, like databases, unreachable from traefik. This organisation of networks is very useful: it lets me make containers reachable across stacks, while keeping some containers in a stack reachable only from other containers on the same network in that same stack.
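That layout can be sketched in compose terms like this (service and network names are illustrative):

```yaml
# Illustrative Swarm-style layout: only `web` joins the shared
# traefik-public network; the database stays on the stack-internal one.
services:
  web:
    image: example/web:latest
    networks: [traefik-public, internal]
  db:
    image: postgres:16
    networks: [internal]          # unreachable from traefik

networks:
  traefik-public:
    external: true                # shared across stacks
  internal: {}                    # scoped to this stack
```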
Secrets -- yes, it's being tracked here: https://github.com/psviderski/uncloud/issues/75 Compose configs are already supported and can be used to inject secrets as well, but there's no encryption at rest in that case, so it might not be ideal for everyone.
Speaking of Swarm and your experience with it: in your opinion, is there anything that Swarm lacks or makes difficult, that tools like Uncloud could conceptually "fix"?
Swarm is not far from my dream deploy solution, but here are some points that could be better, some of which I think are already better in uncloud:
- community energy is low; it's hard to find an active discussion channel for Swarm users
- Swarm does not support the complete Compose file format, which is really annoying
- sometimes deploys fail for unclear reasons (e.g. a network was not found, but why, since it's defined in the compose file?) and work on the next try. This has never led to problems, but it doesn't feel right
- working with authenticated/custom registries is somewhat cumbersome
- having to work with registries to get the same image deployed on all nodes is sometimes annoying. It would be nice to have images spread across nodes.
- there's no contact between devs and users. I've just discovered uncloud and I've had more contact with its devs here than in years of using swarm!
- the firewalling is not always clear/clean
- logs accessibility (service vs container) and container identification: when a container fails to start, debugging is sometimes harder than it needs to be (especially when the image is not available)
This is based on the Chromium Embedded Framework. I've always been surprised this kind of framework was never encouraged by Mozilla for Firefox (I've read they were even against it).
This is an honest question, not trying to get into an argument...
> I don't want extensible software. KDE is terrible in that regards. They have miriads of options, that's too much for me.
Why not use the provided defaults then and treat them as opinionated? That's what I actually do. I might change very few options, but I generally use the defaults. It's not that you have to configure KDE before it becomes usable; the defaults are pretty OK.
This is only true if complexity under the hood actually affects your default experience, and I don't think that's the case for KDE. "The chance" is indeed higher, except that in GNOME the bugs seem to be actually real.
As I'm working on a signing scheme for release authentication, this is welcome news.
To alleviate the issue of mutable releases, I had set up a mirror of release checksums to be able to detect release alterations. This is no longer needed for immutable releases.
Automatically publishing checksums of release artifacts is also a good recent change by GH: in the project mentioned above, I developed a CLI downloader that checks the checksums of downloaded files [1], but to be useful it required the project to publish checksums and the project to be mirrored. Now both of those requirements are dropped, and the tool is readily useful for all GitHub immutable releases.
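The core check such a downloader performs can be sketched like this (a generic SHA-256 comparison, not the tool's actual code; the function name is hypothetical):

```python
# Generic sketch of artifact verification: hash the downloaded file
# in chunks and compare against the published digest.
import hashlib
import hmac

def verify_checksum(path: str, expected_hex: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    # Constant-time comparison of the hex digests.
    return hmac.compare_digest(h.hexdigest(), expected_hex.lower())
```

With immutable releases, a digest recorded once at release time stays valid for the lifetime of the artifact, which is what makes this check meaningful without a mirror.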
Clicking on the link to valetudo.cloud redirects me to HN, but typing the address does not. This seems to confirm the messages complaining about the project's community/leaders...