One thing that makes me unsure about this proposal is the silent downgrading on unsupported platforms. People might think they're safe when they're not.
Go has the best support for cryptography of any language
I'm not sure there's a realistic alternative. If you need to generate a key, then it has to happen somehow on unsupported platforms. You can check Enabled() if you need to know and intend to do something different, but I assume most of the time you run the same function either way; you'd just prefer to opt into secret mode if it's available.
This is not what secret.Enabled() means. But it probably illustrates that the function needs to be renamed already. Here's what the doc comment says:
// Enabled reports whether Do appears anywhere on the call stack.
In other words, it is just a way of checking that you are indeed running inside the context of some secret.Do call; it doesn't guarantee that secret.Do is actually offering the protection you may desire.
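A toy model can make that distinction concrete. This is not the real runtime/secret package; the counter-based tracking here is purely illustrative (and single-goroutine only). The point is that Enabled answers "am I inside Do?", not "am I protected?":

```go
package main

import "fmt"

// inDo models "Do appears somewhere on the call stack".
// Illustrative only: the real package would track this per goroutine
// in the runtime, and Do would also enable platform protections.
var inDo int

// Do runs f with the "inside Do" marker set.
func Do(f func()) {
	inDo++
	defer func() { inDo-- }()
	f()
}

// Enabled reports whether we are inside a Do call. Note that this says
// nothing about whether the platform actually protects the memory.
func Enabled() bool { return inDo > 0 }

func main() {
	fmt.Println(Enabled()) // false: not inside Do
	Do(func() {
		fmt.Println(Enabled()) // true: inside Do, protected or not
	})
}
```

So a library cannot use Enabled() alone to decide whether secrets are actually being protected; it only tells you that you're running under some Do call.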
One of the goals here is to make it easy to identify existing code which would benefit from this protection and separate that code from the rest. That code is going to run anyway, it already does so today.
Not OP, but Go has some major advantages in cryptography:
1. Well-supported standard libraries generally written by Google
2. Major projects like Vault and K8s that use those implementations and publish new stuff
3. Primary client language for many blockchains, bringing cryptography contributions from the likes of Ethereum Foundation, Tendermint, Algorand, ZK rollups, etc
Do you mean “best support for cryptography in the standard library”?
Because there is tremendous support for cryptography in, say, the C/C++ ecosystem, which has traditionally been the default language of cryptographers.
Yeah, the standard library crypto package is really good, and so is the tls package. There's also golang.org/x/crypto, which is separate because it doesn't fall under the Go compatibility guarantee. You can do all kinds of hashes, generate certs, check signatures, and do AES encryption, all built in and accessible. There are even lower-level constant-time compare functions and everything.
I'm a big fan of the go standard library + /x/ packages.
4. The community seems to have realized that untangling the mess that is building C/C++ stuff is a fool's errand and seems to mostly prefer to reimplement it in Go
Guessing it's also taking care to use assembly calls to zero out and clear the memory region as part of the GC... I would guess the clear/GC characteristics are otherwise the same, but having access to RAM on a non-supported platform could, in theory, allow for stale reads of raw memory.
This is likely done for platform performance, and having a manual version likely hinders the GC in a way that's deemed too impactful. Beyond this, if SysV or others contribute specific patches that aren't brute-forced (such as RISC-V extensions), I would assume that the Go maintainers would accept them.
> Go has the best support for cryptography of any language
This isn't true at all.
Writing cryptography code in Go is incredibly annoying and cumbersome due to the lack of operator overloading, forcing you to do method calls like `foo.Add(bar.Mul(baz).Mod(modulus)).Mod(modulus)`. These also often end up having to be bignums instead of generic fixed-size field arithmetic types. Rust has incredibly extensive cryptographic libraries, the low-level ones taking advantage of operator overloading so the code ends up following the notation in the literature more closely. The elliptic_curve crate in particular is very nice to work with.
I'd probably want some way to find out whether secret.Do is running in a secret-supporting environment, so that I could show a user warning, force a user confirmation, or require a generate_secrets_on_unsupported_platforms flag.
But this is probably a net improvement over the current situation, and it's still experimental, so changes can happen before it gets to GA.
I think C# has been doing really well... I've appreciated the efforts to open the platform since Core... Though I do know a few devs that have been at it as long as I have that don't like the faster lifecycle since the move from Framework.
Hello from one of them. While I appreciate the modernisation (with learnings from Midori as well) and officially going cross-platform, it appears that some features exist mainly to justify the team size.
Most of our agency projects that have .NET in them are brownfield, ongoing projects that mostly use .NET Framework, so we end up only using modern .NET when given the opportunity to deliver new microservices and the customer happens to be a .NET shop.
The last time this happened, .NET 8 had just been released. Most devs I work with tend to be journeymen; they aren't chasing programming-language blogs or online communities to find out what changes in each release. They do the agency work and go home to friends, family, and non-programming hobbies.
Go is supposed to be cross-platform. I guess it's cross-platform until it isn't, and will silently change the semantics of security-critical operations (yes, every library builder will definitely remember to check if it's enabled).
If you need this for Windows so desperately why aren’t you offering to add support for that platform? It’s open source.
Many advanced Go features start on certain platforms and then expand to others once the kinks are worked out. It's a common pattern and has many benefits. Why port before it's stable?
Which is exactly why it should fail explicitly on unsupported platforms unless the developer says otherwise. I'm not sure how Go developers make things obvious, but presumably you have an ugly method or configuration option like:
dangerousAllowSecretsToLeak()
...for when a developer understands the risk and doesn't want to panic.
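A rough sketch of that fail-closed pattern. Everything here is hypothetical: supported(), doSecret, and the opt-out flag are not part of the actual proposal, they just show the shape of "panic on unsupported platforms unless explicitly overridden":

```go
package main

import "fmt"

// Hypothetical: whether this platform actually offers memory protection.
// In a real implementation this would come from the runtime, not a stub.
func supported() bool { return false }

// Opt-out a developer must set explicitly to run unprotected.
var allowSecretsToLeak bool

func dangerousAllowSecretsToLeak() { allowSecretsToLeak = true }

// doSecret fails closed: it refuses to run on an unsupported platform
// unless the developer has explicitly acknowledged the risk.
func doSecret(f func()) {
	if !supported() && !allowSecretsToLeak {
		panic("secret: no platform protection; call dangerousAllowSecretsToLeak() to proceed anyway")
	}
	f()
}

func main() {
	dangerousAllowSecretsToLeak() // explicit, ugly, greppable acknowledgment
	doSecret(func() {
		fmt.Println("generating key without platform protection")
	})
}
```

The point of the ugly name is that it shows up in code review and in grep, instead of a silent downgrade.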
This is a sharp-edged tool guarded behind an experimental flag. You are not meant to use it unless you want to participate in the experiment. Objections like this and the other one ("check if it's enabled" -- you can't, that's not what secret.Enabled() means) illustrate that this API may still need further evolution, which it won't get if it's never available to experiment with.
I want something with a simpler backend than immich. I don't really want to host it because it needs lots of stuff to run. I would love one that can do sqlite and is a single binary go (or rust) program.
It auto uploads all your photos to the cloud and you can delete them locally and still have them. The biggest feature is the AI search, you can type anything and it will find your pictures without you doing any work categorizing them. It can do objects or backgrounds or colors and it can even do faces so you can search by people's name. That and there's share links to albums and multiplayer albums.
It keeps the originals locally forever after upload, unless you delete them. There's a one-click "free up space on this device" button to delete the local files. It's actually somewhat annoying to export in bulk; you pretty much have to use Takeout.
What a disgusting article. It's ableist to say that disabled students won't be able to make it at Stanford. The only weird part is calling anxiety and depression a disability.
Saying that people who are using accommodations are cheating is morally repugnant.
Instead of saying that we need to clamp down on people claiming disabilities, we should open up the accommodations to everyone.
It's not quite that they cannot do anything not in the training data. They can also interpolate the training data. They're just fairly bad at extrapolating.
Physics is obviously incomplete and yet nobody can solve quantum gravity. Being obviously flawed doesn't mean the solution is obvious. That's the whole problem.
I think in this case people tend to underrate just how capable and flexible the basic LLM architecture is, and also underrate how many gains there are in better training versus better architecture.
I'm hosting from my home with a static ipv4 right now. It's been running for years without a single problem. I just put in a basic pf config. Everything is fine. It's not that scary.
has 691 lines. I expect it would work, as FAWK seems to be a very simple language. I'm currently working on a similar project with a different language, and the equivalent AST module is around 20,000 lines and only partially implemented according to the standard. I have tried to use LLMs without any luck. I think in addition to the language size, something they currently fail at seems to be, for lack of a better description, "understanding the propagation of changes across a complex codebase where the combinatoric space of behavioral effects of any given change is massive". When I ask Claude to help in the codebase I'm working in, it starts making edits and going down paths I know are dead ends, and I end up having to spend way more time explaining to it why things wouldn't work than if I had just implemented it myself...
We seem to be moving in the right direction, but I think absent a fundamental change in model architecture we're going to end up with models that consume gigawatts to do what a brain can do for 20 watts. Maybe a metaphorical pointer to the underlying issue, whatever it is, is that if a human sits down and works on a problem for 10 hours, they will be fundamentally closer to having solved the problem (deeper understanding of the problem space), whereas if you throw 10 hours' worth of human- or LLM-generated context into an LLM and ask it to work on the problem, it will perform significantly worse than if it had no context, as context rot (sparse training data for the "area" of the latent space associated with the prior sequence of tokens) will degrade its performance.

The exception would be when the prior context is documentation for how to solve the problem, in which case the LLM would perform better, but then the problem was already solved. I mention that case because I imagine it would be easy to game a benchmark that intends to test this without actually solving the underlying problem of building a system that can dynamically create arbitrary novel representations of the world around it and use those to make predictions and solve problems.