1. Quick and easy: install pihole and add every reasonable blocklist of tracker URLs you can find. Then just watch the live log.
2. Takes a bit more time: install OPNsense or pfSense. Block DNS out of your network (but allow pihole) and watch the live log of blocked DNS requests, assuming everything has been told to use pihole.
3 (bonus round). A bit more time again: create VLANs or similar, put on them the devices where you have checked every do-not-call-home option, and block their internet access. And watch the live logs of blocked traffic.
It's quite a depressing process, and I'm not sure it's worth maintaining as a live setup, but it's certainly an eye opener.
Each one of these steps blocks an order of magnitude less stuff, but it's interesting what's in each bucket. Pihole gets hits at an astounding rate.
I tried this exact setup with a combination of Ubiquiti and pihole config. It is really unmaintainable and I missed a verification / audit layer, especially for verifying that the Chinese grass/vacuum robots didn’t leak data, etc.
It would be a full time job, and then some, when the kids’ apps didn’t work due to my block lists…
Since then I have surrendered and now use a custom Cloudflare DNS endpoint.
Fwiw, Ubiquiti devices are some of the "set every setting to never call home, but it still did" devices. I can't remember if they also tried to bypass the configured DNS.
I still use it, but keep the devices on a VLAN that cannot dial out.
And I use the software, not an appliance, to manage it.
It's not just the slick UI, it's the devices themselves and how well it all works. I got fed up with wifi at home not being as good as at work. And UniFi is cheap compared to some corporate-grade stuff.
I get that anything can be insecure and it's a constant battle, as this article suggests, but I thought it was quite secure and stable generally (say, on a par with .NET or any other tool you may use to make a web app, at least?)
It has essentially the same security properties as all the modern non-C languages (ie, everything other than C, C++, ObjC), with the added bonus of largely being designed after the deserialization pandemic that especially hit Java, Python, and Ruby. ~All these modern languages are fine for security (though: be careful with serialization formats in anything but Go and Rust).
Arguably, Rust and Go are the two "most secure" mainstream languages, but in reality I don't think it much matters: you're likely to have approximately the same issues shipping in Python as in Rust (ie: logic and systems programming issues, not language-level issues).
Be wary of anyone trying to claim that there are significant security differences between any of the "modern" or "high-level" languages. These threads inexorably trend towards language-warring.
I'd point out that one advantage Go has over Rust in terms of security is the coverage of its standard library. Go has great support for HTTP clients/servers, cryptography primitives, SSH, SQL, JSON, secure RNG, etc., all in officially maintained standard libraries. The Rust ecosystem has some standards here, but the most widely used HTTP client, just as an example, is mostly maintained by one guy[1]. I think that adds considerable security risk vs Go's net/http.
The Go stdlib can't change the tar package to make it secure by default, because doing so would be a breaking change.
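To make that concrete: archive/tar hands back each header as-is and leaves extraction to the caller, so the naive loop is a path-traversal bug unless you remember the guard yourself. A minimal sketch, where extractAll and the copy step are mine, not stdlib API. (If I remember right, Go 1.20 did add an ErrInsecurePath check, but only behind GODEBUG=tarinsecurepath=0; the default stays permissive for compatibility, which is exactly the point.)

    package tarsketch

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    // extractAll is a hypothetical helper: nothing in archive/tar stops
    // hdr.Name from being "../../etc/crontab", so the caller has to.
    func extractAll(dst string, r io.Reader) error {
        tr := tar.NewReader(r)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
            // filepath.Join cleans the path, but a crafted name can still
            // resolve outside dst; reject anything that does ("zip slip").
            target := filepath.Join(dst, hdr.Name)
            if !strings.HasPrefix(target, filepath.Clean(dst)+string(os.PathSeparator)) {
                return fmt.Errorf("tar entry escapes destination: %q", hdr.Name)
            }
            // ... create dirs/files under target and io.Copy from tr ...
        }
    }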
Rust, on the other hand, has a tar package outside of the stdlib, and so it can evolve to be more secure and over time find a better interface.
We've seen that with various other packages: the Go stdlib HTTP implementation defaults to no timeouts, and thus makes it easy to DoS yourself. Ditto for TCP. The tls package has similar backwards-compatibility warts that make it less secure by default.
Forcing backwards compatibility with network protocols by baking them into the stdlib has largely not been a security win in my experience.
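For anyone who hasn't hit the timeout one: the zero-value http.Server really does wait forever, so every deployment has to remember to say otherwise. A sketch, with arbitrary durations:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        // http.ListenAndServe(":8080", nil) runs a zero-value Server:
        // no ReadTimeout, WriteTimeout, or IdleTimeout, so a slowloris
        // client can hold connections open indefinitely.
        srv := &http.Server{
            Addr:              ":8080",
            ReadHeaderTimeout: 5 * time.Second,
            ReadTimeout:       10 * time.Second,
            WriteTimeout:      30 * time.Second,
            IdleTimeout:       2 * time.Minute,
        }
        log.Fatal(srv.ListenAndServe())
    }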
You can argue that people can build packages outside of the Go stdlib too, like if the stdlib "image/draw" package is so bad it can't be used, they can make "golang.org/x/image/draw", or if the stdlib crypto package is bad, they can make "golang.org/x/crypto"... and they did, but people still reach for the stdlib because it's easier to, which makes it an active security trap.
I'm not giving rust credit, I'm giving Go a demerit for having a large stdlib which it does not have a good path to evolve around security problems.
We do have stuff like `golang.org/x/<etc>` and `rand/v2`, both of which people don't really use, which I think are clear indications that the Go team screwed up here.
Things like tls and http should have been separately versioned packages from the beginning, allowing infrequent breaking changes, and for users to update at their own pace independently of the compiler version.
As-is, every time I update the Go compiler, I also have to worry about setting a bunch of new GODEBUG flags (like 'x509sha1=1') to perform the compiler update without breaking stuff, and then separately deal with the breakages associated with those flags. Practically every Go version in recent memory has had a breaking http or tls change which has caused issues for me.
But of course they're all tied together, so to get a CVE fix in one package, I have to update the entire stdlib at once, so I have to accept some broken http change in order to fix a tls CVE or whatever.
If tls were a separate package, I could update it separately from the compiler and http package and consume security updates more quickly, and also actually update my Go compiler version without worrying about how much of my code will break.
As I said, I'm not giving Rust extra credit; it did the reasonable, normal thing of saying "the stdlib is for stuff we're pretty sure is actually stable", while Go instead said "idk, will net.Dial ever need a timeout? Who knows, let's promise it's stable forever anyways" and "the default zero value for tls version should be 1.0 forever, right", which I think deserves an obvious demerit.
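Concretely, this is the defensive pinning you end up writing so that neither the old zero-value default nor the next toolchain upgrade picks your TLS floor for you (certificate paths hypothetical):

    package main

    import (
        "crypto/tls"
        "log"
    )

    func main() {
        cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
        if err != nil {
            log.Fatal(err)
        }
        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert},
            // The zero value long meant "TLS 1.0 is acceptable" for
            // servers; spelling out the floor sidesteps both that and
            // the per-release GODEBUG dance.
            MinVersion: tls.VersionTLS12,
        }
        ln, err := tls.Listen("tcp", ":8443", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer ln.Close()
        // ... accept loop ...
    }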
Good point. If you consider the size of your dependency graph as a risk, especially for languages that encourage large dependency graphs like JS and Rust, then Go has a very clear advantage.
> Be wary of anyone trying to claim that there are significant security differences between any of the "modern" or "high-level" languages. These threads inexorably trend towards language-warring.
Hm, I think this is a reasonable take, but taken too far. Presumably this is out of a desire to avoid people arguing about this-language-feature vs. that-language-feature, but in practice "the language" also gets conflated with the tooling and the ecosystem for that language, and having good tooling and a good ecosystem actually does matter when it comes to security vulns in practice. Indeed, anyone can write SQL injection in any language, but having a culture of finding, reporting, and disseminating vulnerabilities when they happen, mature tooling to detect where vulnerable packages are being used, and a responsive ecosystem where vulnerable packages get swiftly updated: those are all things that make for more secure languages in practice, even among languages with near-identical feature sets.
What is the "deserialisation pandemic"? It doesn't have obvious web search results, and I'm struggling to imagine what about deserialisation would be common between Java and Python (except that, in both cases, I'd surely just use protobuf if I wanted binary serialisation).
In the 2000s and early 2010s there was a popular idea that it'd be neat to have (de)serialization functionality that could perfectly round-trip your language's native objects, without requiring that the objects be whatever the language uses as plain-old-data storage. In the happy case it worked super well, and basically every language sufficiently dynamic to support it got a library which let you take some in-memory objects, write them to disk, then restore them exactly as they were at some later time.
This had the obvious-in-retrospect major problem that it meant that your deserialization was functionally equivalent to eval(), and if an attacker could ever control what you deserialized they could execute arbitrary code. Many programmers did not realize this and just plain called deserialization functions on untrusted data, and even when people did become aware that was bad it still turned lots of minor bugs into RCE bugs. It was often a long and painful migration away from insecure deserialization methods because of how darn convenient they were, so it continued to be a problem long after it was well understood that things like pickle were a bad idea.
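For contrast, the reason declared-type decoding dodges this class by construction: the payload can only fill in fields of a type the receiving code already wrote down, so an attacker chooses values, never which types get instantiated or which code runs. A minimal Go sketch (type and field names made up):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Order struct {
        ID    string `json:"id"`
        Items []int  `json:"items"`
    }

    func main() {
        payload := []byte(`{"id":"a1","items":[1,2,3]}`)
        // There is no pickle-style "construct whatever class the payload
        // names" step here, hence no gadget chains: the bytes either fit
        // Order or the call errors out.
        var o Order
        if err := json.Unmarshal(payload, &o); err != nil {
            fmt.Println("rejected:", err)
            return
        }
        fmt.Printf("%+v\n", o)
    }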
For many years, these were the most widespread serverside RCE vulnerabilities; Rails YAML might be the best-known, but there were a bunch of different variants in Java serialization, and a whole cottage subfield of vulnerability research deriving different sequences of objects/methods to bounce deserializations through. It was a huge problem, and my perception is that it sort of bled into SSRF (now the scariest vulnerability you're likely to have serverside) via XML deserialization.
You said that Go and Rust managed to avoid these issues. Is there anywhere I can read about how they avoided it? And why other popular modern languages can't?
Given the size of Elixir's runtime, that seems extremely unlikely. The kinds of bugs you expect to see in interpreted/VM code are different from those in compiled languages like Rust; someone is going to find memory corruption, for instance, when you index exactly the right weird offset off a binary, or do something weird with an auto-promoted bignum. We still find those kinds of bugs in mainstream interpreted languages built on memory-unsafe virtual machines and interpreters.
I'm not saying Elixir is insecure; far from it. It's a memory-safe language. Just, it would be a weird language slapfight to pick with a compiled language.
My comment isn't about compiled vs. bytecode languages. It's about memory management. For example:
• In Elixir, each process runs in isolation with its own heap, which prevents one process from directly accessing or corrupting the memory of another process. In contrast, goroutines share the same address space, which means that a bug in one goroutine can potentially corrupt the shared memory and affect other code.
• Elixir uses immutable data structures by default, so nothing can be changed in place. Go, on the other hand, allows mutable state, which can lead to race conditions if not managed correctly. In other words, Elixir is inherently thread safe and Go is not.
• Elixir uses a generational garbage collector with per-process heaps, meaning that the garbage collection of one process can't impact another process. In contrast, Go uses a mark-sweep garbage collector across its entire memory space. This can cause global pauses that can open a window for denial-of-service attacks.
• Elixir uses supervisor processes to monitor operational processes and restart them if they crash. Go's error handling can lead to memory leaks and other undefined behavior if not carefully managed.
• Elixir inherently protects against race conditions, whereas Go relies on tools like the race detector and developer onus to avoid them.
Yeah, none of this is plausible. These are mostly true statements about Elixir that have no bearing on the memory safety of Go, in the sense that term is used in vulnerability research.
Ironically, a flip side of the complaints about how Go lacks power is that a lot of the "standard" security vulnerabilities actually become harder to write. The most obvious one is lacking the "eval" that a dynamic language has; more subtle ones include things like: there is no way to take a string and look up a type or a method in the runtime, so things like the Ruby YAML vuln are not assisted by the language level. To write something like that into Go, you'd have to actually write it in. Though you can, if you try hard enough.
But, as sibling comments point out, nothing stops you from writing an SQL injection. Command injections are inhibited by the command only taking the "array of strings" form of a command, with no "just pass me a string and we'll do shell things to it" provided by the language, but I've dispatched multiple questions about how to run commands correctly in Go from programmers who managed to find []string{"bash", "-c", "my command with user input from the web here"}, so the evidence suggests this is still plenty easy enough to write.

Putting the wrong perms or no perms on your resources is as easy as anything else; no special support for internal security (compare with E lang and capabilities languages). And file access is still based on file names rather than inodes, so file-based TOCTOUs are the default in Go (just like pretty much everywhere else) if you aren't careful. It comes with no special DoS protection or integrated WAF or anything else. You can still store passwords directly in databases, or as their MD5 sums. The default HTML templating system is fairly safe, but you can still concatenate strings outside of the template system and ship them out over an HTTP connection in bad ways. Not every race condition is automatically a security vulnerability, but you can certainly write race conditions in Go that could be security vulnerabilities.
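Since the bash -c escape hatch comes up so often, here are the two shapes side by side; userInput stands in for anything attacker-controlled:

    package main

    import (
        "log"
        "os/exec"
    )

    func list(userInput string) {
        // Command injection: bash re-parses the whole string, so ";",
        // "$()" and backticks inside userInput all run as shell.
        bad := exec.Command("bash", "-c", "ls -l "+userInput)
        _ = bad // shown for contrast; don't run this one

        // No shell involved: userInput is a single argv element and is
        // never re-parsed, however weird its contents.
        if err := exec.Command("ls", "-l", userInput).Run(); err != nil {
            log.Print(err)
        }
    }

    func main() { list(".") }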
I'd say Go largely lacks the footguns some other languages have, but it still provides you plenty of knives you can stab yourself with and it won't stop you.
I've been running govulncheck against my repos for a while, and I have seen some real vulnerabilities go by that could have affected my code, but rather than "get arbitrary execution" they tend to be "didn't correctly escape output in some particular edge case", which in the right circumstances can still be serious, but is still at least less concerning than "gets arbitrary execution".
> I'd say Go largely lacks the footguns some other languages have
With the glaring exception of "I forgot to check the error code", which you need a linter (e.g. as provided by golangci-lint) for. It's critically important for security that you know whether the function you just called gave you a meaningful result! Most other languages either have sum types or exceptions.
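For concreteness, the usual shape of the bug, with all names hypothetical. The unused-variable compile error doesn't fire because err has already been read once, which is why the linter ends up load-bearing:

    package acl

    import "errors"

    var errDenied = errors.New("denied")

    func loadSession(user string) (string, error) { return "sess-" + user, nil }

    // checkACL is a hypothetical check that can fail partially.
    func checkACL(sess, resource string) (bool, error) { return false, errors.New("acl backend down") }

    func handler(user, resource string) error {
        sess, err := loadSession(user)
        if err != nil {
            return err
        }
        // BUG: err is reassigned here and never checked. If checkACL
        // ever reports (true, err) on a partial failure, the error is
        // silently dropped and we serve anyway. golangci-lint catches
        // this (ineffassign/staticcheck); the compiler does not.
        allowed, err := checkACL(sess, resource)
        if allowed {
            return nil // serve the resource
        }
        return errDenied
    }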
No it's not. This is what I meant, cross-thread, when I suggested being wary of arguments trying to draw significant distinctions between memory-safe-language X and memory-safe-language Y. Error checking idioms and affordances have profound implications for correctness and for how you build and test code. Programmers have strong preferences. But those implications have only incidental connections to security, if any. Nevertheless "security" is a good claim to throw into a "my language is better" argument.
I don't even use Golang, I maybe read two Golang repos a year, I find these errors in almost every repo I look at (probably because of the selection effect: I only look at the code for tools I find bugs in). One of them I remember was a critical vulnerability of exactly this form, so :shrug: Perhaps I'm just grotesquely unlucky in the Golang projects I see, but that makes maybe 10% of the Golang error-handling bugs I've found to be security bugs.
I'll gesture at it. It's not an open source tool, so I can't point at the code (and in fact I just checked and I don't have perms to see the Jira ticket I caused to be raised!), and I am wary of describing security bugs in company-internal code. But in general terms it was a service that attempted to check whether a request was allowed, and it ignored errors from that check. (I just searched history for a bit to find the error in the absence of any actual details about it, but it was a while ago and I failed.) Sorry this is not a very satisfying answer.
Any language where errors are returned as values will allow you to ignore errors (if you don't have proper linting set up, and unless it has something fancy like linear types). I've even seen a similar error in Haskell code, where someone called an isLoggedIn function inside a monad with the expectation that it would short-circuit evaluation, whereas in fact it just returned a Bool.
In that case you can just use the `?` operator to short-circuit (and clippy will warn you if you forget to do that).
In other words, while language design can't fully prevent you from ignoring precondition checks, it can make it harder to forget, or even force you to actively ignore precondition failures.
But isn't the idiomatic Go solution something like this?
    func checkPermissions(success func())
Like anything, you can still screw it up if you try hard enough, but it should nudge most in the right direction. The talk of error handling seems like a distraction or a case of someone confusingly trying to write code in another language using Go syntax.
Obviously you are not forced to think of the user when designing an API, but no language can force that on you. Not even Haskell can save a developer who doesn't care, as noted in an earlier comment.
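Fleshed out a little, with hypothetical names; the point of the shape is that the operation is unreachable except through the check, so "forgot the check" and "forgot the early return" stop being expressible at call sites:

    package authz

    import "errors"

    var ErrDenied = errors.New("permission denied")

    func isAllowed(user string) bool { return user == "admin" } // stub

    // checkPermissions is the only code that can invoke success, so a
    // caller can't run the operation first and consult the check later.
    func checkPermissions(user string, success func()) error {
        if !isAllowed(user) {
            return ErrDenied
        }
        success()
        return nil
    }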
Go linters do a pretty good job of spotting where error return values have been ignored, so I'd suggest that the kind of bug the OP is referring to is pretty unlikely to happen in a Go project that's properly configured.
Sure - my question is "why do you need to set up the third party linters when they're so critical to the correctness of your program" really. It's the general "yeah we'll give you these footguns which are the first thing every developer will learn about during their first incident; good luck, and I hope you know you need to do things this way!" attitude I object to.
> my question is "why do you need to set up the third party linters when they're so critical to the correctness of your program" really.
No doubt the same reason it is also critical in Haskell (see comment about isLoggedIn function): Developers not knowing what they are doing.
If you work within the idioms and generally accepted programming practices this isn't a problem. It only becomes a problem when you get a developer who wants to "go their own way" without understanding why the norms exist. The linter is a crutch to support their "bad habits".
There’s no simple solution when it comes to ignoring errors. Some errors should be ignored and some shouldn’t. So your only lines of defense are linting heuristics and tests.
I would agree that languages which handle errors via exceptions have an advantage here, as they make not ignoring errors the default behavior. But even then, it’s obviously still possible to indicate error conditions of various kinds via return values, in which case they can still be ignored thoughtlessly. (And you also have all the bugs caused by unhandled exceptions to deal with.)
> Some errors should be ignored and some shouldn’t.
Assuming the function author followed Go conventions, you never need to consider the error for the sake of using the function. Granted, there are some bad developers out there who will do something strange that will come to bite you, but that is not limited to errors (or any particular language).
You may still need the error for your own application requirements, but application requirements are pretty hard to forget. At the very least, you are going to notice that your application is missing an entire feature as soon as you start using it.
While you might be forced to if faced with a developer who doesn't know what the hell they are doing, that is not the convention.
Consider the case of (T, error). T should always be usable, regardless of error. At the very least, if there is nothing more relevant to provide, the function should return the zero value for T. And we know that in Go the zero value is to be made useful. Go Proverb #5.
In practice, this often means something like (*Type, error), where the zero value for *Type (nil) is returned on failure. In which case the caller can check `if t == nil`. No need to consider the error at all. It is there if your requirements dictate a need for it (e.g. reporting the failure), but for using the result it is entirely unnecessary.
If you leave the caller in a state where T can be invalid, you screwed up horribly or are purposefully being an asshole. Don't do that. That is Go 101 type stuff.
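A minimal sketch of the shape being described, with hypothetical names:

    package store

    import "fmt"

    type User struct{ Name string }

    var users = map[string]*User{}

    // Lookup returns nil on failure, so under the convention described
    // above the caller can branch on the value alone and only consult
    // err when it needs the reason for reporting.
    func Lookup(id string) (*User, error) {
        u, ok := users[id]
        if !ok {
            return nil, fmt.Errorf("no user %q", id)
        }
        return u, nil
    }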
> Consider the case of (T, error). T should always be usable, regardless of error.
Go convention dictates that T is only valid (usable) when error is nil. Equivalently, if error is non-nil, then T is invalid (unusable). In either case, the caller is obliged to verify that error is nil before trying to access T.
You're still only halfway there, as we discussed earlier.
Go calling convention is that you have to assume T is invalid unless proven otherwise, because we know there are developers who have no business being developers, and they will screw you over if you're not careful. The Go style guide promotes documenting in a function comment that your function does return a valid T, so that users don't have to guess about whether or not you are competent.
But the convention on the function authoring side is to always ensure T is valid. Just because others shouldn't be writing code does not mean you need to be among them.
As Go promotes limiting use of third-party packages by convention, as a rule most of the functions you call are going to be your own. So most of the time you can be sure that T is valid, regardless of error state. Yes, sometimes you are forced to rely on the error, as we discussed already in earlier comments. Such is life.
Isn't this the same in any language though? "Check if you have permission, then ignore the result" seems like something the language cannot protect you from.
I mean, Golang has an unused variables compile error, which presumably is trying to do precisely this. It's like they got so close to forcing the user to acknowledge the possibility of errors, and then stopped just before the end!
Mmm, that's fair. I tend to forget about it because it's not something I personally struggle with but that doesn't mean it's not a problem.
I'd still rate it well below a string eval or a default shell interface that takes strings and treats them like shell does. You assert down below that you've seen this lead to a critical vulnerability and I believe you, but in general what happens if you forget to check errors is that sooner or later you get a panic or something else that goes so far off the rails that your program crashes, not that you get privs you shouldn't. As I say in another comment, any sort of confusing bit of code in any language could be the linchpin of some specific security vulnerability, but there are still capabilities that lead to more security issues than some other capabilities. Compared to what I've seen in languages like Perl this is still only "medium-grade" at best.
And I'm not trying to "defend" Go, which is part of why I gave the laundry list of issues it still has. It's just a matter of perspective; even missing the odd error check here or there is just not the same caliber problem as an environment where people casually blast user-sourced input out to shell because the language makes it easier than doing it right.
(Independent of language, I consider code that looks like
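    if err := checkAllowed(op); err != nil { // illustrative names
        return err
    }
    doOperation(op)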
architecturally broken anyhow. It seems natural code to write, but this is a form of default allow. If you forget to check the operation in one place, or even perhaps forget to write a return in the if clause, the operation proceeds anyhow. You need to write some structure where operations can't be reached without a positive affirmation that it is allowed. I'd bet the code that was broken due to failing to check an error amounted to this in the end. (Edit: Oh, I see you did say that.) And, like I said, this is independent of Go; other than the capabilities-based languages this code can be written in pretty much anything.)
I think it's a reasonable observation but it isn't a fair comparative security criteria. The subtext behind error checking critiques is that languages with idiomatic sum type returns avoid authz vulnerabilities, in the same way that memory-safety in Go eliminates UAF vulnerabilities. But authz vulnerabilities are endemic to the mainstream sum type languages, too; they're much more complicated as a bug class than just "am I forced to check return codes before using return values".
Sum types are one of the few things I miss when switching from other languages back to Go. I like them a lot. But I think they're wildly overstated as a security feature. Sum type languages have external tooling projects to spot authz vulnerabilities!
How is it that people "forget to check errors" but not other types, even though they are all just 1s and 0s? Or, to put it another way, why do programmers forget how to program as soon as they see the word "error"?
It seems to be a real phenomenon, but I can't make sense of how it can happen. It is not some subtle thing like misspelling a word in a string constant. You are leaving out entire functionality from your application. It is almost on the order of forgetting to add the main function.
I would think it's a mix of not being sure exactly what to do on error and not wanting to undergo the effort of writing error logic. You have to switch from "basic skeletal structure of the program" to "cover all bases", which isn't simple. So it's easy to have no or rudimentary error handling, and by the time you want to change it, it's hard to change. Like, "malloc can fail, but it would be a lot easier right now if I assume it won't".
Depends on the application! There's a reason we have the concept of "failing closed" vs "failing open": sometimes (very often, in fact) it's correct to shut down under attack, rather than to open up under attack.
The subtext of that comment cuts against the argument you're trying to make here: a panic following a missed error check is always fail-closed, but exception recovery is not.
One thing to note about data races in Go is that the safe Go subset is only memory-safe if you do not have data races. The original post alludes to that because it mentions the race detector. This situation is different from Java, where the expected effect of data races on memory safety is bounded (originally due to the sandbox; now the bounded effects are more of a QoI aspect). Data races in Java are still bad, and your code may go into infinite loops if you have them (among other things), but they won't turn a long into an object reference.
The good news is that the Go implementation can be changed to handle data races more gracefully, with some additional run-time overhead and some increase in compiler complexity and run-time library complexity, but without language changes. I expect this to happen eventually, once someone manages to get code execution through a data race in a high-profile Go application and publishes the results.
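For the curious, the canonical demonstration looks roughly like this. It's a sketch of the mechanism, not a reliable exploit, and `go run -race` flags it immediately:

    package main

    // An interface value is two words, a type word and a data word,
    // written non-atomically. A racing reader can observe the type word
    // of one value paired with the data word of the other: type
    // confusion, and from there potentially memory corruption.

    var v interface{}

    func main() {
        go func() {
            for {
                v = 42 // int
            }
        }()
        go func() {
            for {
                v = "torn" // string
            }
        }()
        for {
            _ = v // unsynchronized read may see a mixed pair
        }
    }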
These arguments would be more compelling if they came with actual exploitable vulnerabilities --- in shipped code, with real threat models --- demonstrating them, but of course the lived experience of professional programmers is that non-contrived Go memory safety vulnerabilities are so rare as to be practically nonexistent.
About footguns, I'd like to mention an important one: in Go, it's hard to deserialize data wrongly. It's not like Python and TypeScript, where you declare your input data to be one thing and then receive something else. It's a feature that makes server code, which after all is Go's niche, considerably more reliable.
Safety isn't 0% or 100%, and the more a language offers, the better the result. Go is performant, safe, and fairly easy to read and write. What else do you need (in 99.9% of the cases)?
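An example of what "hard to deserialize wrongly" means in practice (field names made up). A string where an int was declared is an error at decode time, not a value that sails through to be discovered later:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Payment struct {
        Amount int    `json:"amount"`
        To     string `json:"to"`
    }

    func main() {
        var p Payment
        err := json.Unmarshal([]byte(`{"amount":"1e9","to":"bob"}`), &p)
        fmt.Println(err)
        // json: cannot unmarshal string into Go struct field
        // Payment.amount of type int
    }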
Only if you use a deserializer that's tied to your classes, and don't put everything in a dict. And then only if the data encounters an operation that doesn't accept it. But many operations accept, e.g., strings, arrays, ints, and floats. Is there even an operation that throws a TypeError when given a float instead of an int?
Pydantic only helps (AFAIK) when you're letting it help, and you actually use the correct type information. It's not difficult to use, but it's optional, and can be faulty.
I was referring specifically to security footguns like having a string eval. While one can construct code in which that is the critical error that led to a security vulnerability, that can be said about any confusing bit of code in any language, and I would not judge that to especially lead to security issues.
This actually is a security footgun. In Java or C# you can't get security issues by trying to update a reference from multiple threads, because it's always atomic. In Go you can create type confusion because interface pointer updates are not atomic.
This sets the bar ludicrously low for "security footgun". If this is a "security footgun" then what is string evaluation in a dynamic scripting language, a "security foot-nuke"?
Granted, there is no sharp line that can be drawn, but given my personal career I'd say "I've encountered it personally at least once" is a reasonable bar, if not quite excessively low. (tptacek would have to set the bar somewhere else, given his career.) Concurrency issues causing a security issue because of type confusion on an interface in a Go program is not an "every time I crack open a program, oi, this security vulnerability again" situation like bad HTML escaping or passing things straight to a shell. I mean, "concurrency issues causing type confusion on an interface" is not something I've ever personally witnessed at all, let alone it actually being a security issue rather than a difficult-to-trace panic.
And I will reiterate, I already say that any bug can become a security issue in the right context. That doesn't make them all "security footguns".
> This sets the bar ludicrously low for "security footgun". If this is a "security footgun" then what is string evaluation in a dynamic scripting language, a "security foot-nuke"?
Not really. Apart from dangerous serialization formats (e.g. Python's "pickle") it's not at all easy to eval a string in modern scripting languages.
> i thought it was quite secure and stable generally
It is, but security isn't a "given" anywhere. XSS, SQL injection, dependency vulnerabilities, etc. are possible in any language, regardless of how "secure" it claims to be.
The headings are all pretty general (versioning, tooling, scanning, testing) but the contents are Go-specific.
It's a pretty good article IMO and could/should be replicated for other languages as well.
Reminds me of when SQL injection was the hot security problem, which was mainly associated with PHP: not the language itself, but reams and reams of low-quality online tutorials trying to keep things simple by concatenating GET parameters straight into an SQL query.
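The same mistake and its fix, sketched in Go's database/sql to match the rest of the thread (table and column names made up; placeholder syntax varies by driver):

    package storage

    import (
        "database/sql"
        "fmt"
    )

    func findUserBad(db *sql.DB, name string) (*sql.Rows, error) {
        // The tutorial pattern: name is pasted into the SQL text, so
        // name = "x' OR '1'='1" rewrites the query.
        q := fmt.Sprintf("SELECT id FROM users WHERE name = '%s'", name)
        return db.Query(q)
    }

    func findUserGood(db *sql.DB, name string) (*sql.Rows, error) {
        // Parameterized: the driver ships name as data, never as SQL.
        return db.Query("SELECT id FROM users WHERE name = ?", name)
    }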
There are many ways to skin a cat, but my advice on how to try it the Excel way (assuming a db background) would be:
Try dumber things. Sounds stupid, but you don't need rules and structure, just data :)
Denormalise more often to break down the problem; the data and the problem are your goal, not structure (as much as in a db).
Yes, the period-per-tab type of thing is quite common, as at some point you want to close the period and never change it.
Lean into the non-standardisation to handle the real world. E.g. for most of your budget it's one line per item per month, but this one area flexes with headcount so it gets its own page, and tax is balanced in month x so I'll just overtype all the formulae there when the real numbers come in.
Also, if the model is complex, try naming fields and showing the formula in a cell next to it to remind you how it's calculated (if not already using it, check out "format as table" to do this for tabular data).
And yes, pivot the crap out of everything.
There is also "add to model", which gives you Power BI-type modelling in Excel and can also be handy and fast.
Not an exhaustive list, and for lots of things a db is better when you know how to use it... but those are some of the "I get it" scenarios for me.
Decentralised could be each counter putting 5000 or so ballots into piles, with people wandering around witnessing for various parties, all working a rigid process across the nation. Each count publicly announced in the room before witnesses.
Totally standardised, coordinated, and decentralised. But neither fragmented (structurally) nor incoherent.
But agreed, it would be a million times worse with a single electronic system.