Hacker News | DarkUranium's comments

I used to play on top of a giant (for a kid, anyway) anthill in a nearby forest.

That's how I learned that forest ants, at least the local ones, are incredibly docile. I never got bothered by them.


Bullet ants, on the other hand, are not fun. Not even a little bit.

Have a look at Seafile's SeaDrive client for that.

Mind, I haven't actually used it in anger, as I prefer full file sync vs on-demand.


Seafile seems to have that feature, but upload only.

And I haven't tried it; unfortunately, the Android app is also... buggy.


Note that, back when it started (pre-GDPR cookie banners), this was pure malicious compliance in 90% of cases.

Most sites didn't need a banner. Even post-GDPR, many use-cases don't need one.


There's literally a name for using this on purpose: stochastic terrorism.

There's also a very good TED talk on this topic from 8 years ago: https://www.youtube.com/watch?v=iFTWM7HV2UI


I've always found the notion of "stochastic terrorism" to be elastic, effectively transforming "speech a given person dislikes" into "danger" so censorship looks like virtue.

Not to mention, you have to account for what happens if someone you hate is in power and could wield any such system to stop "stochastic terrorism" against you. This is often dismissed as an abstract what-if, but given what's been happening with world leaders these days, it should be a central consideration.


You are worried about the “what if” fallout more than about the multiple world leaders actually engaging in it. Their followers enact violence on their behalf while the leader maintains plausible deniability/enough perceived distance from the act that they can never be explicitly blamed.

You can be worried about more than one thing but clearly one is a bigger issue than the other right now.


I never said "multiple". Just the leader in the jurisdiction you live in.

And I'm genuinely not sure how to interpret your last sentence. In the US we have a President that is increasingly going after people for their speech, in quite a few cases by using the laws and policies put in place to go after dissent. He is going after colleges and businesses who have "bias against whites" using policies put in place to punish hate speech against minorities and women.


I agree with all of that. That is why I am surprised you’re downplaying the idea of “stochastic terrorism” and discouraging the term’s usage. I don’t really get it.

It’s also important to note that the MAGA movement doesn’t care what restraint is shown when they’re out of power, they simply use every tool in their toolbox and bury the sword to the hilt every time.


Yes - and the point I'm making is that their toolbox has a few additional, nasty tools for censorship because they were originally enacted with the belief that only good, honest people would use them.


I 100% agree on pretty much everything. The "webapp masquerading as a native app" is a huge problem, and IMO, at least partially because of a failure of native-language tooling (everything from UI frameworks to build tools --- as the latter greatly affect ease of use of libraries, which, in turn, affects popularity with new developers).

To be honest, I've been (slowly) working towards my own native GUI library, in C. It's a big undertaking, but one saving grace is that --- at least on my part --- I don't need the full featureset of Qt or similar.

My plan for the portability issue is to flip the script --- make it a native library that can compile to the web (using actual DOM/HTML elements there, not canvas/WebGL/WGPU). And on Android/iOS/etc, I can already do native anyway.

Though I should add that a native look is not a goal in my case (quite a few libraries already go for that, go use those! --- and some, like Windows, don't really have a native look), which also means that I don't have to use native widgets on e.g. Android. The main reason for using DOM on the web is to be able to provide for a more "web-like" experience, to get e.g. text selection working properly, as well as IME, easier debuggability, and accessibility (an explicit goal, though not a short-term one --- in part due to a lack of testers). Though it wouldn't be too much of a stretch to allow either canvas or DOM on the web at that point --- by treating the web the same as a native platform in terms of displaying the widgets.

It's more about native performance, low memory use, and easy integration without a scripting engine inbetween --- with a decent API.

I am a bit on the fence between an immediate-mode vs retained-mode API. I'll probably do a semi-hybrid, where it's immediate-y but with a way to explicitly provide "keys" (kind of like Flutter, I think?).
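To make the "immediate-y but with explicit keys" idea concrete, here is a toy sketch in Python (all names here are hypothetical, not any real library's API): widgets are declared every frame, immediate-mode style, but their state is retained between frames by looking it up under an explicit key rather than by call order.

```python
# Toy immediate-mode UI with explicit keys (hypothetical API sketch).
# Widgets are declared each frame; their state survives between frames
# because it is retained in a dict keyed by the caller-provided key.
class Ui:
    def __init__(self):
        self._state = {}  # key -> retained per-widget state

    def checkbox(self, key, label):
        # declared every frame, immediate-mode style,
        # but state is looked up (and retained) by key
        st = self._state.setdefault(key, {"checked": False})
        return st["checked"]

    def click(self, key):
        # stand-in for user input arriving between two frames
        st = self._state.setdefault(key, {"checked": False})
        st["checked"] = not st["checked"]

ui = Ui()
ui.checkbox("opts.dark_mode", "Dark mode")  # frame 1: declared, unchecked
ui.click("opts.dark_mode")                  # user toggles it
frame2 = ui.checkbox("opts.dark_mode", "Dark mode")  # frame 2: still checked
assert frame2 is True
```

The key is what lets state survive reordering: if the list of widgets changes between frames, state still follows the key rather than the call position, which is roughly what Flutter's keys buy you.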


> make it a native library that can compile to the web (using actual DOM/HTML elements there, not canvas/WebGL/WGPU)

How interesting to hear. I've been exploring a way to write cross-platform GUI apps in C, using the Sokol library and possibly DearImgui. It's very convenient how it can build to WebAssembly and run the same way as the native app. But for larger canvas sizes it does eat more processing power, more than a typical website, and I was considering using DOM elements instead of canvas.

Good point about better accessibility too, and leveraging the feature set of modern web browsers.

A cross-platform GUI library that works with native and the web, so that the same application can be built for these targets with minimal changes. With the maturity and adoption of Wasm, I expect we'll see growing development in this direction. And some people have cautioned that treating the web as a blob of canvas and a compile target for opaque binaries is a step back from the potential of the web: things like seeing the source (or at least a source map), consistent handling of text, scrolling, and accessibility features.

So I like your idea to "flip the script", I think in my own way I'm finding a similar approach.


The same is visible in having to parse some of Linux's more complex /proc entries, vs. simply using syscalls in (say) FreeBSD.

"Everything is a file" is not a bad abstraction for some things. It feels like Linux went the route of a golden hammer here.
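As a small illustration of the ceremony involved, here is a Python sketch of parsing a /proc-style text format by hand (the sample content is inlined so this runs anywhere; on Linux you would read `open("/proc/meminfo")` instead). A syscall/sysctl interface would hand you these values as typed integers directly, with no text format to reverse-engineer.

```python
# Hand-parsing the text format of /proc/meminfo (sample inlined so this
# is portable; on Linux you'd read open("/proc/meminfo") instead).
sample = "MemTotal:       16384256 kB\nMemFree:         8123456 kB\n"

meminfo = {}
for line in sample.splitlines():
    name, _, rest = line.partition(":")      # split "Name:  value unit"
    value, *unit = rest.split()              # unit is optional in /proc
    meminfo[name] = int(value) * (1024 if unit == ["kB"] else 1)

total_bytes = meminfo["MemTotal"]  # now finally a number, in bytes
```

Every consumer has to re-derive the same conventions (field separator, optional unit, whitespace), which is exactly the boilerplate a typed syscall avoids.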


That's the gist of his whole talk – that doing things "the UNIX way" (which can be defined to various degrees of specificity) has been cargo culted, and that we should reexamine whether solutions that were pragmatic 50+ years ago are still the best we can do.

The specific reason I mentioned it was because his initial example was about how much more ceremony and boilerplate is needed when you need to pretend that USB interfaces are actually magic files and directories.


Not sure if I'm just misunderstanding the article or not, but it feels like an overengineered solution, reminiscent of SAML's replacement instructions (just hardcoded, and admittedly a much better option --- but still in a similar vein of "text replacement hacks").

I know it's not the most elegant thing ever, but if it needs to be JSON at the post-signing level, why not just something like `["75cj8hgmRg+v8AQq3OvTDaf8pEWEOelNHP2x99yiu3Y","{\"foo\":\"bar\"}"]`, in other words, encode the JSON being signed as a string. This would then ensure that, even if the "outer" JSON is parsed and re-encoded, the string is unmodified. It'll even survive weird parsing and re-encoding, which the regex replacement option might not (unless it's tolerant of whitespace changes).

(or, for the extra paranoid: encode the latter to base64 first and then as a string, yielding something like `["75cj8hgmRg+v8AQq3OvTDaf8pEWEOelNHP2x99yiu3Y","eyJmb28iOiJiYXIifQ"]` --- this way, it doesn't look like JSON anymore, for any parsers that try to be too smart)

If the outer needs to be an object (as opposed to array), this is also trivially adapted, of course: `{"hmac":"75cj8hgmRg+v8AQq3OvTDaf8pEWEOelNHP2x99yiu3Y","json":"{\"foo\":\"bar\"}"}`.
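A minimal Python sketch of this scheme (the key and the "hmac"/"json" field names are illustrative): the inner JSON is embedded as a string, so even if an intermediary parses and re-encodes the outer object, the signed bytes come back untouched.

```python
import hmac, hashlib, json

key = b"secret"  # illustrative shared key

# sender: serialize the payload, sign those exact bytes, embed as a string
inner = json.dumps({"foo": "bar"}, separators=(",", ":"))
tag = hmac.new(key, inner.encode("utf-8"), hashlib.sha256).hexdigest()
envelope = json.dumps({"hmac": tag, "json": inner})

# some middleware parses and re-encodes the envelope (whitespace changes)...
reencoded = json.dumps(json.loads(envelope), indent=2)

# ...but the embedded string survives byte-identically
recovered = json.loads(reencoded)
assert recovered["json"] == inner
assert hmac.compare_digest(
    hmac.new(key, recovered["json"].encode("utf-8"),
             hashlib.sha256).hexdigest(),
    recovered["hmac"])

payload = json.loads(recovered["json"])  # only parsed after verification
```

The point is that JSON string escaping, unlike JSON object layout, round-trips to a unique unescaped value, so the signed bytes are stable.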


You can, and this will be simple and reliable... but that's solving a different (and easier) problem than the post. In the post, the author wants to still have parsable JSON _and_ a signature. Think middleware which can check the signature but cannot alter the contents, followed by a backend expecting nice JSON. Or a logging middleware which looks at individual fields. Or a load balancer which checks the "user" and "project" fields. Or a WAF checking for the right fields. In other words:

> Anyone who cares about validating the signature can, and anyone who cares that the JSON object has a particular structure doesn’t break (because the blob is still JSON and it still has the data it’s supposed to have in all the familiar places).

As author mentions, you can compromise by having "hmac", "json" and "user" (for routing purposes only), but this will increase overall size. This is approach 2 in the blog.


That's no different from the suggestion at the beginning of the article to serialize the JSON and sign the string.


> in other words, encode the JSON being signed as a string. This would then ensure that, even if the "outer" JSON is parsed and re-encoded, the string is unmodified. It'll even survive weird parsing and re-encoding, which the regex replacement option might not (unless it's tolerant of whitespace changes).

Would it be guaranteed to survive even standard parsing?

It wouldn’t surprise me at all, for example, if there are json parsers out there that, on reading, map “\u0009" and “\t" to the same string, so that they can only round-trip one of those strings. Similarly, there’s the pair of “\uabcd” and “\uABCD”. There probably are others.
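Python's standard json module behaves exactly this way, for what it's worth: both escape spellings decode to the same one-character string, so a re-encoder has to pick one of them and only that spelling can round-trip.

```python
import json

# both escape spellings decode to the same tab character...
assert json.loads('"\\u0009"') == json.loads('"\\t"') == "\t"

# ...so re-encoding has to pick one canonical spelling (here, \t),
# and the "\u0009" spelling cannot survive a decode/encode round trip
assert json.dumps("\t") == '"\\t"'
```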


Presumably when receiving the object, you'd first unescape the string (which should yield a unique output unless you have big parser bugs), check the UTF-8 bytes of the unescaped string against the signature, and only then decode the unescaped string as the inner JSON object. It shouldn't matter how exactly the string is escaped, as long as it can be unescaped successfully.
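Sketched in Python (field names illustrative), the receiver's order of operations would be: let the outer parse do the unescaping, verify the unescaped string's bytes, and only then parse it as JSON.

```python
import hmac, hashlib, json

key = b"secret"  # illustrative shared key

def verify_and_parse(envelope: str) -> dict:
    outer = json.loads(envelope)        # string unescaping happens here
    inner = outer["json"]               # the signed payload, as a plain str
    tag = hmac.new(key, inner.encode("utf-8"), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, outer["hmac"]):
        raise ValueError("bad signature")
    return json.loads(inner)            # only now interpret the inner JSON

# example round trip
inner = json.dumps({"foo": "bar"})
envelope = json.dumps({
    "hmac": hmac.new(key, inner.encode("utf-8"), hashlib.sha256).hexdigest(),
    "json": inner,
})
result = verify_and_parse(envelope)
```

How the intermediate hops chose to escape the string is irrelevant, since unescaping yields a unique byte sequence to check against.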


There are many ways to represent the JSON as binary… and all are equally valid. The easiest case to think about is with and without whitespace. Because what HMAC cares about are the byte[] values, not alphanumeric tokens.

Then, if you couple this with sending data through a proxy (maybe invisible to the developers), which may or may not alter that text representation, you end up with a mess. If you base64 encode the JSON, you now lose any benefit you might gain from those intermediate proxies, as they can’t read the payload…
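This is easy to demonstrate in Python: two serializations of the same JSON value, differing only in whitespace, produce different MACs, because HMAC sees bytes, not JSON values.

```python
import hmac, hashlib, json

key = b"secret"  # illustrative shared key
obj = {"foo": "bar"}

compact = json.dumps(obj, separators=(",", ":"))
pretty = json.dumps(obj, indent=2)

# the same JSON value by any parser's reckoning...
assert json.loads(compact) == json.loads(pretty)

# ...but different bytes, hence different MACs
t1 = hmac.new(key, compact.encode("utf-8"), hashlib.sha256).digest()
t2 = hmac.new(key, pretty.encode("utf-8"), hashlib.sha256).digest()
assert t1 != t2
```

So any proxy that reformats the body in transit silently invalidates a MAC computed over the original bytes.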


JSON encoded as a string is cursed; no one should do that, and stop suggesting it. Base64 is fine, or even Ascii85.


base64 is often even larger than an escaped JSON string, and not human-readable at all.

I'll take stringified json-in-json 90% of the time, thanks. if you're using JSON you're already choosing an inefficient, human-oriented language anyway; a small bit more overhead doesn't hurt.

(obviously neither of these are good options, just defer your parsing so you retain the exact byte sequences while checking, and then parse the substring. you shouldn't be parsing before checking anyway. but when you can't trust people to do that...)
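The "defer your parsing" option sketched in Python: keep the exact wire bytes around, verify the MAC over those bytes first, and only then parse them (key and handler names are illustrative).

```python
import hmac, hashlib, json

key = b"secret"  # illustrative shared key

def handle(raw: bytes, tag: str) -> dict:
    # verify over the exact wire bytes *before* any parsing
    expect = hmac.new(key, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("bad signature")
    return json.loads(raw)  # only now interpret the bytes as JSON

# example: the signature is over the raw bytes as received
raw = b'{"foo": "bar"}'
tag = hmac.new(key, raw, hashlib.sha256).hexdigest()
payload = handle(raw, tag)
```

No encoding tricks needed at all, as long as nothing between the sender and the verifier re-serializes the body.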


The comment you replied to was posted in good faith AFAICT. Your “stop suggesting it” is unnecessarily antagonistic.


I've recently moved from Svelte (initially 4, then 5) to Vue 3, and much prefer it.

The big issue for me was the lack of support for nested observables in Svelte, which caused no end of trouble; plus a lack of portals (though maybe the new snippets fix that?).


Mind, you'll be unable to send emails to Microsoft-owned accounts (@outlook.com, @hotmail.com, and similar).

That's because Microsoft, in their infinite wisdom, decided that a reasonable default was to use a whitelist of allowed senders, blocking everyone else by default.

There is supposedly a process to get that unlocked, but they never replied to my own request ...

