
Of course it is up to the application to decide case-by-case.

Just keep in mind that if your query is started by a user request, but does not need to terminate in the context of the user request that triggered it, you can move the query to a separate goroutine and decouple it completely from the triggering user request.

Also, aborting a query is safe: the transaction gets rolled back, removing unwanted side effects. You cannot do harm by cancelling, but you can do harm if you don't cancel.


Perl# basically.


Nope, this is a strawman argument you're making.

Parent is asking for granular access control over these very advanced and double-edged features, which is a perfectly valid request.

Having permissions is the norm for native apps on mobile, and finally it's slowly becoming the norm on desktop as well.

Browsers are the new OS; they have to implement permissions too, with nothing enabled by default.


It's not "they", it's Daniel J. Bernstein. That's the reason :)

(If you don't know: he is a top cryptographer who can write amazingly correct code. However, he also has a very big ego...)


The funny thing about that is I find his code to be very difficult to read (even just the snippets in the linked CVE illustrate this).

And his attitude is just bonkers to me. "I'm not going to fix this exploitable security issue because I assume that people will configure their environment in a particular way." What? That's... flat-out irresponsible.


He does not have any responsibility to anyone. He released his software into the public domain, with the source included, for free.


When you release software for the world to use, tell everyone it's secure, even put up a bug bounty… that kinda means you are taking responsibility.


Putting aside the question of whether one can have responsibility for freely-released work (especially when one has made a big deal of the money offered in exchange for this kind of finding), at the time this bug was discovered the software was emphatically not in the public domain, and it was difficult to distribute modified versions despite the available source.


I just found this on Wikipedia, where he raised his bug bounty and at the same time cited these "unexploitable" bugs:

https://en.wikipedia.org/wiki/Qmail#Security_reward_and_Geor...


qmail was essentially unmaintained for a long time; people were distributing patches to it, but there were no upstream releases.

(Similar story with djbdns/tinydns.)

"Recently" he released his code with new licenses, so that people could finally start distributing updated versions, rather than the previous approach where lots of people were sharing conflicting patches for various features (e.g. IPv6 support for AAAA records in tinydns.)


notqmail.org has long been the de facto maintained version. They fixed this a long time ago.



Filo is upset that he did not bother to check the code (that originally came from SUPERCOP, a benchmarking tool) he blindly included in go's xcrypto. Here is DJB's tweet: https://twitter.com/hashbreaker/status/1108637226089496577

(regardless, you should not directly encrypt a large amount of data; even NaCl suggests against it https://nacl.cr.yp.to/valid.html)

In addition, both Filo and Garrett have a bone to pick with DJB due to their personal political beliefs and his involvement in the Appelbaum case, and I found both of them to be extremely dislikeable and unable to accept their own faults in personal discussions that I had with them in the past (regarding different issues). Considering that this was a subpost, my opinion of them is even lower now.


My opinions around DJB have very little to do with my political beliefs, but rather more to do with spending time with people who are far better at cryptography than he is without having anything approaching his ego.


> a bone to pick with DJB due to their personal political beliefs

"Political beliefs" is a weird way to say DJB has stepped up to defend at least three people accused of sexual abuse by multiple victims.


I actually had https://blog.cr.yp.to/20160607-dueprocess.html in mind when mentioning political beliefs, but even then I think that this still falls under the "political belief" label - the belief that anyone accused of rape should not have any form of defence. Anyway, the person who is being sued was part of a harassment campaign against multiple people, including Bernstein himself, so I can't see why it would be a bad thing for him to send his declaration.

For anyone interested, his declaration is here: https://www.courtlistener.com/recap/gov.uscourts.cand.340308...

And here is the most objective and complete story of the Appelbaum events that I have personally seen so far https://github.com/Enegnei/JacobAppelbaumLeavesTor/blob/mast... (if anyone has anything better please do let me know)

> accused of sexual abuse by multiple victims

Including false accusations by others in the name of the so called victims. Such as the "Alice" case. (if I am not mistaken this specific accusation was published by the person being sued themselves)


People can be onboard 100% with his argument for due process (I am) and simultaneously 100% against his association with Jake Appelbaum, whose (perhaps subcriminal) misbehavior was widely reported in private in our industry prior to the bevy of as-yet-unproven rape accusations.

Appelbaum is and has been a scumbag regardless of the fact of whether or not he has been adjudicated a rapist in a court of law.

People who associate with scumbags (and, indeed, defend them in particular) aren’t great, and can and should be subject to criticism for their choices regarding scumbags.

Fortunately, it isn’t a simple dichotomy. I agree with due process for imprisoning people. I also agree in public criticism of entirely legal misbehavior and freedom of association. I don’t respect people who defend scumbags socially (defending subcriminal scumbags from prison is another matter), and djb is certainly that.


> Appelbaum is and has been a scumbag

I do not know him, do you? Most accusations against him that I have seen have been either by the person being sued or some form of hearsay.

It is not too unlikely that he is a scumbag to be honest, but it is still something that I do not know.

> and, indeed, defend them in particular

I contest this claim. He did not defend Appelbaum in this instance; even in his declaration he claims that he is unaware whether Appelbaum is a rapist. The lawsuit is against Lovecruft specifically. Regardless, I do not believe that scumbags do not deserve to be defended. Everyone does, as long as the defence has reasonable points, that is.

Btw, can't this post of yours be interpreted as defending Lovecruft if we follow this logic? If so, I find it ironic that you criticize Bernstein for something that you are doing yourself.

> I also agree in public criticism of entirely legal misbehavior

You must love sites like Kiwifarms then. It is one thing to have open criticism and debate and another to have dog-piling and harassment based on rumors - the ability to defend yourself and have others defend you is one of the most important things that distinguishes the two.

> and freedom of association

Do you also believe that people should be free to refuse to deal with minorities by any chance? This is something that is implied by the freedom of association after all.

> I don’t respect people who defend scumbags socially

Again, the pot calling the kettle black. I do not get this logic, to be honest; I will explain why with an example. Let's take a scumbag - Jeff Bezos, for example - and suppose I start saying that he is a murderer out of nowhere. Is nobody allowed to defend him or ask for evidence just because he is a scumbag?


> even nacl suggest against it

You're defending djb's decision by pointing to another of his projects, which in turn cites another email from djb. I'm not saying you're wrong, but it's not exactly a reviewed position.


I meant it in another way. If he had taken DJB's advice this would not be an issue, so DJB can't be blamed for it. Sorry for the misunderstanding.

That being said, he seems to agree with this. After all he followed said advice in his tool age.


Your criticism of the messenger who brought further evidence of djb's longstanding refusal to deal straightforwardly with security reports is not on topic, IMO.

Not everything is a simple dichotomy.


(regarding qmail) It was a security bug back in 2005. It stopped being a security bug when DJB mentioned the memory limits on the official page.

Regarding the salsa20 implementation: I just mentioned in my previous message why this was not a bug and the only reason that people were upset over it was due to Filo's incompetence.

As for evidence of DJB dealing straightforwardly with security reports: https://news.ycombinator.com/item?id=23250748

I think that https://old.reddit.com/r/crypto/comments/72w42c/statement_re... would be a better example of DJB not properly handling security reports.


salsa20 was added in 2012; the warning file was added to the repository in 2016 at the earliest (it is not clear when - which is very bad for security, and also shows the move was not advertised).

Incompetence is a strong word on the wrong target...


> the warning file was added into the repository in 2016 the earliest

There is no official public repository. It gets released in tarballs.

This does indeed make him seem slightly less incompetent though.

> which is vary bad for security

It is a framework for benchmarking cryptographic algorithms.


http://cr.yp.to/talks/2007.11.02/slides.pdf I don't see ego in these slides. I see a brilliant programmer acknowledging his mistakes and learning from them. I really enjoyed running qmail in early 2000s and following djb's crypto work later. He is brilliant indeed.


You should read some of his public.... ekhm... “discussions” with Wietse Venema on various security forums in the 90’s. It was very entertaining, but also clearly showcasing djb’s huge ego.


A lot of it would get you banned from HN and similar fora today.


Here is my favourite that I remember to this day: https://seclists.org/bugtraq/1998/Nov/117


Hah, "How many people relaxed after installing tcpd -DPARANOID, instead of pestering their vendors for a real fix?" Sounds similar here ;)


Yeah, the lack of self-reflection is pretty palpable.


I don't understand. Most people with a big ego would not want a critical vulnerability associated with them. Can you elaborate?


He had such confidence in his software and abilities that he thought it was actually secure, with no bugs, and posted a bounty for any exploit that could be found. Patching it means acknowledging it's an exploit, and that his code was not without bugs.

Given that his principles of writing secure software (included in the Qmail guarantee[1]) includes this: "7. Write bug-free code." that might be a bit hard for him to swallow.

1: https://cr.yp.to/qmail/guarantee.html


He did pay out some of the other bounties that he offered; at the moment I can only think of the ones for his cryptographic algorithms, though.

Edit: See https://news.ycombinator.com/item?id=23250748


Well, he used it with memory-limit command-line switches, so it could never be exploited. So he was technically correct. One should not use so much memory for a mail server; way too risky.

Problem is, these switches were not the default, people didn't use them because they are dumb, and DJB never cared to properly maintain it - like limiting memory by default, 32-bit-only builds, or such.


> One should not use so much memory for a mail server, way too risky.

Is there a table or formula I can consult that will give this particular dumb person (myself) a handy guide for what amounts of addressable memory will introduce security risks for particular applications? Apparently more than 32 bits is obviously[0] a problem for email; what about databases? Should I feel bad I use more than 64GB of memory in my DB installations? Am I being irresponsible? What about web servers? How much risk does each additional bit of memory add?

My final question is, why does pretty much every other software maintainer not have a problem fixing the memory allocation themselves, obviating the need for external tools to fix these issues? I guess they're going the extra mile!

[0] So obvious a problem that sendmail, postfix, and exim don't require me to apply workarounds for it for some reason. Very irresponsible of them, if you ask me.


> Problem is, these switches were not default, people didnt use it because they are dumb

Pushing complexity from a very small group (in this case, one person) who knows the system intimately to many orders of magnitude more people that are meant to have a functional knowledge of how it operates but not necessarily be intimate with it is a losing proposition, and not any tenet of how I would consider developing secure software.

If the software is only supposed to be run under process limits, and over a specific process limit all bets are off security wise, then the program should probably check and report problems with large process limits when it starts. Or, as you posit, dying if built for 64 bit, since its assumptions don't necessarily hold.


My opinion on this is that if you're going to claim that you write the most secure software in the world, it should be secure by default. It shouldn't require you to modify the configuration in a particular way, or start it in a particular way, in order to be secure. The more details you need to know in order to secure something, the less likely you are to tick off all those boxes.

To me, this is just DJB's ego not allowing him to admit that he made mistakes.


His ego has transcended objective reality and he claimed (in 2005, and continues to claim in 2020) it isn't a vulnerability.


He asserted that it was unexploitable, and thus not a vulnerability. Fixing the code would have implied that it might have been a vulnerability, and he might have been wrong. Can't have that.


Go follows the Plan9 system call name instead of the UNIX one. Dial is much more powerful than the UNIX dance of "getaddrinfo"+struct sockaddr init+connect(2).

It might not seem much (and higher level languages usually abstract away the craziness that UNIX sockets are in C), but that's what the OS still gives you in 2020...


Plan 9's dial() interface is implemented in a user land library. In terms of the interfaces provided by the kernel, connecting a socket is anything but simple in Plan 9. See https://9p.io/sources/plan9/sys/src/libc/9sys/dial.c

The reason no standard interface has replaced getaddrinfo + socket + connect is probably because it's just barely simple enough for common usage in C, and C was never the language you turned to when you wanted to write something short and sweet--that's why Unix environments have always hosted a myriad of other languages. If initializing a network connection were as complex as in Plan 9, doubtless Unix would have provided a more succinct libc interface for the common case.

The BSD Sockets API is also close to the simplest possible interface for supporting all the various address and socket option combinations that are possible. (The kernel provides mechanism, not policy.) So even if POSIX, Linux, or whatever included a better standard interface for initializing a connection, it would have to be in addition to the BSD Sockets API (or equivalent).


BSD Sockets are infamous for being a significant obstacle to using anything other than IPv4 (getaddrinfo is copied over from XTI, aka the Streams-related "evil" API).

They are literally a low-level API that happened to be part of an IPv4-only stack because DoD had a short deadline to get IPv4 ported to VAX and other new Unix machines.

And the OS should provide a policy when it comes to networking; otherwise you end up with the never-ending story of working around others' software to implement one.


Plan9's "Dial" is definitely a product of its time, literally meaning to "dial" a phone.

More information in the relevant man entry: https://www.unix.com/man-page/plan9/2/dial/

It specifically mentions making a call.

The example even dials "kremvax". Usenet over AX.25 from the 80s. Pretty sure that is _actual_ dialling.

But, I'm curious as to why Go's "Dial" is more powerful than a standard connection in any other language.


Plan9's dial is derived from the OSI approach of separating service from protocol (aka implementation), with the name service as a first-party component (BSD Sockets have it stapled on, and it hurts).

The example of calling kremvax over Datakit specifically goes to show that all that specifies Datakit usage is the "dk" string - nothing else.

Similarly with XTI (and other OSI-oriented network APIs), what you are telling OS is "I want to have a stream oriented connection with graceful close, to service Y on host X", and you don't have to care at all whether it will be IPv4, IPv6, TUBA, CLNS over Ethernet, or direct serial modem exchange using HDLC.

Go doesn't have all of that flexibility because it works from userspace, but it reuses the "service, not implementation detail" approach and lets you concentrate on human-readable domain names and service names instead of providing a maze of "hardcoded IP" issues.


Go does not have exceptions (exactly because of this problem).

For passing information, you use channels, which can pass more than one value back to the caller.

The big thing is not running tasks in the background but the tight integration of channels and runtime scheduler that allows having an invisible event loop on top of what is synchronous programming.


I think there are many reasons why Go doesn't use exceptions, though this is possibly one of them.

Channels are nice, but they are not that hard to replicate in other languages (probably less efficiently). They have their uses, but they can't completely supplant shared memory or async/await - all 3 are nice to have in different situations.


As the article explains (I know it's hard to comment after actually reading), Java moved away from using several green threads on top of a single system thread.

What Loom and Go do is to schedule green threads on a bunch of system threads and spawn more system threads when they get blocked doing synchronous system calls.


No need for snark, I did read the article. I was just pointing out the irony that it was a big deal when the JVM moved away from green threads and now it’s a big deal when it moves back. It’s a comment on the hype cycle. And, as I said in my OP, it’s a good change and I’m happy to see it.

(And if you want to be pedantic, IIRC the green threads were originally mapped to the Java process, not a thread, because threads were either unavailable or immature in Linux when Java 1 came out; I can’t remember which)


You're missing the point, which is what the person replying was trying to explain. We're not simply going back and forth, we're learning from past experience and improving implementations.

The green threads on the JVM were not the same kind as green threads in Go (and Loom), they would block on IO. Can't speak for Loom, but Go automagically reschedules your green thread when it blocks which allows other threads to run while waiting.

The point is they weren't rescheduled when they blocked in the JVM; every process has a main thread.


It’s hard to understand how I can miss my own point... yes, I know that we are not simply going back and forth; I did read the article. I know that the new implementation of green threads is more sophisticated than the original implementation.

But I find the cycle - what I called the hype cycle - from internal scheduling to external scheduling and back again, interesting, and I wonder what, if anything, we as an industry can learn from this?

ISTM that Java 1.2 could have improved on green threads instead of moving to os threads. So, is there something we can learn from these two transitions that will help us all make better decisions in the future? The use of OS threads and all the complexity that this has caused has cost the industry hundreds of thousands of hours of developer time. If we can learn some lessons from this then isn’t that a good thing?


I don't think this is a matter of hype cycle at all. There are two things that changed:

1. Threading got much faster and lightweight. This is what Java was initially trying to work around, until it didn't have to any more.

2. The problem moved to handling as many sockets concurrently as possible. Even lightweight system threads are too heavy for scaling linearly with the number of connections (too much context-switching overhead, too much space for stack, etc.)

Green-threading has become a good idea again because we now have a kernel API that is used to multiplex a lot (but not all) I/O system calls.

Today the Go runtime uses epoll/kqueue to read from a big bunch of sockets whenever something new happens on any of them. This takes only one system thread.

The API model of epoll/kqueue implies some way to handle concurrency in your user code: this can either be callbacks (or async/await syntactic sugar) or green threads and CSP (channels and so on.) This is why green threading is having a comeback.

(Sorry for implying you did not read the article!)


That's OK, lots of people fail to RTFA, but I really like reading about this stuff.

> Green-threading has become a good idea again because we now have a kernel API that is used to multiplex a lot (but not all) I/O system calls.

OK, that makes a lot of sense. I had to read up about how epoll is different from select/poll (that's how long it's been since I worked in C :). Clearly epoll was needed to make green threads efficient, but from what I can see, by the time epoll and friends were widespread, the pthread model was entrenched in Java.


I really don't want a video to autoplay on the side of the screen when I'm reading some news article, for example.

It was so much better before when there just were less ways of doing what the website wanted and more ways of doing what you wanted with your browser and your computing power.


Sure, it's totally reasonable for somebody to dislike these features. I just object to the claims that it doesn't do anything substantially different than what it did 10 years ago.


I'm pretty sure this all happened in spite of what SoftBank tried so that investors would not discover how bad WeWork is as an investment.

We could start from the fact that it is a real estate company and not a tech unicorn...


Honestly, I was rooting for WeWork to IPO, as I had most of my IRA lined up to short them. Yes, you can't short in an IRA, but I was going to buy near-the-money puts against them. I was salivating for months waiting for this windfall; too bad it didn't come to pass :(


On the other hand, a protocol that has been battle-tested and proven to work in the real world makes it much easier for a standards body to accept: there is less risk of making a standard that nobody uses.

The important point is not to make proprietary deviations from the standard. Of course, Google is guilty here (Chrome-only web services, mostly), but when it comes to network protocols they haven't behaved badly, especially when others were pushing for HTTP/2 without TLS and other harmful ideas...

