
Second, taxes don't disappear into nothingness - they pay for civilization. It is clearly beneficial to everyone to live in a society where people are well cared for and have healthcare, public education, welfare, etc.

Or, and that's another possibility we should consider, taxes disappear into the pockets of bureaucrats and a handful of rent seekers with very little effect on civilization.

Before arguing for additional taxes, one must first show some evidence that we're getting reasonable ROI on what we're already paying, and https://transparentcalifornia.com/ shows exactly the opposite.

I am 100% in support of public healthcare, for example, but it will continue to look like a financial absurdity as long as a knee MRI costs 3x what it does in any other country. Same thing with higher education, public housing, etc.


While we're considering new possibilities, maybe there should be a wealth tax AND more accountability in allocation of government funds.


I used to work at Mailgun. Believe me, there's a lot more to getting your mail delivered. Simply configuring DNS stopped being enough maybe around 2006-08.
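By "configuring DNS" I mean the SPF / DKIM / DMARC baseline, which you can eyeball for any domain (example.com and the "s1" selector below are placeholders):

  $ dig +short TXT example.com                 # SPF record usually lives here
  $ dig +short TXT _dmarc.example.com          # DMARC policy
  $ dig +short TXT s1._domainkey.example.com   # DKIM public key for selector "s1"

That part is table stakes; sender reputation, IP warm-up and feedback loops are where the actual work is.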


Can they be voted out in the next election?


I honestly don't know. We thought we could the last time there was a vote, and they won with a strong majority. The real problem here is that there is no good alternative. The left-wing parties are a joke at best. I'm kinda torn between all the major parties since I don't believe that any of them would work. That's why I voted for the new kid on the block (Momentum).


That sounds like he's winning legitimately. He may not be the candidate people want, but just from your comment it sounds like he's the best they have.


"Opposition parties are so inept: they could not even operate corruption."

(I do not know whether it is original or an old joke: I heard it from the Hungarian stand-up comedian Tibor Bödőcs, who is btw basically the only comedian left who dares to criticize (or is talented enough to get away with criticizing) every side of the political spectrum here, nowadays mostly Orbán of course...)


The answer sits on the 4th page:

We have increased the percentage of our revenue from segments which we believe benefit from attractive growth dynamics. In 2019, over 85% of our revenues came from our Multicloud Services and Apps & Cross Platform segments, which we refer to as our “Core Segments”. In contrast, in the twelve months ended September 30, 2016, less than 10% of our revenue came from our Cloud Office and Managed Cloud Services service offerings.

Basically Rackspace went from being a hosting company with "fanatical support" to someone who manages your AWS account and your cloud apps. They still offer their traditional bare metal servers, and they have their cloud, but that's not where the growth is.


Looks like gross profit margins decreased in aggregate by almost 4% in both "growth" categories of their Core Revenue (page 79). Doesn't look good for future margin expansion opportunities.


With gadgets like iPhones and iPads around, I think we'd be better off designating general-purpose computers for professional use, and not having to worry about hypothetical "grandpas" installing malware while checking the weather, especially on Linux.


Yeah, I actually agree with you. If the goal were to get more people to use desktop Linux / PCs, making them more like an iPhone would be good. But I don't think that's a good goal to have.


I do not know how you're doing it, but I just ran quite a few searches for obscure film photography and chemistry topics and the results were much better than DDG and less commercialized than Google. Very impressive!


While I fully support strong privacy & encryption, there is absolutely no logic in this argument because it tries to build upon an unrelated hypothetical.

A stronger argument should have tried to explain how this violates the constitution:

https://www.eff.org/deeplinks/2020/03/earn-it-act-violates-c...


It creates a committee which is extremely likely to pass policies which violate the Constitution. In fact, if there were really effective ways to reduce crime online which didn't violate it, they likely would have been passed twenty years ago without trying to piggyback on another bill.

I agree, the EFF should have elaborated more on concrete examples of how it violates the Constitution.

The space of potential bad policy is gargantuan and hard to explore without additional hints or context. Given the hints enumerated in rhetoric, tech company policy, recent events, and the bill itself:

It may create potential for witch hunts. Think of some recent conspiracy theories here.

It may censor speech that merely comes close to looking "suspicious", with whoever gets to define "suspicious" (most likely the government) deciding what that means.

It may push companies to use filters which over-censor speech and it may be impossible to get a remedy at the scale at which they operate.

Trolls could abuse these mechanisms to silence opinions they disagree with. If a government decides to smear someone's reputation through a coordinated campaign, it could argue that they are a danger to society and pressure platforms to silence them.


You'll have to explain to me how it's an "unrelated hypothetical". Both situations:

* Indicate that you have a right
* Which you probably don't have a pressing need for at the moment
* But the lack of a need in the immediate future doesn't mean you won't ever have a need for it

The only real difference between the two, as far as I can tell, is the actual right you're not-using-but-still-protecting.

What am I missing?


The logic used here is argument by analogy. Talking about the constitution would be a different argument.


Analogy != "something different entirely". If we accept such weak criteria for acceptable arguments, one could argue for or against anything by picking a convenient "analogy".

Pointing out that weakened encryption is analogous to a wiretapping order issued for everyone without cause would be a stronger argument.


I believe the criticism you are attempting to make is that those are like apples and oranges. However, I completely disagree.


You were pre-conditioned to believe that it's a feature. Traditionally, the datacenter itself is supposed to provide HA for both the power and the network.

"Availability zones" exist for AWS convenience, not for yours. It allows for cheaper and simpler DC ops, removes the need for redundant generators, simplifies network design, and makes "under-cloud" maintenance easier. It's a feature for them, a headache for you, and a (brilliantly addressed) challenge for AWS product marketing.


I don’t know about you, but I’ve been in several DC power failures where the fault was in the transfer switch.

It sure is nice to have separate failure domains with low enough latency between them to pretty much ignore it in application architectures.


Colos will rarely lose power, but having your line cut by a backhoe is pretty common. Even in top-tier facilities I observed some loss of service every 6-12 months; add in some misconfiguration risk and colo failure becomes a frighteningly common affair.

This can be mitigated through redundant service providers, careful checks on shared interconnects and other measures - but having "hard" failure isolation at the facility level will also get you there with less chance of someone doing something dumb.


This kind of thinking is how you end up in a newspaper article where you're in a building in New York babysitting a generator during a hurricane while everyone sane is serving from Atlanta.

You're doing it wrong. Plan to lose sites. If you plan to never lose a building, you're just setting yourself up for pain by optimizing for the wrong kind of redundancy.


I disagree. AZs are completely independent data centers kilometres apart. For any business that needs low latency but still wants full HA (e.g. finance systems), it's a blessing. This requirement cannot be covered by separate regions (too much latency), and within a single facility something like an airplane crash would still take everything out.


For the love of God, please stop using SSH keys. Almost every "company X is hacked" title on HN can be traced to leaked SSH credentials.

Use auto-expiring certificates that are issued after a proper SSO+2FA flow:

https://gravitational.com/blog/how-to-ssh-properly/
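Under the hood these are plain OpenSSH user certificates. A minimal sketch, assuming you run the CA yourself (the SSO+2FA flow sits in front of the signing step, which is what the tooling in the link automates):

  # one-time: create the CA key pair
  $ ssh-keygen -t ed25519 -f user_ca
  # per login: sign the user's public key with a short lifetime
  $ ssh-keygen -s user_ca -I alice -n alice -V +8h id_ed25519.pub
  # on every server, trust certs signed by the CA (in sshd_config):
  #   TrustedUserCAKeys /etc/ssh/user_ca.pub

Nothing long-lived sits on laptops: the certificate expires in hours, so leaking it is worth very little to an attacker.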


> Almost every "company X is hacked" title on HN can be traced to leaked SSH credentials.

I don’t think that’s anywhere close to true but I’d be interested if you have reason to believe it is or some examples. Or maybe it’s intentional hyperbole?


Intentional hyperbole to sling blog spam. That link is to a site the OP claims in their profile.


Most of the time, you're lucky to get people to move off of passwords.


Run htop or similar, sort by the "shared memory" column, and see how much more memory you'd need per process if shared linking did not exist.

I think the author's using the wrong method to make his point. Dynamic linking feels out of place for most long-running server-side apps (a typical SaaS workload). One can argue that in a mostly-CLI environment there's also not much benefit.

But even an empty Ubuntu desktop runs ~400 processes, and there dynamic linking makes perfect sense. libc alone would have to exist in hundreds of reincarnations, consuming a hundred-plus megabytes of RAM, and I'm not even talking about the much, much heavier GTK+ / cairo / freetype / etc. libraries needed for GUI applications.
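A rough way to see the sharing (the exact libc filename varies by distro, so the pattern below is deliberately loose):

  # how many running processes have libc mapped right now
  $ grep -l libc /proc/[0-9]*/maps 2>/dev/null | wc -l
  # the actual mappings for one process (here: the grep itself)
  $ grep libc /proc/self/maps

All of those processes share a single copy of the read-only text pages.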


Go executables are statically linked. It makes deployment a breeze.

I think you overestimate how much you save by dynamically linking libc. Each executable uses only a small portion of libc, so the average savings is going to be a handful of kilobytes per executable.


In theory, yes. In practice, however, static linking with glibc pulls in a lot of dead weight; musl comes to the rescue, though:

test.c:

  #include <stdio.h>

  int main(int argc, char **argv) {
    printf("hello world\n");
    return 0;
  }
Dynamic linking (glibc):

  $ gcc -O2 -Wl,--strip-all test.c
  $ ls -sh a.out
  8.0K a.out
Static linking (glibc):

  $ gcc -O2 --static -Wl,--strip-all test.c
  $ ls -sh a.out
  760K a.out
Static linking (musl):

  $ musl-gcc --static -O2 -Wl,--strip-all test.c
  $ ls -sh a.out
  8.0K a.out


Static linking (https://github.com/jart/cosmopolitan)

    jart@debian:~/cosmo$ make -j12 CPPFLAGS+=-DIM_FEELING_NAUGHTY MODE=tiny o/tiny/examples/hello.com
    jart@debian:~/cosmo$ ls -sh o/tiny/examples/hello.com
    20K o/tiny/examples/hello.com
Note: Output binary runs on Windows, Mac, and BSD too.


I hope you can forgive me for asking, but what exactly is this (cosmopolitan)? It looks interesting but I can’t really tell what it’s trying to be.


True, but a couple of GitHub imports and 10 lines of code generate a 100 MB binary. But, to be fair, I guess we're okay with shipping huge binaries nowadays because we're literally shipping whole environments with Docker anyway.


To be fair, you're not really supposed to be shipping a full system in a docker image if you can help it; you're supposed to layer your application over the smallest base that will support it (whether that's scratch, distroless, alpine, or a minimal debian base). Of course, I'll be the first to agree that "supposed to" and reality have little in common; if I had a dollar for every time I've seen a final image that still included the whole compiler chain...


In my experience, most who build fat images with the compiler chain and everything do so because they're simply not aware that multi-stage builds are a thing now.


Yep. And just general awareness of how Docker works and how to best use it. And that's not a knock on them; it's a whole topic unto itself, and lots of these folks are developers who are just trying to get their stuff working without having to go off and learn all about Docker, and I have a hard time blaming them for that. Which, of course, is why they keep a handful of us sysadmin folks on staff to help tidy up;)


> Go executables are statically linked. It makes deployment a breeze.

But what are we comparing to what, then?

A few custom Go applications, compared to whole classic Linux distros, where deployment is not really the same thing, but is still a breeze?

So yeah, to different needs, different tools.


> Run htop or similar, sort by "shared memory"

The top two entries are 80 and 36 kB respectively. RES is 2.3 gigabytes (over four orders of magnitude larger) between them. Even multiplying the top SHR by your ~400 processes gives 32 MB (still two orders of magnitude off). That is not a counter-argument; that is agreement that dynamic linking is useless.

Edit: RES, not VIRT.


That is an extremely misleading figure. Shared memory is page-aligned entire libraries dropped into RAM. Statically linking would, as the article shows, only use on average about 4% of the symbols available from the libraries, and the majority of this would not end up in RAM with your statically linked binary. And if you used a more selective approach, dynamically linking to no more than perhaps a dozen high-impact libraries and statically linking the rest, you'd get a lot of the benefits and few of the drawbacks.
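You can eyeball the ratio yourself; the library path below is the usual x86-64 Debian/Ubuntu location, adjust for your distro:

  # dynamic symbols a binary imports from all of its shared libraries
  $ nm -D --undefined-only /bin/ls | wc -l
  # symbols glibc exports in total
  $ nm -D --defined-only /lib/x86_64-linux-gnu/libc.so.6 | wc -l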

Put the cold hard numbers right in front of someone's face and still the cargo cult wins out.


> Put the cold hard numbers right in front of someone's face and still the cargo cult wins out.

Come on.

You did not actually measure the figure GP mentioned and which you are disputing. Your methodology and assumption — that 4% external symbol use translates into 4% size used — is a plausible guess, but you haven't supported it with data.

Even if you had measured the figure you're accusing GP of ignoring, the tone of your remark is just aggressively condescending and inappropriate. Tone it down.

To address your other claims:

> Shared memory is page-aligned entire libraries dropped into RAM.

There is a good reason to page- or superpage-align code generally; it burns some virtual memory but reduces TLB overhead and therefore misses / invalidations, which are very costly. You would want to do the same with executable code in a static-linked binary.

> the majority of [the small fraction of static linked library used] would not end up in RAM with your statically linked binary.

Huh? Why do you claim that?

> And if you used a more selective approach, dynamically linking to no more than perhaps a dozen high-impact libraries and statically linking the rest, you'd get a lot of the benefits and few of the drawbacks.

I think that claim is plausible! But it wasn't an option presented on your blog post, nor was it discussed by GP. Prior to that comment, discussion was only around 100% vs 0% dynamic linking.


> There is a good reason to page- or superpage-align code generally; it burns some virtual memory but reduces TLB overhead and therefore misses / invalidations, which are very costly. You would want to do the same with executable code in a static-linked binary.

But most code isn't performance critical. Thus trying to align functions to page boundaries is just wasting memory. Even in performance critical code, aligning to cache line sizes is enough and aligning to page boundaries doesn't provide any advantage.


Not only is it wasting memory, but it may well be outright hurting performance by increasing cache line aliasing and increasing TLB overhead and misses/invalidations! Avoiding page alignment can be a performance gain! https://pvk.ca/Blog/2012/07/30/binary-search-is-a-pathologic...

Like, sure, mapping your executable's code section in at a page boundary is probably fine, but I think trying to align individual functions to page boundaries would be a counterproductive mistake as a general strategy.


> I think trying to align individual functions to page boundaries would be a counterproductive mistake as a general strategy.

Yes — which is why no one does that. I don't know where bjourne came up with the idea.


> But most code isn't performance critical.

Wasting TLB slots on your unimportant code still pessimizes your hot code.

> Thus trying to align functions to page boundaries is just wasting memory.

No one aligns individual functions to page boundaries; you align the entire loadable code segment to a page (or preferably, superpage) boundary.
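You can see that alignment with readelf; on a typical Linux binary the LOAD segments carry page-sized (or larger) alignment:

  $ readelf -lW /bin/ls | grep LOAD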

> Even in performance critical code, aligning to cache line sizes is enough and aligning to page boundaries doesn't provide any advantage.

This is a different kind of optimization (avoiding cache line contention) than I was talking about (optimizing TLB slot use).

Yes, forced page alignment doesn't help cache line contention any more than forced cacheline alignment does. But that's irrelevant for code, generally: (outside of self-modifying code, which is extremely uncommon) code doesn't share a cacheline with memory that will be mutated, and thus doesn't contend in that way.


Then I don't know what you're getting at. Clearly the most efficient approach would be fitting your performance-critical loop into one or at most two consecutive pages, making the TLB a moot point.


> That is an extremely misleading figure. Shared memory is page-aligned entire libraries dropped into RAM.

First of all, I seriously doubt it (I haven't looked closely, but if library loading is similar to mmap, it should only count the segments actually used).

But that shouldn't even matter, as most of these libraries are fully utilized. All of libc is used collectively by 400+ processes, and the same is true for the complex multi-layer GUI machinery. The output of ldd $(which gnome-calculator) is terrifying; run it under a profiler and see how many functions get hit, you'll be amazed.
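If you want to try this yourself (ltrace has to be installed, and tracing a GUI app is slow, so treat it as a rough sketch):

  # how many shared libraries get pulled in
  $ ldd $(which gnome-calculator) | wc -l
  # summary of library calls actually made during a short session
  $ ltrace -c gnome-calculator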

> Put the cold hard numbers right in front of someone's face and still the cargo cult wins out.

The coldness of your numbers did not impress. And calling a reasonable engineering trade-off "cargo cult" doesn't get you any points either.

Static linking is better than dynamic linking. Sometimes. And vice versa. That's how engineering is different from science: there are no absolute truths, only trade-offs.


@ddevault: performance is only part of the story. The poverty of semantics of C static linking is atrocious -- it could be fixed, but until it is we should not statically link C programs.

