Hacker News | tuldia's comments

The job description of a sysadmin always required knowing at least one interpreted and one compiled programming language.

Programming is a subset of skills within the sysadmin role.


> The job description of a sysadmin always required knowing at least one interpreted and one compiled programming language.

You'd be surprised. A large chunk (say, more than 50%) of the sysadmin job market consists of guys who can set up a Windows LDAP and/or some Cisco and Wi-Fi equipment, but no more.

It's why the 'DevOps' moniker was invented, to differentiate the old-school Unix sysadmins from the LDAP-and-Exchange guys.


Scalable in which sense?

Assuming that this is something like pgbouncer or pgpool that sits between the client and the database, and that you have a limit on connections to the database as well as on the number of client connections you can keep, what value does this add compared with those other (more mature, battle-tested, included in major distros) projects?


PgBouncer is single-threaded. If you have an 80-core box, it is a huge waste to terminate your SSL in a single-threaded proxy pooler.


Starting from version 1.12.0 you can actually run multiple PgBouncer processes listening on the same port, which eliminates some of the issues.
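
For anyone curious, a rough sketch of that setup (it assumes PgBouncer >= 1.12 on a kernel with SO_REUSEPORT, and that each instance's config sets "so_reuseport = 1" under [pgbouncer]; the instance count and paths are only illustrative):

    # each pgbouncer-N.ini uses the same listen_port; the kernel then spreads
    # incoming connections across the processes
    for i in 0 1 2 3; do
        pgbouncer -d /etc/pgbouncer/pgbouncer-$i.ini
    done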


Yes, we use a cascade of PgBouncers. But it's hard to maintain, actually. Also, PgBouncer was not actively developed for years. I consider the new PgBouncer features (port reuse, SCRAM-SHA-256) an accomplishment of Odyssey to some (small) extent :)


Not sure what you mean that it wasn't developed for years.

They had at least one release every year (in 2018 they had only one).


This year they had 3, and the year is not over yet :)


Given the amount of M$ ads DDG has been showing lately, it does not surprise me that they might be partnering with Bing.


> M$

Hello, 1990s Slashdot called and would like its tired old '$' satire back.


M$ would like their tired old 1990s business practices back, oh wait, they never left. Just like the M$ shorthand, which we see less simply because we talk about them less.

It's a reputation they fully earned. You can write "Microsoft" in full whenever you like if you prefer. I think it's pretty reasonable to casually express contempt at every opportunity for any business that puts money vastly further ahead of ethics than it needs to. You can disagree with that if you like too.

Facebrick is the new M$. What do we have for Goog? "do no evil" with a strikethrough, for more money? $GOOG?


My friend, I think you missed the sarcasm and possibly some satire in my comment.


DuckDuckGo is already partnering with Bing - that's where they get their results from.


Might? They are getting most of their search results and ads from Bing. It's no secret. https://help.duckduckgo.com/results/sources/



Search for:

1. "fix incorrect mayonese".

2. "fix incorrect lazagna"

and compare with:

1. "fix incorrect mayonnaise".

2. "fix incorrect lasagna"


I guess pretty much any tool that copies files will do that too. I like that!


Wi-Fi firmware is non-free and is not included in the official ISO for a reason.

Search for "debian non-free iso" and you will probably find what you are looking for.

> ...nightmare maze of wrong links and secret knowledge.

Please, don't exaggerate.


> Wi-Fi firmware is non-free and is not included in the official ISO for a reason.

And the result is that 99.99% of people can't get 99.99% freedom because of 0.01% of people doubling down on the 0.01% of freedom. I love Debian, but I consider that gatekeeping a great crime against regular people who could be empowered and informed from minimal freedom to nearly total freedom instead of kept out by arcanity and hardlining.

That's why I like the link I posted. Because it says "this will actually work on your computer" AND also informs you about user freedom and the harm of proprietary device drivers.

> Please, don't exaggerate.

Let's perform an exercise, shall we? Starting from www.debian.org, reading from top left to bottom right as the English language is read, how many words must you pass over, how many links must you pass over, which links do you have to click, and how do you know to click those links and not the other ones in order to find an iso that installs on a market standard laptop over WiFi? The decision tree is much deeper than you acknowledge.


> Starting from www.debian.org, reading from top left to bottom right as the English language is read...

I guess you should start with the first link, the one that says "About Debian". There you can learn what Debian is about and align your expectations.

If you think it works for you, then go back to www.debian.org, press Ctrl+F and type "download"; there is a big button in the top right corner. It is probably what you need (amd64, netinstall).


All I hear you saying is that you think regular people with regular lives and regular products and regular concerns don't deserve to get almost total freedom from Debian because of that missing last little bit, and, too bad so sad, they should just go somewhere else. I disagree with that stance.


It's not what most people probably need, though. A standard market laptop requires non-free firmware for the wifi to function and thus let the netinstaller proceed.


I think it's actually important to understand how much the computer manufacturers don't really appear to want the free state to become the default. (Not from a "look at these jerks" perspective, but from a benchmarking perspective.)

The project is explicitly about free software, so you should not be able to walk in the door and casually/accidentally walk out with non-free software. But more importantly, it should be a fairly straightforward process to identify "how many non-free sources did I need to incorporate in order to make my piece of 2004 hardware boot and run acceptably?" vs "how many ... in 2019?"

One of the ideals of the Debian distro is also reproducibility, for reasons I won't go into since it's not particularly relevant, but for anyone who hasn't seen it before already: https://isdebianreproducibleyet.com/

The non-free parts will (potentially, depending on what kind of non-free I guess...) always keep this number under 100%

I love the Debian that Gotham Needs, I've never seen this, and the big red asterisk on "should" handles this concern neatly ;)


> Please, don't exaggerate.

I am a software developer who has used Linux exclusively since about 2000 and has contributed indirectly to Debian.

Last week I wanted to set up Debian on my new work laptop. Naively and without thinking much I downloaded the default image, so the wifi card didn't work.

I was on holiday, abroad with friends. Even with both the buster and testing non-free images the wifi card didn't work.

After three hours (downloading, installing a CD creator on a friend's Mac, etc.) I got frustrated, downloaded an Ubuntu image and it just worked.


I think of Ubuntu as Debian + the yucky driver bits.


In recent history, though, it was also + the yucky terminal advertising, secret Amazon searching, and Unity interface bits. Thankfully at least they gave up on Unity so that Gnome can progress faster.


Yeah, the Unity interface really messed up the recent releases of Ubuntu.


As a long time, on and off Debian user, I've actually never had debian-installer install firmware properly. For the longest time, using the minimal install image, putting the firmware debs (or loose files) in the correct directory just plain didn't work. I think at some point, it started working and I could use wifi instead of ethernet to install, but even now the installer still doesn't install AMD firmware, so some kernel modesetting funny business doesn't work and I just get a black screen on first boot (but the system is otherwise functional). The same applies to the firmware-included non-free image.

I mean, I can (and do) manually apt-get the right firmware packages at some point, either popping a shell during install or after first boot, but it is definitely a maze of some kind, especially if you don't know what package contains the firmware you need.


Agreed. In rare cases when WiFi doesn't work right at first boot for me, I just attach a mobile phone or connect via ethernet, download one package and... connect via WiFi. It's just a very minor inconvenience - I prefer knowing that the official Debian ISO is 100% free and having to manually opt in to anything else.
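
For reference, the whole manual opt-in is roughly this (a sketch; firmware-iwlwifi and the iwlwifi module are only examples, use whichever firmware-* package matches your card):

    # enable the non-free section, then pull in the missing firmware
    sudo sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list
    sudo apt-get update
    sudo apt-get install firmware-iwlwifi   # or firmware-realtek, firmware-atheros, ...
    sudo modprobe -r iwlwifi && sudo modprobe iwlwifi   # reload the driver to pick it up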


Sometimes there is no problem to be solved; all these tools just complicate everything.

Do this instead:

1. Use flat YAML files. No loops or conditionals, no complexity.

2. One (single) YAML file templated by Ansible, just for secret/sensitive stuff (a sketch of this layout follows below).

3. Done.

Boring is better and is easy to diff.
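
A rough sketch of what I mean (the file names and the vault.yml vars file are made up for illustration):

    # layout: everything except the secrets file is plain, diffable YAML
    #   manifests/deployment.yaml    <- flat, committed as-is
    #   manifests/service.yaml       <- flat, committed as-is
    #   manifests/secrets.yaml.j2    <- the single Ansible/Jinja2 template
    # render only the secrets file; everything else is used verbatim
    ansible localhost -m template \
      -a "src=manifests/secrets.yaml.j2 dest=manifests/secrets.yaml" \
      -e @vault.yml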


I work at a small company - maybe 40 engineers. Our app is still a monolith, but we have secondary systems for processing data and communicating with external systems. It's not the most elegant thing ever engineered, but it's not bad given our security and compliance requirements.

I mention all this to illustrate that we are nowhere near NetAppAmaGooSoft scale. Nevertheless our deployment is complex. We can't just have a few hundred identical machines. There are many heterogeneous parts, and they have to be hooked up to each other. We currently do this with ~20K lines of HCL applied with Terraform. It's mostly written by some hella-smart infra engineers, and is very well factored.

Still, it's a BEAST. The Terraform Enterprise workflow is tedious, and writing configuration is a lot of work. We would love to replace it, with ...something better. There are alternatives out there, but nothing that's obviously much better. It wouldn't be worth the migration effort. As far as I can tell, this is the state of the art, and it sucks.

My coworkers are sick of hearing me say "We are not Google", so I'm sympathetic to the YAGNI argument, but there really is a problem here. Flat YAML files would be a nightmare for us. I bet there are a lot of companies out there that have worse solutions than we do. The default of "each team rolls their own ad hoc deployment tools" masks it somewhat because it's not obvious that the company has 17 solutions to the same problem, with varying levels of effectiveness and reliability, all of which are expensive to write and maintain, and none of which will be reused when a new team gets organized.

Terraform is a valiant attempt to solve the problem with middling results. We can do better, and we need more attempts! I'm excited that CUE exists. It's not ready for my use case yet, but it's very promising. The best thing about it is that it scales well, up and down. If you have simple needs, you can just write flat CUE to start. It's just JSON with some syntax sugar. The fancy stuff can come later, but it's there when you need it.


What if you are using already-existing tooling that takes very verbose YAML files? I've seen Concourse CI pipelines that push past 5k lines of YAML, where 50% of the content is repeated 10-line blocks, and there is tons of repetition across different YAML files.


Duplication is far cheaper than the wrong abstraction.

I prefer having lots of dumb files to dealing with the cognitive load of a tool that in the end will generate lots of files anyway.


> Duplication is far cheaper than the wrong abstraction.

I'm going to chew on this. For me, Don't Repeat Yourself has been one of my highest values. My former colleagues would copy and paste code everywhere. It was a pain to make changes to, and I relished refactoring it.

But I also regret some of the libraries I wrote in my early years. They aren't designed how I would today, but now several applications depend on them.

One thing I will say is that it's okay for your first draft to be ugly. It helps to see all the duplication before you design the abstraction.


I would second the parent. DRY was a false God. I would say RCL: reduce cognitive load.

Obviously if there is a manageable way to reduce repetition, take it, but I would not add a lot of complexity for the sake of brevity. That's turning your dev team into a data compression algorithm made of meat.


Cognitive load is also present from vastly duplicated code and config: having to remember to update or take into consideration 20 other places in your codebase any time you make a change.


Yes, both extremes can increase cognitive load. The ideal is simple, effective, parsimonious abstractions, but that's much easier said than done.

I was just arguing that adding a lot of complexity to reduce duplication is an exercise in diminishing returns pretty fast.


This is so true. I say this all the time to my team. Given two choices, go with the one that produces less cognitive load.

I used to say "less complexity" but I've found that even for me, sometimes when I do something that seems simple and less complex, it ends up requiring more cognitive load (it's not descriptive enough, or it tries to do too much "magic", etc).


Perhaps DRY was a good shortcut to RCL and SST (single source of truth). Those two are really important. DRY is an approximation to them (a good one in most cases).


Are you speaking about DRY on a syntactic basis, or the version from The Pragmatic Programmer ("Every piece of knowledge must have a single, unambiguous, authoritative representation within a system")?


Ah, thanks, now I realise I've been taking DRY too literally and what I described as "single source of truth" is actually what DRY tends to mean:

https://en.wikipedia.org/wiki/Don't_repeat_yourself


I would say "was intended to mean.". The acronym is catchy and seems sufficiently self explanatory that I think your initial interpretation may be more common.

I've been jokingly pushing for over-application of syntax-focused DRY to be called "Huffman coding"


One thing that's missed in the common conception of DRY is that the original formulation in The Pragmatic Programmer spoke in terms of pieces of knowledge.

I won't speak to which is "really" DRY, but I think the original formulation is more useful. The more typical focus on surface syntax misses out in two respects.

First, a single piece of knowledge might be represented in multiple places even when they look different. For instance, if I'm saying "there's a button here" in HTML and in JavaScript and in CSS, there won't be any syntactic repetition but that's not DRY per TPP.

Second, just because there is syntactic repetition doesn't mean you're encoding the same piece of knowledge. Sometimes two pieces of code happen to be the same, but each looks that way for different reasons and are more likely to change independently than together. In that case, unifying them isn't DRY (per TPP). I joke that you're not improving that code, simply compressing it.

I think that original formulation is pretty spot on, although even then I would note an exception for deliberately saying important things in different ways for error detection and clarity, but only if they are actually checked against each other.


The thing is, for me, configuration and code feel like two different things. It's like worrying more about the system components and their relationships than the "code" itself.

When you treat configuration as flat files your configuration becomes a 1:1 mapping of the thing you have running.


YAML has shorthand for objects: https://blog.daemonl.com/2016/02/yaml.html. If you're repeating them, there's always this option.
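
For example, anchors (&) and merge keys (<<: *) let you write one of those repeated blocks once and reuse it everywhere. A generic sketch, not specific to any particular tool:

    cat > pipeline.yml <<'EOF'
    defaults: &defaults
      image: golang:1.13
      retries: 3
    jobs:
      - name: build-api
        <<: *defaults
      - name: build-worker
        <<: *defaults
        retries: 5        # individual keys can still be overridden
    EOF

Whether merge keys are honoured depends on the YAML parser of the tool consuming the file, so check before leaning on them.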


Or just use native data structures.

If you're throwing it all into a docker container as a single artifact you don't need a separate language to patch into a complicated config parser. Just use hardcoded objects/maps in whatever language you use.


What if your language is compiled?


PostgreSQL never ceases to amaze me.

The great thing is that it is a mature tool and you don't need to bring another beast to the zoo.


> Without Docker everything new becomes a project on its own where you need to call in "ops" for the smallest of things.

I hope that one day we will stop echoing that or framing it as a bad thing.

It's always good to involve people. Writing, building and running software is a shared responsibility.

I truly believe that if you are nice to your ops and explain what you want to achieve, they will help you. It is a two-way street.

It is not a tool that will "fix" how humans interact.

Also, put yourself in their shoes, it could be that:

"With Docker everything new becomes a snowflake on its own where you need to call in "dev" for the smallest of things." -- or uncompress the image and look around how/what a specific image does things when it start, place app/config/data files, etc...


Depends on the incentives placed on ops.

All too often, they're responsible for handling things when they break, but not responsible for getting features out the door.

Hence, they become a huge gate to get anything out; they have no incentive to help you.

Meanwhile, you get dev owning some of the operational burden, and it works pretty well. Until you have compliance needs to separate them.


I remember working at a startup with separate ops and no docker. I just remember it causing a lot of friction and it slowed us down. I see no reason why ops people wouldn't enjoy docker just as much as devs. I found it cleaner on average than system config trackers like ansible or puppet, and could reduce the complexity of your configuration management.


If a dev can do it on his own, without assistance and with minimal issues, then having someone to "maintain" the install is just dumb.


Excuse me but this is "crazy shit":

".. in 2009: nobody could remake our centos4 build machine for our C++ code ... so absolutely fucking cursed"

What led a computer running a supported operating system[1] into a state like this? Distill that and you have Docker :)

1. https://en.wikipedia.org/wiki/CentOS#End-of-support_schedule


Most likely, custom-installed libraries and tools (./configure; make; make install) that were never packaged. And a package manager that can't even tell you whether any of the installed files were changed after they were installed.

Just a guess...


If you wanted an RPM that installs on multiple versions of RHEL, you would take the oldest RHEL you could find, build an RPM with all your dependencies except glibc included, and ship that. The resulting RPM would work on all later versions of RHEL.

Problem is, you still need to update your dependencies, so sooner or later you're bootstrapping modern GCC and libstdc++, OpenSSL etc. all from a CentOS 5 base just to get a forward-compatible RPM.

At that point you start tarring up your chroots so you know how to reconstruct this Frankenstein build environment.

Thank Docker for giving an easily portable environment that's almost as easily accepted by customers as an RPM.
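
In its crudest form, something like this (purely a sketch; the paths, spec file and package set are made up, and real setups usually graduate to a tool like mock):

    # bootstrap an old-release chroot, build inside it, then snapshot it
    yum -y --installroot=/srv/el5 --releasever=5 install gcc make rpm-build
    cp mypkg.spec /srv/el5/root/
    chroot /srv/el5 rpmbuild -bb /root/mypkg.spec
    tar czf el5-build-env.tar.gz -C /srv el5    # the tarred-up "Frankenstein"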


First you should not mix CentOS 5 packages into a CentOS 4 system. That is how you create a problem in the first place.

Building software on top of Frankenstein build environments is how you end up with broken software, and should not be encouraged :)

Let alone development environments.


What the parent is describing by "Frankenstein build environment" is a bog-standard cross-compilation toolchain/SDK. You don't infect the host with the toolchain's packages; you install the toolchain's packages in the toolchain chroot. (If you've ever developed for an embedded arch, this will be painfully familiar.)


> You don't infect the host with the toolchain's packages; you install the toolchain's packages in the toolchain chroot.

"Infect" is too strong a word, because normally the cross-compiler will coexist well with your system. But you are right about the chroot: throw it away and start again, repeat. That is great!

I normally install gcc-arm-linux-gnueabihf + qemu-user-static and enable binfmt. It works well for building armhf, but I can imagine it being harder for things like the ESP32, where you don't have the toolchain sources, etc.
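
Concretely, on a Debian or Ubuntu host the setup is roughly this (hello.c is just a stand-in target; -static avoids needing the armhf loader on the host):

    sudo apt-get install gcc-arm-linux-gnueabihf qemu-user-static binfmt-support
    arm-linux-gnueabihf-gcc -static -o hello hello.c   # cross-compile for armhf
    ./hello   # binfmt_misc hands the armhf binary to qemu transparently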


> Most likely, custom-installed libraries and tools (./configure; make; make install) that were never packaged. And a package manager that can't even tell you whether any of the installed files were changed after they were installed.

Software has to be read, or at least installed somewhere safe first (before installing it on a live server), so one can be sure it does nothing silly.

Packaging is not hard, but there are some rules, including placing binaries in /usr/bin, configuration files in /etc and /etc/default (deb) or /etc/sysconfig (rpm), and general package files in /usr/share. If you place files in the standard locations, the package manager will probably detect changes in config files.
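
As a concrete example, you can ask dpkg where an existing, well-behaved package put its files, and let the tooling verify them afterwards (openssh-server is just a familiar example; debsums ships in its own package):

    dpkg -L openssh-server      # a tidy package sticks to /usr/sbin, /etc and /usr/share
    debsums -a openssh-server   # reports any installed file (including conffiles) whose checksum changed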


No, it just made something that existed 18 years ago become more popular.

It's just a compressed rootfs tarball, running on top of existing Linux technology.

Docker was always a brittle tool with lots of limitations and huge compromises.

Want to write ${FOO_PORT:-1234}? Forget about it. Running random stuff as root? Why not?

It also made it very easy to consume and produce junk/snowflakes.

Every Docker image places its files/configuration/data in a special place. FHS? Who needs that?

People had been running, packaging and distributing software more efficiently and more securely, way before Docker even existed.

Another tool that made existing technology more popular is Kubernetes.

But it did a better job in this "container" space.


This is an example of an "unpopular" post. As expected, it received no counter-arguments.


There is an easy counter-argument, which is that the post you were replying to talked about the impact Docker has had; it made no arguments about Docker's technical merit.

You replied by talking about your perceptions of Docker having a bad technical design.

These two points are entirely orthogonal, which is why I'd guess your comment got downvoted.

It is frequently the case that the most popular technology is not the one with the "best" technical design.

For example, the popularity of JavaScript or PHP. Neither is generally regarded as having good technical design; however, they have had a huge impact.


I know. This was a social experiment.

Thanks for your comment, I appreciate that! Both tools you mentioned are great and kinda relate to this experiment too.

The existence of Docker, and how it was developed and absorbed by people, brought nice things too. The kernel had to work out some areas.

The tool itself is OK. I use something like this:

    .
    |-- Dockerfile
    |-- README.md
    `-- rootfs
        |-- etc
        |   `-- sample
        |       `-- sample.conf
        `-- usr
            `-- bin
                `-- sample

    5 directories, 4 files

But I'm not talking about that.

For me it is just interesting, you know; the tool itself doesn't matter, one can do that with any tool.

This experiment was to measure how humans entering an existing field will react... just like language has been shaped.

It was a bit sad that nobody was curious enough to talk about ${FOO_PORT:-1234} and why we should deprecate (links) or about "defaults". It was a success, though.


What's the problem with ${FOO_PORT:-1234}?

Sure, it doesn't exist in the Dockerfile language, but you can achieve the same effect in other trivial ways.
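
For instance, one of those trivial ways is to set the default in an entrypoint script rather than in the Dockerfile itself (a sketch; "/usr/bin/sample" stands in for whatever the image actually runs):

    #!/bin/sh
    # entrypoint.sh: FOO_PORT defaults to 1234 unless the caller overrides it,
    # e.g. `docker run -e FOO_PORT=8080 ...`
    : "${FOO_PORT:=1234}"
    exec /usr/bin/sample --port "$FOO_PORT"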

