There is an outward force from the neutrons, called neutron degeneracy pressure. It's not really a separate phenomenon from the Pauli exclusion principle; the two are different descriptions of what's going on with the wave function.
The author is unhappy with how signals work, and proposes some really wonky ways to get the behavior he thinks is correct. But for a programmer, it's a lot easier to just do things the way the designers of signals intended.
For example, multiple SIGCHLD coalesce, so you are supposed to call waitpid in a loop:
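A minimal sketch of what that loop looks like (using Python's thin wrappers over the POSIX calls; the fork-three-children setup is just for illustration):

```python
import os

# One SIGCHLD can stand for several exited children, so reaping code
# must call waitpid in a loop until nothing is left to collect.
# (In a real SIGCHLD handler you'd pass os.WNOHANG and stop when
# waitpid reports no more exited children.)

# Fork three children that exit immediately.
for _ in range(3):
    if os.fork() == 0:
        os._exit(0)

reaped = 0
while True:
    try:
        pid, status = os.waitpid(-1, 0)
        reaped += 1
    except ChildProcessError:  # ECHILD: every child has been reaped
        break

print(f"reaped {reaped} children")
```

If you only reaped once per signal delivery, the other two children would sit as zombies until the next SIGCHLD happened to arrive.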
This API is mature, and obvious problems like the ones the author points out were solved long ago. The author seems to reject the established practice on purely aesthetic grounds, which is a... choice. But not one you should make if you want your life to be easy.
waitpid(-1) in a program composed of libraries steals PIDs from other code in the same process that manages its own children and waits on them by PID. Those callers lose access to the exit status, and PID recycling can then lead to waiting on the wrong process, or to hangs.
It can work in a self-contained program, but not in anything complicated.
For children specifically, pidfds are more reliable. But that doesn't help with other signals.
Yes, waitpid(-1) doesn't compose, but lots of things in the C world don't compose:
- using one library built on, say, libuv, and another built on libevent (two different event loops)
- forking while holding locks
- two libraries using two different thread pools -- concurrency policy is a global concern. (and this one isn't specific to C)
FWIW the solution I use in a shell is simple - create a Waiter object that wraps waitpid(), and only code that has a reference to the waiter can do anything with processes.
It's basically like making an event loop object and passing it around, rather than making it global.
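A hypothetical sketch of that idea (the class and method names here are mine, not the commenter's actual shell code):

```python
import os

class Waiter:
    """The one object in the process allowed to call waitpid(-1).
    Code that spawns a child registers it here instead of reaping
    pids itself, so exit statuses can't be stolen."""

    def __init__(self):
        self._callbacks = {}  # pid -> callback(status), or None

    def register(self, pid, callback=None):
        self._callbacks[pid] = callback

    def wait_once(self):
        pid, status = os.waitpid(-1, 0)
        cb = self._callbacks.pop(pid, None)
        if cb is not None:
            cb(status)
        return pid, status

waiter = Waiter()
pid = os.fork()
if pid == 0:
    os._exit(0)
waiter.register(pid)
reaped_pid, status = waiter.wait_once()
```

Only code holding a reference to `waiter` can touch process state, which is exactly the event-loop-as-object pattern rather than a global.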
This is also an argument for libraries not doing I/O -- they should be pure. They should be parameterized by I/O, including starting threads, etc.
I haven't ever wanted to use a library that starts processes behind my back
> using library A which uses say libuv, and one that uses libevent (two different event loops)
It's not ideal, but I don't see a reason this wouldn't work if you ran the two event loops on separate threads
> forking while holding locks
Yes. You can't use fork in a multi-threaded program, unless it is followed by an exec. Which is one reason forking is somewhat rare in modern code.
> two libraries using two different thread pools -- concurrency policy is a global concern.
That doesn't break the semantics of the program.
> create a Waiter object that wraps waitpid(), and only code that has a reference to the waiter can do anything with processes.
This doesn't help if you use a library that calls waitpid directly.
This is actually probably more of a problem in non-C code, where the standard library is likely to have an abstraction that calls waitpid on child processes. So handling SIGCHLD yourself can interfere with other code waiting for a child to finish.
How well does this work in practice? I am skeptical about the power of static analysis with a language like Python. Many times, the hidden state in Jupyter lives in another module, or somewhere deep in an object hierarchy.
If you’re going to reinvent the Python notebook for reproducibility, I wonder why not go further, and fully snapshot the program state?
I really want this to work, because hidden state in Jupyter notebooks bites me frequently, but the post and website haven’t convinced me this is a robust approach - maybe I’m missing something?
You get the behavior you incentivize. In the “early days”, design docs were a tool to agree on a direction and provide context to your coworkers about what you’re doing. Later, as new people joined at an exponential rate, they were told by well meaning managers to write the docs for perf reasons, and things kind of spiraled from there. Google’s culture became a cargo cult of itself.
Some companies I worked at after Google were reluctant to discuss their promotion process in detail, because they saw what happens when people microoptimize for it.
When I was at Google I mentored a lot of early-career engineers, and I always told them to write more docs than they think they need. Sometimes docs catch problems beforehand, but early in a career, writing docs is also an easy way to show independence and competence. It slows down projects, but Google is probably gonna cancel that project anyway, and my theory was "Google doesn't care about you, so caring about it just makes you a chump". It was in most ways a great place to work, but it slowly drove me insane.
This sounds like you are championing the very mindset that has made Google crumble?
Write docs that slow things down, don't code or build projects, and do it all in the name of greed, with the only justification being that it will get cancelled anyway (probably because no real work was being completed).
One of the things that had been problematic at Google is the creation of many more products than are really needed, leading to the infamous Google Graveyard. One of the reasons I have come to like writing design docs for my ideas is that a lot of the ideas turn out to be crap when they get written out. Quite a few are good enough that they have already been done, or close enough, which my colleagues can then inform me of. This is a very lightweight way of trimming out useless projects.
Docs are very useful. I liked them, especially considering how slow development could be there. It's just that in my experience they were incentivized more than they needed to be. For example, writing a doc while you are already implementing something only takes about a day, but it looks a lot better than a day of coding to people without context. Luckily, at least for lower levels, they switched to perf evaluators being organizationally closer, with the manager present, so there was at least a little context (though often my manager was busy enough that they didn't fully know what I was doing either). There were a lot of specifics, essentially checkboxes, for promo and to a much lesser extent ratings, and eventually I decided it was easier to just give them what they want.
The problem is with the culture: the company does not want a "boring" product that earns a stable amount per year (say $100M). It only wants products that grow exponentially to billions.
Google could still keep the boring, but profitable products. Just like Microsoft does (although they bundle them).
But it seems that at Google, promotions are not given for boring things that work.
I was just a cog in a machine. The mindset didn't come from me but from how they set up incentives internally and ran projects. To be clear, I ended up really resenting how it works, and the advice was based on my experiences with trying to care (in fact I'm trying to get out of tech entirely). I ended up channeling my care and effort into building up coworkers rather than into the company or its products, and that's how I got through it for a while. Different people had different experiences though (it's a huge company after all); anything I say about my experience is very personal.
Edit: also, if Google wanted me to really care about my job beyond self-interest, that relationship needed to be a two-way street. The relationship of the devoted employee and the disinterested company is common, but to me it comes across as a bit pathetic. I did a good job there purely because that's how I enjoy doing my job; if the company collapsed I wouldn't have shed a single tear (that's a lie, I miss the free food)
But if they are going to cancel it anyway, I would rather at least code and build just to have fun, and at least have a prototype, rather than doing the boring work. Also, I feel like I can't get into the right frame of mind for problem solving by writing a doc, compared to when I am coding.
If they liked coding and didn't want to prioritize career advancement, I would tell them to put effort into finding the projects (or parts of projects) they found more interesting to code, and to make sure they got assigned to those. My advice was always personalized. The thing is that most early-career engineers are already coding enough (and it's not like the code is generally particularly interesting), so most of the time the advice was to write more docs, since the lack of docs was hurting them for perf/grad. But it varied from individual to individual.
If the project is eventually going to be deprecated and shut down anyway, maybe it's better to take one of the Google bikes and ride around the campus drinking one of those tasty granitas from the cafeteria.
The craziest thing to me is that at my location not that many people did the free workout classes. Between those classes and the food I was in the best shape of my life while working there.
I steadily got in better shape as they kept cancelling the projects I worked on and I slowly lost any sort of will to care about my actual job. The quality of my work ended up suffering a lot as I realized just how pointless trying to work there was for me, but the pay was too good to want to leave.
Edit: I was unlucky though. I knew some people with more stable projects. I went through 4 cancelled projects in close to 6 years.
If a company doesn't have consistent metrics for performance reviews uniformly across teams, get out of that company yesterday. That's just a hotbed of nepotism.
Just proves the point: this person just cargo-culted what made Google so famous in recent years. If your only metric is launching new projects, you're only gonna launch new projects. Who's gonna get promoted for maintenance?
>Conceptual examples (not looking to argue about the specific bar)
Again, not trying to debate what the specific bar should be, just giving examples of the kind of metrics you can use. What's your preferred alternative? Lines of code? Whenever your boss feels like it? Stuff like that is why no one believes a "senior staff superstar" from the hot startup of the week is any good and requires them to do whiteboarding.
With sufficient granularity there is no difference. I can't give exact quotes without violating NDAs but variations on these have been the performance review standard at every major company I've worked at or known people who work at.
The metrics which determine if someone gets promoted. Those decisions are being made by some group, presumably with guidelines. If the process for getting promoted is "this group does whatever they feel like" that company is a disaster and you need to get out.
Thus, there are rules, and they are written down. If they're not visible to normal employees, they guess the rules based on who they see get promoted (cargo cult) while the committee uses the true metrics.
It's not like becoming a manager is a secret cult where you're dropped in and you suddenly have carte blanche to do anything. At established companies there are rules and procedures, and while you will likely have access to more information than individual contributors, you won't just be promoting whoever you feel like without having to go through others.
Either way -- 'twould seem that if such a level of microstrategizing is what monopolizes one's attention on a day-to-day basis -- then one is definitely in the wrong place.
It’s a meh from me - either is bad. If you are in a highly complex and professional field (in the traditional sense of the word) single axis optimization hollows out the whole craft. Engineering is delicate and constant battling of tradeoffs, and decision consequences are often delayed by longer than average tenure, let alone a 6 month perf cycle. The more you McKinsey the process, the more mediocre and incoherent the results.
You get what you measure. You can't hire smart motivated people and then be surprised when they figure out to optimize for their rewards.
If you want a team that lands stable customer-loved products, make THAT your performance review. Turns out that's hard to do objectively and consistently.
According to your other comment, then, a company that "lands stable customer-loved products" would be a "hotbed of nepotism" that people shouldn't work for.
A lot of your replies are nonsense in my experience :). This "measure everything" attitude is very much another cargo cult. Software engineering is complex, people are complex; thinking you can come up with a true metric, and that whoever doesn't have one doesn't know what they're doing, is the nonsense in my world view.
Have excited engineers, pay them well, have them work on something that is aligned with their career and delivers value to their customers; they don't really need much more than that. All the rest is simply trying to "processi-ze" what is fundamentally a human activity.
> You can't hire smart motivated people and then be surprised when they figure out to optimize for their rewards.
Never said that. I don’t have a solution. Well I have a partial solution, but not complete:
Reward all team members equally for the performance of the team, or even the company. For instance: the pie is divided into 3 pieces: company, team, and individual performance. Candy is given out based on the performance of each.
It doesn't fix the hackable-metrics issue, but when people collaborate the side effects are better than when they think only of their own promotions, and worse when they're competing against their direct peers for a fixed bucket of candy per team. Such incentives directly oppose collaboration between the very people who need to collaborate the most.
Seems to be tailored specifically for Apple Silicon Macs, considering that it attempts to spin up an AArch64 Docker container to compile some of its tools.
At the risk of sounding grumpy, a big difference between the tech community today and in the Usenet days is that the Usenet crowd's interpersonal skills weren't two standard deviations to the left of the mean at your local Target.
Not a Usenet user, still an average idiot with an open inbox and a love for talking with random people. If you miss those days ping me via email, I’m always happy to meet new people.
The author's response is perfectly calibrated to drive someone up the wall. Sling some mud and then hide behind "help, I'm being cornered."
Imagine doing this in the offline world. How well would this kind of behavior go over with people at the grocery store, do you think? Why is it acceptable online to behave like this?
As an alternative perspective in terms of power dynamics: the Kagi CEO is a somewhat powerful figure as the CEO of a well-known tech company. The blog author is a random person from outside the tech startup culture.
The internet levels the playing field so the random person has the power to post criticism of the more powerful person and be heard. It doesn’t make sense to compare with the offline world because this wouldn’t be able to happen outside the internet.
In response the CEO is attempting to force them into a different context where he once again has power. The author recognizes this and therefore refuses.
But you’re not talking about Tim Cook, this is a guy running a company of ~10 people. Someone on the internet, with a following and an audience, has written an essay about how Vlad is a bad person, and now is implying the latter is abusive for trying to have a conversation.
This is psychotic behavior.
There’s a huge spectrum between NY Times writing a sourced article about a powerful business magnate and someone disparaging an SMB owner on their blog. If I took the posts and emails of someone I knew in my life and posted them online, I would probably get a call from the police.
Seriously, what audience is Apple playing this to? The insinuations made in this press release should get them sued again for libel. Not to mention how off putting this kind of tantrum is for their customers.
This is marketing copy, I think, and not great copy at that. Even after skimming it for 3 minutes, I still have no idea what it's about, other than that they're trying to sell a container control plane of some kind.
As best I can tell, their innovation is this: instead of writing the reusable CI step as a GitHub Action in JavaScript, one can build up company-specific "Dagger Functions" in any language the Dagger Engine supports, then invoke them in a versioned way that receives rich objects as parameters, à la PowerShell (i.e. actual live objects, not text):
    # GHA way
    jobs:
      example:
        steps:
          - name: old way
            uses: example/my-awesome-step@v5

    # Dagger way
    steps:
      - name: Call Dagger Function
        uses: dagger/dagger-for-github@v5
        with:
          version: "0.10.0"
          verb: call
          module: github.com/example-co/our-awesome-step-in-our-language@v0.5
And one may wonder, "but why?" The theory is that, unlike nektos/act, which is a very, very skeletal local implementation of GHA, Dagger should(!) allow a local build step (or even GitLab CI, or Jenkins) to run the same as it does on GHA, promoting CI-agnostic build pipelines:
Dagger is an API exposed through GraphQL, offering specialized functions for constructing and executing containers. Imagine substituting Dockerfiles with GraphQL calls, and then the Dagger daemon executes the provided instructions. The outcome is a kind of "Turing complete" replacement for Dockerfiles.
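For a flavor of what "Dockerfile steps as GraphQL calls" means, a query against the core API looks roughly like this (field names follow Dagger's container API; treat the exact shape as illustrative rather than authoritative):

    query {
      container {
        from(address: "alpine:3.19") {
          withExec(args: ["echo", "hello from dagger"]) {
            stdout
          }
        }
      }
    }

Each nested field is one build step, and the engine caches each step's result, much like Docker layer caching.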
The final aspect is that, typically, you wouldn't interact directly with the GraphQL API. Dagger provides wrappers for it in several languages.
Dagger 0.10 enables you to define and invoke your own functions, in addition to those already defined in the core Dagger API.
The initial purpose was to simplify the process of running the same task in any environment. For example, it allows you to replace the cumbersome YAML step definitions of CI/CD systems and effortlessly execute the same task both remotely and locally.