alecbz's comments | Hacker News

How can you tell the difference?

> if enough people want to willingly vote in a corrupt president

Why do people do this though? Maybe it's inevitable, but I think there was a lot of pent up frustration with the government that led a lot of people to just say "fuck it". Not really excusing it (especially for his second term), but I feel like we're reaping years and years of a dysfunctional and ineffectual congress. Not that that's an especially easy problem to solve either.

I think this also explains a lot of the frustration with SCOTUS. In theory, SCOTUS is supposed to just interpret and flesh out the policies decided on by congress. In practice, congress doesn't really do anything, and people started depending on SCOTUS's ability and willingness to make far-reaching and impactful decisions. Now a more conservative SCOTUS isn't doing that.


It's worth noting that an ineffective and gridlocked congress is specifically a problem of presidential-style democracies. Parliamentary systems with a prime minister have some of their own shortcomings (notably a weak executive), but the government is actually controlled by the legislature.

Countries that follow the presidential model regularly succumb to strongman-type leaders. Ironically, in the modern era when the US had a hand in helping other countries establish their governments, we specifically helped them establish parliaments.


I don't think parliamentary systems help the legislature remain effective, since they're still elected in roughly the same way, no?

But yeah, it prevents an ineffective legislature from leading to strongmen, which does seem nice. :)


I agree there is a lot of pent up frustration in the U.S. and the GOP did a bang-up job of cultivating this frustration. And now that they have their chance at bat they seem to be striking out.

At the risk of my analogy making something serious sound like a game, I'd like to see another team have a chance at bat.


> Although [touch typing] refers to typing without using the sense of sight to find the keys ... the term is often used to refer to a specific form of touch typing that involves placing the eight fingers in a horizontal row along the middle of the keyboard (the home row) and having them reach for specific other keys.

https://en.wikipedia.org/wiki/Touch_typing

I think they're referring to the latter.


The strict definition of touch typing reminds me of how when I was a kid, my parents would always tell me that there’s a specific way of holding chopsticks. You gotta hold the top one like a pencil, and rest the bottom one between the crook of your fingers and your ring finger, and make sure they’re the same length and the bottom one isn’t moving and you’re just using it as a base to press against.

And then I became an adult and visited China and met actual Chinese immigrants and married a native chopstick holder. And half of them don’t hold chopsticks “the real way”. Somehow it all works out. As long as you can eat a peanut with them, you pass.

As an adult I learned that there’s also a whole lot of prescriptive bullshit that basically nobody pays attention to. The strict definition of touch typing seems like one of those. If you can type without looking at the keys, you can touch type.


I will say you are far faster touch typing properly. I never fully learned it in school; I kind of half do it. My left hand is pretty religiously touch typing, but my right doesn't stay on its home row.

I just never cared to get perfect at it in school. I would get absolutely crushed on typing tests, though, by the kids who actually learned touch typing. They all had piano experience and could reach the modifiers while still holding on to the home row. I still can't really do that with my right hand; it's like my pinky doesn't reach.


I use a Dvorak keyboard, so I usually outpace the touch typers. By the strict definition, it's not technically touch typing. By any colloquial definition, it absolutely is: if I looked at the keys, I'd be touching the wrong letters. I just have the Dvorak layout burned into my brain, so it's what I type regardless of what the keys say.

> You gotta hold the top one like a pencil

I've heard this before too but apparently most people hold a pencil wrong anyway so it doesn't actually help.


With such a strict definition the OP’s comment becomes basically meaningless. They could be referring to using index fingers only. They could be using an alternative keyboard layout. They could mostly be using left-hand only. Pretty much any WPM between 1 and 200 seems possible with the statement: “I don’t keep my fingers on home row in between key presses.”

Craft, in coding or anything else, exists for a reason. It can bleed over into vain frivolity, but craft helps keep the quality of things high.

Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.

This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.


A "deep" tool that fully automates fairly specific tasks works this way. LLMs are more of a "shallow", general tool that can partially help with lots of different things, but none so completely that they alleviate the need for human involvement in them.

You're making the opposite argument of what you think you are.

Say more.

A car that can self-drive 100% of the time is a new tool that could make driving an obsolete skill. A car that can self-drive successfully 99% of the time is dangerous because it trains people to not be ready to take over for the 1% they need to.

What actually happens is that the 1% is ignored or outlawed. The shovel doesn't do 100% of human excavating tasks better than hands, but we rightly realized that the space of possibilities involving a shovel was much greater than the 1% of hand-powered excavation.

If the 1% is just a bit less efficient with the new tech, sure, but it's different if the 1% means your car crashes.

This is only a problem if regulators and/or courts and/or consumers all fail to recognise that said 99% car isn't safe enough.

Sure -- I think articles like this are a warning that the skills we're losing are likely _not_ so completely supplanted by AI that they'll soon be irrelevant.

Big Tech is not the same thing as all technology.

There's a lot of software in between Air Traffic Controller and Facebook. And honestly would Meta be okay with Instagram or Facebook going down even for just a few minutes? I'd think at this point that'd be considered a fairly severe incident.

Even if we ignore criticality, things just get really messy and confusing if you push a bunch of broken stuff and only try to start understanding what's actually going on after it's already causing issues.


> And honestly would Meta be okay with Instagram or Facebook going down even for just a few minutes?

sure, they coined the term “move fast and break things”

and not every “bug” brings the system down, there is bugs after bugs after bugs in both facebook and insta being pushed to production daily, it is fine… it is (almost) always fine. if you are at a place where “deploying to production” is a “thing” you better be at some super mission-critical-lives-at-stake project or you should find another project to work on.


> there is bugs after bugs after bugs

These are the bugs after bugs after bugs after bugs after bugs.

Simply put, they are going through dev, QA, and UAT first before they become the bugs that we see. When you're running an organization of any size on software, writing bugs that take the software down is extremely easy, and data corruption is even easier.


I wholeheartedly agree. I just don't agree with:

> We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever

Things should 100% go to prod whenever they need to go to prod. While reviewing every line makes sense in theory, there is an insane amount of ceremony in a large number of places I have seen personally, where it takes an act of congress to deploy to production. And it is just ceremony: people hunting other people with links to PRs sent to various Slack channels ("hey, anyone available to take a look at this?"), and then someone is like "I know nothing about that service/system, but I'll take a look and approve." I would wager a high wager that this "we must review every line of code" rule, where actually implemented, is largely ceremony. Today I deployed three services to production without anyone looking at what I did. Deploying to production should absolutely be a non-event in places that are run well and where the right people are doing their jobs.


Even with code review, a well configured CI/CD system is going to include a wealth of automated unit and integration tests, and then also a complex deploy system involving canaries and ramp-up and blue/green deployment and flags and monitoring and alerts that's backed by a pager and on-call rotation with runbooks. Code review simply will never be perfect and catch 100% of issues, so systems are designed with that in mind.
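
To make that concrete, here's a rough Python sketch of the canary/ramp-up part. The helpers here (route_traffic, observed_error_rate, page_oncall) are hypothetical stand-ins for whatever your deploy and monitoring stack actually provides; the point is that the pipeline, not the reviewer, is what catches a bad release:

    import random
    import time

    RAMP_STEPS = [1, 5, 25, 50, 100]   # percent of traffic on the new build
    ERROR_RATE_THRESHOLD = 0.01        # roll back if >1% of requests fail
    SOAK_SECONDS = 600                 # how long to watch each ramp step

    # Hypothetical stand-ins for a real traffic router, metrics store, and pager.
    def route_traffic(version: str, percent: int) -> None:
        print(f"routing {percent}% of traffic to {version}")

    def observed_error_rate(version: str) -> float:
        return random.uniform(0.0, 0.02)   # stub: pretend to read a dashboard metric

    def page_oncall(message: str) -> None:
        print(f"PAGE: {message}")

    def deploy_with_canary(new_version: str) -> bool:
        """Ramp a release up gradually; roll back automatically on a bad signal."""
        for percent in RAMP_STEPS:
            route_traffic(new_version, percent)
            time.sleep(SOAK_SECONDS)                   # let the canary soak
            if observed_error_rate(new_version) > ERROR_RATE_THRESHOLD:
                route_traffic(new_version, 0)          # shift traffic back to the old build
                page_oncall(f"{new_version} rolled back at {percent}% ramp")
                return False
        return True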

So then the question is: what's actually reasonable given today's code-generating tools? 0% review seems foolish, but 100% seems similarly unrealistic. Automated code review systems like CodeRabbit are, dare I even say, reasonable as a first line of defense these days. It all comes down to developer velocity balanced with system stability. Error budgets, like the ones Google's SRE org is able to enforce against (some of) the services they support, are one way of accomplishing that, but those are hard to put into practice.
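
For anyone who hasn't run into error budgets before, the arithmetic behind them is simple. A sketch (the 99.9% SLO and 30-day window are made-up illustrative numbers, not anything specific to Google):

    def error_budget_minutes(slo: float, window_days: int) -> float:
        """Allowed downtime, in minutes, for an availability SLO over a window."""
        return (1.0 - slo) * window_days * 24 * 60

    def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
        """Fraction of the budget left; at or below zero, stop shipping risky changes."""
        budget = error_budget_minutes(slo, window_days)
        return (budget - downtime_minutes) / budget

    print(error_budget_minutes(0.999, 30))   # ~43.2 minutes of allowed downtime
    print(budget_remaining(0.999, 30, 30))   # a 30-minute outage leaves ~31% of the budget

The policy the SRE book describes is roughly: while budget remains, teams ship as fast as they like; once it's spent, risky launches pause until reliability work earns it back.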

So then, as you say, it takes an act of Congress to get anything deployed.

So in the abstract, imo it all comes down to the quality of the automated CI/CD system, and developers being on call for their service so they feel the pain of service unreliability and don't just throw code over the wall. But it's all talk at this level of abstraction. The reality of a given company's office politics and the amount of leverage the platform teams and whatever passes for SRE there have vs the rest of the company make all the difference.


I'm sure some companies do this poorly, but there are lots of places where code review happens on every PR and there are processes and systems in place to make sure it's an easy process (or at least, as easy as it should be). Many large tech companies have things pushed to prod automatically many, many times per day and still have code review for all changes going out.


>sure, they coined the term “move fast and break things”

Yeah I'm aware, but as any company gets larger and has more and more traffic (and money) dependent on their existing systems working, keeping those systems working becomes more and more important.

There are lots of things worth protecting, to ensure people keep using your product, that fall short of "lives are at stake". Of course it's a spectrum, but there are lots of large enterprises that aren't saving lives but still care a lot about making sure their software keeps running.


Yeah in many places we had two humans with context on every line, and now we're advocating going to zero?


How do you know which lines you need to review and which you don't?

Does it feel archaic because LLMs are clearly producing output of a quality that doesn't require any review, or because having to review all the code LLMs produce clips the productivity gains we can squeeze out of them?

