Hacker News | love2read's comments

You're right, I offer these alternatives:

Consistency Preservation Update (CPU)

Guided Probability Update (GPU)

History-aware Distillation Driving (HDD)

Probability Smoothing Update (PSU)


The required cover letter for a $120k-$130k/y job is 3 - 4 short paragraphs explaining why you're interested in Innolitics and this particular position.

It's as if they're asking to have their own time wasted reading AI spam, assuming they read the letters at all rather than just checking the length.

Hey! This link doesn't work.

This link seems to work https://priorlabs.ai/careers

What about this article raises this question? If anything, this article makes it pretty clear that memory safe languages are a win. It seems like a serious disadvantage to require a nondeterministic program to evaluate your code's safety.

In general I agree and suspect that memory safety is a tool that will continue to pay dividends for some time.

But there are tradeoffs and more ways to write correct and 'safe' code than doing it in a "memory safe" language. If frontier models indeed are a step function in finding vulnerabilities, then they're also a step function in writing safer code. We've been able to write safety critical C code with comprehensive testing for a long time (with SQLite presenting a well known critique of the tradeoffs).

The rub has been that writing full-coverage tests, fuzzing, auditing, etc. has been costly. If those costs have changed, then it's an interesting topic to try to understand how.


> If frontier models indeed are a step function in finding vulnerabilities, then they're also a step function in writing safer code. We've been able to write safety critical C code with comprehensive testing for a long time (with SQLite presenting a well known critique of the tradeoffs).

More like: a few people have been able to write C code whose vulnerabilities are obscure enough that we rarely discover them.

The result of the phenomenon described in the article is that the gap between 99.9% secure and 100% secure just got a whole lot wider. Using herculean amounts of testing and fuzzing to catch most of the holes in a language that lacks secure-by-construction qualities is going to be even less viable going forward.


This is crazy. It's especially crazy how nonchalantly the employees are replying. The person suggesting that Railway should clearly show the affected logs is right.

Might be worth taking a weekend day and letting Claude Code reverse engineer the APK (just download the APK off Google) and then build an open-source app with the functions you need

Is there an equivalent for macOS?

This is really good stuff, I just wish they had an email list I could subscribe to. I get that they have an RSS feed but an RSS reader is more ceremony than I'm willing to devote to one website without an email form. It's a shame really because it's a pretty cool site.

Interesting. So you’re one of the ones who prefers email updates to a reader? What percentage of people do you estimate falls into each camp?

my guess:

95% prefer email (anyone nontechnical)

5% prefer reader (a select group of technical people)


Thanks! Why do you think some technical people like readers?

> It's super addicting and gives you an illusion of productivity

Why is it an illusion of productivity if you work faster with it than without it?

> It has never been a competitive advantage to write code fast but writing the right code. If you could also do that fast you are golden.

Who decides whether or not the code is right? You, no?

> The same foundamentals that makes software stable, robust and good haven't changed. Writing software has always been constrained by thinking, design, trade-offs and understanding/mapping real world problems into proper systems.

I don't believe this to be true at all. A significant portion of what constrains writing software is the priority of the work relative to the effort required to do it. Yes, systems thinking also has a place in programming, but it has never been the constraint.

> It's very tempting to replace that flow with automation so you can get some hard earned sleep. But what happens when the agent(s) can't figure out what's wrong. What happens when you have to dig through an ungodly amount of unfamiliar code because you are the one responsible.

Hasn't this always been a possibility when using dependencies you didn't write? And if anything, AI's "Build It In House" attitude has made debugging deep in the stack a much easier task than it was when you were debugging someone else's code published on some dependency registry.

> Add to this that these tools are nondeterministic. It was likely the tool that introduced the bug in the first place, how confident are you that it will actually fix the bug.

How confident are you in yourself fixing the bug if you wrote the buggy code? About the same, no?

> He might be an extreme case of this. But it happens if you are not consistently monitoring/watching these things to a point where you might as well write it yourself. The productivity gain is gone, atleast in the form that I read the commonly made arguments from Big AI.

> if you are not consistently monitoring/watching these things to a point where you might as well write it yourself

Why get any help with anything if you could take the full burden on yourself? Probably because with code it's much faster to verify than to produce.

> Add to this, that it seems like these models are performing worse after their initial release suggesting that we can't trust we get the same quality output for the same amount of money, since the providers are incentivised to make it cheaper to run them, i.e. slow use a less quantized version.

Well, you could just stop paying and write the code yourself, that seems to be the main alternative in this situation. But I don't hear of anyone doing that.

> You might still prefer the role of being an agent manager compared to being a developer but I thin the discourse around it making you automatically faster needs to change. Faster to what end? 92-95% uptime?

This has turned pretty quickly into an AI Bad post.

Also,

ridiculas => ridiculous

foundamentals => fundamentals

thatI => that I (also double win because the font looks like an L not an I until copied into HN)

"and is lobster project" => and his* lobster project

incentivised => incentivized

Did people stop checking their writing for typos? Just throw the text into Google Docs, it's really not that hard.


Thanks for the spellcheck!

>Why is it an illusion of productivity if you work faster with it than without it?

You don't work faster if you're just shipping the wrong code.

>Who decides whether or not the code is right? You, no?

When was the last time you just LGTM'd code you couldn't be arsed to read through? The point is, you need to be building an intuition around the actual solution to the problem. If you don't have the blueprint in your mind, how the hell do you think you're in a good position to validate it?

>I don't believe this to be true at all. I believe a significant portion of what writing software has been constrained by is the priority of the work in proportion to the amount of effort to perform the work. Yes, systems thinking also has a place in programming, but it has never been the constraint.

No, the constraint is: can I maintain and reason about this without blowing out the machine with implementation details or having to rewrite it 10,000 times? If you think priority and effort are the major constraints, oooo boy. That's manager talk right there. No wonder you seem so gung-ho on it.

>Hasn't this always been a possibility when using dependencies that you didn't write. And if anything, AI's "Build It In House" attitude has made debugging deep in the stack a much easier stack than it was when you were debugging someone else's code that's been published on some dependency registry.

When was the last time you really audited a dependency? Like really. I do it before importing something new. I don't trust people. So my calculus is to defend myself and my employers from unwarranted trust.

>How confident are you in yourself fixing the bug if you wrote the buggy code? About the same, no?

Fairly confident, actually. I hate clever code. Whenever I see something that makes me go "that was slick", I tear it down, refactor, or do everything I can to thoroughly document it. Admittedly, I'm more of a software verification guy in my normal modality of work, but I've been on the development side as well, and there is no one I hate more than my inner developer who yeets whatever works in the moment, leaving me to unscrew it later. At least I've managed to beat into him that he is never to write his most clever code, or we're both going to be stuck in the chair for a while.

>Why get any help with anything if you could take the full burden on yourself? Probably because with code it's much faster to verify than to produce.

WRONG. Objectively so. Want proof? Give your AI model a set of requirements. Ask it to generate code implementing them. Time the forward pass. Set the generated code aside and clear the context. Then hand the fresh session the generated artifact and tell it to work backwards and recover the requirements. In my testing it takes at a minimum about 20x as long to come back, and the answer is lossy. And we're not even constraining it by insisting the code works. Verification is always far more work than production: I can make 10 prototypes that don't fully meet the spec in the time it takes to verify and fully trace one arbitrary chunk of code against all of its requirements. What you asserted right there is a common prejudice among developers, one that is generally grown out of with maturity.

>Well, you could just stop paying and write the code yourself, that seems to be the main alternative in this situation. But I don't hear of anyone doing that.

You mean like I always have done?

If all you're doing is chasing the dopamine high of a mostly okay software artifact you aren't serious about having anyone else rely on, then you go on and keep on vibe coding my man, but don't try to handwave away perfectly valid criticisms. If you really understood these machines, how they work, and the peril of bridging the gap from a PHB's feature request in the real world to a reliable, maintainable implementation in the digital realm, you'd not be so comfortable with the cavalier attitude of "the AI spit it out".

If by the end you can't explain why it works the way it does, and why it doesn't work one way or another, you aren't crafting, you're playing slots. I don't trust my safety to gamblers. You shouldn't either. But that's all down to your personal risk tolerance, and I can't do anything about that.


I really hate that the author tugged at the heart strings of someone who is not alive anymore to back their cause of hating AI.
