Hacker News | spacetime_cmplx's comments

Negative income tax is an alternative to UBI, but it's discussed less often. Does anyone know why? I feel it should be easier to implement (plugs right into the existing tax system), should be cheaper (the more you earn, the less you get), and feels more in line with the progressive tax system, whereas UBI feels like an extremely coarse first approximation.

Essentially, the tax curve starts at 0 and goes up. Why not shift it down? There's nothing special about 0.
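The shifted-curve idea can be sketched numerically. A minimal illustration, assuming a made-up flat rate and refundable offset (real schedules are progressive and bracketed):

```python
# A negative income tax as a tax curve shifted below zero.
# The 25% rate and $12,000 offset are illustrative assumptions,
# not any real proposal.
def net_tax(income, rate=0.25, offset=12_000.0):
    """Positive result = tax owed; negative result = payout."""
    return rate * income - offset

print(net_tax(0))        # -12000.0: no income, receive $12k
print(net_tax(48_000))   # 0.0: break-even point
print(net_tax(100_000))  # 13000.0: net taxpayer
```

Below the break-even income the "tax" is a payment to the filer; above it, a normal tax. This is also why an NIT and a UBI clawed back through income tax can be made mathematically equivalent.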


I agree that combining UBI with the existing income taxes seems like a sensible way to implement it.

It also automatically makes it available to every legal tax resident, regardless of whether they are in the country via a work permit, a residence permit, or citizenship. Taxing somebody and not providing them with a UBI would be unfair.


Because negative income tax credits and tax deductions are already commonplace and boring.

The US has a $3,000 child tax credit, which applies to approximately 80 million children. [1]

The US has a ~$13,000 standard deduction, taken by >200 million people.

Last, lower-income folks making less than $20k already get ~$40k of benefits, credits, and deductions. [2]

https://www.irs.gov/credits-deductions/individuals/child-tax...

https://www.illinoispolicy.org/illinois-warped-welfare-syste...


UBI is easier to understand conceptually. It may also have advantages over a negative income tax beyond that, but they're harder to pin down.


Ironic. Every time a company measures employee output in lines of code or commits, devs are the first to point out that management is dumb for using a surface-level metric. But you're doing the same thing here, except with execs and PowerPoint.

PowerPoints are just the final output you see. The real work execs do is in the decisions that went into them.

No sane board will give decision-making power to an AI they can't blame. Besides, there are probably 100 devs for every exec, so it makes no financial sense to automate execs.


I know how that works. And my point was not that they should or will be replaced, but rather that they are hardly less expendable than developers.

But the decisions they make are one of the things that can be automated. I do not know if you have been inside one of these places, but the executives are not doing a great job deciding (at mine they decided OpenSearch was a better bet than Elasticsearch and switched existing installations).

A new regime came in and then bad decision after bad decision drove our best talent away. Consultants, everywhere.

Also, that number is much lower. Full time devs are down, contractors and consultants are up. As a full time dev at one of these places, it felt like the number of executives was growing as everything else shrank.

Perhaps you are right about the highest levels, but think about all of the middlemen executives and what they do.

And even that -- I think an AI could choose not to spend millions with Deloitte or Accenture on software that inevitably failed.


As a serious security person, this is why I use 999999 as my passcode


I use 999997. You'd think they'd try 999999, and 999998 and go "fuck this" and try something else.


All of the best security practices rely on the attacker being easily bored. Thank you attention economy!


What's stopping someone from placing the silicon under an electron microscope and reading the data visually? I mean, the circuit that encrypts data has to have some way to load the key (the circuit might not be in the CPU, but it exists somewhere).


There is not an infinite budget to solve any particular case.


Somewhat related, this sort of concern was mentioned in this talk about security on the Xbox One: https://youtu.be/U7VwtOrwceo?t=668

Essentially, they set a maximum budget for cost-effective attacks on the hardware, "modding the console needs to be more expensive than 10 games" (about $600), and ignored attacks that cost more than that for an end user to execute.


That's concerning. Could you elaborate on how you identified the traffic as cloudflare workers? Also, what sorts of HTTP attacks? wp-admin probes? Plain DDoS?

Cloudflare has (had?) a murky history with not taking down DDoS for hire services ironically hosted behind cloudflare. But while you could argue they had an incentive to do that (sell protection), I can't think of any incentive to let Workers be abused.


> Could you elaborate on how you identified the traffic as cloudflare workers?

Trivial, based on the fact that HTTP requests coming from Cloudflare Workers have a cf-worker header. Also, any traffic coming from Cloudflare-owned IP blocks clearly belongs to Cloudflare and can be safely blocked.
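The header check is a one-liner; a minimal sketch, assuming header keys have been normalized to lowercase (the allow-list zone names are placeholders):

```python
# Flag requests that arrive via a third-party Cloudflare Worker by
# inspecting the cf-worker header mentioned above.
def from_third_party_worker(headers, allowed_zones=frozenset()):
    zone = headers.get("cf-worker")
    return zone is not None and zone not in allowed_zones

print(from_third_party_worker({"cf-worker": "attacker.workers.dev"}))  # True
print(from_third_party_worker({"cf-worker": "my-own.workers.dev"},
                              {"my-own.workers.dev"}))                 # False
print(from_third_party_worker({"user-agent": "curl/8.0"}))             # False
```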


On the second point, with the introduction of Cloudflare WARP VPN, that's not quite true. Additionally, I believe Safari Private Relay may end up looking like it originates from CF as well.


> Additionally, I believe Safari Private Relay may end up looking like it originates from CF as well.

Cloudflare reserves IP ranges just for Private Relay: https://developer.apple.com/support/prepare-your-network-for...


> and can be safely blocked.

Well no, not if you yourself are also using Cloudflare


You can block third party Workers with a CF WAF rule. Here's an example:

cf.worker.upstream_zone ne "" and not cf.worker.upstream_zone in {"aimoda.workers.dev" "ai.moda"}


You mean like server<>server communication? Hopefully that communication stays within the network rather than going server<>internet<>server.


I mean if you are using Cloudflare with their proxy, so origin<>cloudflare<>client


Yeah, then you'd just block based on the client IP, which is in a header, rather than the IP on the connection.
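A sketch of that idea, using Cloudflare's CF-Connecting-IP header (which carries the real client address for proxied traffic), and trusting it only when the TCP peer is actually Cloudflare; the sample addresses are placeholders:

```python
# Resolve the address to key ban lists on when the origin sits behind
# Cloudflare's proxy: trust the CF-Connecting-IP header only if the
# connection itself came from a known Cloudflare egress address.
def effective_client_ip(headers, peer_ip, cloudflare_ips):
    if peer_ip in cloudflare_ips:
        return headers.get("cf-connecting-ip", peer_ip)
    return peer_ip  # direct hit: header could be spoofed, ignore it

cf_ips = {"203.0.113.7"}  # placeholder; load Cloudflare's published ranges
print(effective_client_ip({"cf-connecting-ip": "198.51.100.9"},
                          "203.0.113.7", cf_ips))  # 198.51.100.9
print(effective_client_ip({"cf-connecting-ip": "198.51.100.9"},
                          "192.0.2.1", cf_ips))    # 192.0.2.1
```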


While OP's reply answers your question, it's important not to apply current costs when predicting the future of AI. Hardware for LLMs is one step function away from unimaginable capabilities. That breakthrough could be in performance, cost, or more likely, both.

Imagine GPT-4 at 1/1000th the cost. That's where we're going. And you can bet your ass Nvidia is working on it as we speak. Or maybe someone else will leapfrog them like ARM did to Intel.


For starters, Spotify exists


Why is everyone so confused about this? Isn't verifying the easy part? You put it into the GPT-3.5/4 API as a system prompt and see if it answers like the actual chatbot. If it does, you've either extracted the actual prompt (congrats!) or something else that works just as well (congrats!). If it doesn't, it's a hallucination. If you're worried about the temperature setting throwing you off, keep trying new questions until you find one to which the original chatbot consistently gives the same answer.

It's like a trapdoor function.

Am I missing something?
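The check described above can be written down as a sketch. `ask` is a hypothetical stand-in for a chat-completion call (system prompt + question -> answer); swap in a real client, and in practice you'd compare at temperature 0 or over repeated samples:

```python
# Treat a candidate system prompt as verified only if it reproduces the
# target bot's answers on probe questions the target answers consistently.
def reproduces_target(ask, candidate_prompt, probes):
    # probes: list of (question, answer_observed_from_target) pairs
    return all(ask(candidate_prompt, q) == a for q, a in probes)

# Toy stand-in model: the "bot" deterministically echoes prompt + question.
toy_ask = lambda prompt, q: f"{prompt}|{q}"
probes = [("hi", "secret|hi"), ("bye", "secret|bye")]
print(reproduces_target(toy_ask, "secret", probes))  # True
print(reproduces_target(toy_ask, "guess", probes))   # False
```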


It may not be the exact same model as GPT. They may have tweaked some parameters and almost definitely trained it on additional content relevant to the task of helping with coding. So you probably can't get the same output with just the same prompt.


Sure, in which case the real prompt is as useless as a hallucinated one, so what's the difference?


I guess that verifying isn't the easy part now, as you boldly claimed in the comment before?


I don't think the purpose of getting the prompt leaked was to then use the prompt but just to expose the limitations of this approach to steering an LLM.


If 't be true thou art so 'gainst w'rds changing meaning, then wherefore aren't thee speaking liketh this


It's far easier to do productive, useful things with AI if you just treat it as a tool, like a chainsaw or a pulley.

Everyone agrees there's a lot of hype right now, but no one thinks _they're_ a part of it. You know why? It's because we're told we're on an exponential growth curve (we are), so everyone is trying to force it. So each time there's a breakthrough, we're desperate for it to be the rocketship that takes us there.

It's time to step back and let the exponential thing happen on its own. We didn't get here because people in the 20th century sat down and plotted a way to achieve exponential growth; it just sorta happened.


>It's far easier to do productive, useful things with AI if you just treat it as a tool, like a chainsaw or a pulley.

I am seeing a lot of comments paraphrasing this, without pointing to anything.

A lot of comments also say confidently and repetitively that AI is different from crypto.

The best use case I have found so far for ChatGPT is...editing HN posts. [1] But after being mildly satisfied with it once, it seemed like too much bother to use it regularly. Like being a nobody and getting an autopen [2] to sign for you.

But more than a month ago, there was a "Show HN" by someone who claimed they had an AI-powered solution to writing SQL. [3] The tagline was literally "Never write SQL again." That sure sounds like something that could replace real people's jobs, that could be spun into a multibillion dollar market cap.

I tried it, made an attempt at a constructive comment without being negative, and there was not one response, from the submitter or anyone else.

I could explain in scathing terms how useless it appeared, but anyone capable of understanding what writing code is could read between the lines, and nobody like that engaged, so I let it lie.

What is a reasonable person to think about real applications?

[1] https://news.ycombinator.com/item?id=35487015

[2] https://en.wikipedia.org/wiki/Autopen

[3] https://news.ycombinator.com/item?id=35427229


I've noticed the same thing. So many proponents of AI don't validate their results.

I'm reminded of the Microsoft "make more robust" AI feature in VSCode. Their flagship example screenshot was flat out wrong.

The starting code is an HTML form with a clear bug: it has an onclick rather than an onsubmit handler, which means pressing the Enter key won't submit the form properly.

Their advertised fix doesn't address that issue. Instead it adds a CSS vendor prefix. First, manually adding vendor prefixes is almost never the right solution, just have one of the existing tools do that automatically. Second, this specific vendor prefix was only in use for a very short period of time years ago. So almost all users currently use browsers that don't need it, and almost all users of outdated browsers aren't helped by the prefixed version.

And this is a case where Microsoft would have had subject matter experts right down the hall from whoever wrote this announcement. It makes me even more skeptical of applications outside of tech.


Generative AI is at the point where it can BS at a college level. This is great for applications where things don't need to be right (disinformation bots, term papers), but doesn't work when there's little room for error (software engineering). It's already being used to write ad copy; I'm waiting for the first lawsuit because of a deceptive ad on Google.


I just use it as a super search engine and validate the results myself. Sometimes I only know half the words I need to find something, so I just ask GPT-4 to give me 10 things it could possibly be and then look them up, rather than waste time with pedantry on web forums trying to find the words that way.

I fail to understand why people get so up in arms about how some people find use in these tools. Does not work for your niche? Cool, then just don't use it.


That's how I use it too. Try to search some topics on search engines nowadays and you get a lot of blog drivel with the same cliches all over and very little in the way of factual explanations. You ask ChatGPT and it gives you one to four paragraphs straight to the point. You can query it further and validate what it says against the literature or more focused searches.

It feels like I'm getting massively subsidized by the AI hype, though. Even if I were to pay 20 dollars a month for it, that wouldn't come close to covering what it costs to train and host.


But we are here because they drove hard for each big breakthrough that would “change the world” — and many did, from mechanization to electrification to digitization. (To say nothing of agriculture, chemistry, and medicine.)

That growth curve doesn’t “just happen”, but is precisely the result of people trying to constantly push for the next Big Thing (TM).

