Hacker News | qq66's comments

I don't like it but I like that you did it.

This is why I love iOS in many ways: it's very difficult for an app to have "action at a distance" inside your OS. If you force-quit the app it's pretty much like it never existed.

I haven't used this one, but WisprFlow is vastly better than the built-in functionality on macOS. Apple is way behind even startups on fundamental AI functionality like transcribing speech.

WisprFlow has a lot of good recommendations behind it but the fact they used Delve for SOC2 compliance gives me major pause.

The fact that a company could slurp up all of your data and then use Delve for their SOC2 is a great reason to use local models.

I use the baked in Apple transcription and haven't had any issues. But what I do is usually pretty simple.

What makes the others vastly better?


I’ve rarely had macOS dictation produce a sentence I didn’t have to edit.

With Whisper models, I barely bother checking anymore.


What is the benefit of 25Gbps home internet?

You literally cannot think of a single benefit for having fast Internet?

I can think of a benefit of having 1Gbps. I can't think of anything a home user would do with 25Gbps that a government should actually care about (watching ten different 8K Netflix streams at the same time is not a human rights issue).

Why does it have to work long term? Claude Code probably built it in 2 hours. Sell it for as long as it works. If it provides some value to some people during that time, good for them.

What a rotten state of affairs that we’re now openly suggesting producing garbage with the least effort possible and selling it until caught. We used to criticise those who did that, calling them spammers and scammers and worse. Now, “telling some LLM to take a dump and trying to sell it to some chumps without a sense of smell” is viewed as a smart business model. Anything for an extra buck.

Yup. The fast food philosophy has entered the software development world. Produce cheaply, don't think too much, shove it down your throat, move on.

Why is it garbage? If you want something to block YouTube shorts, here's something that does it. It won't work forever, but you won't pay forever. Not all software needs to be high-craft and high-quality. Sometimes it can be just something a guy sells you off the back of his truck.

> Why is it garbage?

You misunderstood. I’m not criticising this specific software, I’m criticising the attitude suggested by the parent comment. It was a general commentary, it has nothing to do with this particular app, which I have no idea if it was built that way.


Maybe I'm misreading, but the parent doesn't seem to be suggesting it as good; they're asking sarcastically. And yeah, the site has all the LLM hallmarks.

Anything that's a service and has a single-payment "lifetime subscription" is immediately suspect.


Lifetime payment was highly requested by users (including existing users), since they have subscription fatigue. Since I use the app myself every day to reduce my own screen time, I'm extremely motivated to fix every bug and make the UX as seamless as possible.

> are now openly suggesting producing garbage with the least effort possible and selling it until caught

Well, this is a VC adjacent forum, so...


> Sell it for as long as it works.

I agree with this in principle, but this seems conceptually at odds with selling lifetime licenses (which this product does). The lifetime license option reads like a statement of intention that they'll be around for a long time, but when the TOS of the underlying services come into play as they do here, offering (or buying) a lifetime license seems like a gamble.


How about: The creator is trying to make some money and is not super concerned with the long view. For-profit activist software.

It's still questionably legal (at least here in Europe) to sell a yearly subscription for something and then have it stop working halfway through the year.

They should probably care about not getting sued so easily.


> [for the] lifetime [of the current version of the service]

>unlimited data [up to a certain limit]

> ~~no~~ gimmicks

I'm sure I'm missing some


Interesting perspective! Are we in the "fast fashion" period of software now?

Isn't making it a subscription more honest? Don't pay an outright price for this, just pay monthly until it stops working

That's an unreleased product in closed beta. Couldn't any name conflict with some unreleased product in closed beta?

He put many of the photographs right there in his blog post - he obviously does not see them as secrets.


I entered the prompt:

> Write me a stanza in the style of "The Raven" about Dick Cheney on a first date with Queen Elizabeth I facilitated by a Time Travel Machine invented by Lin-Manuel Miranda

It output a group of characters that I can virtually guarantee it has never seen before on its own.


Yes, but it has seen The Raven, and it has seen texts about Dick Cheney, first dates, Queen Elizabeth, time machines, and Lin-Manuel Miranda.

All of its output is based on those things it has seen.


What are you trying to point out here? Is there any question you can ask today that doesn't depend on some existing knowledge an AI would have seen?


The point I'm trying to make is that all LLM output is based on the likelihood of one word following another, given the prompt. That is literally all it's doing.

It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely.

ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math.

It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nonetheless.

[1] https://en.wikipedia.org/wiki/Clever_Hans
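To make the "stringing words together by likelihood" claim concrete, here is a toy sketch (my own illustration, not anything an actual LLM does at this scale): a bigram model that counts which word follows which in a tiny corpus, then generates by repeatedly sampling a plausible next word.

```python
import random

# Toy bigram "language model": next-word counts learned from a tiny corpus.
# This is a deliberately crude sketch of the claim above -- generation is
# just repeatedly picking a likely next word given what came before.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words have been observed to follow each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)  # fixed seed so the toy output is reproducible
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break  # no observed continuation; stop generating
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the", 5))
```

Real LLMs replace the count table with a transformer over subword tokens, but the outer loop is the same: score continuations, pick one, repeat.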


> all LLM output is based on likelihood of one word coming after the next word based on the prompt.

Right, but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it.


No, it does not reason anything. LLM "reasoning" is just an illusion.

When an LLM is "reasoning" it's just feeding its own output back into itself and giving it another go.
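The loop being described can be sketched in a few lines. `fake_model` here is a stand-in of my own invention (not any real API): it maps a prompt to a canned continuation, but the control flow is the point, since each "reasoning" step re-runs the model on the prompt plus everything generated so far.

```python
# Minimal sketch of the "feeding its own output back into itself" loop.
# fake_model is a hypothetical stand-in that returns canned text keyed
# on how many steps already appear in the transcript.
def fake_model(prompt: str) -> str:
    canned = {
        0: "Step 1: restate the problem.",
        1: "Step 2: try an approach.",
        2: "Answer: done.",
    }
    step = prompt.count("Step") + prompt.count("Answer")
    return canned.get(step, "")

def reason(prompt: str, max_steps: int = 5) -> str:
    transcript = prompt
    for _ in range(max_steps):
        continuation = fake_model(transcript)
        if not continuation:
            break
        transcript += "\n" + continuation  # output becomes the next input
        if continuation.startswith("Answer"):
            break
    return transcript
```

Whether you call that loop "reasoning" or "another go at next-token prediction" is exactly the dispute in this thread.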


This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction, about words (think, reason, ...) that have no firm definitions.


This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding, and you continue to complain about it anyway... You're just being biased and unreasonable.

And by the way, I don't think it's surprising that so many people are being unreasonable on this issue; there is a lot at stake and its implications are transformative.


Chess engines are not a comparable thing. Chess is a solved game. There is always a mathematically perfect move.


> Chess is a solved game. There is always a mathematically perfect move.

This is a good example of being confidently misinformed.

The best move is always a result of calculation. And the calculation can always go deeper or run on a stronger engine.


We know that chess can be solved, in theory. It absolutely isn't and probably will never be in practice. The necessary time and storage space doesn't exist.


Chess is absolutely not a solved game, outside of very limited situations like endgames. Just because a best move exists does not mean we (or even an engine) know what it is.


Is that so different from brains?

Even if it is, this sounds like "this submarine doesn't actually swim" reasoning.


> ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math

What am I as a human doing when I "do math"?

1. I look at the problem at hand, identifying what I have and what I need to get.

2. I then make a prediction using my pretrained neural net to find possible courses of action in a direction that "feels" right.

3. I use my pretrained neural net to find pairs of values that I can substitute for each other (think multiplication tables, standard results, etc.).

4. I repeat until I arrive at the answer or give up.

As a simple example, when I try to find 600×74+42 I remember the steps for multiplication. I recall the associated pairs of numbers from my tables and complete the multiplication step by step. I then recall the associated pairs of numbers for addition of single digits and add from left to right.

We need to remember that just because we do this quickly and subconsciously doesn't mean we can natively "do math"; we are just associating information using the neural networks we have trained.
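The "recall, don't compute" view above can be made literal in code: long multiplication done purely by looking up memorized single-digit products (the times tables the comment describes) and shifting by place value. This is just an illustration of the argument, not a claim about how brains or LLMs actually work.

```python
# "Memorized" single-digit multiplication facts, like a times table.
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def long_multiply(x: int, y: int) -> int:
    """Multiply by recalling table entries, never multiplying directly."""
    total = 0
    for i, dx in enumerate(reversed(str(x))):
        for j, dy in enumerate(reversed(str(y))):
            # each partial product is a table lookup, shifted by place value
            total += TIMES_TABLE[(int(dx), int(dy))] * 10 ** (i + j)
    return total

print(long_multiply(600, 74) + 42)  # the comment's example: 600*74 + 42 = 44442
```

The procedure never "computes" a multi-digit product; it only recalls stored pairs and combines them step by step, which is the comparison the comment is drawing.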


sigh; this argument is the new Chinese Room; easily described, utterly wrong.

https://www.youtube.com/watch?v=YEUclZdj_Sc


Next-token-prediction cannot do calculations. That is fundamental.

It can produce outputs that resemble calculations.

It can prompt an agent to input some numbers into a separate program that will do calculations for it and then return them as a prompt.

Neither of these is a calculation.


So you don't think 50T parameter neural networks can encode the logic for adding two n-bit integers for reasonably sized integers? That would be pretty sad.


They do not. The fundamental technology behind LLMs does not allow that to be the case. You are hoping that an LLM can do something that it cannot do.


https://arxiv.org/html/2502.16763v2

You are wrong, especially given that we are talking about models with 50T parameters.

Can they do arbitrary computations for arbitrarily long numbers? Nope. But that's not remotely the same statement, and they can trivially call out to tools to do that in those cases.


You do realize that training a neural net to do addition is a beginner level exercise in ML?


Humans can't do calculations either, by your definition. Only computers can.


Third things can exist. In other words, you’re implying a false dichotomy between “human computation” and “computer computation” and implying that LLMs must be one or the other. A pithy gotcha comment, no doubt.

Edit: the implication comes from demanding that the OP’s definition must be rigorous enough to cover all models of “computation”, and by failing to do so, it means that LLMs must be more like humans than computers.


After dismissing it for a long time, I have come around to the philosophical zombie argument. I do not believe that LLMs are conscious, but I also no longer believe that consciousness is a prerequisite for intelligence. I think at this point it is hard to deny that LLMs possess some form of intelligence (although not necessarily human-like). I think P-zombie is a fitting description.


I don't think P-zombies can exist. There must be some perceptible difference between an intelligence w/ consciousness and one without. The only way there wouldn't be a difference is if we are mistaken about the consciousness (either both have it or neither do).


> There must be some perceptible difference between an intelligence w/ consciousness and one without

I think there are differences, and I think we can make good guesses, but I'm not sure we can reliably distinguish a P-zombie from a normal human by their behaviour with 100% accuracy.


In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

-- from the jargon file


> All of its output is based on those things it has seen.

Virtually all output from people is based in things the person has experienced.

People aren't designed to objectively track each and every event or observation they come across, so it's harder to verify. But we only output what has been input to us before.


I think it's because traditionally, software engineering was a field where you built your own primitives, then composited those, etc... so that the entire flow of data was something that you had a mental model for, and when there was a bug, you simply sat down and fixed the bug.

With the rise of open source, there started to be more black-box compositing, you grabbed some big libraries like Django or NumPy and honestly just hoped there weren't any bugs, but if there were, you could plausibly step through the debugger and figure out what was going wrong and file a bug report.

Now, the LLMs are generating so many orders of magnitude more code than any human could ever have the chance to debug, you're basically just firing this stuff out like a firehose on a house fire, giving it as much control as you can muster but really just trusting the raw power of the thing to get the job done. And, bafflingly, it works pretty well, except in those cases where it doesn't, so you can't stop using the tool but you can't really ever get comfortable with it either.


> I think it's because traditionally, software engineering was a field where you built your own primitives, then composited those, etc... so that the entire flow of data was something that you had a mental model for

Not just that, but the fact that with programming languages you can have the utmost precision to describe _how_ the problem needs to be solved _and_ you can have some degree of certainty that your directions (code) will be followed accurately.

It’s maddening to go from that to using natural language which is interpreted by a non-deterministic entity. And then having to endlessly iterate on the results with some variation of “no, do it better” or, even worse, some clever “pattern” of directing multiple agents to check each other’s work, which you’ll have to check as well eventually.


> bafflingly, it works pretty well, except in those cases where it doesn't

So as a human, you would make the judgement that the cases where it works well more than make up for the mistakes. Comfort is a mental state, and discomfort can easily be defeated by separating your own identity and ego from the output you create.


I mean, you could make that judgment in some cases, but clearly not all. If you use AI to ship 20 additional features but accidentally delete your production database you definitely have not come out ahead.

https://www.reddit.com/r/OpenAI/comments/1m4lqvh/replit_ai_w...

