This is why I love iOS in many ways: it's very difficult for an app to have "action at a distance" inside your OS. If you force-quit the app it's pretty much like it never existed.
I haven't used this one, but WisprFlow is vastly better than the built-in functionality on macOS. Apple is way behind even startups, even for fundamental AI functionality like transcribing speech.
I can think of a benefit to having 1Gbps. I can't think of anything a home user would get from 25Gbps that a government should actually care about (watching 10 different 8K Netflix streams at the same time is not a human rights issue).
Why does it have to work long term? Claude Code probably built it in 2 hours. Sell it for as long as it works. If it provides some value to some people during that time, good for them.
What a rotten state of affairs that we’re now openly suggesting producing garbage with the least effort possible and selling it until caught. We used to criticise those who did that, calling them spammers and scammers and worse. Now, “telling some LLM to take a dump and trying to sell it to some chumps without a sense of smell” is viewed as a smart business model. Anything for an extra buck.
Why is it garbage? If you want something to block YouTube shorts, here's something that does it. It won't work forever, but you won't pay forever. Not all software needs to be high-craft and high-quality. Sometimes it can be just something a guy sells you off the back of his truck.
You misunderstood. I’m not criticising this specific software, I’m criticising the attitude suggested by the parent comment. It was a general commentary, it has nothing to do with this particular app, which I have no idea if it was built that way.
Lifetime payment was highly requested by users (including existing users), since they have subscription fatigue. Since I use the app every day to reduce my own screen time, I'm extremely motivated to fix every bug and make the UX as seamless as possible.
I agree with this in principle, but this seems conceptually at odds with selling lifetime licenses (which this product does). The lifetime license option reads like a statement of intention that they'll be around for a long time, but when the TOS of the underlying services come into play as they do here, offering (or buying) a lifetime license seems like a gamble.
It's still questionably legal (at least here in Europe) to sell a yearly subscription for something and then have it stop working halfway through the year.
They should probably care about not getting sued so easily.
> Write me a stanza in the style of "The Raven" about Dick Cheney on a first date with Queen Elizabeth I facilitated by a Time Travel Machine invented by Lin-Manuel Miranda
It output a string of characters that I can virtually guarantee it has never seen before.
What are you trying to point out here? Is there any question you could ask today that isn't dependent on some existing knowledge an AI would have seen?
The point I'm trying to make is that all LLM output is based on likelihood of one word coming after the next word based on the prompt. That is literally all it's doing.
It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely.
ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math.
It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nonetheless.
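For what it's worth, the loop being described ("likelihood of one word coming after the next") can be sketched in a few lines. This is a toy illustration, not how any real model works: a hard-coded bigram table stands in for the neural net, and generation just picks the most likely continuation at each step.

```python
# Toy next-token generation: at each step, look up the distribution over
# possible next words and take the most likely one. Real LLMs replace this
# hand-written table with a learned neural network, but the loop is the same.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "<end>": 0.1},
    "dog": {"ran": 1.0},
    "down": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt_word, max_tokens=10):
    out = [prompt_word]
    for _ in range(max_tokens):
        dist = bigram_probs[out[-1]]
        nxt = max(dist, key=dist.get)  # greedy: most probable next token
        if nxt == "<end>":
            break
        out.append(nxt)
    return out

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Whether that loop deserves the word "thinking" once the table is replaced by a trillion-parameter network is exactly the dispute in this thread.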
This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction, about words (think, reason, ..) that have no firm definitions.
This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding, and you continue to complain about it anyway... You're just being biased and unreasonable.
And by the way, I don't think it's surprising that so many people are being unreasonable on this issue: there is a lot at stake and its implications are transformative.
We know that chess can be solved, in theory. It absolutely isn't and probably will never be in practice. The necessary time and storage space doesn't exist.
Chess is absolutely not a solved game, outside of very limited situations like endgames. Just because a best move exists does not mean we (or even an engine) know what it is
> ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math
What am I as a human doing when I "do math"?
1. I look at the problem at hand, identifying what I have and what I need to get.
2. I then do a prediction using my pretrained neural net to find possible courses of action in a direction that "feels" right.
3. I use my pretrained neural net to find pairs of values I can substitute for each other (think multiplication tables, standard results, etc.).
4. Repeat until I arrive at the answer or give up.
As a simple example, when I try to find 600×74+42 I remember the steps for multiplication. I recall the associated pairs of numbers from my tables and complete the multiplication step by step. I then recall the associated pairs of numbers for addition of single digits and add from left to right.
We need to remember that just because we are fast at doing this and are able to do it subconsciously it doesn't mean that we can natively do math, we just do association of information using the neural networks we have trained.
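The procedure described above ("recall memorized pairs, then combine mechanically") can be sketched directly. This is my own illustrative sketch of the commenter's point: only single-digit facts are "memorized" in a lookup table, and everything else is bookkeeping.

```python
# Long multiplication using only memorized single-digit products,
# mirroring the "recall pairs from my tables" procedure described above.
times_table = {(a, b): a * b for a in range(10) for b in range(10)}

def long_multiply(x, y):
    total = 0
    # Walk the digits of each number from least to most significant,
    # recall the memorized single-digit product, and shift by place value.
    for i, xd in enumerate(reversed(str(x))):
        for j, yd in enumerate(reversed(str(y))):
            total += times_table[(int(xd), int(yd))] * 10 ** (i + j)
    return total

print(long_multiply(600, 74) + 42)  # 44442
```

Nothing in that loop "understands" multiplication; it just retrieves memorized associations and composes them, which is the commenter's claim about human arithmetic too.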
So you don't think 50T-parameter neural networks can encode the logic for adding two n-bit integers, for reasonably sized integers? That would be pretty sad.
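To give a sense of how small that logic is: adding two n-bit integers is just a ripple-carry adder, one full adder per bit. Here is that circuit written out in code (my own sketch, for scale comparison with a 50T-parameter model).

```python
# Ripple-carry addition of two little-endian bit lists of equal length.
# Each loop iteration is one full adder: sum bit plus carry-out.
def ripple_add(a_bits, b_bits):
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)              # sum bit
        carry = (a & b) | (carry & (a ^ b))    # carry-out
    out.append(carry)                          # final carry
    return out

# 6 + 3 = 9; 6 is [0,1,1] and 3 is [1,1,0] in little-endian bits.
print(ripple_add([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1] -> 9
```

A network with trillions of parameters has vastly more capacity than this handful of gates needs, which is the point being made.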
You are wrong, especially given that we are talking about models with 50T parameters.
Can they do arbitrary computations for arbitrarily long numbers? Nope. But that's not remotely the same statement, and they can trivially call out to tools to do that in those cases.
Third things can exist. In other words, you’re implying a false dichotomy between “human computation” and “computer computation” and implying that LLMs must be one or the other. A pithy gotcha comment, no doubt.
Edit: the implication comes from demanding that the OP’s definition must be rigorous enough to cover all models of “computation”, and by failing to do so, it means that LLMs must be more like humans than computers.
After dismissing it for a long time, I have come around to the philosophical zombie argument. I do not believe that LLMs are conscious, but I also no longer believe that consciousness is a prerequisite for intelligence. I think at this point it is hard to deny that LLMs possess some form of intelligence (although not necessarily human-like). I think P-zombies is a fitting description.
I don't think P-zombies can exist. There must be some perceptible difference between an intelligence w/ consciousness and one without. The only way there wouldn't be a difference is if we are mistaken about the consciousness (either both have it or neither do).
> There must be some perceptible difference between an intelligence w/ consciousness and one without
I think there are differences, and I think we can make good guesses, but I'm not sure we can reliably distinguish a P-zombie from a normal human by their behaviour with 100% accuracy.
> All of its output is based on those things it has seen.
Virtually all output from people is based in things the person has experienced.
People aren't designed to objectively track each and every event or observation they come across, so it's harder to verify. But we only output what has been input to us before.
I think it's because traditionally, software engineering was a field where you built your own primitives, then composited those, etc... so that the entire flow of data was something that you had a mental model for, and when there was a bug, you simply sat down and fixed the bug.
With the rise of open source, there started to be more black-box compositing, you grabbed some big libraries like Django or NumPy and honestly just hoped there weren't any bugs, but if there were, you could plausibly step through the debugger and figure out what was going wrong and file a bug report.
Now, the LLMs are generating so many orders of magnitude more code than any human could ever have the chance to debug, you're basically just firing this stuff out like a firehose on a house fire, giving it as much control as you can muster but really just trusting the raw power of the thing to get the job done. And, bafflingly, it works pretty well, except in those cases where it doesn't, so you can't stop using the tool but you can't really ever get comfortable with it either.
> I think it's because traditionally, software engineering was a field where you built your own primitives, then composited those, etc... so that the entire flow of data was something that you had a mental model for
Not just that, but the fact that with programming languages you can have the utmost precision to describe _how_ the problem needs to be solved _and_ you can have some degree of certainty that your directions (code) will be followed accurately.
It’s maddening to go from that to using natural language which is interpreted by a non-deterministic entity. And then having to endlessly iterate on the results with some variation of “no, do it better” or, even worse, some clever “pattern” of directing multiple agents to check each other’s work, which you’ll have to check as well eventually.
> bafflingly, it works pretty well, except in those cases where it doesn't
So as a human, you would make the judgement that the cases where it works well more than make up for the mistakes. Comfort is a mental state, and discomfort can be defeated by separating your own identity and ego from the output you create.
I mean, you could make that judgment in some cases, but clearly not all. If you use AI to ship 20 additional features but accidentally delete your production database you definitely have not come out ahead.