I like robots, but with the current uncertainty about capabilities I wouldn’t buy one outright. I would like to rent, but I can’t. So I thought that if I’m not alone, this could be useful to some. It’s a market test, not a product.
> Everyone sees themself as at the "top" of their information pile. Many years ago I felt that people asking "give me the view from 30,000ft" were just lazy and entitled.
That’s exactly how I feel. I do go ahead and craft stories and make things nice. But after years of doing that I still wonder whether all that “simplicity” does the world any good. All the ignorance and missed learning opportunities…
It's always worth giving people the benefit of the doubt the first time. After that... I guess there's one exception I still make. I learned that it's called "sealioning": when someone repeatedly, dishonestly professes ignorance as a way of making an obtuse argument. I see a bit of that wilful ignorance here on HN when people say things like "I don't see why... <self-evident, commonly known truth>"
Great to have these improvements! However, it seems the overhead of creating the shareable files exceeds the utility in most cases. I would love for this work to continue until we get one-line "pip share" and "pip load" type tooling.
They say that the move is to keep up their ethics policy. I feel The Verge has had so many "push-to-buy" articles in the past that paying for the few well-written articles makes little sense. So, very sadly, I am saying goodbye to The Verge.
Wondering about the wider implications. If technical interactions online decline, what happens to RL, and how do we rate human competence against an AI once society develops the habit of asking an AI first? Will we start to constantly question human advice or responses, and what does that do to the human condition?
I am active in a few specialized fields, and already I have to defend my advice against poorly crafted prompt responses.
> Will we start to constantly question human advice or responses and what does that do to the human condition?
I'm surprised when people don't already engage in questioning like that.
I've been doing it for decades at this point.
Much of the worst advice and information I've ever received has come from expensive human so-called "professionals" and "experts" like doctors, accountants, lawyers, financial advisors, professors, journalists, mechanics, and so on.
I now assume that anything such "experts" tell me is wrong, and too often that ends up being true.
Sourcing information and advice from a larger pool of online knowledge, even if the sources may be deemed "amateur" or "hobbyist" or "unreliable", has generally provided me with far better results and outcomes.
If an LLM is built upon a wide base of source information, I'm inclined to trust what it generates more than what a single human "professional" or "expert" says.
If an LLM is built upon a wide base of source information, I'm inclined to trust what it generates more than what a single human "professional" or "expert" says.
---------
That, and if the prompt to the LLM has been made with a minimum of thought, you may get a reasonable answer, perhaps even a better one. The consensus is powerful, but "wide base" and "thoughtful prompt" are big caveats, because in specialised fields they often don't hold. So I am surprised people are inclined to believe the machine over the human.
does this mean you trust complete randoms just as much?
if i need advice on repairing a weird unique metal piece on a 1959 corvette, im going to trust the advice of an expert in classic corvettes way before i trust the advice of my barber who knows nothing about cars but confidently tells me to check the tire pressure.
this “oh no, experts have been wrong before” we see so much is wild to me. in nuanced fields i’ll take the advice of experts any day of the week waaaaaay before i take the advice of someone whose entire knowledge of the topic comes from a couple of twitter posts and a couple of youtube videos but whose rhetoric sounds confident. confidently wrong dipshits and sophists are one of the plagues of the modern internet.
in complex nuanced subjects are experts wrong sometimes? absofuckinlutely. in complex nuanced subjects are they correct more often than random “did-my-own-research-for-20-minutes-but-got-distracted-because-i-can’t-focus-for-more-than-3-paragraphs-but-i-sound-confident guy?” absofuckinlutely.
does this mean you trust complete randoms just as much?
-----
Personally I trust the consensus, not necessarily one random person.
I have the same problem as the guy above: at this point I assume doctors are almost always wrong if the problem isn't something really common or specific enough.
Also, AI has lulled me with positive feedback instead of being critical and direct. How would this tool address that challenge?