Hacker News | maxutility's comments

Here’s John H. Cochrane (The Grumpy Economist, on Substack) praising Refine. This is the post that first clued me in to the service: https://www.grumpy-economist.com/p/refine

Here’s the Wikipedia article on Cochrane: https://en.wikipedia.org/wiki/John_H._Cochrane

I haven’t tried Refine myself, but from secondhand reports it seems potentially well designed and useful.


It seems like Disney’s departure from its business “deal” with OpenAI is newsworthy in its own right, distinct from the Sora closure. Six months ago Sam Altman was announcing massive deals left and right; now nearly all of the impressive deals have fallen through or been dramatically scaled back.

It all seems like a big pump-and-dump scheme doesn't it?

I found Sam's early 2015 posts on machine superintelligence and regulation [1] [2] to be even more interesting in hindsight, given OpenAI's accelerationist bent of late, OpenAI president Greg Brockman's lobbying efforts against AI regulation, and frequent accusations of attempted regulatory capture.

[1] https://blog.samaltman.com/machine-intelligence-part-1 [2] https://blog.samaltman.com/machine-intelligence-part-2

Sam's recommendations at the time include:

1) Provide a framework to observe progress…

2) Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case. For example, beyond a certain checkpoint, we could require development happen only on airgapped computers…, require that certain parts of the software be subject to third-party code reviews, etc.

3) Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroeth law), b) it should detect other SMI being developed but take no action beyond detection, c) other than required for part b, have no effect on the world. …

4) Provide lots of funding for R+D for groups that comply with all of this, especially for groups doing safety research.

5) Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI…

Also, in his acknowledgments he gives the greatest thanks to his onetime partner, now rival, Dario Amodei.


Arizona is on permanent standard time rather than permanent DST, and is thus unaffected by the permanent-DST winter-mornings issue.


Previous discussion of the offending claim: https://news.ycombinator.com/item?id=45633482

An OpenAI researcher tweeted: “Using thousands of GPT5 queries, we found solutions to 10 Erdős problems that were listed as open: 223, 339, 494, 515, 621, 822, 883 (part 2/2), 903, 1043, 1079.

Additionally for 11 other problems, GPT5 found significant partial progress that we added to the official website: 32, 167, 188, 750, 788, 811, 827, 829, 1017, 1011, 1041. For 827, Erdős's original paper actually contained an error, and the work of Martínez and Roldán-Pensado explains this and fixes the argument.”

This was taken (out of context?) to be claiming ChatGPT solved the open problems, when in fact it “just” found existing solutions through a literature review. (Though an earlier tweet in the same thread made the literature-review interpretation more explicit.)

The ensuing controversy over whether this was false hype buried a potentially significant demonstration of LLMs’ ability to unlock lost and forgotten knowledge; Sebastien explains why and makes the case here that it is a big deal.


It would be great to implement a browser extension that lets you highlight a term or phrase on any webpage and open a GPT rabbit hole for that term or phrase.

@maxkreiger - if I were to build one as a proof of concept, would you object to me having it hyperlink to your UI?
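A minimal sketch of the extension idea above, assuming a Chrome-style extension using the chrome.contextMenus API. The menu-item id and the GPT URL are illustrative assumptions, not any real product's endpoint; the guard lets the pure helper run outside a browser too.

```javascript
// Pure helper (hypothetical): build a query URL from the highlighted text.
// The base URL is an assumed placeholder, not a confirmed GPT endpoint.
function buildRabbitHoleUrl(selection) {
  const query = encodeURIComponent(selection.trim());
  return `https://chat.openai.com/?q=${query}`;
}

// Extension wiring: only runs inside a browser that exposes chrome.* APIs,
// and requires the "contextMenus" permission in manifest.json.
if (typeof chrome !== "undefined" && chrome.contextMenus) {
  chrome.runtime.onInstalled.addListener(() => {
    chrome.contextMenus.create({
      id: "gpt-rabbit-hole",                 // assumed id, anything unique works
      title: 'Explore "%s" with GPT',        // %s is replaced by the selection
      contexts: ["selection"],               // show only when text is highlighted
    });
  });

  chrome.contextMenus.onClicked.addListener((info) => {
    if (info.menuItemId === "gpt-rabbit-hole" && info.selectionText) {
      chrome.tabs.create({ url: buildRabbitHoleUrl(info.selectionText) });
    }
  });
}
```

A proof of concept could swap the URL builder to point at maxkreiger's UI instead, which is all the hyperlinking mentioned above would take.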


You could call it “bootstrapping is all you need” :)


Act III of episode 585 of This American Life (a WBEZ radio show broadcast on NPR and distributed via podcast) discussed this phenomenon and spoke with a few individuals with HSAM:

https://www.thisamericanlife.org/585/in-defense-of-ignorance...

One of the individuals was a script supervisor in Hollywood, responsible for ensuring continuity between scenes during filming. But the act also ventures into powerful, emotionally resonant territory, touching on the bittersweet implications of experiencing loss when memories never fade.


I re-listened to this, and I think the whole episode is worth the time investment if you have not heard it. The first act is with director Lulu Wang and the real-life inspiration for her movie The Farewell, and the second act is an interview with the Dunning behind the infamous Dunning-Kruger effect.


I submitted using the title from the article metadata instead of what's displayed in the article, since the metadata title was more descriptive and less clickbaity.


Related article from the same series: “We Can’t Stop Writing Paper Checks. Thieves Love That.” [0]

[0] https://www.nytimes.com/2023/12/09/business/check-fraud.html

