Hacker News | diwank's comments

had a bad experience with pg_search (paradedb) in the past

Which bad experience?

we have been using pg_textsearch in production for a few weeks now, and it's been fairly stable and super speedy. we used to use paradedb (aka pg_search -- it's quite annoying that the two are so similarly named), but paradedb was extremely unstable and led to serious data corruption a bunch of times. in fact, before switching to pg_textsearch, we had switched over to plain trigram search because paradedb was tanking our db so often...
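For context on the trigram fallback mentioned above: Postgres's pg_trgm extension matches strings by comparing their sets of three-character substrings. A simplified pure-Python sketch of the idea (the word padding and Jaccard scoring mirror pg_trgm's documented behavior; this is an illustration, not the extension's actual code):

```python
def trigrams(text: str) -> set[str]:
    # pg_trgm-style: lowercase, pad each word with two leading
    # spaces and one trailing space, then take 3-char windows
    grams = set()
    for word in text.lower().split():
        padded = f"  {word} "
        grams.update(padded[i:i + 3] for i in range(len(padded) - 2))
    return grams

def similarity(a: str, b: str) -> float:
    # Jaccard similarity of the trigram sets, like pg_trgm's
    # similarity() function: shared trigrams / total trigrams
    ta, tb = trigrams(a), trigrams(b)
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)
```

In Postgres the equivalent query would use the `%` operator or `similarity()` with a GIN/GiST trigram index, which tolerates typos in a way plain full-text search does not.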

also shoutout to tj for being super responsive on github issues!


this is so disingenuous on symbolica's part. these insincere announcements just make it harder for genuine attempts and novel ideas to get taken seriously

Angels & Demons anyone?

They mention Dan Brown in the article! This book occupies a special place in my heart and I was glad to see it mentioned.

Not being funny but I only ever see Dan Brown mentioned in a mocking tone. I've genuinely no idea, but are the books actually good in some sense?

They are very entertaining stories; that's why they're so popular. If that's what you're looking for, then you'll probably like them. If you're easily annoyed by plot holes or historical/scientific inaccuracies then you might not, and if you're looking for sophisticated or artistic prose then he isn't the right author. Obviously "good writing" is subjective, but I think most people would agree that Dan Brown's writing is relatively simplistic, and that often isn't a problem when the story is good.


I just read Angels and Demons. My take is that it is quite gripping and entertaining, and has no other virtues. The prose is just ok, and everything built above that is increasingly nonsensical. However, I'll endorse burkaman's reply as an equally accurate and more charitable review. :)

Sorry for the late reply. When I was studying for my A Level physics, my teacher and I read Angels and Demons at the same time. It's a silly story, really, but it scratched that sci-fi bit of my brain in the right way at the right time. It's pulp fiction, but sometimes that's just what you want.

His books are perhaps in the same category as Nickelback albums: people love to rag on them, but if you look at the sheer number of units shifted, clearly lots of folks enjoy them.

Read them for what they are (fictional novels with allusions to truth and fact) and you will truly enjoy a good story!

I'm currently writing a review/analysis of this book, so this was certainly a funny story to run into.

opus 4.6 gets it right more than half the time


Working on Memory Store: persistent, shared memory for all your AI agents.

https://memory.store

The problem: if you use multiple AI tools (Claude, ChatGPT, Cursor, etc.), none of them know what the others know. You end up maintaining .md files, pasting context between chats, and re-explaining your project every time you start a new conversation. Power users spend more time briefing their agents than doing actual work.

Memory Store is an MCP server that ingests context from your workplace tools (Slack, email, calendar) and makes it available to any MCP-compatible agent. Make a decision in one tool, the others know. Project status changes, every agent is up to date.
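To make the mechanism concrete, here is a minimal sketch of the shared-memory idea: agents write facts once and any agent can read them back later. All names here (`MemoryStore`, `remember`, `recall`) are invented for illustration and are not Memory Store's actual API; a real MCP server would expose these as tools over the protocol and use embeddings rather than keyword matching.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    source: str            # e.g. "slack", "email", "calendar"
    created_at: float = field(default_factory=time.time)

class MemoryStore:
    """Toy shared memory: any agent can write, any agent can read."""

    def __init__(self) -> None:
        self._memories: list[Memory] = []

    def remember(self, text: str, source: str) -> None:
        # Ingest a fact from a workplace tool
        self._memories.append(Memory(text, source))

    def recall(self, query: str) -> list[str]:
        # Naive substring match; a real server would rank by
        # semantic similarity
        q = query.lower()
        return [m.text for m in self._memories if q in m.text.lower()]

store = MemoryStore()
store.remember("Ship v2 on March 1", source="slack")
print(store.recall("v2"))  # → ['Ship v2 on March 1']
```

The point of the design is that the write happens once, at the source tool, and every MCP-compatible agent reads from the same store instead of being briefed separately.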

We ran 35 in-depth user interviews and surveyed 90 people before writing a line of product code — 95% had already built workarounds for this problem (custom GPTs, claude.md templates, copy-paste workflows). The pain is real and people are already investing effort to solve it badly.

Early feedback has been encouraging: one founder tracked investor conversations through Memory Store and estimates he talked to 4-5x more people because his agents could draft contextual replies without manual briefing. It helped close his round.

Live in beta now. Would love feedback from anyone who's felt this pain! :)


I implemented a process:

- First, a snapshot: generated day-by-day files from the last 2 years of commits, and used Claude to enrich every commit message (why, how, where, and output). Also created a File.java.md file with the enriched commit messages per source file.
- With the history in place, embedded all of it into a PostgreSQL database.
- Implemented an MCP server to query anything.
- Created watchdogs to follow up on projects I set up: a git hook creates stub files per commit.
- The process then enriches the new stub files and indexes them into PostgreSQL.

I did this for all internal projects. And the first rule in CLAUDE.md is to ask the MCP server for any information. Now Claude knows what other related projects did and what it should adopt from them.
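The stub-file step from the process above can be sketched as follows: each commit gets a markdown stub with placeholder sections that the later enrichment pass (Claude, in the workflow described) fills in. The `Commit` shape and field names here are illustrative, not the author's actual code:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    date: str
    message: str
    files: list[str]

def commit_stub(c: Commit) -> str:
    # One stub file per commit; the enrichment pass replaces the
    # TODO sections with why/how/where/output explanations.
    lines = [
        f"# {c.sha[:8]} ({c.date})",
        f"message: {c.message}",
        "why: TODO",
        "how: TODO",
        "where: " + ", ".join(c.files),
        "output: TODO",
    ]
    return "\n".join(lines)
```

A post-commit git hook would call something like this with the commit's metadata, and the watchdog process would later pick up any stub still containing TODO markers, enrich it, and index it.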


I don't think this is Cerebras. Running on Cerebras would change model behavior a bit, and while it could potentially get a ~10x speedup, it'd also be more expensive. Most likely this is them writing new, more optimized kernels, maybe for the Blackwell series?


Fair point, but it remains to be answered: why is this speedup only available in the API and not in ChatGPT?


yeah I agree. this is really unfortunate because it seems there is something systemic at play here which has become tangled up in a cult of personality, and that's made rigorous scientific investigation very difficult



Just-in-time UI is an incredibly promising direction. I don't expect (in the near term) that entire apps will do this, but many small parts of them would really benefit. For instance, website/app tours could just be generated atop the existing UI.

