> Instead of downloading files to your desktop and then uploading them to ChatGPT, you can now add various file types directly from your Google Drive or Microsoft OneDrive. This allows ChatGPT to understand your Google Sheets, Docs, Slides, and Microsoft Excel, Word, and PowerPoint files more quickly.
As far as I know, this is the first first-party integration of another cloud product in ChatGPT (vs. "GPTs", which are all third-party)?
Could you sidestep inference altogether? Just return the top N results by cosine similarity (or full text search) and let the user find what they need?
https://ollama.com models also work really well on most modern hardware
I'm running ollama, but it's still slow on cloud VMs (it's actually quite fast on my M2). My working theory is that on standard cloud VMs, memory <-> CPU bandwidth is the bottleneck. I'm looking into vLLM.
And as to sidestepping inference, I can totally do that. But I think it's so much better to be able to ask the LLM a question, run a vector similarity search to pull relevant content, and then have the LLM summarize this all in a way that answers my question.
You might not want them to have that information, but I think Google's history search now supports that for Chrome users: https://myactivity.google.com/myactivity
Firefox (on linux). But it seems to also happen in Chrome.
I took a quick screen capture: https://imgur.com/a/jpPtj2f - you can see when I click back, the button shows the spinning icon.
Thanks! I couldn't see it on Chrome due to React Devtools interference.
Turns out the page was ending up in bfcache (https://web.dev/articles/bfcache), which in turn led to the button's loading state being preserved when clicking back. Listening to the `pageshow` event enables one to handle that somewhat gracefully.
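A minimal sketch of that `pageshow` fix: when a page is restored from bfcache, the event fires with `persisted === true`, and that's the cue to reset any in-flight UI state. The `.is-loading` class name is a hypothetical example of however the button's spinner state is tracked.

```javascript
// A bfcache restore fires `pageshow` with `persisted: true`;
// a normal navigation fires it with `persisted: false`.
function shouldResetUi(event) {
  return event.persisted === true;
}

// Guarded so the snippet also runs outside a browser.
if (typeof window !== "undefined") {
  window.addEventListener("pageshow", (event) => {
    if (shouldResetUi(event)) {
      // Clear any spinner state that was frozen into the cached page.
      document.querySelectorAll("button.is-loading").forEach((button) => {
        button.classList.remove("is-loading");
        button.disabled = false;
      });
    }
  });
}
```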
There's a place for both, yes! I think Hacker Search should offer both (https://news.ycombinator.com/item?id=40241332) and ideally figure out intelligently which to use based on your query.