Agreed, exa is great. In particular, it's the best tool I've found for fast web retrieval on topics slightly too complex for Perplexity, Google, etc. to handle.
The results from their website aren't shareable, but their lists of references don't seem relevant (e.g., they miss the fact that shuttling needs to be in 3D, and the list of experiments for laser cooling to BEC is missing all of the relevant papers).
I think that, like other research tools, they're more focused on summarizing and extracting information than on the discovery process (though they're similar to us in that they say they do multi-stage retrieval, and it takes some time).
For a meta-analysis, you might want to try the "extend" feature. It sends the agent out to gather more papers (initially we only analyze 100 carefully), so if your report says something like "only 55% discovered", it could be useful.
(Also, if you want, you can share your report URL here so others can take a look.)
A few possibilities:
- We only use abstracts for now, so make sure you ask for something that would appear there.
- Did you ask for a scientific topic? (Sometimes people ask for papers by a specific author, journal, etc.; the system isn't engineered to find those efficiently.)
Regarding citations: we use them, but only to figure out which papers to look at next in the iterative discovery process, not to decide what ranks higher or lower at the end (unless you explicitly ask for citations). The final ranking is based on topic match.
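Roughly, the loop looks like the sketch below (a toy illustration, not our real code: the paper data, fetch_citations, and topic_score are hypothetical stand-ins). Citations expand the candidate pool; the final sort ignores them entirely.

```python
from typing import Dict, List, Set, Tuple

# Hypothetical toy data: paper id -> (topic-match score, cited paper ids).
# In a real system these would come from a paper index and a citation graph.
PAPERS: Dict[str, Tuple[float, List[str]]] = {
    "A": (0.9, ["B", "C"]),
    "B": (0.4, ["D"]),
    "C": (0.7, []),
    "D": (0.8, []),
}

def fetch_citations(pid: str) -> List[str]:
    return PAPERS[pid][1]

def topic_score(pid: str) -> float:
    return PAPERS[pid][0]

def discover(seeds: List[str], rounds: int = 2) -> List[str]:
    seen: Set[str] = set(seeds)
    frontier = list(seeds)
    for _ in range(rounds):
        nxt: List[str] = []
        for pid in frontier:
            # Citations only steer which papers we *look at* next...
            for cited in fetch_citations(pid):
                if cited not in seen:
                    seen.add(cited)
                    nxt.append(cited)
        frontier = nxt
    # ...while the final ranking is purely by topic match, not citation count.
    return sorted(seen, key=topic_score, reverse=True)

print(discover(["A"]))  # -> ['A', 'D', 'C', 'B']
```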
If you're comfortable with it, posting the report URLs here would let us debug.
We're trying to bias the system toward more autonomous execution, rather than a "copilot"-like experience where you iterate back and forth with the system. That lets us run more useful subroutines in parallel on the backend, as long as you've specified your complex goal clearly.
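To illustrate why a clear up-front goal helps, here's a minimal sketch (the subroutine names are made up, not our actual pipeline): when the system doesn't need to pause for user round-trips, the independent stages can all fan out at once.

```python
import asyncio
from typing import List

# Toy sketch: with a fully specified goal, independent subroutines
# run concurrently instead of waiting on back-and-forth with the user.
# Subroutine names and sleeps are hypothetical stand-ins for real work.

async def keyword_search(goal: str) -> str:
    await asyncio.sleep(0.1)
    return f"keyword hits for {goal!r}"

async def citation_expansion(goal: str) -> str:
    await asyncio.sleep(0.1)
    return f"citation-graph candidates for {goal!r}"

async def abstract_screening(goal: str) -> str:
    await asyncio.sleep(0.1)
    return f"screened abstracts for {goal!r}"

async def run(goal: str) -> List[str]:
    # All three subroutines launch at once; no user input needed mid-run.
    return await asyncio.gather(
        keyword_search(goal),
        citation_expansion(goal),
        abstract_screening(goal),
    )

print(asyncio.run(run("laser cooling to BEC")))
```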