Apologies if this is off-topic, but having spent more time than I'd like to admit creating and editing webapps that emerged entirely out of Claude Code, Cursor, Codex, etc. with minimal to no direct code-writing by their human subscribers, I can say this website has strong AI smells:
- Inter font
- all caps section headers
- Lucide icons
- em dashes, of course the em dashes
- bubble status badges (of course with all-caps "IN PROGRESS" and "COMING SOON" that mean the same thing)
- Uncited claims like "Most founders are overconfident in the 70-90% range" and "Most people score between 0.20 and 0.30"
- No fewer than FOUR blog articles, all published April 4
None of these points is by any means a dealbreaker. And after all, I suppose a product should be judged on its merits and the value it delivers to its users, not on the tools used to create it. But together, the frontend bears the unmistakable generative AI "smell" that telegraphs that the human(s) directing the tools building this app might be optimizing for speed over rigor and quality (further supported by the volunteer QA/QC happening in the comments), and that the app may only be as good and reliable as the uncritically accepted outputs of a $20/month coding assistant.
That's all true. I'm a solo founder and have been using Claude heavily to build this. It definitely shows in many places, and I'll make sure to clean those up. I did not expect to get this many visits from a Show HN (almost 1,600 quiz takers in the last few hours alone). The core math is sound, but I agree the presentation needs more care. Appreciate the honest feedback!
The source, personal significance, and intent of images and videos will matter a lot, though. I'll cherish photos of my family members forever, regardless of technical excellence.
Or a photo of my freshman dorm room during exam season. Subpar image quality, lousy lighting, etc. but so many memories, positive and negative, are elicited by that fleeting glimpse from an era of excitement, boredom, stress, uncertainty, and optimism, not knowing where I was going in life, when I'd ever look back at that snapshot, but deciding on a whim to grab it during a break from cramming topics now long forgotten.
But I roll my eyes at the idea of injecting my likeness into a short clip depicting random over-the-top action sequences, no matter how photorealistic, because I've never wanted to do that.
Do you mean the new default datetime resolution of microseconds instead of the previous nanosecond resolution? Obviously this will require adjustments to any code that depends on ns resolution, but I'd bet that's a tiny minority of all pandas code ever written. Do you have a particular use case in mind for the problems this will cause?
I would describe it as the huge majority, reflecting on my pandas use over the years. Pretty much all of the data worth exploring in pandas (over Excel, some data GUI, or Polars) involves timestamps.
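For what it's worth, code that genuinely needs nanoseconds doesn't have to rely on the default at all. A minimal sketch, assuming pandas ≥ 2.0 (where `Series.dt.as_unit` and explicit `datetime64[us]`/`datetime64[ns]` dtypes exist); the sample timestamp is made up:

```python
import pandas as pd

# Pin the resolution explicitly at construction time, independent of
# whatever the library-wide default happens to be.
s = pd.Series(["2024-01-01 00:00:00.123456"], dtype="datetime64[ns]")
print(s.dtype)  # datetime64[ns]

# Convert to microsecond resolution (the new default being discussed).
us = s.dt.as_unit("us")
print(us.dtype)  # datetime64[us]

# Code that truly depends on nanoseconds can pin the unit back.
ns = us.dt.as_unit("ns")
print(ns.dtype)  # datetime64[ns]
```

So the migration burden for ns-dependent code is mostly a matter of making the previously implicit resolution explicit.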
This could have been due to refactoring a text written by the stated, human author. Not only is Anthropic a deeply moral company — emdash — it blah blah.
Also, you slipped when you said the word "genuine" was in there `43` times. In actuality, I counted 46 instances, higher than the number you gave.
Would you like to provide actual proof that your favorite toy benefits people's health before daring others to challenge you? The imagined data you've yet to provide can't possibly justify the harm it's causing by pushing people on the edge toward suicide.
From my limited experience trying exactly this, it gets you 80% of the way there, then devolves into an infuriating and time-wasting exercise in endless iteration and prompting to sweep clustering parameters and labeling details to nail the remaining 20% needed for acceptance by downstream "customers" (i.e., nontechnical business people).
If your end goal is to show an audience of nontechnical stakeholders an overview of your dataset in a static medium (like a slide), I would suggest you do the cluster labeling yourself, with the help of interactive tooling to make the semantic cluster structure explorable. One option is to throw the dataset into Apple's recently published and open-sourced Embedding Atlas (https://github.com/apple/embedding-atlas), take a screenshot of the cluster viz, poke around in the semantic space, and manually annotate the top 5-10 most interesting clusters right in Google Slides or PowerPoint. If you need more control over the embedding and projection steps (and you have a bit more time), write your own embedding and projection, then use something like Plotly to build a quick interactive viz just for yourself; drop a screenshot into a slide and annotate it. Feels super dumb, but is guaranteed to produce human-friendly output you can actually present confidently as part of your data story and get on with your life.
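The "write your own embedding and projection" route can be sketched with stock scikit-learn parts standing in for a neural embedding model. Everything here is a placeholder assumption: the toy documents, the cluster count, and TF-IDF as a cheap stand-in for real sentence embeddings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "refund request for damaged item",
    "package arrived broken, want refund",
    "how do I reset my password",
    "forgot password, cannot log in",
    "love the new dark mode update",
    "dark theme looks great, thanks",
]

# "Embed": TF-IDF vectors as a cheap stand-in for sentence embeddings.
X = TfidfVectorizer().fit_transform(docs)

# "Project": reduce to 2D for plotting (UMAP or t-SNE are common swaps).
coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Cluster in the full embedding space, not the lossy 2D projection.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for doc, (x, y), c in zip(docs, coords, labels):
    print(f"cluster {c}  ({x:+.2f}, {y:+.2f})  {doc}")
```

From there, feed `coords` and `labels` to something like `plotly.express.scatter` for the interactive poking around, then screenshot the view and annotate the interesting clusters by hand in your slide deck.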
This is so nostalgic. I remember feeling like I was so good at JezzBall. In later levels I'd start a wall near one corner of the screen, closer to one edge than the opposite edge, to ensure the shorter wall would connect, sacrificing the longer one. The surviving wall would create a "corridor" in which to trap balls with tiny horizontal walls, often leaving them completely stationary.