Hacker News | camwest's comments

Yeah, I am very open to skills and REST APIs, but REST APIs have a problem: the token has to get injected somewhere, so it's not really a good auth story for everyone. I think skills have a feedback problem too: how do you refine a skill you've already shipped to your customers? The suggestion you're giving isn't great for something like the ChatGPT or Claude iOS app. Where would you even store the token in that case, for example?


I wrote this piece to share my view on what Cursor should do to manage some of the negative sentiment they're getting online. What do you folks think?


(No AI slop edition. See https://x.com/cjwestland/status/2002049554133229924?s=46 for context)


Serious question for you, without the judgement that it's certainly going to imply. (There's just no way to ask this without sounding snarky.) I'm genuinely curious: is writing so difficult that you need to use AI to help you?

I understand the "dump voice notes -> AI transcription" step, but why the "clean up" and "iterate" steps with AI?


For me it's not about difficulty, it's about friction. BJ Fogg's behavioral model B=MAP says Behavior = Motivation × Ability × Prompt. When you increase ability (lower friction), you get more behavior for the same motivation.

AI lowers my writing friction. Same motivation, more output. I write more, I get feedback faster, I iterate more. That's the value for me.
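To make the friction point concrete, here's a minimal sketch of Fogg's model as a threshold: behavior fires when a prompt lands and motivation × ability clears the "action line." All numbers (the 0.5 threshold, the ability values) are hypothetical, just to show that raising ability flips the outcome at constant motivation.

```python
ACTION_LINE = 0.5  # hypothetical threshold where behavior becomes likely

def acts(motivation: float, ability: float, prompted: bool) -> bool:
    """B=MAP sketch: behavior happens when a prompt fires and
    motivation x ability clears the action line."""
    return prompted and motivation * ability >= ACTION_LINE

# Same motivation (0.6); AI assistance raises ability from 0.5 to 0.9.
print(acts(0.6, 0.5, True))  # manual drafting: 0.30, below the line -> False
print(acts(0.6, 0.9, True))  # lower friction: 0.54, clears the line -> True
```

Same motivation, more behavior, exactly because the ability term moved.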




Agreed. I'm happy they're training on my data.

My reasoning: I use AI for development work (Claude Code), and better models = fewer wasted tokens = less compute = less environmental impact. This isn't a privacy issue for work context.

I regularly run concurrent AI tasks for planning, coding, testing - easily hundreds of requests per session. If training on that interaction data helps future models be more efficient and accurate, everyone wins.

The real problem isn't privacy invasion - it's AI velocity dumping cognitive tax on human reviewers. I'd rather have models that learned from real usage patterns and got better at being precise on the first try, instead of confidently verbose slop that wastes reviewer time.


Never? No. Way less likely? Yes!

In dev we do 100 consistency checks and get green. In CI we do 10.
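The arithmetic behind "way less likely": if a flaky check fails independently with some per-run probability f, the chance that n runs all come back green is (1 - f)^n. The 5% failure rate below is a made-up number for illustration.

```python
def all_green(f: float, n: int) -> float:
    """Probability that a flake with per-run failure rate f
    slips through n independent runs without a single failure."""
    return (1.0 - f) ** n

f = 0.05  # hypothetical 5% per-run failure rate
print(round(all_green(f, 10), 3))   # ~0.599: 10 CI runs usually miss it
print(round(all_green(f, 100), 3))  # ~0.006: 100 dev runs almost never do
```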


This feels similar to a GitHub issue:

1. Editable description
2. Comments


A couple of example hooks: https://cameronwestland.com/building-my-first-claude-code-ho...

I'm happy to see Claude Code reaching parity with Cursor for linting/type checking after edits.
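For anyone curious what a lint-after-edit hook looks like, here's a minimal sketch using Claude Code's hooks settings (a `PostToolUse` event in `.claude/settings.json`). The matcher and the lint/typecheck commands are placeholders; swap in whatever your project actually runs.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint && npm run typecheck" }
        ]
      }
    ]
  }
}
```

With this in place, every file edit or write triggers the command, which is how you get the Cursor-style "check right after the edit" loop.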


I agree it's not about the 10x engineers or greenfield projects. I think YC's selection process is still focused on finding distinguished individuals, but within two specific constraints.

