> Find one YC startup whose public job postings mention Jira + Claude/Cursor. They already have the exact stack. DM the CTO directly on X with a one-liner: "built something that automates your easy Jira tickets into PRs automatically — want to try it?" That's your shortest path to a real user.
Typical LLM-isms:
- "find this. do that." phrasing
- "the exact X"
- "send a one-liner"
- "that's your shortest X to a Y"
---
Here's what Claude had to say about it:
Yes, it does have some telltale signs. Here's why:
*Structural giveaways:*
- The "here's the insight → here's the action → here's the payoff" format is very common in LLM outputs — it's almost algorithmically tidy.
- The em dash used as a dramatic pause ("automates your easy Jira tickets into PRs automatically — want to try it?") is a pattern LLMs lean on heavily.
- "That's your shortest path to a real user" feels like a summarizing closer that an LLM adds to signal it's wrapping up with a punchline.
*Word/phrase patterns:*
- "exact stack" — this phrasing is very popular in AI-generated startup/GTM advice
- The overall register (confident, tactical, slightly bro-ish but polished) is a very common output of prompts like "give me a GTM strategy"
*What makes it not obviously AI:*
- It's specific enough (Jira + Claude/Cursor, YC, CTO on X) that it doesn't feel like generic filler
- The one-liner pitch itself is actually pretty natural
*The bottom line:* It reads like someone prompted an LLM for "what's the fastest way to find my first user" and lightly edited the output — or didn't edit it at all. The advice isn't bad, but the packaging has that characteristic "polished tactical bullet" energy that's hard to fake as organic thinking.
If you wrote it yourself, the main culprit is probably the closing sentence — humans tend to just stop rather than narrate their own conclusion.
So you would have preferred if this was managed actively? But then how would that be better than providing MCPs to Cursor? I use MCPs that can access all the comments at the company I work at, including the PRDs, logs, databases, etc.
The idea of my project is that it is all done asynchronously, is that what you mean? You want it to happen outside of your personal computer?
Your flow can still work, but for those wanting to test it out, you may need to give them a direct path to seeing what it can do locally first; then they can add the async flow. The one caveat is supporting various tools, like the ClickUp / GitLab setup we use, as one example.
Lately I’ve noticed coding agents getting significantly better, especially at handling well-scoped, predictable tasks.
It made me wonder:
For a lot of Jira tickets, especially small bug fixes or straightforward changes, most senior developers would end up writing roughly the same implementation anyway.
So I started experimenting with this idea:
When a new Jira ticket opens:
- It runs a coding agent (Claude/Cursor).
- The agent evaluates the ticket’s complexity; if it’s below a configurable threshold, it generates the implementation.
- It opens a GitHub PR automatically.
From there, you review it like any normal PR.
If you request changes in GitHub, the agent responds and updates the branch automatically.
So instead of “coding with an agent in your IDE”, it’s more like coding with an async teammate that handles predictable tasks.
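The ticket-to-PR loop above could be sketched roughly like this. Everything here is my own illustration, not the project's actual code: the complexity scoring is a stub, and the agent run / GitHub call is replaced with a placeholder URL.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    key: str
    summary: str
    description: str

def estimate_complexity(ticket: Ticket) -> float:
    """Stub: a real version would ask the coding agent to score the ticket.
    Returns 0.0 (trivial) .. 1.0 (very complex)."""
    trivial_hints = ("typo", "rename", "bump", "copy change")
    return 0.2 if any(h in ticket.summary.lower() for h in trivial_hints) else 0.8

def handle_new_ticket(ticket: Ticket, max_complexity: float = 0.5) -> Optional[str]:
    """Decide whether the agent should attempt the ticket.

    Returns a (fake) PR URL when it acts, or None when it leaves the
    ticket for a human."""
    score = estimate_complexity(ticket)
    if score > max_complexity:
        return None  # too risky: leave it in the backlog for a human
    # A real implementation would run Claude/Cursor here, push a branch,
    # and open a PR via the GitHub API.
    branch = f"agent/{ticket.key.lower()}"
    return f"https://github.com/example/repo/pull/{branch}"
```

The review loop (agent responding to requested changes) would hang off the same kind of gate: re-run the agent on the PR comments, push to the same branch.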
You can configure:
- The confidence threshold required before it acts.
- The size/complexity of tasks it’s allowed to attempt.
- Whether it should only handle “safe” tickets or also try harder ones.
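Those three knobs might be combined into a single gate like the sketch below. The field names and values are my invention for illustration, not the project's real config format.

```python
# Hypothetical config: act only above a confidence floor, below a size
# ceiling, and (optionally) only on tickets flagged as "safe".
CONFIG = {
    "confidence_threshold": 0.8,   # minimum agent confidence before acting
    "max_task_size": "small",      # small | medium | large
    "safe_tickets_only": True,     # skip auth, billing, migrations, etc.
}

def allowed_to_act(confidence: float, size: str, is_safe: bool, cfg=CONFIG) -> bool:
    sizes = ["small", "medium", "large"]
    return (
        confidence >= cfg["confidence_threshold"]
        and sizes.index(size) <= sizes.index(cfg["max_task_size"])
        and (is_safe or not cfg["safe_tickets_only"])
    )
```

The point of keeping all three checks in one place is that “too aggressive” failures show up as one tunable function rather than scattered heuristics.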
It already works end-to-end (Jira → implementation → PR → review loop).
Still experimental and definitely not production-polished yet.
I’d really appreciate feedback from engineers who are curious about autonomous workflows:
- Does this feel useful?
- What would make you trust something like this?
- Has your workplace already built a homegrown solution for the same thing?
This is a really interesting approach to automation! The idea of treating it like an "async teammate" rather than a copilot is a clever mental model.
For trust, I'd want to see metrics on how often it gets the implementation "right enough" on first try vs. needs significant rework. The confidence threshold tuning sounds crucial - too conservative and it barely helps, too aggressive and you spend more time fixing than coding from scratch.
Have you tested it on tickets with ambiguous requirements? That seems like where it would struggle most, but also where the confidence evaluation becomes really important.
Good question.
Atlassian MCP provides access to Jira.
Anabranch focuses on orchestration, deciding when to act, estimating complexity, and attempting to automatically open PRs for low-complexity tickets.
The goal isn’t to replace existing agent setups, but to explore whether the “boring majority” of tickets can be automated without manually going into the IDE, prompting, waiting for a result, opening a PR, and then waiting for a second team member’s review. It’s asynchronous by nature; you just need to review the output. I’d argue that at some point agents will be good enough that you’d trust them to auto-merge the result as well.
It’s still experimental, and part of the project is validating how reliable this approach can be in practice.
Re: README tone: agreed. It was auto-generated; I’ll update it to be more neutral.