Most tools in this space are about the efficiency of the development flow, sometimes down to the individual level. With Echoes we're trying a different angle: less about the efficiency of the process, and more about the value and impact of our efforts. We are deliberately staying away from anything that could be interpreted as a proxy for individual performance.
We believe that the bottleneck for most organizations is not engineers' productivity but a lack of clarity and focus, and that improving the development process alone is a band-aid rather than a solution. In the end, Echoes tells you less about the quality of the engineers' work, and more about the quality of the organizational context.
Absolutely! We have among our customers a YC company who uses Echoes exactly for that purpose. Their issue was that every two weeks during all-hands, engineering would present a list of merged GitHub pull requests, which everyone would applaud to show support, with little understanding of what they contributed to. Echoes helped them materialize the missing link between the day-to-day engineering work and the strategy. As you put it perfectly, it creates more visibility and more empathy across functions, and we're quite proud of that :-)
Interestingly we originally designed the product for companies of 50 engineers and above, but we quickly had to add a self-serve onboarding as we were seeing unexpected interest from startups as small as 6 engineers.
I think there are two different challenges at play:
- For larger organizations, the difficulty of understanding how efforts are allocated and getting a consolidated view they can communicate across the company. Some examples we are seeing: 1) balancing product and technical roadmaps (for example in companies that have agreed to spend some percentage of their capacity paying off technical debt, but don't know how to measure that); 2) assessing during the course of the quarter whether "the rubber meets the road" and we're properly executing against our objectives (avoiding the pitfall of realizing only at the end of the quarter that we got sidetracked).
- For startups, the need to be extremely deliberate in how you allocate your limited engineering capacity. This is pretty much our own dogfooding use case at Echoes (a team of 3), where there's an infinite amount of things we _could_ be doing, but we have to be laser-focused on the things we _should_ be doing. In our case Echoes acts as our compass and a forcing function to stay focused.
I certainly hope so too, especially as those teams are typically at the crossroads of every initiative and therefore unfairly perceived as "what is slowing us down" rather than "what carries everything else". Echoes can shine a light on their contribution, and on the challenges of being in the middle of it all.
Our model of a unit of effort typically originates from a commit. Deployments are declared through an API: because we already know the commits, and most importantly their intents, this tells us what a given release aims to achieve. The next link is to connect that to observable impact.
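To make that linkage concrete, here is a minimal sketch of the idea in Python. The data shapes and names are hypothetical, not the actual Echoes API: commits carry the intents captured on their pull requests, a deployment declares which commits it ships, and from that we can surface what the release aims to achieve.

```python
# Hypothetical data shapes, for illustration only (not the actual
# Echoes API). Each commit carries the intent captured on its pull
# request; a deployment declares the commits it ships.
commit_intents = {
    "a1b2c3": "initiative/checkout-v2",
    "d4e5f6": "tech-debt",
}

def deployment_intents(deployed_shas):
    """Surface the set of intents a declared deployment aims to achieve,
    based on the commits it contains."""
    return {commit_intents[sha] for sha in deployed_shas if sha in commit_intents}
```

The next step, as mentioned, would be joining that set of intents with observable impact measurements.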
Nice. Do you plan to match it with marketing/sales budget per product line or Business unit? To be sure that the effort of "maintaining legacy" is equally measured versus over funded efforts on new features for growth?
Disclaimer: I run a nonprofit in my country called "The Maintainers", which is why I look for products that can give more merit to code maintainers.
That's not currently in the plan, but as Echoes models the engineering organization you could easily see how much of your engineering capacity goes toward maintenance versus new features.
I'd be happy to show you a demo of the product and how it can fit your use case! My time as engineering manager for the maintainers of the Docker open source project was a significant inspiration in creating Echoes, and I'm very familiar with the difficulty of highlighting maintenance work.
At this point Echoes doesn't let you connect goals to actual metrics. In other words: we measure efforts invested into the goals, but not the observable impact of these efforts (yet!).
Your question is, however, absolutely spot on, as this is where we're going. But in order to measure impact, we needed to start by capturing _why_ we're doing what we're doing in the first place, and what we realized is that this in itself was solving a significant pain point for engineering organizations.
I haven't read Impact Mapping, thank you for the recommendation!
You're right. I made a generalization out of multiple threads here on HN and on Twitter (another example coming to mind being this one https://twitter.com/GergelyOrosz/status/1440242068102680595), but only cherry-picked one to illustrate the post.
The point I'm trying to make in this post is that _regardless_ of JIRA's qualities (or lack thereof), the way many companies try to use it for conflicting purposes is a recipe for failure. Using JIRA for what it is designed to do is perfectly fine :)
Hi, author here! Appreciate the comment, we'll work on our copy.
Most companies have a quarterly planning cycle which starts with setting goals, ends with collecting results, and in the middle is most often a blurry mess of juggling between priorities and the reality of the day-to-day.
Our customers choose us because getting consolidated visibility on how we allocate engineering efforts in a way engineers don't hate is an unsolved problem. Yet without it, your decisions and plans are very much based on hope and gut feelings. Our belief is that most organizations are held back by a lack of alignment, and a disconnect between planning and execution: if we can help prevent just one team chasing the wrong topic for a month, then the cost is well amortized, and for large organizations that is a no brainer.
We also have small startups adopting us because it helps them shine a light on the engineers' work in a way that non-technical groups understand. Their use case is less about decision making (because the uncertainty isn't so high when the number of engineers is <10), and more about communicating the value of their work.
Hi HN! I’m Arnaud, founder of Echoes HQ (https://echoeshq.com). We build dashboards on the activity of engineering teams, focusing on the value of engineering work.
I’m passionate about developer empowerment and building engineering organizations. This is what got me into engineering management in the first place, and why I later joined Docker to lead the core team. Throughout my career I’ve come to realize that organizations are often held back not by a lack of developer productivity or talent, but by the lack of proper context to achieving good results (a theme commonly discussed here in comments).
To help companies solve this, we surface why engineers are doing what they do in the simplest way we could think of. You define within Echoes what you’re investing efforts into (e.g., the categories of work, ongoing initiatives, or OKRs relevant to your organization), which Echoes publishes across all of your GitHub or GitLab repositories as labels. Applying these labels on pull requests is all it takes to surface how teams are truly allocating their efforts over time. You can later connect intents to external measurements, showing you whether efforts are actually making a difference.
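To illustrate the mechanics of that last step (with made-up label names, and not Echoes' actual implementation), the allocation view can be thought of as a simple aggregation over the intent labels on merged pull requests:

```python
from collections import Counter

# Illustrative merged pull requests; the intent labels are made up
# and stand in for whatever categories an organization defines.
merged_prs = [
    {"id": 101, "labels": ["initiative/checkout-v2"]},
    {"id": 102, "labels": ["tech-debt"]},
    {"id": 103, "labels": ["initiative/checkout-v2"]},
    {"id": 104, "labels": ["tech-debt"]},
]

def effort_allocation(prs):
    """Share of merged pull requests per intent label, as percentages."""
    counts = Counter(label for pr in prs for label in pr["labels"])
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}
```

With the sample data above, this would report a 50/50 split between the initiative and technical-debt work.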
Engineering managers can use this data to inform decisions on priorities, confront plans with reality, and communicate about their team's activity to their CEO / bosses / business partners with the right level of detail. One of the first use cases we're using to illustrate what Echoes can do is the very common challenge of balancing the amount of effort that should go into features versus technical work (https://www.echoeshq.com/recipes/managing-technical-debt).
I can see this being very useful for the managers, if the engineers reliably add the labels. In your experience, how do you overcome the problem of "laziness" where the engineers skip over the labeling step?
Great question :) There are several answers to it.
1. We integrate with the GitHub Checks API and surface missing labels as a failure (similar to failed tests), which acts as a reminder to add the labels. GitLab doesn't have an equivalent to our knowledge, but we have a Docker image and a snippet of YAML you can include as a build stage for a similar result.
2. We had customers ask for a JIRA integration which we are about to ship that can help with that. It creates a custom field on your JIRA instance which gets populated with your configured intents, just like GitHub labels. GitHub pull requests which reference a JIRA issue will automatically inherit its labels, meaning that if the intent was expressed at planning time, then there's no additional work to do for these.
3. When discussing with organizations who request that every pull request be linked to a ticket for the sake of reporting, it's a no brainer: would you rather file a ticket for every commit or add a label?
4. Remaining untagged pull requests can be examined and labeled directly from the product itself (making it easy to erase the pesky leftovers).
Finally, the product is indeed targeted at managers at this time, but we have plans to make it more directly useful for the engineers too.
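To make point 1 above concrete: the check essentially boils down to failing the build when a pull request carries no intent label. A minimal sketch of that logic (the label names are hypothetical, and Echoes' actual check is richer than this):

```python
# Intent labels we expect on every pull request (illustrative names,
# not Echoes' actual taxonomy).
INTENT_PREFIXES = ("initiative/", "tech-debt", "bug-fix", "feature")

def has_intent_label(pr_labels):
    """True if at least one label on the pull request expresses an intent."""
    return any(label.startswith(INTENT_PREFIXES) for label in pr_labels)

# In a CI stage you would fetch the pull request's labels from the
# GitHub or GitLab API, then fail the build when this check returns
# False — the red check is what reminds people to add the label.
```

A check like this is deliberately cheap: it doesn't judge the label's correctness, only its presence.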
Hi Arnaud. Looking good! Looking at the pricing I don't think it would work for us. While we have 10 engineers, we have many more people who commit very irregularly. Some months we'd certainly go over the "active contributor" limit, but we wouldn't be getting any value out of tracking those additional contributions. Do you have an allow-list of contributors?
Thank you, and yes! We only account for contributors who are dispatched into teams within the Echoes configuration. This is also meant for open source projects that don’t want to track contributions from the community, or for very large organizations that want to ramp up progressively on the product.
This is quite interesting, I didn't think of this type of approach. When I was at Instacart I could have seen this being really useful. I always had IC eng ask me "why does my work matter".
Thank you JJ! Indeed, most engineers care about their impact and how their work contributes to the big picture. Unfortunately, the incentive structure in many companies is not designed to encourage that. That's why we're trying to create this missing link between engineering work and business results.
Congratulations, Arnaud! I really like the idea of your tool. I was using many tools to track dev productivity in the past – with all kinds of charts and plots. Somehow, I never got an answer to the question "what do we really spend time on? Is it mostly bug fixing or delivering new features, and how does it affect our KPIs?". I like that Echoes focuses exactly on that.
I've got one question – would it be possible in the future to generate some kind of alerts for the managers when for example the technological debt is growing above some threshold?
Thank you! We do get asked about alerts, both on metrics (as in your example) and on allocation (for example when the activity is significantly and durably diverging from the current expectations).
We haven’t started work on this but it’s very likely to happen at some point indeed.
Love the approach of keeping the planning very close to the execution. It's very similar to what we did at Open Listings (though the labeling system was maybe a little more complex and applied to both PRs and Issues).
Anything you can share about a grand vision here? In 5 years is Echoes a tool for product managers (OKR alignment), engineering managers (IC performance management), a replacement for one of those functions?
Thank you! We have tons of ideas of where this product could go, but I can say with good confidence that IC performance management is not one of them: I don’t believe that this is something that can or should be automated. If you look closely, we’re actually shifting the focus _away_ from IC performance and toward the effectiveness of the organization as a whole (which in a sense says more about the performance of the management).
Our hope is to break down silos between engineering, product, and “business”, which are at the source of so many organizational inefficiencies. We need a shared context where the CEO/CFO understand where we’re headed, where engineers understand how their work fits into the bigger picture, and where product can focus on market research.
I’m obviously extremely biased here, but being thoughtful and deliberate about the way we allocate our efforts just sounds fundamental to me, and it’s not the sole focus of any particular function.
You are correct that you could achieve similar activity reports with JIRA epics, but it requires a level of rigor and homogeneity that I believe is hard to achieve in practice.
1. JIRA most often captures what we _plan_ to do rather than what we _actually_ do. You cannot build exhaustive activity reports from JIRA unless you request all contributions to be linked to issues. This is especially true for technical work which tends to not be tracked and become invisible.
2. Most sizable organizations have an inherent diversity of processes and tools across teams which makes producing consolidated dashboards extremely hard (e.g., one team using JIRA, a second using Linear, and a third using GitHub issues).
For these reasons, our approach is to capture work where it happens rather than where it is described, and to use a central definition of intents as the ontology. Finally, capturing efforts is only one part of the equation: we allow you to associate intents with metrics to evaluate impact.
Thank you! It is important: there's so much potential wasted in suboptimal organizations, and no amount of engineering productivity can compensate for that.
We don’t filter on branches, so any merged work counts.
By default all pull requests are considered equally weighted, but there’s a set of labels that allow you to optionally influence that weighting (using basic XS to XL t-shirt sizing), so you could already tag it XS and have it count for very little. We actually had that question from a user yesterday, and we might add a way to ignore a pull request entirely (i.e., give it a weight of 0).
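As a sketch of how that weighting plays out (the actual weight values Echoes assigns to each size are not stated here, so these numbers are purely illustrative):

```python
# Hypothetical weights for the t-shirt size labels; purely illustrative,
# not the values Echoes actually uses.
SIZE_WEIGHTS = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8}
DEFAULT_WEIGHT = SIZE_WEIGHTS["M"]  # unlabeled pull requests all get the same weight

def weighted_allocation(prs):
    """Effort share per intent, weighted by the optional t-shirt size label."""
    totals = {}
    for pr in prs:
        weight = SIZE_WEIGHTS.get(pr.get("size"), DEFAULT_WEIGHT)
        totals[pr["intent"]] = totals.get(pr["intent"], 0) + weight
    grand_total = sum(totals.values())
    return {intent: round(100 * t / grand_total, 1) for intent, t in totals.items()}
```

With weights like these, a pull request tagged XS counts for very little against a large feature, which is the effect described above; a hypothetical weight of 0 would be the "ignore entirely" case.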