We used ECharts to build our charting library at Evidence and it’s been a great experience overall (https://evidence.dev).
We started with D3 and a few other tools, but felt that we got a lot more out of the box with ECharts, like interactivity and an events API. ECharts is also a lot more extensible than people give it credit for.
We're trialing ECharts on a couple of dashboards at the moment, and it has improved velocity a lot over our previous D3-based approaches. I've seen the same pattern at other companies: a purist developer comes along and starts building yet another custom D3 solution, which takes so long to deliver that the users give up, build a pipeline to dump the data, and hack their own charts together in a different tool instead.
My feeling is that D3 just isn't worth it unless you are building a bespoke visualization for a unique dataset. For almost everything else, ECharts is much faster at delivering value to users. The ease of use of ECharts reminds me of Highcharts back in the day, except it's free! I'm not sure it has a very strong support network behind it, but perhaps that will improve as more popular tools switch (GitLab, Superset, etc.).
Yes, Evidence was my intro to ECharts. I'm more on the building-datasets side of things lately, but on the occasions that I do need to produce some analysis, Evidence is the model I want to use. Thank you.
That fiddling used to drive me nuts in every tool I worked with. It's part of the problem I'm trying to solve with my current open source project (evidence.dev), where we're tackling viz-as-code. You might find it interesting.
I spent 5 years leading a data team which produced reports for hundreds of users.
In our team's experience, the most important factor in getting engagement from users is including the right context directly within the report - definitions, caveats, annotations, narrative. This pre-empts a lot of questions about the report, but more importantly it builds trust in what the data is showing (vs. having users self-serve while nervous they're making a decision on bad data - ultimately they'll reach out to an analyst to do the analysis for them).
The second most important factor was loading speed - we noticed that after around 8 seconds of waiting, business users would disengage from a report, or lose trust in the system presenting the information ("I think it's broken"). Most often this resulted in people not logging in to look at reports - they were busy with tons of other things, so once they expected reports to take a while to load, they stopped coming back.
The third big finding was to give people data where they already are, in a format they understand. A complicated filter interface would drive our users nuts and turn into many hours of training and technical support. For this reason, we always wanted a simple UI with great mobile support for reports - our users were on the go and could already do most other things on their phones.
We couldn’t achieve these things in BI tools, so for important decisions, we had to move the work to tools that could offer text support, instant report loading, and a familiar and accessible format: PowerPoint, PDF, and email. Of course this is a difficult workflow to automate and maintain, but for us it was crucial to get engagement on the work we were producing, and it worked.
This experience inspired my colleague and me to start an open source BI tool which could achieve these things with a more maintainable, version-controlled workflow. The tool is called Evidence (https://evidence.dev) if anyone is interested.
Ironically one of the major uses of analytics has been to highlight the impact of slow response time on user retention for a wide class of applications.
I also feel that speed builds trust, although I don't know specifically why. Perhaps people envision more errors or error-prone processes when a system is slow. It certainly shows more understanding of the data to present it quickly.
We have support for DuckDB (and CSVs and Parquet through DuckDB). We don't support Python, but some people have told us they have used Evidence as the front-end for a Python project - they used Python to do data transformation and calculations, then dumped the results into a DuckDB file in an Evidence project and built the visuals and narrative in Evidence.
"Containerized" approaches with evidence are also quite interesting - lets you combine several tools and use evidence as the last mile. Here's a great example: https://github.com/matsonj/nba-monte-carlo
Quarto's a great tool, but I think it would be hard to maintain as a reporting solution for a company. My co-founder Adam and I used to love using RMarkdown in our previous job, but we found that as we tried to scale it in our team, it became unmaintainable. Keeping the tool limited to SQL, markdown, and components is a way to help data teams maintain a project across a large organization.
We also think that reporting outputs should have a very high bar for visual appearance, and it takes quite a bit of work in Quarto to get charts up to that standard of quality. It sounds like a nice-to-have feature, but in our experience, publication quality data viz is a very important way to build trust with businesspeople who read the reports. We spend a ton of effort thinking through design choices at Evidence and want to give data teams a library of components they can pull off the shelf and write in a simple, declarative syntax.
Interesting. I use Observable with Quarto, instead of R or Python, which gives you access to any JavaScript charting library. I don't know what's available for R charts, but hopefully you can get some higher-quality charts built for Evidence. The good thing about Quarto is that it's also Pandoc, so you can get the outputs in all the business formats you need.
It doesn’t support MSSQL yet, but several people have requested it, so it’s definitely something we’ll support. We accept open source contributions for anyone who wants to add a database connector (a few of our connectors are open source contributions) - as long as there is an npm package for connecting to a database, setting up the connector is fairly straightforward.
Thanks for the feedback. There’s some info under Data Sources in the Core Concepts section, but maybe we should make the db support more prominent in the docs: https://docs.evidence.dev/core-concepts/data-sources/
The database connections are set up in the Settings page when you spin up an Evidence project, so we've also left a lot of the documentation in the product itself.
If anyone is curious, we documented the process of selecting a charting library after assessing several options: https://github.com/evidence-dev/evidence/issues/136