I have tried the underlying library, MediaPipe/TensorFlowJS, and it is only okay. Once I tried to use it for visual augmentation, I found it doesn't track that well and has problems with scale.
So, while it may look good with the skeletons, notice they don't have useful demos, mostly just proof-of-concept demos that do almost nothing.
I'm very sorry you don't think Handsfree.js has useful demos. That kind of hurts, because I put a great deal of effort into making them useful as starting points (I started all this while I was homeless, to help someone). I work on Handsfree.js full time, and my goal is really to create simple demos so that others like yourself can run with it!
Have you tried the newest MoveNet from Google?
That thing works really well. I have only started experimenting with the Lightning model (not Thunder) for our VR game, but even that is pretty robust.
Just the 2D positions plus a confidence level. In the video we are using only one view, so it's not true full-body tracking yet. We are currently trying to use two views to get rough 3D tracking.
The good thing is that the positions of the detected points are relatively stable, so with a large enough baseline that should work out for a rough 3D pose estimation.
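To make the two-view idea concrete, here is a minimal sketch of how rough 3D positions could fall out of two 2D detections. It assumes a rectified setup (two identical, parallel cameras with a known baseline), which is a simplification of whatever rig the poster actually uses; the `(x, y, score)` keypoint tuples mirror the "2D positions plus a confidence level" that MoveNet-style models emit, and the function name is my own.

```python
def stereo_to_3d(left, right, f, cx, cy, baseline, min_score=0.3):
    """Triangulate matching 2D keypoints from a rectified stereo pair.

    left/right: lists of (x, y, score) keypoints in pixels, index-aligned.
    f: focal length in pixels; (cx, cy): principal point in pixels.
    baseline: horizontal camera separation in metres.
    Returns a list of (X, Y, Z) tuples in metres, or None where either
    detection is below min_score or the disparity is unusable.
    """
    points = []
    for (xl, yl, sl), (xr, yr, sr) in zip(left, right):
        if sl < min_score or sr < min_score:
            points.append(None)        # drop low-confidence joints
            continue
        disparity = xl - xr
        if disparity <= 0:             # at or beyond infinity: unusable
            points.append(None)
            continue
        z = f * baseline / disparity   # classic stereo depth formula
        x = z * (xl - cx) / f
        y = z * ((yl + yr) / 2 - cy) / f  # rows agree when rectified; average
        points.append((x, y, z))
    return points
```

For example, a joint at (0.4, 0.3, 2.0) m seen by 600 px focal-length cameras 0.5 m apart projects to x = 760 px in the left image and x = 610 px in the right (with the principal point at 640, 360), and the function recovers the 3D position from that 150 px disparity. A larger baseline gives a larger disparity for the same depth, which is why the jitter in the 2D detections matters less as the baseline grows.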