Hacker News | esens's comments

Would be better as a log graph, or as the derivative (e.g. year-over-year change).

Viewing decay curves in this form is incredibly misleading.


I enjoy Fusion 360 actually. I think it isn't that bad.


I have tried the underlying library, MediaPipe/TensorFlow.js, and it is only okay. Once you try to use it for visual augmentation, you find it doesn't track that well, and it has problems with scale.

So, while it may look good with the skeletons, notice there are no useful demos, mostly just proofs of concept that do almost nothing.


I'm very sorry you don't think Handsfree.js has useful demos. That kind of hurts because I put a great deal of effort into making them useful as starting points (I started all this while I was homeless to help someone). I work on Handsfree.js full time and my goal is really to create simple demos so that others like yourself can run with it!

Maybe you'll find some of these more useful?

- Face Pointer, an accessibility tool to help people who can't use a mouse/keyboard: https://handsfree.js.org/ref/plugin/facePointer.html

- Palm Pointers, scroll the page and even multiple scroll areas at once: https://handsfree.js.org/ref/plugin/palmPointers.html

- Gesture Mapper, map static poses in seconds: https://handsfree.dev/tools/gesture-mapper/

- Face Coding (not ready), to help people with disabilities code by snapping blocks together: https://twitter.com/GoingHandsfree/status/140056015231972966...

- The library itself doubles as a Chrome Extension, here are some examples of what you can do with it: https://handsfree.dev/sites/

I have dozens more experiments. Most of them are pretty basic, but the point is to show you what's possible :)


Also, here is a simple demo that has been helping some of my friends with disabilities. It uses the Face Pointer to play desktop games: https://twitter.com/GoingHandsfree/status/140022171083874713...


Have you tried the newest MoveNet from Google? That thing works really well. I have only started experimenting with the Lightning model (not Thunder) for our VR game, but even that is pretty robust.

If you want to see how stable it is, have a look at this: https://youtu.be/zz9S5hgrWpM?t=147

It's tracking me with the Oculus Quest on my head, so it does not even have facial features to go on.


Thanks for the link. Does MoveNet give you 3D positions/rotations for each joint, or just a 2D position on the image?


Just the 2D positions plus a confidence level. In the video we are using only one view, so it's not true full-body tracking yet. We are currently trying to use two views to get rough 3D tracking. The good thing is that the positions of the detected points are relatively stable, so with a big enough baseline that should work out for a rough 3D pose estimation.
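A rough sketch of what that two-view idea could look like, assuming a simple rectified stereo setup with a known baseline and focal length. The function name and all the numbers here are hypothetical, for illustration only; this is not part of MoveNet's API, just the standard disparity-to-depth math:

```javascript
// Rough 3D from the same keypoint detected in two horizontally offset,
// rectified views. f = focal length (px), B = baseline (m),
// (cx, cy) = principal point (px). All values are made up for the example.
function triangulateKeypoint(left, right, { f, B, cx, cy }) {
  const disparity = left.x - right.x;   // horizontal pixel offset between views
  if (disparity <= 0) return null;      // degenerate match, no depth
  const Z = (f * B) / disparity;        // depth in meters
  const X = ((left.x - cx) * Z) / f;    // back-project to camera coordinates
  const Y = ((left.y - cy) * Z) / f;
  return { X, Y, Z };
}

// Example: a keypoint at x=700 in the left view and x=620 in the right,
// with f=800 px, B=0.2 m, principal point (640, 360).
const p = triangulateKeypoint(
  { x: 700, y: 400 },
  { x: 620, y: 400 },
  { f: 800, B: 0.2, cx: 640, cy: 360 }
);
// p.Z is 2.0 meters: depth = f * B / disparity = 800 * 0.2 / 80
```

Since depth error grows with distance (disparity shrinks), the "big enough baseline" point matters: a wider camera separation gives larger disparities and therefore a less noisy depth estimate.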


BlazePose does give 3D positions for non-facial keypoints.

