The core idea is that the AI runs locally on the device and all data stays on the device. Therefore, no data is shared with or sold to other companies.
Regarding anonymization --> do you mean, what if I pointed the camera at someone else? That would be filtered out.
>what if I pointed the camera at someone else? That would be filtered out
I'm no expert at this but that sounds a lot harder to implement than you're implying, especially if it's all locally stored and not checked over by a 3rd party. What's to stop me from just doing it anyway?
That’s why the plan is to invert the usual logic: instead of capturing everything and trying to filter later, the system would reject everything by default and only respond to what the user explicitly enables, similar to how wake-word detection works.
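To make the idea concrete, here's a minimal sketch of that default-deny gating (all names are hypothetical, not from the actual project): every signal is dropped unless the user has explicitly enabled a matching trigger, the way a wake-word engine only "hears" its one keyword.

```python
from dataclasses import dataclass, field


@dataclass
class DefaultDenyGate:
    # Starts empty: out of the box, nothing is admitted at all.
    enabled_triggers: set[str] = field(default_factory=set)

    def enable(self, trigger: str) -> None:
        # The user explicitly opts in to one trigger at a time.
        self.enabled_triggers.add(trigger)

    def admit(self, event: str) -> bool:
        # Reject by default; only explicitly enabled triggers pass.
        return event in self.enabled_triggers


gate = DefaultDenyGate()
print(gate.admit("camera_frame"))  # False: nothing enabled yet
gate.enable("wake_word")
print(gate.admit("wake_word"))     # True: user opted in
print(gate.admit("camera_frame"))  # False: still rejected by default
```

The point of the design is that "filtering out" other people isn't a filter bolted on afterwards; anything not explicitly enabled simply never enters the pipeline.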
I’ve also thought a lot about trust. Would you feel differently if the system were open source, with the critical parts auditable by the community?
I mean, maybe this is just ignorant of me, but can you really build an app where the AI is completely disabled out of the box, is totally locally controlled by the user, and gives the user enough granularity over its activation that it will only film them and not other people? Is that something one can actually build and expect to be reliable? Can this actually work…?
I mean, generally speaking, yes to open source. But the issue is that if it’s open source, then people can easily disable the safeguards with a fork, so idk, I feel mixed on it. I’m still leaning towards yes because in general I am for open source. But I’d have to think about it and hear other people’s takes.
Oh, so you'll peep on everything we do, but don't worry, only you and your team will be able to be the voyeurs. lol, lmao even. Do you ppl even hear yourselves talk?
The link goes to our GitHub Manifesto. It explains the 'Why' behind aurintex: I believe the future of 'always-on' AI companions must be built on a foundation of absolute trust.
The default "Cloud-First" model can't provide this. So I'm proposing and building a model based on two principles: *"Fully Functional Offline"* and an *"Open Core"* trust model.
The `README` on GitHub explains this mission in detail. (The landing page, aurintex.com, is linked from there.)
I've cleared my entire afternoon and evening, and I'm here for your brutally honest feedback on this approach.
Is this a trust model you could actually get behind for an 'always-on' AI?
(P.S. This is the reboot of my 'Show HN' from Tuesday. My original launch failed because my 0-day-old account got rate-limited and I couldn't post this context. My mistake, and I appreciate you giving this a second look.)
I can only agree. I'm not a Python expert, but I always struggled when installing a new package and got the warning that it could break the system packages, or when cloning an existing repo on a newly installed system.
Always wondered why it became so "complicated" over the years.
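FWIW, the standard fix for that warning (it comes from PEP 668, "externally managed environments") is to install into a virtual environment instead of the system Python. A minimal stdlib-only sketch (`demo-venv` is just an arbitrary directory name):

```python
import os
import sys
import venv

# Create an isolated environment; anything pip-installed into it never
# touches the system site-packages, which is exactly what that
# "externally-managed-environment" warning is protecting.
venv.create("demo-venv", with_pip=False)  # with_pip=True would also bootstrap pip

# The environment gets its own interpreter:
exe = ("demo-venv/bin/python" if sys.platform != "win32"
       else r"demo-venv\Scripts\python.exe")
print(os.path.exists(exe))  # True
```

In everyday use you'd just run `python -m venv .venv`, activate it, and `pip install` inside it; the system packages stay untouched.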
This is a great read and something I've been grappling with myself.
I've found it takes significant time to find the right "mode" of working with AI. It's a constant balance between maintaining a high-level overview (the 'engineering' part) while still getting that velocity boost from the AI (the 'coding' part).
The real trap I've seen (and fallen into) is letting the AI just generate code at me. The "engineering" skill now seems to be more about ruthless pruning and knowing exactly what to ask, rather than just knowing how to write the boilerplate.
This is really interesting. As a new Zed user, I've read about GPUI, but have no insights.
Coming from years of working with Qt, I'm always fascinated by the search for the "holy grail" of GUI libs. It's surprising how elusive a true "write-once-run-everywhere" solution (that's also good) still is.
My main question is about your long-term, cross-platform vision: How are you thinking about the matrix of Desktop, Web, and Embedded systems? Qt (for all its baggage) made a real run at desktop/embedded. Do you see GPUI components eventually covering all three, or is the focus purely on desktop/web for now?