Hacker News | aurintex's comments

Could you give more specific reasons?


Could you give more specific reasons?


How about I just cut peepholes in all the rooms you live in and videotape you. How would you feel about that, hmm?


No


Because I don't want my life to be training data for some stupid LLM so that you can try to become the next tech oligarch.


Would it make a difference if the data were only stored locally on your device?


If that is verifiable by the stack being libre software, then perhaps.


It might, depending on how you intend to anonymize the data, and whether you intend to market that data to other companies.


The core idea is that the AI runs locally on the device, and all data is stored on the device. Therefore, no data will be shared or sold to other companies.

Regarding anonymization --> do you mean, what if I pointed the camera at someone else? That would be filtered out.


>what if I pointed the camera at someone else? That would be filtered out

I'm no expert at this but that sounds a lot harder to implement than you're implying, especially if it's all locally stored and not checked over by a 3rd party. What's to stop me from just doing it anyway?


That’s why the plan is to invert the usual logic: instead of capturing everything and trying to filter later, the system would reject everything by default and only respond to what the user explicitly enables --> similar to how wake word detection works.
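
The inverted, default-deny logic described above can be sketched in a few lines of Python (a minimal illustration; the class and event names are hypothetical, not from the actual project):

```python
from dataclasses import dataclass, field


@dataclass
class CaptureGate:
    """Default-deny gate: every event type is dropped unless the user
    has explicitly enabled it, analogous to wake-word detection."""
    enabled: set = field(default_factory=set)

    def enable(self, event_type: str) -> None:
        # The user opts in to exactly one capability at a time.
        self.enabled.add(event_type)

    def allows(self, event_type: str) -> bool:
        # Anything not explicitly opted into is rejected by default.
        return event_type in self.enabled


gate = CaptureGate()
gate.enable("wake_word")

incoming = ["ambient_audio", "wake_word", "bystander_face"]
captured = [e for e in incoming if gate.allows(e)]
# Only the explicitly enabled event type survives the gate.
```

The point of the sketch is the ordering: the filter sits in front of capture, so rejected events are never recorded in the first place, rather than being recorded and scrubbed afterwards.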

I’ve also thought a lot about trust. Would you feel differently if the system were open source, with the critical parts auditable by the community?


I mean maybe this is just ignorant of me but can you really build an app where the AI is completely disabled out of the box, is totally locally controlled by the user, who is then able to customize its activation with such granularity/control that it will only film them and not other people? Is that something one can actually build and expect to be reliable? Can this actually work…?

I mean generally speaking yes open source but the issue is that if it’s open source then people can easily disable the safeguards with a fork so idk I feel mixed on it. I’m still leaning towards yes because in general I am for open source. But I’d have to think about it and hear other people’s takes


Oh, so you'll peep on everything we do, but don't worry, only you and your team will be able to be the voyeurs. lol, lmao even. Do you ppl even hear yourselves talk?


I was often frustrated because I mostly heard “what can't be done” instead of “how it could be solved.”


Hey HN, Riccardo (founder) here.

The link goes to our GitHub Manifesto. It explains the 'Why' behind aurintex: I believe the future of 'always-on' AI companions must be built on a foundation of absolute trust.

The default "Cloud-First" model can't provide this. So I'm proposing and building a model based on two principles: *"Fully Functional Offline"* and an *"Open Core"* trust model.

The `README` on GitHub explains this mission in detail. (The landing page, aurintex.com, is linked from there.)

I've cleared my entire afternoon and evening, and I'm here for your brutally honest feedback on this approach.

Is this a trust model you could actually get behind for an 'always-on' AI?

(P.S. This is the reboot of my 'Show HN' from Tuesday. My original launch failed because my 0-day-old account got rate-limited and I couldn't post this context. My mistake, and I appreciate you giving this a second look.)


And the NPU part is also in the mainline kernel https://blog.tomeuvizoso.net/2025/07/rockchip-npu-update-6-w...


I can only agree. I'm not a Python expert, but I always struggled when installing a new package and got the warning that it could break the system packages, or when cloning an existing repo on a newly installed system. Always wondered why it became so "complicated" over the years.
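
For context, that warning is pip's PEP 668 "externally-managed-environment" guard, which newer Debian/Ubuntu and Homebrew Pythons enable to stop pip from clobbering distro-managed packages. The usual way around it is a per-project virtual environment; a minimal sketch (package name is a placeholder):

```shell
# pip refuses to touch distro-managed site-packages (PEP 668),
# so install into a project-local virtual environment instead:
python3 -m venv .venv        # create the isolated environment
. .venv/bin/activate         # activate it for this shell session
pip install some-package     # installs into .venv, not the system
```

(`deactivate` exits the environment; tools like pipx or uv automate the same isolation.)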


> Always wondered why it became so "complicated" over the years.

Please see https://news.ycombinator.com/item?id=45753142.


This is an important application of 'augmenting perception' tech.

I'm interested in the technical architecture, specifically regarding the data processing and privacy trade-offs.

Is the analysis done locally on the device/phone, or does it require a cloud connection to function?


Hey, thanks for your interest. The analysis is done in the cloud at the moment.


Same - I had to read it twice to understand its purpose


This is a great read and something I've been grappling with myself.

I've found it takes significant time to find the right "mode" of working with AI. It's a constant balance between maintaining a high-level overview (the 'engineering' part) while still getting that velocity boost from the AI (the 'coding' part).

The real trap I've seen (and fallen into) is letting the AI just generate code at me. The "engineering" skill now seems to be more about ruthless pruning and knowing exactly what to ask, rather than just knowing how to write the boilerplate.


This is really interesting. As a new Zed user, I've read about GPUI, but have no insights.

Coming from years of working with Qt, I'm always fascinated by the search for the "holy grail" of GUI libs. It's surprising how elusive a true "write-once-run-everywhere" solution (that's also good) still is.

My main question is about your long-term, cross-platform vision: How are you thinking about the matrix of Desktop, Web, and Embedded systems? Qt (for all its baggage) made a real run at desktop/embedded. Do you see GPUI components eventually covering all three, or is the focus purely on desktop/web for now?

