Hacker News | lucid-dev's comments

Found his web page with some basic demos/vids. Had the same curiosity.

https://jimishol.github.io/post/tonality/


Thanks for digging that up! That blog post covers the early "from scratch" version—essentially a mind experiment.

Interestingly, Dmitri Tymoczko arrives at a similar prism structure (Figure 14b) in his paper "The Generalized Tonnetz" ( https://read.dukeupress.edu/journal-of-music-theory/article/... ).

I reached a similar shape (Figure 11 in my pdf: https://jimishol.github.io/thoughts_on_harmony_en.pdf#page=2... ), but the specific, even arbitrary, twisting I used to realize the torus topology gives it a unique advantage: it immediately reveals the "hinge note" of a scale.

I discuss that specific geometric comparison here: https://github.com/jimishol/cholidean-harmony-structure/disc...

The new documentation in this repo ( https://github.com/jimishol/cholidean-harmony-structure ) represents the mature "Umbilic-Surface Grammar" that explains why those shapes happen.


FANTASTIC!!

I was just thinking about this the other day, and wondering about directionality...

For example, if you had a camera facing a space, and the receiving antenna was within that space... and you were able to (somehow?), from the antenna's perspective, see the "direction" each frequency was coming from...

And then map the different specific frequencies within the desired bandwidth to colors... and of course intensity map like you have in the slit device..

And then "look through the camera"... you would see a live three dimensional overlay of all signals within range (colored!) "interacting" with the antenna... but kind of more the "looking through the camera" sort of view, like you could "see" how those waves were interacting..

And then wouldn't it be interesting to put a tin-foil hat to one side of the antennas... and see how the waves change in real time... etc.!!!

(I guess it takes three antennas, to triangulate the field? Maybe all three can still be mounted on a single device in close proximity?)


> and you were able to (somehow?), from the antenna's perspective, see the "direction" each frequency was coming from...

You can kind of do that quite easily at low frequencies, by measuring the phase of signals coming in from a pair of aerials. If you put two aerials a quarter of a wavelength apart and switch between them very quickly at audio rate, then you'll get a tone when there's a difference in phase. If there's no tone the two signals are exactly in phase - the two aerials are exactly the same distance from the transmitter.
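Roughly, the geometry behind that tone can be sketched like this (a hypothetical helper of mine, assuming an ideal plane wave and skipping the actual switching/FM-demodulation step):

```python
import math

def phase_difference(bearing_deg, spacing_wavelengths=0.25):
    """Phase difference (radians) between two aerials for a plane wave
    arriving at bearing_deg (0 = along the baseline joining the aerials,
    90 = broadside, i.e. equidistant from the transmitter)."""
    path_delta = spacing_wavelengths * math.cos(math.radians(bearing_deg))
    return 2 * math.pi * path_delta

# Broadside: both aerials are the same distance from the transmitter,
# so switching between them produces no phase jump and hence no tone.
print(abs(round(phase_difference(90), 6)))       # 0.0

# End-fire: quarter-wave spacing gives the maximum 90-degree phase step.
print(round(math.degrees(phase_difference(0))))  # 90
```

Switching between the aerials at an audio rate turns that per-bearing phase step into phase jumps at the switching frequency, which is what comes out of the FM detector as a tone.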

If you look on some police cars you'll see a group of four aerials about 15cm apart stuck to the roof which used to be used for "Lojack" style trackers.

There are a whole bunch of circuit diagrams floating around for doing this kind of thing, with the simplest being Ye Olde 555 timer and a couple of PIN diodes!



The title is: This ESP32 Antenna Array Can See WiFi

And every time I see something like this, I like to remind myself and imagine what a spherical grid of Starlink satellites linked by lasers is really capable of, beyond the mere internet service it's advertised as.


Your link has the "si=" tracking parameter in it.



If you buy three (or more) Philips Hue bulbs, you can have them respond to motion, detected by things moving around and disturbing the radio waves they use to communicate. So they must have pretty much the kind of map you want, but I don't know how easy it is to export it.


This is sort of that idea with sound:

https://www.youtube.com/watch?v=jL2JK0uJEbM


By using the *API*

True, I spent a year making a platform for using the API... but the results are stupendous!! Very cheap, unlimited access, custom tooling, etc... you can get large amounts done of anything you want to do with an LLM!


But why are you considering tokens so precious?

At current prices you can pretty much get away with murder even for the most expensive models out there. You know, $14/million output tokens. 10k output tokens is 14 cents. Which is roughly 7,500 words.
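For the sake of the arithmetic (the price and the ~0.75-words-per-token rule of thumb are assumptions, not any provider's actual rates):

```python
def output_cost_usd(tokens, usd_per_million=14.0):
    """Cost of generated tokens at a flat per-million output price."""
    return tokens * usd_per_million / 1_000_000

print(output_cost_usd(10_000))  # 0.14 -> 14 cents
print(round(10_000 * 0.75))     # ~7500 words of English text
```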

The way to use LLMs for development is to use the API.


I'm not so worried about the money but more about context rot. I used spec-driven development for a week and had constant compacting with Claude Code. I burned 200€ in one week, and now I'm trying something different: only show diffs, and always talk to me through interfaces. I do think that at some point there will be frameworks or languages optimised for LLMs.


Pretty successful in terms of the content representing the intent. Which is, in part: don't skim, don't scroll, read something if you actually want to read it, or go elsewhere for doom-scrolling and skimming.

I also found half-skimming it worked pretty well, using the images as markers to find what I really wanted.

Also, it looks like it works pretty well on mobile. I thought it was small on my laptop too, but hey, thank the heavens for built-in browser zoom...


Love this. May I recommend some kind of "disperse pieces to edges" feature/button (or perhaps automatic behavior, flag-to-enable or not), so that when you zoom out a bit, all the pieces automatically "disperse" to the edges, or at least get equally spaced in the available room?

I.e. the problem is the amount of time spent moving the pieces off of each other. While this is pleasant in real-life tactile space, it's not as much fun when you have to click-and-drag all the pieces around on a computer. (Sorting them etc. is of course up to the user - I just mean some kind of initial "see all the pieces without them overlapping each other, to the greatest extent possible given the space available at the current zoom settings".)
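A minimal sketch of what that initial "disperse" could do - just an even grid over the visible area, nothing clever about the edges (the names here are hypothetical, not the site's actual code):

```python
import math

def disperse(num_pieces, width, height, piece_size):
    """Place piece centres on an evenly spaced grid covering the visible
    area, so no two pieces overlap (needs cells at least one piece wide)."""
    cols = max(1, math.ceil(math.sqrt(num_pieces * width / height)))
    rows = math.ceil(num_pieces / cols)
    cell_w, cell_h = width / cols, height / rows
    assert cell_w >= piece_size and cell_h >= piece_size, "zoom out further"
    return [(c * cell_w + cell_w / 2, r * cell_h + cell_h / 2)
            for r, c in (divmod(i, cols) for i in range(num_pieces))]

centres = disperse(12, 1600, 900, 100)  # 12 pieces in a 1600x900 viewport
```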


That's a good idea! I honestly futzed way too long trying to make this playable on smaller screens with that in mind.

The Shuffle button actually tries to spread the pieces out to cover the current zoom level, but it can still result in some of them being obscured. I'll look into implementing a more even distribution.


This is going to really transform some data visualization things I've been thinking about. I've always loved SVG since working with Illustrator and Inkscape back in the day, but I didn't realize how much it could tie in with the modern web and interactivity. Thank you!


SVG has been transforming web-experiences (particularly for viz) for quite some time now, see:

- https://mlu-explain.github.io/neural-networks/

- https://www.nytimes.com/spotlight/graphics

- https://pudding.cool/


Surely D3 is what you're referring to


Pretty funny if you ask me. Maybe we can start to realize now: "The common universal subspace between human individuals makes it easier for all of them to do 'novel' tasks, so long as their ego and personality don't inhibit that basic capacity."

And that: "Defining 'novel' as 'not something that you've said before', even though you're using all the same words, concepts, linguistic tools, etc., doesn't actually make it 'novel'."

Point being, yeah duh, what's the difference between what any of these models are doing anyway? It would be far more surprising if they discovered a *different* or highly-unique subspace for each one!

Someone gives you a magic lamp, and the genie comes out and says, "What do you wish for?"

That's still the question. The question was never "why do all the genies seem to be able to give you whatever you want?"


You use git worktrees, and then merge-in. Or rebase, or 3-way merge, as necessary.

I have a local application I developed that works extremely well for this. I.e. every thread tied to a repo creates its own worktree, makes its edits locally, and then I sync back to main. When conflicts occur, they are resolved automatically if possible (i.e. if another worktree merged into main first, those changes are kept so long as they don't conflict); if there's a real conflict, we get the opportunity to resolve it.

At any merge-into-main from a worktree, the "non-touched" files in the worktree are automatically re-synced with main, thus updating the worktree with any other changes from any other worktree that have already been pushed to main.

Of course, multiple branches can also be used and then eventually merged into a single branch later..
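For what it's worth, the core of that worktree-per-thread loop can be sketched with plain git commands (these helper names are mine, not the parent's application; assumes `git` is on the PATH and the repo's default branch is `main`):

```python
import pathlib
import subprocess

def run(*args, cwd=None):
    """Thin wrapper: run a command, raise on failure."""
    subprocess.run(args, cwd=cwd, check=True)

def worktree_for_thread(repo, thread_id, base="main"):
    """Create an isolated worktree + branch for one thread's edits."""
    path = pathlib.Path(repo) / ".worktrees" / thread_id
    path.parent.mkdir(parents=True, exist_ok=True)
    run("git", "worktree", "add", "-b", f"thread/{thread_id}",
        str(path), base, cwd=repo)
    return path

def sync_back(repo, thread_id):
    """Merge the thread's branch into main; real conflicts stop here
    and get surfaced for manual (or LLM-assisted) resolution."""
    run("git", "checkout", "main", cwd=repo)
    run("git", "merge", "--no-ff", "--no-edit", f"thread/{thread_id}", cwd=repo)
```

Conflict-free merges go through unattended; `git merge` failing (non-zero exit) is the signal to drop into resolution.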

---

Also, this is very clearly exactly the same thing OP does in their system, as per the README at their GitHub link.


Well, check this out. I know you all might hate Python + React. But for this platform, it works.

I'd like to open source it..?

This video demonstrates "how" I actually get the LLM to "do stuff faster than you can" (I'm talking about coding/dev work here).

You have to understand what the LLM is, how it works, and manage it properly. You have to give it a "world state", and give it blueprints, and give it tools. Then, you can do this several times in parallel, and watch the magic happen.

It's not a small amount of work - learning to use the LLM is a completely new skill. As many have pointed out, it's not like writing code. It's a different kind of thinking/management.

But I believe that if we collaborate on creating the new *TOOLS* and *PLATFORMS* together instead of just trying to "use some shitty chat applications" that can't even properly handle context window truncation and document management, then we will eventually succeed in creating a new wave of actual interaction with the LLM through systems that are designed particularly for that process.

This is also an example of "the LLM writing code". I didn't write any of the code for the application being demonstrated. The LLM wrote 100% of it. I started in ChatGPT and then moved on to the custom platform as quickly as possible. So this is a "dogfood" project - i.e. the LLM is fully fed back the code it wrote, with the next desired state indicated by the user (me).

Now the system is complex enough, and operates effectively enough, that given a standardized set of initial instructions describing the system, the LLM can review the codebase, add docs to the context window, discard those that are unnecessary, write its own reviews and implementation plans, begin a task checklist, sequentially complete the tasks, receive feedback for apply-errors and linter failures, execute terminal commands, run tests, etc. This is multi-turn. It can ALSO spawn new LLM calls, let those calls return in parallel, let them feed back their data and stop their own automation chains, and eventually update the "orchestrator" thread with the "reports" from the agents. And then after a while it will ping you and let you know that your $20 of API tokens was well spent - it actually works.
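The fan-out/fan-in part of that - spawning parallel calls and collecting their "reports" back into the orchestrator - is, at its core, just this (with `call_llm` stubbed out; a real version would hit your provider's API):

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt):
    """Stub standing in for a real (network-bound) model call."""
    return f"report for: {prompt}"

def orchestrate(subtasks, max_workers=4):
    """Fan one LLM call out per subtask; threads overlap the network
    waits, and map() hands the reports back in subtask order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_llm, subtasks))

reports = orchestrate(["review module A", "run tests for module B"])
```

Threads (rather than processes) are the natural fit here because the agent calls spend almost all their time waiting on the network.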

I'm talking about working on codebases here that are large enough to where only say 10% can be included in a context window at a single time, if you don't want to blow past 200k tokens.

I'm talking about where almost any reasonable task is multi-turn/multi-shot, if you want real verification and results and not just some BS.

I'd like to see people start to test platforms like this for other kinds of projects, and tweak the open-source platform itself (through the LLM which is very good at tweaking this kind of platform) to be more aligned with their needs - less popular coding languages, other kinds of environments, ability to do research and create vector DBs and use that data in the process... The sky is the limit really, but we need some kind of real platforms for *doing it all on*. Otherwise we are all just complaining about the corporate money making tools that everyone's being sold thinking they can get shit done with the LLM.

TL;DR: Turns out "LLM chat threads + some shitty MCP server linkages that halfway work + your IDE" is not actually a great recipe for "saving time". But we can build a platform that actually IS the right recipe for saving time and getting accurate results.

