Hacker News: jostylr's comments

You are correct that the pilot wave theory (Bohmian mechanics) says that instead of wave OR particle, it is wave AND particle.

But the particle does not generate the wave. There is one wave function governing the whole universe: a function on the 3n-dimensional configuration space of all the particle positions. To find the velocity of a given particle at a given time, one needs to plug in the positions of all the particles of the universe. Practically speaking, in an experimental setup, the macro state of the environment is sufficient to create an effective wave function of the particle, which is why we can effectively use quantum mechanics on a subsystem of the universe. The collapse of the wave function in measurements reflects the fact that once the little system interacts enough with the environment, the distinct environmental configurations have separated out the behavior of the wave relative to the particle, so that an effectively collapsed wave function can be used.
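For concreteness, the standard Bohmian definition of that effective wave function (the "conditional wave function" of Dürr, Goldstein, and Zanghì) is simply the universal wave function evaluated at the actual configuration of the environment:

```latex
\psi_t(x) = \Psi_t(x, Y(t))
```

where x ranges over the subsystem's coordinates and Y(t) is the actual configuration of everything else.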

This plugging in of the configuration of all the particles is a gross violation of a relativistic outlook (what is the universal now?). Bell, after seeing Bohm's theory, immediately grasped the implications and wanted to know whether that nonlocality could be removed. His work, along with EPR, demonstrated that no theory of any kind could avoid the nonlocality if the results of experiments actually happen when we think they do.

The double slit experiment is perfectly explained: the approximate wave function of the one-particle system goes through both slits and interferes with itself while the particle is guided by that wave, which is why an interference pattern builds up out of individual particle dots. Nothing other than practical difficulty prevents making the wave separation happen later while still having outcomes as if it had not; it is all about what the wave function is doing, since the particle is most likely to be where |psi|^2 dictates. That is what the law of motion assures. One could in principle simulate paths conforming to this, though the paths themselves can behave quite unexpectedly.
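The "law of motion" referred to here is the standard guidance equation (stated for reference; this is textbook Bohmian mechanics, nothing specific to this thread):

```latex
\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \Psi}{\Psi}\right)\!(Q_1,\dots,Q_N)
```

Equivariance then guarantees that if the particle positions are |Psi|^2-distributed at one time, they remain so at all times, which is why the dots accumulate into the interference pattern.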

There are various extensions to Bohmian mechanics to deal with particle creation, annihilation, quantum field theory, and relativistic versions. None are as complete as non-relativistic quantum mechanics in having a mathematically proven existence, but a large part of that is quantum field theory itself being unsound; the Bohmian part is not the problem. There are avenues being pursued to resolve the infinite divergences of quantum field theory using Bohmian insights (basically, use wave functions that respect probability flowing along with particle creation and annihilation). The work is promising but difficult.

For the relativistic versions, it is easy enough to create a foliation of space-time to serve as a "now". There are even versions where the foliation is constructed from already existing structure. Mathematically it seems fine as far as I know. But philosophically, it is weird to have an invisible fundamental structure that seemingly contradicts the main lesson of relativity.


It is all about competition, which is, at its root, differentiation. The productivity can lead to increased wages for workers if workers are able to jump jobs because of it; I am not sure it works like that with AI tools. It could also work if programmers started leaving jobs to do their own thing. Programmers need to get scarce to bump up salaries.

As for companies, their profits are also linked to competition. If they have equally good competitors, then extra productivity is likely to lead to lower prices to keep attracting customers, and the customers profit. If they are more unique, then the extra productivity can lead to higher profits as they shrink their costs (fewer programmers, earlier deliverables, better customization to what customers want). All of these forces take time to shake out.

The most likely path is to enable a million independent projects to flourish and to find unserved niches that lead to a good, but not exorbitant, income, at least for a time.


Feeling the same, I just started a dialectic blog, with ChatGPT arguing with itself based on my prompt. It can have some biting words. I also have been using Suno AI to make a song with each post; while not perfect, it certainly allows for totally non-musical people like me to have something produced that can be listened to.

One post that fits with this feeling is https://silicon-dialectic.jostylr.com/2025/07/02/impostor-sy... The techno-apocalyptic lullaby at the end that the AIs came up with is kind of chilling.


I have been managing Claude to work on a rational math library in JavaScript: https://calc.ratmath.com

I am particularly enjoying the Stern-Brocot tree exploration: https://calc.ratmath.com/stern-brocot.html#0_1 I hope people will find it a nice way of understanding good rational approximations and how they tie into continued fractions and mediants. A nice exercise is to type x^2 in the expression box and follow the path that always advances x^2 toward 2; this yields the continued fraction representation of the square root of 2.
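For anyone who wants to replicate the exercise offline, here is a minimal sketch (my own illustration, not the ratmath source) of walking the Stern-Brocot tree toward sqrt(2), taking the mediant of the current bounds at each node and branching on whether its square exceeds 2; BigInt keeps everything exact:

```javascript
// Walk the Stern-Brocot tree toward sqrt(2) for a given number of steps.
function sternBrocotSqrt2(steps) {
  let [ln, ld] = [0n, 1n]; // left bound 0/1
  let [rn, rd] = [1n, 0n]; // right bound "1/0", i.e. infinity
  const path = [];
  for (let i = 0; i < steps; i++) {
    const mn = ln + rn, md = ld + rd; // mediant of the two bounds
    if (mn * mn < 2n * md * md) {     // mediant < sqrt(2): move right
      [ln, ld] = [mn, md];
      path.push("R");
    } else {                          // mediant > sqrt(2): move left
      [rn, rd] = [mn, md];
      path.push("L");
    }
  }
  const mn = ln + rn, md = ld + rd;
  return { approx: `${mn}/${md}`, path: path.join("") };
}
```

The mediants visited include 1/1, 3/2, 7/5, 17/12, 41/29, which are exactly the continued fraction convergents of sqrt(2), interleaved with the intermediate tree nodes.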


I have used Zed's plan with Claude and also Claude Code. They are very different experiences. Zed's agent workflow is very much: set it going, go away, review, give it some tips, iterate. As long as you use Sonnet and absolutely avoid the burn mode (formerly max mode), it should do a lot of work for you. The main limitation I hit is the context window. As the codebase gets larger, it takes more context for the agent to get going, and then it tends to have a hard time finishing. I find that about 4 prompts works for a feature that would take me a few hours to code.

For Claude Code, the limit resets every 5 hours, so if you hit it, you rest a bit; not that big a deal to me. But the way it works I find much more stressful. You review just about everything it does, step by step. Some actions you can approve to run without further permission, but it likes to run shell commands, and for obvious reasons arbitrary shell commands need your explicit yes for each run. This is probably a great flow if you want a lot of control over what it is doing, and the ability to intercede and redirect it is great. But if you want more of a "just get me the result and minimize my time and effort" workflow, then Zed is probably better for that.

I am also experimenting with OpenAI's Codex, which is yet another different experience: it runs on repos and pull requests. I have no idea what their rate limits will be; I have just started working with it.

Of the three, disregarding cost, I like Zed's experience the best. I also think they are the most transparent. Just make sure never to use burn mode; it burns through credits very quickly for no discernible benefit. But I think Zed is also limited to either small codebases or prompts that limit what the agent has to read to get up to speed, since the context window is about 120k (not the 200k the view seems to suggest).


Try claude --dangerously-skip-permissions


Thanks for the tip. That does work much more like Zed's integration. I used multipass to set up a VM, created a non-admin user, restricted its internet access with tinyproxy, mounted the repo I am working on, and now I don't worry about the danger. I just have to make sure the mounted directory is backed up. I do find that I hit the limits and have to wait for the reset; that is either a good time to take a break or to supplement with Zed, which has the feature that one can pay for extra prompts. The context window seems less of an issue in Claude Code than in the Zed integration, and it also compacts the context if necessary, though I find most of my feature work finishes before hitting that limit.


Helpful feedback, thank you!


I've been wondering if the AI coding agent world makes literate programming valuable again. I got into it when JavaScript was a mess, prior to the modern changes; it needed a lot of workarounds. Then they improved the language, and it felt like coding could be efficient with those features alone. But if the programmer switches from coding to reviewing, maybe it would be good to have various snippets, each preceded by an explanation, and then verify them. I haven't tried it yet, but I do wonder.


Could be. We tend to think of a number line going in that order, that is, the lower numbers are to the left. What is interesting is that being > 0 is often a condition, such as epsilon > 0. Though that is often paired with something like 0 < |x-a| < epsilon. I have often wondered about an alternate mathematics in which the inequality sign was always pointed in the same direction and whether that would ease the difficulty students have with inequalities.


There is hyperscript: https://hyperscript.org which claims descent from HyperCard and certainly embraces the web.

Also, this might happen in a few years if AI improves enough to be trusted to make things by novices. Hard to imagine, but just maybe.


> since every computer will have rational numbers they can't exactly calculate as well

It might be better worded as "can't calculate a decimal version of every rational number". One can work quite easily nowadays with exact representations of rational numbers on a computer. With BigInt, it is easy to have very large (for human purposes) numerators and denominators. To what extent practical calculations could be done with exact rational arithmetic, I am not sure, though I suspect it is largely a non-issue, as the precision of the inputs is presumably the limiting factor.
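A minimal sketch of what "exact representations" means in practice (my own toy, not the ratmath API), with a rational stored as a BigInt pair [numerator, denominator] in lowest terms:

```javascript
// Toy exact rational arithmetic with BigInt.
function gcd(a, b) { while (b) [a, b] = [b, a % b]; return a < 0n ? -a : a; }
function rat(n, d) { const g = gcd(n, d); return [n / g, d / g]; }
function add([a, b], [c, d]) { return rat(a * d + c * b, b * d); }
function mul([a, b], [c, d]) { return rat(a * c, b * d); }

// 1/3 + 1/6 reduces to exactly 1/2, and (1/2) * (2/1) to exactly 1/1,
// with no floating-point rounding anywhere.
const half = add(rat(1n, 3n), rat(1n, 6n));
const one = mul(half, rat(2n, 1n));
```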

Wildberger has specific objections to the usual definitions of real numbers, and they vary by definition. For decimals, the objection is that doing arithmetic with an infinite decimal is difficult even in a simple case such as 1/9 * 1/9: multiplying .111... by itself leads to sums of 1s that carry over and create a repeating pattern that is not self-evident from the decimal itself.
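By contrast, working from the fraction makes the repeating pattern mechanical to extract. A sketch (hypothetical helper, assuming n >= 0 and d > 0, using plain long division with remainder tracking):

```javascript
// Long division with remainder tracking: once a remainder repeats, the
// digits since its first appearance form the repeating block (in parens).
function repeatingDecimal(n, d) {
  let digits = "", r = n % d;
  const seen = new Map(); // remainder -> index in digits where it appeared
  while (r !== 0 && !seen.has(r)) {
    seen.set(r, digits.length);
    r *= 10;
    digits += Math.floor(r / d);
    r %= d;
  }
  if (r === 0) return `${Math.floor(n / d)}.${digits}`;
  const start = seen.get(r);
  return `${Math.floor(n / d)}.${digits.slice(0, start)}(${digits.slice(start)})`;
}
```

Here repeatingDecimal(1, 81) returns "0.(012345679)", the pattern (famously missing its 8) that is anything but self-evident from squaring .111... digit by digit.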

For Cauchy sequences, he objects to the absurd lack of uniqueness, particularly that any finite sequence can be prepended to the start of any Cauchy sequence. So a Cauchy sequence for pi could start with a trillion terms of a sequence converging to the square root of 2. This can be fixed up with tighter notions of a Cauchy sequence, though that makes the arithmetic much more cumbersome.

For Dedekind cuts, his issue seems mostly with a lack of explicit examples beyond roots. I think that is the weakest critique.

Inspired by his objections, I came up with a version of real numbers using intervals. Usually such approaches use a family of overlapping, notionally shrinking intervals. I maximized it to include all intervals that include the real number and came up with axioms for it that allow one to skirt around the issue that this is defining the real number. My work on this is hosted on GitHub: https://github.com/jostylr/Reals-as-Oracles


> One can work quite easily nowadays with exact representations of rational numbers on computer

One can also work with exact representations of Pi and sqrt(2). Use a symbolic system like MATLAB or Wolfram Alpha. Yes, if you create dedicated data structures for those exact representations you can work around the limitations of both 1/3 and Pi -- this is my point: the line is not "rational vs. irrational", it's "exact vs. computable to arbitrary precision vs. uncomputable". That is to say: a mathematical model that permits the rationals but outlaws the irrationals is much less likely to be at all useful than a model that permits computable numbers but outlaws/ignores non-computable numbers. I contend most objections to irrational numbers boil down to their general incomputability -- that is, 100% of all irrationals are not computable, and that makes people anxious. There is a coherent computation-focused model that keeps all computable irrationals and disallows the rest that would quell almost everyone's objections to the irrationals. For example, the set of rationals plus computable irrationals is countably infinite. All polynomials have roots.

> For decimals, it is the idea that doing arithmetic with an infinite decimal is difficult even with a simple example such as 1/9*1/9 which is multiplying .111... times itself, leading to sums of 1s that carryover and create a repeating pattern that is not self-evident from the decimal itself.

Right, but this is another example where an objection to irrational numbers can also be levied against 1/9, showing that computability is actually what we care about. And Pi and e and sqrt(2) are all computable, and not in any qualitatively more "difficult" way than the rationals.

> For Dedekind cuts, his issue seems mostly with a lack of explicit examples beyond roots. I think that is the weakest critique.

Yes, that is a weak critique indeed. Any computable real can be turned into a Dedekind cut that you can query in finite time.

> I came up with a version of real numbers using intervals

I haven't dug into your axioms, but it seems to follow that if you gave me a Dedekind cut (A, B) then I could produce an interval oracle by taking [x, y] => x ∈ A && y ∈ B. Similarly, if you gave me an oracle I could query it to determine inclusion in A and B for any point, immediately if you allow infinity in the query: Oracle(x, inf) <=> x ∈ A and Oracle(-inf, x) <=> x ∈ B. So at first glance these appear to be equivalent, unless you disallow infinity to the oracle, in which case I might need O(log(n)) steps to establish inclusion in the Dedekind cut. So it might be a very slight difference. Is that where the power comes from?
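To make the comparison concrete, here is a toy oracle for sqrt(2) with rational endpoints p/q given as BigInt pairs (my own illustration; the repository's actual axioms may differ), where every query reduces to the exact integer comparison of p*p against 2*q*q:

```javascript
// Is p/q on the given side of sqrt(2)? (assumes q >= 0n)
const leSqrt2 = (p, q) => p <= 0n || p * p <= 2n * q * q; // p/q <= sqrt(2)?
const geSqrt2 = (p, q) => p > 0n && p * p >= 2n * q * q;  // p/q >= sqrt(2)?

// Oracle: does the closed interval [a/b, c/d] contain sqrt(2)?
function oracle(a, b, c, d) {
  return leSqrt2(a, b) && geSqrt2(c, d);
}
```

The one-sided Dedekind-style queries then correspond to the degenerate "intervals" [x, inf) and (-inf, x], i.e. to leSqrt2 and geSqrt2 on their own.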


The current top comment has a link to the paper and the pdf is freely available there. It says it is an open access article.

