Hacker News | _alternator_'s comments

Love the abstract, a real blast from the past. The wavelet transform is a truly beautiful idea, with compactly supported wavelets first constructed by Daubechies in the late 1980s. They were revolutionary for, among other things, being a genuinely new class of functions with fascinating properties (fractal, compactly supported analogues of the Fourier basis).

They have applications in image and video processing, though IMO they aren’t used as often as they should be (they’re the default in JPEG 2000, IIRC, but that format isn’t commonly used).

There have definitely been attempts to do graph based wavelets before; tbh I’m not familiar enough with the literature to comment on the novelty of this work, but it looks solid on a quick inspection.
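For the curious, the core idea is tiny. Here's a minimal sketch of my own (not from the paper) of a one-level Haar transform, the simplest compact wavelet and the ancestor of Daubechies' family:

```python
# One level of the Haar wavelet transform: split a signal into
# pairwise averages (coarse approximation) and pairwise differences
# (detail). This is a toy sketch, not production code.
def haar_step(x):
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, dif

def inverse_haar_step(avg, dif):
    # Perfect reconstruction: each pair is (a + d, a - d).
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

signal = [9, 7, 3, 5]
avg, dif = haar_step(signal)
print(avg, dif)                     # [8.0, 4.0] [1.0, -1.0]
print(inverse_haar_step(avg, dif))  # [9.0, 7.0, 3.0, 5.0]
```

Real codecs iterate this step on the averages and use longer Daubechies filters, but the average/difference split is the whole trick.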


About 3000 citations, so I’d guess the paper was very well received.

The real problem for Apple here: in the fairly near future, the model of pre-defined functionality of software will be obsolete. All apps will be vibe coded and customized. Individual apps will basically be silos that protect proprietary data sources that are difficult to collect. But they will be infinitely more configurable than they are today.

Is there an actual use-case for this fan-fiction-esque prediction of software that rewrites itself, or is this just promoting AI for its own sake?

I get annoyed enough when software I use changes arbitrarily in ways that don't benefit me, I can't see LLM vibed software that changes itself based on what it thinks I need being an improvement at all. It just feels like it would be even more annoying.


My ideal software: buggy in ways you can't diagnose, for reasons you can't intuit, reproducible by literally no one in the world, and with no one to file a bug report to

I love the idea of all software everywhere involving a die roll. Sounds like it'll be even more infuriating than most computing is right now.

The BOFH is grinning in his cave.

Simply put, yes, there are many use cases. Concrete example: Various timers with a better interface for the specific task I want to do (meditate, pomodoro, workout, etc) and no ads.

There’s no reason I really need those four different apps on my phone with a login and ads and tracking and 100 page terms of service.

Claude can write them for me today. It’d be even better if I could just ask my phone for them and they’d pop up in a couple of minutes.


I feel like I could write this in an afternoon with basic HTML. React if you want your phone to also act as a heating pad.

I suspect any minute the first software with integrated AI customization will launch. Geeks will hate it, but regular folks will love trading all those god damn endless settings and menus for a simple prompt bar.

In an almost ironic twist, GUI will revert back to a "CLI".


Yeah, I've been wondering what this might look like for a 3D printer slicer --- heck, I'd be glad to just have a series of sliders:

- aesthetic print quality

- dimensional accuracy

- strength

- ease of removing supports

- reliability of printing

which resolve to two values which estimate:

- print time

- volume of material used/consumed in supports
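A minimal sketch of how sliders like these might resolve into the two estimates. The function name and all the weights are my own invention for illustration, not taken from any real slicer:

```python
# Hypothetical slider-to-estimate mapping; the coefficients are made
# up to illustrate the idea, not calibrated against any real printer.
def estimate(quality, accuracy, strength, support_ease, reliability,
             base_time_min=60.0, base_material_g=20.0):
    """Each slider is in [0, 1]; higher means 'care more about this'."""
    # Caring more about quality/accuracy/strength slows the print
    # (finer layers, slower speeds, denser infill).
    time_factor = (1 + 1.5 * quality + 0.8 * accuracy
                   + 1.2 * strength + 0.5 * reliability)
    # Easier-to-remove supports tend to cost extra interface material,
    # and denser infill costs material too.
    material_factor = 1 + 0.9 * strength + 0.4 * support_ease
    return base_time_min * time_factor, base_material_g * material_factor

t, m = estimate(quality=0.8, accuracy=0.5, strength=0.3,
                support_ease=0.7, reliability=0.5)
print(round(t), round(m))  # estimated minutes and grams
```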


Yeah, but not everyone has the same priorities within those sliders. For example, strength is something that has many different types: tensile strength, compression strength, shearing, etc.

You use different infills to optimise for each type. This differs per model. An AI can surely help optimise it but it won't always know which one to prioritize, it requires knowing exactly what the printed model will be used for.

The same with aesthetics: usually you care about one specific side. And for ease of removal, are you willing to use a support interface material? That makes a lot of difference.


I think this comment actually makes the case for highly custom LLM modifications to software. If you have priorities, you express them to the model and let it figure out how to maximize the UI for your needs.

Yes indeed but not as sliders IMO, that was my point

The article basically said it did launch and then Apple blocked it.

I’m really curious why I’m getting downvoted here. I fundamentally think that software is about to become 1000x more customizable and it’s a problem for the existing app model.

If I’m wrong, I want to know why. The thread seems to have a bias against AI slop (totally understandable), but in my experience it can one-shot simple, functional apps today, and the technology will likely be able to make much better apps in the near future.


Just think about how your parents used google when you were a kid. What got better results faster?

we were taught google search query syntax by our librarian when I was in high school in 2002-ish. so...

Leap seconds are their own nightmare. UNIX time ignores them, btw, so a UNIX timestamp is 86400 * (number of days since 1/1/1970) + (number of seconds since midnight UTC). The behavior at the instant of a leap second is undefined.
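A quick illustration with the Python stdlib: the identity holds for any date precisely because POSIX time skips leap seconds, and the leap second inserted at the end of 2016 has no timestamp of its own.

```python
from datetime import date, datetime, timezone

# Unix time pretends leap seconds don't exist, so it is literally
# 86400 * (days since 1970-01-01) + (seconds since midnight UTC).
dt = datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
ts = int(dt.timestamp())

days_since_epoch = (dt.date() - date(1970, 1, 1)).days
secs_since_midnight = 23 * 3600 + 59 * 60 + 59

assert ts == 86400 * days_since_epoch + secs_since_midnight
print(ts)  # 1483228799 -- the leap second 23:59:60 that followed
           # this instant has no Unix timestamp at all
```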

Undefined behavior is worse than complicated defined behavior imo.

That's a good way of describing it. It's far too easy to pretend UNIX timestamps correspond to a stopwatch counting from 1/1/1970.

Right. Currently epoch time is off from stopwatch time by 27 seconds.

Let’s not forget that mathematics is a social construct as much as (and perhaps more than) a true science. It’s about techniques, stories, relationships between ideas, and ultimately, it’s a social endeavor that involves curiosity satisfaction for (somewhat pedantic) people. If we automate ‘all’ of mathematics, then we’ve removed the people from it.

There are things that need to be done by humans to make it meaningful and worthwhile. I’m not saying that automation won’t make us more able to satisfy our intellectual curiosity, but we can’t offload everything and have something of value that we could rightly call ‘mathematics’.


> mathematics is a social construct

If you believe Wittgenstein, then all of math is more and more complicated stories amounting to 1=1. Like a ribbon that we figure out how to tie in ever more beautiful knots. These stories are extremely valuable and useful, because we find equivalents of these knots in nature—but boiled down, that is what we do when we do math.


I like the Kronecker quote, "Natural numbers were created by god, everything else is the work of men" (translated). I figure that (like programming) it turns out that putting our problems and solutions into precise reusable generalizable language helps us use and reuse them better, and that (like programming language evolution) we're always finding new ways to express problems precisely. Reusability of ideas and solutions is great, but sometimes the "language" gets in the way, whether that's a programming language or a particular shape of the formal expression of something.

More like 1 = 0 + 1.

Read about Lisp, The Computational Beauty of Nature, the 64K Lisp from https://t3x.org, and how all numbers can be built by counting nested lists all the way down.

List of a single item:

    (cons 1 nil)
nil is the empty list, so this reads as:

[ 1 | nil ]

List of three items:

    (cons 1 (cons 2 (cons 3 nil)))
Which is the same as

    (list 1 2 3)
Internally, it's a chain of cons cells; imagine these are domino pieces chained together. The right half of each one points to the next:

[ 1 | ] --> [ 2 | ] --> [ 3 | nil ]

A function call is also a list; the operator at the head is applied to the rest of the items:

    (+ 1 2 3)
Returns 6.

Which is like saying:

    (eval '(+ 1 2 3))
'(+ 1 2 3) is just a list of 4 items, not a function call.

Eval applies the + operation to the rest of the list, recursively.

This is the default for every list written in parentheses without the leading ' .

    (+ 1 (+ 2 3))
Will evaluate to 6, while

    (+ 1 '(+ 2 3))
will give you an error, as you are adding a number and a list, and those are distinct kinds of objects.

How arithmetic is made from 'nothing':

https://t3x.org/lisp64k/numbers.html

Table of contents:

https://t3x.org/lisp64k/toc.html

Logic, too:

https://t3x.org/lisp64k/logic.html
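In that spirit, a little sketch of my own (Python rather than Lisp, and not from the linked book): Peano-style arithmetic where a number is nothing but the nesting depth of lists.

```python
# Numbers from 'nothing': 0 is the empty list, and n+1 is a list
# wrapping n. Addition just re-wraps: (a+1) + b = (a+b) + 1.
zero = []

def succ(n):
    return [n]

def plus(a, b):
    # Peel one layer off a and wrap b, until a is exhausted.
    return b if a == zero else [plus(a[0], b)]

def to_int(n):
    # Count nesting depth to read the number back out.
    return 0 if n == zero else 1 + to_int(n[0])

three = succ(succ(succ(zero)))
two = succ(succ(zero))
print(to_int(plus(three, two)))  # 5
```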


You don’t really have to believe Wittgenstein; any logician will tell you that if your proof is not logically equivalent to 1=1 then it’s not a proof.

Sure, I just personally like his distinction between a “true” statement like “I am typing right now” and a “tautological” statement like “3+5=8”.

In other words, declarative statements relate to objects in the world, but mathematical statements categorize possible declarative statements and do not relate directly to the world.


If you look from far enough, it becomes "Current world ⊨ I am typing right now" which becomes tautological again.

In my view mathematics builds tools that help solve problems in science.

This is known as “applied mathematics”.

Sounds lame and boring to me.

There is a bit about this in Greg Egan's Diaspora, where a parallel is drawn between math and art. It is not difficult to automate art in the sense that you can enumerate all possible pictures, but it takes sentient input to find the beautiful areas in the problem space.

I do not think this parallel works, because I think you would struggle to find a discipline for which this is not the case. It is trivial to enumerate all the possible scientific or historical hypotheses, or all the possible building blueprints, or all the possible programs, or all the possible recipes, or legal arguments…

The fact that the domain of study is countable and computable is obvious because humans can’t really study uncountable or uncomputable things. The process of doing anything at all can always be thought of as narrowing down a large space, but this doesn’t provide more insight than the view that it’s building things up.


Automating proofs is like automating calculations: neither is what math is, they are just things in the way that need to be done in the process of doing math.

Mathematicians will just adopt the tools and use them to get even more math done.


I don't think that's true. Often, to come up with a proof of a particular theorem of interest, it's necessary to invent a whole new branch of mathematics that is interesting in its own right e.g. Galois theory for finding roots of polynomials. If the proof is automated then it might not be decomposed in a way that makes some new theory apparent. That's not true of a simple calculation.

> I don't think that's true. Often, to come up with a proof of a particular theorem of interest, it's necessary to invent a whole new branch of mathematics that is interesting in its own right e.g. Galois theory for finding roots of polynomials. If the proof is automated then it might not be decomposed in a way that makes some new theory apparent. That's not true of a simple calculation.

Ya, so? Even if automation is only going to work well on the well understood stuff, mathematicians can still work on mysteries, they will simply have more time and resources to do so.


This is literally the same thing as having the model write well factored, readable code. You can tell it to do things like avoid mixing abstraction levels within a function/proof, create interfaces (definitions/axioms) for useful ideas, etc. You can also work with it interactively (this is how I work with programming), so you can ask it to factor things in the way you prefer on the fly.

>This is literally the same thing as

No.

>You can

Not right now, right? I don't think current AI automated proofs are smart enough to introduce nontrivial abstractions.

Anyway, I think you're missing the point of the parent's posts. Math is not proofs. Some time ago, the four color theorem "proof" was very controversial, because it was a computer-assisted exhaustive check of every possibility, impossible to verify by a human. It didn't bring any insight.

In general, on some level, proofs are not that important to mathematicians. I mean, for example, Riemann hypothesis or P vs. NP proofs would be groundbreaking not because anyone seriously doubts that P ≠ NP, but because we expect the proofs would be enlightening and would use some novel technique.


Right, in the same way that programs are not opcodes. They're written to be read and understood by people. Language models can deal with this.

I'm not sure what your threshold for "trivial" is (e.g. would inventing groups from nothing be trivial? Would figuring out what various definitions in condensed mathematics "must be" to establish a correspondence with existing theory be trivial?), but I see LLMs come up with their own reasonable abstractions/interfaces just fine.


There are areas of mathematics where the standard proofs are very interesting and require insight, often new statements and definitions and theorems for their sake, but the theorems and definitions are banal. For an extreme example, consider Fermat's Last Theorem.

Note on the other hand that proving standard properties of many computer programs are frequently just tedious and should be automated.


Yes, but > 90% of the proof work to be done is not that interesting insightful stuff. It is rather pattern matching from existing proofs to find what works for the proof you are currently working on.

If you've ever worked on a proof for formal verification, then it's... work... and the nature of the proof probably (most probably) is not going to be something new and interesting for other people to read about; it is just work that you have to do.
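A toy Lean 4 example of that flavor, written by me to illustrate the point: the statement is banal, and the proof is one call to a decision procedure. The tedium scales; the insight doesn't.

```lean
-- A banal fact whose proof is pure bookkeeping: omega, a decision
-- procedure for linear integer arithmetic, discharges it mechanically.
example (n : Nat) : 2 * n + 1 < 2 * (n + 1) + 1 := by omega
```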


You're right, I misread your comment. Apologies.

[flagged]


First of all, I think your comment is against HN guidelines.

And I expect GP actually has a lot of experience in mathematics; they are exactly right, and this is how professional mathematicians see math (at least most of them, including the ones I interact with).


Engineers, maybe. Not the case with Mathematicians.

Fascinating discussion, including examples of LLMs helping push the state of the art:

> I still did not see how to prove this inequality, but I decided to try my luck giving it to ChatGPT Pro, which recognized it as an L^1 approximation problem and gave me a duality-based proof (based ultimately on the Fourier expansion of the square wave). With some further discussion, I was able to adapt this proof to functions of global exponential type (replacing the Fourier manipulations with contour shifting arguments, in the spirit of the Paley-Wiener theorem), which roughly speaking gave me half of what I needed to establish (2).

> As a side note, this latter argument was provided to me by ChatGPT, as I was not previously aware of the Nevanlinna two-constant theorem.

This mirrors my experience with these tools for math. Great for local problems and chatting through issues. Still can’t do the whole thing in one shot but getting there.


This started from an offhand comment I made in this thread [1] a few weeks ago. I was interested in making the intuition rigorous, and the result was satisfying and agreed with my experiments. Hope y'all enjoy!

[1]: https://news.ycombinator.com/item?id=47254896#47298380


I'd just say I'm playing microtonal music[1][2] and not worry about it.

Jokes aside, great read. Always so satisfying when the predictions match experiment.

[1]: https://www.youtube.com/watch?v=0Ssi-9wS1so (for the latest craze in microtonal music)

[2]: https://www.youtube.com/watch?v=tiKCORN-6m8 (for a more informative and historical video)


Thanks! Yeah, microtonal stuff is very interesting. I was surprised by the total amount of pitch bending you can feasibly get with a guitar and just changing the dynamics. I think the max was about a half tone, which is definitely a “macrotonal” shift.

Thank you. The main problem with Bayesian statistics is that if the outcome depends on your priors, then your priors, not the data, determine the outcome.

Bayesian supporters often like to say they are just using more information by encoding it in priors, but if they had data to support their priors, they'd be frequentists.


If they were doing frequentist inference they wouldn’t be using priors at all and there is nothing frequentist in using previous data to construct prior distributions.


Not true. In frequentist statistics, from the perspective of Bayesians, your prior is a point distribution derived empirically. It doesn't have the same confidence / uncertainty intervals but it does have an unnecessarily overconfident assumption of the nature of the data generating process.


Not true. In frequentist statistics, from the perspective of Bayesians and non-Bayesians alike, there are no priors.

—-

Dear ChatGPT, are there priors in frequentist statistics? (Please answer with a single sentence.)

No — unlike Bayesian statistics, frequentist statistics do not use priors, as they treat parameters as fixed and rely solely on the likelihood derived from the observed data.


There's always priors, they're just "flat", uniform priors (for maximum likelihood methods). But what "flat" means is determined by the parameterization you pick for your model, which is more or less arbitrary. Bayesians would call this an uninformative prior. And you can most likely account for stronger, more informative priors within frequentist statistics by resorting to so-called "robust" methods.


First, there is no such thing as an ‘uninformative’ prior; it’s a misnomer. They can change drastically based on your parameterization (cf. change of variables in integration).

Second, I think the nod to robust methods is what’s often called regularization in frequentist statistics. There are cases where regularization and priors lead to the same methodology (cf. L1-regularized fits and Laplace priors), but the interpretation of the results is different. Bayesians claim they get stronger results, but that’s because they make what are ultimately unjustified assumptions. My point is that if those assumptions were fully justified, they would have to use frequentist methods.
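The change-of-variables point is easy to see numerically. A quick stdlib-only sketch (mine, for illustration): a "flat" prior on a probability p is anything but flat once you reparameterize to log-odds.

```python
import math
import random

random.seed(0)
# A "flat" prior on a probability p in (0, 1)...
p = [random.uniform(0, 1) for _ in range(100_000)]
# ...is the same prior, reparameterized as log-odds theta = log(p/(1-p)).
theta = [math.log(q / (1 - q)) for q in p]

# Flat in p is strongly peaked in theta: the induced density is
# e^theta / (1 + e^theta)^2, concentrated near 0 with thin tails.
central = sum(1 for t in theta if abs(t) < 2)
tails = len(theta) - central
print(central, tails)  # the central region dominates (~76% vs ~24%)
```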


One standard way to get uninformative priors is to make them invariant under the transformation groups which are relevant given the symmetries in the problem.


It’s not true that “there are always priors”. There are no priors when you calculate the area of a triangle, because priors are not a thing in geometry. Priors are not a thing in frequentist inference either.

You may do a Bayesian calculation that looks similar to a frequentist calculation but it will be conceptually different. The result is not really comparable: a frequentist confidence interval and a Bayesian credible interval are completely different things even if the numerical values of the limits coincide.


Frequentist confidence intervals as generally interpreted are not even compatible with the likelihood principle. There's really not much of a proper foundation for that interpretation of the "numerical values".


What does “as generally interpreted” mean? There is one valid way to interpret confidence intervals. The point is that it’s not based on a posterior probability and there is no prior probability there either.


If you want to say that a frequentist analysis, which doesn’t include any concept of a prior, yields a result with a similar form to the result of a conceptually very different Bayesian analysis using a flat prior (definitely not “a point distribution derived empirically”), that may be correct. It remains true that there is no prior in the frequentist analysis, because priors are not part of frequentist inference at all.


Priors are not used in construction of frequentist approaches, but that does not mean that the analyses aren't isomorphic in theory.

Point distribution <=> point estimate as a sample from an initially flat distribution. A priori vs. a posteriori perspectives, which are equivalent if we take your description of frequentist statistics into account ;)


It’s not my description of frequentist statistics. It’s the frequentist statisticians’ description. This is from Wasserman’s All of Statistics:

The statistical methods that we have discussed so far are known as frequentist (or classical) methods. The frequentist point of view is based on the following postulates:

F1 […]

F2 Parameters are fixed, unknown constants. Because they are not fluctuating, no useful probability statements can be made about parameters.

F3 […]


Explain?


You're talking to a bot. There has been a wild influx of these accounts that just add filler comments like this.


Thousands of them in fact.


This seems like soft trolling. Global warming is canonically the opposite of a zero sum game. Everyone is losing.


Everyone is suffering, sure, but what matters is the relative degree of it. The side that suffers less wins against the others, and that's the only thing that matters.

And no: in China, global warming means worsening desertification; in Russia it means melting permafrost, which covers 60% of the country, and the same in Canada. Europe and the US are uniquely positioned to suffer the least from it, and many industries will win outright. For example, there will be a year-round tourist season within the continental EU: all summer on the Baltic and the North Sea, and all winter in the Mediterranean. Winemaking in Spain and southern France will suffer badly and in some places may become commercially non-viable, but it will expand to a great deal more territory in northern Germany, the Low Countries, Poland, and the UK, enabling much richer wines thanks to a greater variety of soils.


> Everyone is suffering sure, but what matters is relative degree of it.

Is that what matters?


> This seems like soft trolling. Global warming is canonically the opposite of a zero sum game. Everyone is losing.

No, not everyone loses: people in colder climates win a lot from global warming, especially Russia and Canada. There is so much land there that is currently not viable due to being too cold that will open up from global warming.

