I also liked the one on lambda calculus. I hope one day we will be able to find an interpretation of what PLUS times PLUS actually means. Maybe this is how we will explore nonstandard arithmetic.
Edit: Oh wait, no, I was thinking of the Drew's Campfire double pendulum video. That video was extra interesting because the creator is not a typical content producer. He had just a few videos without any views, then dropped what might be one of the best videos of all time, and then went back to his technical videos.
More on what astronauts found “objectionable” and “distasteful” with Apollo's system, from the PDF linked in the OP (1):
"In general, the Apollo waste management system worked satisfactorily from an engineering standpoint. From the point of view of crew acceptance, however, the system must be given poor marks. The principal problem with both the urine and fecal collection systems was the fact that these required more manipulation than crewmen were used to in the Earth environment and were, as a consequence, found to be objectionable. The urine receptacle assembly represented an attempt to preclude crew handling of urine specimens but, because urine spills were frequent, the objective of “sanitizing” the process was thwarted.
The fecal collection system presented an even more distasteful set of problems. The collection process required a great deal of skill to preclude escape of feces from the collection bag and consequent soiling of the crew, their clothing, or cabin surfaces. The fecal collection process was, moreover, extremely time consuming because of the level of difficulty involved with use of the system. An Apollo 7 astronaut estimated the time required to correctly accomplish the process at 45 minutes.* Good placement of fecal bags was difficult to attain; this was further complicated by the fact that the flap at the back of the constant wear garment created an opening that was too small for easy placement of the bags.** As was noted earlier, kneading of the bags was required for dispersal of the germicide.
*Entry in the log of Apollo 7 by Astronaut Walter Cunningham.
**The configuration of the constant wear garments on later Apollo missions were modified to correct this problem."
This is indeed a great quote (one of many gems from Sir Tony) but I think the context that follows it is also an essential insight:
> The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late.
I'm an occasional hobbyist maker and I've used Autodesk Fusion, Solid Edge, OpenSCAD, and other niche parametric programs, but I always felt FreeCAD was too complex. But I really wanted it to work for me because it's FOSS and 100% offline. So with the new FreeCAD 1.1 RC I found an hour-long tutorial and dove in.
(1.1 is supposedly much easier to work with)
After doing the tutorial I can say that 1.1 is very nice; I can finally uninstall Fusion and Solid Edge :)
> The transformer architectures powering current LLMs are strictly feed-forward.
This is true in a specific contextual sense (each token an LLM produces comes from a feed-forward pass). But it has been untrue for more than a year with reasoning models, which feed their produced tokens back as inputs, and whose tuning effectively rewards them for doing this skillfully.
Heck, it was untrue before that as well, any time an LLM responded with more than one token.
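To make that feedback loop concrete, here's a toy sketch of autoregressive decoding (toy_model and the greedy sampler are made-up stand-ins, not any particular API): each forward pass is strictly feed-forward, but its output token becomes part of the input on the next pass.

```python
def toy_model(tokens):
    # hypothetical stand-in for the network: one feed-forward pass over the
    # whole context, returning logits over a tiny 4-token vocabulary
    return [(sum(tokens) + v) % 7 for v in range(4)]

def sample(logits):
    # toy sampler: greedy pick of the largest logit
    return max(range(len(logits)), key=lambda i: logits[i])

def generate(model, prompt_tokens, max_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)         # feed-forward over the current context
        tokens.append(sample(logits))  # the new token is fed back in next time
    return tokens

print(generate(toy_model, [1, 2, 3], 5))
```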
> A [March] 2025 survey by the Association for the Advancement of Artificial Intelligence (AAAI), surveying 475 AI researchers, found that 76% believe scaling up current AI approaches to achieve AGI is "unlikely" or "very unlikely" to succeed.
I dunno. This survey publication was from nearly a year ago, so the survey itself is probably more than a year old. That puts us at Sonnet 3.7. The gap between that and present day is tremendous.
I am not skilled enough to say this tactfully, but: expert opinions can be the slowest to update on the news that their specific domain may, in hindsight, have been the wrong horse. It's the quote about how hard it is to believe something when your income requires it to be false, except instead of income it can be your whole legacy or self-concept. Way worse.
> My take is that research taste is going to rely heavily on the short-duration cognitive primitives that the ARC highlights but the METR metric does not capture.
I don't have an opinion on this, but I'd like to hear more about this take.
Not to go all “ackchually,” but modern GPUs can render in many ways other than rasterising triangles, and they can absolutely draw a cylinder without any tessellation involved. You can use the analytical ray tracing formula, or signed distance fields for a practical way to easily build complex scenes purely with maths: https://iquilezles.org/articles/distfunctions/
Now of course triangles are usually the most practical way to render objects, but it just bugs me when someone says something like “Every smooth surface you've ever seen on a screen was actually tiny flat triangles” when it's patently false; ray tracing a sphere is pretty much the Hello World of computer graphics and no triangles are involved.
Edit: for CADs, direct ray tracing of NURBS surfaces on the GPU exists and lets you render smooth objects with no triangles involved whatsoever, although I’m not sure if any mainstream software uses that method.
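For anyone curious what "no triangles" looks like in practice, here's a minimal sketch of the analytic ray-sphere test: it's just solving a quadratic (plain Python, not tied to any particular renderer).

```python
import math

# Analytic ray-sphere intersection: substitute the ray o + t*d into |p - c|^2 = r^2
# and solve the resulting quadratic in t. No tessellation anywhere.
def ray_sphere(origin, direction, center, radius):
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                            # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)     # nearer of the two hit points
    return t if t >= 0 else None

# A ray looking down -z from the origin hits a unit sphere at (0, 0, -5) at t = 4.
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))
```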
> For the chess problem we propose the estimate number_of_typical_games ~ typical_number_of_options_per_move ^ typical_number_of_moves_per_game. This equation is subjective, in that it isn’t yet justified beyond our opinion that it might be a good estimate.
This applies to most if not all games. In our paper "A googolplex of Go games" [1], we write
"Estimates on the number of ‘practical’ n × n games take the form b^l where b and l are estimates on the number of choices per turn (branching factor) and game length, respectively. A reasonable and minimally-arbitrary
upper bound sets b = l = n^2, while for a lower bound, values of b = n and l = (2/3)n^2 seem both reasonable and not too arbitrary. This gives us bounds for the ill-defined number P19 of ‘practical’ 19x19 games of
10^306 < P19 < 10^924
Wikipedia’s page on Game complexity[5] combines a somewhat high estimate of b = 250 with an unreasonably low estime of l = 150 to arrive at a not unreasonable 10^360 games."
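A quick sanity check of those exponents (log10 only; this is just my back-of-the-envelope restatement of the quoted b^l estimates, not code from the paper):

```python
from math import log10

n = 19
upper = n**2 * log10(n**2)        # b = l = n^2: log10(361^361) ~ 923, i.e. below 10^924
lower = (2/3) * n**2 * log10(n)   # b = n, l = (2/3)n^2: ~ 308, i.e. above 10^306
wiki  = 150 * log10(250)          # Wikipedia's b = 250, l = 150: ~ 360
print(round(upper), round(lower), round(wiki))   # 923 308 360
```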
> Our final estimate was that it is plausible that there are on the order of 10^151 possible short games of chess.
I'm curious how many arbitrary length games are possible.
Of course the length is limited to 17697 plies [3] due to FIDE's 75-move rule. But constructing a huge class of games in which every one is provably legal remains a large challenge; much larger than in Go, where move legality is much easier to determine.
The main result of our paper is on arbitrarily long Go games, of which we prove there are over 10^10^100.
While useful, it needs a big red warning to potential leakers. If they were personally served documents (such as via email, while logged in, etc.), there really isn't much that can be done to ascertain the safety of leaking them. It's not even safe if there are two or more leakers who "compare notes" to try and "clean" something for release.
The watermark can even be contained in the wording itself (multiple versions of sentences, word choice, etc. store the entropy). The only moderately safe thing to leak would be a pure-text full paraphrasing of the material. But that wouldn't inspire much trust as a source.
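As a toy illustration of wording-as-watermark (entirely made up, not any real scheme): give each sentence two equally plausible phrasings, let each recipient's copy pick one per slot, and the phrasing pattern identifies the copy.

```python
# Each slot has two equally plausible phrasings; picking one encodes one bit per slot.
VARIANTS = [
    ("The report was sent on Monday.",  "The report went out on Monday."),
    ("Costs increased last quarter.",   "Costs rose last quarter."),
    ("The team will meet tomorrow.",    "The team meets tomorrow."),
]

def watermark(bits):
    # bits is a string like "101"; each bit selects the phrasing of one slot
    return " ".join(pair[int(b)] for pair, b in zip(VARIANTS, bits))

def recover(text):
    # the distributor reads the bits back from whichever phrasings appear
    return "".join(str(i) for pair in VARIANTS
                   for i, v in enumerate(pair) if v in text)

copy_for_alice = watermark("101")
print(copy_for_alice)
print(recover(copy_for_alice))   # "101" identifies Alice's copy
```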
Super cool to read, but can someone ELI5 what Gaussian splatting is (and/or radiance fields), specifically with respect to how the article talks about it finally being "mature enough"? What's changed that this is now possible?
For most daily needs: chef's knife, paring knife, serrated/bread knife. Possibly useful 'extras': kitchen shears, petty/utility, boning, slicing/carving. They do not recommend sets.
I will beat loudly on the "Attention is a reinvention of Kernel Smoothing" drum until it is common knowledge. It looks like Cosma Shalizi's fantastic website is down for now, so here's an archive link to his essential reading on this topic [0].
If you're interested in machine learning at all and not very strong on kernel methods, I highly recommend taking a deep dive. Such a huge amount of ML can be framed through the lens of kernel methods (and things like Gaussian Processes will become much easier to understand).
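To make the analogy concrete, here's a minimal numpy sketch (my own illustration, not Shalizi's code): Nadaraya-Watson smoothing and single-query dot-product attention are both softmax-weighted averages of stored values; they mostly differ in the similarity function.

```python
import numpy as np

# Nadaraya-Watson kernel smoothing: the prediction at x_query is a weighted
# average of the training targets, weighted by a kernel on distance to each x_i.
def kernel_smooth(x_query, X_train, y_train, bandwidth=1.0):
    sq_dists = np.sum((X_train - x_query) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2 * bandwidth ** 2))  # Gaussian kernel
    weights /= weights.sum()                             # softmax over -sq_dists/(2h^2)
    return weights @ y_train

# Single-query scaled dot-product attention: same weighted-average shape, with
# similarity measured by q.k_i / sqrt(d) instead of negative squared distance.
def attention(q, K, V):
    scores = K @ q / np.sqrt(len(q))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

X = np.random.randn(50, 4)
y = np.random.randn(50)
print(kernel_smooth(np.zeros(4), X, y))
print(attention(np.zeros(4), X, np.random.randn(50, 3)))
```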
For the particular case of the 5 delimiters '\n', '.', '?', '!', and ';', it just so happens that you can do this with a single shuffle instruction, replacing the explicit lookup table.
You can do this whenever `c & 0x0F` is unique for the set of characters you're looking for.
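Here's a scalar Python sketch of the idea (the real version does this for 16 bytes at once, with one byte-shuffle acting as the 16-entry table):

```python
# Check the claim for '\n', '.', '?', '!', ';': their low nibbles are all distinct,
# so a 16-entry table indexed by (c & 0x0F) can stand in for the lookup.
delims = b"\n.?!;"
assert len({c & 0x0F for c in delims}) == len(delims)   # low nibbles are unique

# table[nibble] holds the delimiter whose low nibble is `nibble`, else 0
table = [0] * 16
for c in delims:
    table[c & 0x0F] = c

def is_delim(c):
    # c matches iff the table entry selected by its low nibble equals c itself
    return table[c & 0x0F] == c

print([chr(c) for c in b"a.b?\n" if is_delim(c)])   # ['.', '?', '\n']
```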
The article, and many of the responses, are hinting at the fact that bubblesort is an example of an anytime algorithm. This is a wide class of algorithms which provide a correct answer given enough time, but also provide an increasingly good answer as they are given more time short of completion. This is a super valuable property for real-time systems (and many of the comments about games and animations discuss that). The paper that introduced me to the category is "Anytime Dynamic A*" [0], and I think it's both a good paper and a good algorithm to know.
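A tiny sketch of bubble sort used as an anytime procedure (illustrative only): cap the number of passes and you get a partially ordered result; let it run to completion and it's fully sorted.

```python
def bubble_sort_passes(items, max_passes=None):
    # Stop after max_passes passes; each extra pass leaves the list closer to sorted.
    a = list(items)
    passes = len(a) - 1 if max_passes is None else max_passes
    for _ in range(passes):
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:          # already sorted; further passes change nothing
            break
    return a

data = [5, 1, 4, 2, 8, 0, 3]
print(bubble_sort_passes(data, max_passes=1))   # partially ordered after one pass
print(bubble_sort_passes(data))                 # fully sorted when allowed to finish
```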
I'm guessing it was about 10-15 years ago that I was watching a documentary on the re-release of Ken Burns' Civil War.
They were highlighting the digital tools they were using to restore and enhance the original film capture for new streaming services etc.
They showed one of the restorers using a fascinating tool where one window was a video feed of the original film's "first pass" to digital. One of the landscape scenes had a small smudge in the upper right-hand corner, so the restorer paused the feed, went back frame by frame, and then dragged and dropped the frame into another window where he used Photoshop-like tools to fix everything before dragging it back into the "feed". It seemed VERY efficient and shows how good tools can really accelerate a workflow.
I found Neetcode to be a very fun way to progress through them. It has a skill tree that makes your progress more visible and has great lessons too.
On the more abstract motivation side, despite the somewhat contrived nature of the challenges compared to day-to-day work, I have treated it as a learning opportunity: there is genuinely some interesting stuff in there, and you never know when it might come in handy.
> The Hierarchical navigable small world (HNSW) algorithm is a graph-based approximate nearest neighbor search technique used in many vector databases.[1] Nearest neighbor search without an index involves computing the distance from the query to each point in the database, which for large datasets is computationally prohibitive. For high-dimensional data, tree-based exact vector search techniques such as the k-d tree and R-tree do not perform well enough because of the curse of dimensionality.
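For contrast, the "no index" baseline the quote describes looks roughly like this (numpy, purely illustrative): one distance computation per stored vector, which is exactly the cost HNSW's graph search avoids.

```python
import numpy as np

def brute_force_nn(query, database, k=5):
    # compare the query against every vector in the database
    dists = np.linalg.norm(database - query, axis=1)   # one distance per stored vector
    return np.argsort(dists)[:k]                        # indices of the k closest

db = np.random.rand(10_000, 128)   # 10k vectors, 128 dims: fine here, prohibitive at scale
q = np.random.rand(128)
print(brute_force_nn(q, db))
```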
I don't have any background in computer vision but enjoyed how the introductory chapter gets right into it, illustrating how to build a limited but working simple vision system.
Since people are a bit "WTF is this," here's the point of combinators.
A combinator is a function that doesn't mutate global state and doesn't close over variables. It's the base case in software. Pass it the same argument, get the same result, doesn't change anything about the rest of the system.
If you combine some of these combinators, you get another combinator - because when you put pure functions together, what you get is a pure function.
These are also the functions that are easy to write in assembly. Or C. Because they don't do very much. So if you write S and K in x64, and then compile Common Lisp to combinators written in terms of combinators written in terms of S and K, what you've got is Common Lisp running on those two hand-written assembly functions.
That's not a great idea for performance, but if you go with a less spartan set of maybe a few hundred of the most commonly occurring patterns, inlined into each other and given names, you've got a viable "machine" to compile functional programs into.
This looks a bit like a Forth that fears mutation. Or a bytecode VM. Or a CPU, where the combinators are called "instructions".
So what combinators are, broadly, is an expression of applied computer science with implementation details ignored as much as possible. That's the sense in which it's simpler than the lambda calculus.
Equally, if you implement the lambda calculus on a handful of combinators, then implement lisp on the lambda calculus, then write stuff in that lisp, you've really cut down how much machine specific work needs to be done at the bottom of the stack.
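For a concrete feel, here are S and K as curried Python functions (purely illustrative; the point above is that the "real" versions can be a few lines of assembly):

```python
# S and K as curried pure functions: no state, no free variables.
def K(x):
    return lambda y: x                       # K x y = x

def S(f):
    return lambda g: lambda x: f(x)(g(x))    # S f g x = (f x)(g x)

# Combining combinators yields another combinator: S K K behaves as the identity.
I = S(K)(K)
print(I(42))                  # 42
print(K("kept")("dropped"))   # 'kept'
```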
Jeez, top post on HN and there's a full overlay ad to "download a Mac extension". This deserves a summary post to save others the click. Here's the "what every engineer should know" without the spam:
PoE (Power over Ethernet) sends both DC power and data through the same twisted-pair Ethernet cable, allowing devices like IP cameras, wireless access points, and VoIP phones to run without separate power lines. The power is delivered by Power Sourcing Equipment (PSE) — either an endspan (built-in PoE switch) or a midspan (PoE injector placed between a non-PoE switch and the device). The powered device (PD) negotiates power via detection and classification before voltage is applied, preventing damage to non-PoE gear. IEEE 802.3af (Type 1) provides up to 15.4 W at the source, 802.3at/PoE+ (Type 2) up to 25.5 W delivered, and 802.3bt (Type 3/4) extends that to roughly 60–90 W using all four wire pairs. Engineers need to understand not just wiring, but also cable category limits, pair usage, power losses over distance, and heat dissipation — especially at higher power levels. Modern PoE designs must consider standards compliance, thermal management, and efficiency, as power density rises with new generations of PoE technology.
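As a quick reference, here is that summary as a lookup (a sketch using the budgets above plus the commonly cited delivered-power figures; check the actual standards before relying on it):

```python
# PSE = power sourced at the switch/injector; PD = power delivered to the device
# after cable losses. Figures are the commonly cited per-type maximums.
POE_TYPES = {
    "802.3af (Type 1)": {"pse_watts": 15.4, "pd_watts": 12.95},
    "802.3at (Type 2)": {"pse_watts": 30.0, "pd_watts": 25.5},
    "802.3bt (Type 3)": {"pse_watts": 60.0, "pd_watts": 51.0},
    "802.3bt (Type 4)": {"pse_watts": 90.0, "pd_watts": 71.3},
}

def smallest_type_for(device_watts):
    # pick the lowest PoE type whose delivered power covers the device's draw
    for name, spec in POE_TYPES.items():
        if spec["pd_watts"] >= device_watts:
            return name
    return None

print(smallest_type_for(20))   # a ~20 W access point needs at least 802.3at/PoE+
```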
> Our key insight is that many everyday social interactions may follow predictable patterns; efficient "scripts" that minimize cognitive load for actors and observers, e.g., "wait for the green light, then go." We propose modeling these routines as behavioral programs instantiated in computer code rather than policies conditioned on beliefs and desires.
Aren't there already materials (made for people with autism) that catalog these scripts and make them explicit?
I have a tip for following lectures (or any technical talk, really) that I've been meaning to write about for a while.
As you follow along with the speaker, try to predict what they will say next. These can be either local or global predictions. Guess what they will write next, or what will be on the next slide. With some practice (and exposure to the subject area) you can usually get it right. Also try to keep track of how things fit into the big picture. For example in a math class, there may be a big theorem that they're working towards using lots of smaller lemmas. How will it all come together?
When you get it right, it will feel like you are figuring out the material on your own, rather than having it explained to you. This is the most important part.
If you can manage to stay one step ahead of the lecturer, it will keep you way more engaged than trying to write everything down. Writing puts you one step behind what the speaker is saying. Because of this, I usually don't take any notes at all. It obviously works better when lecture notes are made available, but you can always look at the textbook.
People often assume that I have read the material or otherwise prepared for lectures, seminars, etc., because of how closely I follow what the speaker is saying. But really most talks are quite logical, and if you stay engaged it's easy to follow along. The key is to not zone out or break your concentration, and I find this method helps me immensely.
What is PLUS times PLUS?
https://www.youtube.com/watch?v=RcVA8Nj6HEo