Hacker News | evo's comments

This feels like a refinement on the existing practice of 3D printing metallic powder mixed with a low-temp meltable binding agent, then sintering the resulting object in a kiln to burn off the binder and generate the final part. As far as I know, this also has shrinkage that must be accounted for, as well as a particularly rough finish. That's not that big of a deal if there are subsequent machining steps, but perhaps this direct precipitation of metal out of solution improves upon the precision or tolerances.


There are both selective laser sintering and selective laser melting, which do not require a kiln step, so the final size is the final size. Both processes can produce some amazing metal designs that would not otherwise be possible with a typical metal+binder extrusion process. I've talked to some vendors who have shown me some really cool waveguide and cavity filter designs they've made using either SLM or SLS. There are also some methods that use lasers to effectively remelt the surfaces, smoothing out the grainy texture that's typical for any sintered material. In aerospace, size and weight are tightly constrained, so bolting a bunch of off-the-shelf waveguide parts together can get messy; being able to get a really compact design that fits exactly can really help with miniaturization.


I feel like there’s a natural adoption curve to social media:

1. The growing social media platform balances the needs of two user groups: the consumer's need for fresh content and the neophyte producer's need for a slowly ramping trickle of validation. This is possible because people don't know how to produce content in the new format yet.

2. The mature social media platform has picked winners. We know who the successful youtubers are, the successful twitch streamers, etc., and they know how to create the optimal media for their platform. At this point we’re maximally satisfying consumer demand, but we’re actively repelling the neophyte producers because the bar is now too high. They form a growing untapped market for the next social media platform.

3. Decay. A competing platform has stolen the limelight by restoring the dynamism of the consumer/producer balance. The successful producers of the platform start flexing out to the new upstart, though they're unlikely to repeat their successes there: they're too late to the game and bound to old habits. Chasing feature parity with the new platform does nothing, because now you're just upsetting the existing balance, and that's not suddenly going to pull new people into the game; they've already written you off.


Some social media are like that, others are not. When I read “It’s that I don’t see my actual friend’s posts and they don’t see mine,” I thought that my friends do see my posts and vice versa, because we use channels on WhatsApp and Telegram. If all I want is to keep in touch with friends, why should I use media like Twitter, Facebook, Instagram, TikTok?


The original idea is that you can share this same photo to all your friends and family at once. That’s why it’s superior to 1:1 messaging apps.

The problem is you start following strangers, which causes the app to transform itself to promote strangers over friends and family to meet monthly usage goals. It’s a vicious cycle.


Because in the US almost no one uses WhatsApp and Telegram.


You can use Messenger or iMessage the same way.


Agreed. I can't help but feel that many current social media platforms are on step 3 right now.


Historically, I think the answer is even scarier: the Dead Hand system presupposes that a centralized power is necessary to start nuclear war; the American system has been to delegate that power to regional and even theater-level military commanders (particularly in the bomber days, when sending orders to the Pacific theater was never going to work in the necessary timescale). It makes sense from a game theory perspective--if the commander-in-chief were literally the only person with strategic command authority, an assassination could render the entire country vulnerable to an unopposed first strike.

An airfield on the northern edge of Japan is close enough to Russian airspace that, at alert, the local airfield commander was going to have to make the call to sortie their bombers while the runways were still intact, and once deployed it wasn't unimaginable that a Dr. Strangelove-style scenario could go down.

It's been a few years, but I think this was mostly detailed in The Doomsday Machine by Daniel Ellsberg.


The most consequential recipient of said orders is going to be a ballistic nuclear submarine deep underwater, and water is a terrific shield for the higher frequency radio bands a satellite will have access to. I'm guessing the command rocket will spool out an extremely long wire on its flight trajectory and give access to the VLF or ULF radio bands that can penetrate the ocean.


I think the command signal is merely picked up by existing repeaters that broadcast to the submarines.

I actually think the most consequential recipients are the land-based ICBMs, as those are likely 100% automated installations.

Even if the submarines are already at launch depth, I would assume there would still be a final human in the loop in all nuclear bombers and subs, making them unreliable launchers. (Who really wants to press the "end the world" button?)


I actually wouldn't be surprised if it did have a direct VLF antenna system. If the Dead Hand had triggered, that suggests multiple Soviet C&C headquarters went offline due to a first strike, and at that point you can't really assume your repeaters or silos are still online. The submarine fleet's primary purpose is to be the surviving second strike capacity.


Both the US and Russia have flying command systems which use very long trailing wires to send launch commands.

The problem: they're nearly a single point of failure. If you can hit the other side's command systems, their missiles won't be going anywhere.

I'd assume both sides know this and contingencies and redundancies exist.


I recall hearing a hypothesis that one consequence of global warming might be an uptick in opportunistic fungal infections. Warm-bloodedness might be largely an adaptation that makes the human body uninhabitable for environmental pathogens, but as warming pushes tropical areas into long sustained stretches of temperatures above 98.6F, it acts as a selective pressure for organisms to evolve such that they can now thrive in the body.


Everything bad is related to global warming these days, it seems.


I suspect their primary advantage remains shelf-stability at room temperatures, which will make them stay relevant for military applications, e.g. you don't want a cryogenics facility in your submarine or cruise missile launch platform.

Historically, I think they're cheaper than an equivalent disposable liquid fueled engine but don't hold up to the fully reusable designs of today, and from a reliability perspective there's not a lot of room between working-as-intended and "activate the flight termination system" at a total loss.


> I suspect their primary advantage remains shelf-stability at room temperatures, which will make them stay relevant for military applications

A while ago I read--but barely understood--a book that went into a lot of this: "Ignition! An Informal History of Liquid Rocket Propellants" by John D. Clark.

_____________________

IIRC there were some cases where a fuel was not militarily-acceptable because you would need to warm/thaw the mobile missile prior to firing in a Russian winter, or other cases where a permanent missile-silo meant it was cost-effective to run heating/refrigeration all the time, etc.

> [I]n applications which do not require a low freezing point, hydrazine itself is used, either straight or mixed with one of its derivatives. The fuel of the Titan II ICBM doesn't have to have a low freezing point, since Titan II lives in a steam-heated hole in the ground, but it does need the highest possible performance, and hydrazine was the first candidate for the job.

_____________________

Another fuel-choice issue involves how badly it might self-destruct if anything unusual happened:

> [I]n the summer of 1960, we tried to fire a 10,000-pound thrust Cavea B motor. [...] Well, through a combination of this and that, the motor blew on startup. We never discovered whether or not the [detonation] traps worked—we couldn't find enough fragments to find out.

> The fragments from the injector just short-circuited the traps, smashed into the tank, and set off the 200 pounds of propellant in that. (Each pound of propellant had more available energy than two pounds of TNT.) I never saw such a mess. The walls of the test cell—two feet of concrete—went out, and the roof came in. The motor itself—a heavy, workhorse job of solid copper—went about 600 feet down range. And a six-foot square of armor plate sailed into the woods, cutting off a few trees at the root, smashing a granite boulder, bouncing into the air and slicing off a few treetops, and finally coming to rest some 1400 feet from where it started. The woods looked as though a stampeding herd of wild elephants had been through.

> As may be imagined, this incident tended to give monopropellants something of a bad name. Even if you could fire them safely—and we soon saw what had gone wrong with the ignition process—how could you use them in the field?

> Here you have a rocket set up on the launching stand, under battlefield conditions; and what happens if it gets hit by a piece of shrapnel? LRPL came up with the answer to that. You keep your monoprop in the missile in two compartments: one full of fuel-rich propellant made up to λ = 2.2 or 2.4, and the other containing enough acid to dilute it to λ = 1.2. Just before you fire, a can-opener arrangement inside the missile slits open the barrier separating the two liquids, you allow a few seconds for them to mix, and then push the button.


I find the proliferation concerns of this technology to be rather scary--if LLNL could cobble this together in a few rush months with slide rules back in the late 50s, what could an adversary with modern computational fluid dynamics and a pile of GPUs do?

With this sort of design, you could scale a fairly small amount of fissile material into arbitrarily large outputs by adding basically just lithium deuteride.


As somebody who actually tried this — not much. You’ll need a lot of tacit information on how (thermo)nuclear weapons can be constructed, the feedback from actual engineers and experimental results, etc. I can design e.g. an implosion simulation with some open source CFD codes, but how can I verify my results without access to (really expensive) explosion testing rigs and measurement equipment? Thermonuclear weapons are a whole other story: the experimental equipment for testing radiation compression is prohibitively expensive. So, thankfully, a working nuclear program is still only possible for nation-states and extremely sophisticated organizations.


That’s totally fair—I was thinking more like a small nation-state being able to leverage a relatively small enrichment program rather than a private independent program.

I’m sure there’s all sorts of experimentally determined magic numbers that are only discoverable with testing.


Not quite--a conventional thermonuclear weapon uses a "tamper" surrounding the secondary, essentially a cylinder of uranium that's designed to eat the spike of x-rays coming off the primary and convert it into a compressive force by ablating away on the exterior. This compressive force generates the pressures necessary to drive the fusion reaction (and is potentially fissile itself, if you use enriched material). It's physically heavy, and bringing that mass to speed requires a pretty powerful primary relative to the overall yield.

It seems that the conjectured Ripple design uses some sort of wave-shaping barrier between the primary and secondary to convert the single massive spike of x-rays into a series of pressure waves that, via constructive interference, are able to compress the secondary directly without the need for the intervening tamper.

Being able to do direct compression this way means you can do away with the weight of the tamper, and since you're applying the force directly you can get away with a much smaller primary (e.g. a 15MT device with a 15kT primary gives you a 99.99% "clean" design).

Still requires the primary though.


Here's my understanding, starting with some background terminology:

Everything that's tradable on an exchange (an "instrument") has a bid/ask spread that represents the highest price someone's willing to pay to buy (the bid), and the lowest price that someone's willing to accept to sell (the ask). There is _always_ a bid/ask spread, because as soon as anyone places an order that would reduce the spread to zero, that means they're willing to pay what someone's asking, or vice-versa, and therefore the exchange immediately converts it into a trade--done deal!--and now there's a spread again. Incidentally, executing a trade this way is "crossing the spread", you're opting to "pay the difference" between the bid and ask to get your trade done.

Someone that crosses the spread is said to be "taking liquidity." They're willing to pay the surcharge of the bid/ask spread to get their trade executed right now. On the other hand, someone that sits at the bid/ask spread, waiting for someone to cross to execute, is said to be "offering liquidity," they're willing to patiently wait in order to save money equal to the spread.
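The mechanics above can be sketched in a few lines. This is a toy one-level book, not any real exchange's matching engine; the `match` helper and the AMZN-style prices are just illustrative:

```python
# Toy sketch: an incoming order that crosses the spread trades immediately;
# one that sits inside (or outside) the spread rests and "offers liquidity".

def match(book, side, price):
    """book holds the best bid/ask; returns the fill price, or None if resting."""
    if side == 'buy' and price >= book['ask']:
        return book['ask']   # buyer crossed the spread: instant trade at the ask
    if side == 'sell' and price <= book['bid']:
        return book['bid']   # seller crossed the spread: instant trade at the bid
    return None              # order rests on the book, offering liquidity

book = {'bid': 3332.95, 'ask': 3333.05}
print(match(book, 'buy', 3333.05))   # taker pays the ask
print(match(book, 'buy', 3333.00))   # inside the spread: rests, no trade
```

Note that the spread can never sit at zero: a bid at or above the ask is, by definition, a trade.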

Now, a market maker is a participant that is _solely_ interested in making money off that bid/ask spread, basically like a sports bookie. They're willing to always be in the market, on both sides, and take the spread whenever someone crosses over. So if say AMZN is trading at 3332.95 x 3333.05, they'll be offering to buy at 3332.95, and sell at 3333.05, and any time people take those offers, they make a dime. Do this thousands of times a day, on many different instruments, and you've got a business. That said, there's real risks in market making, and understanding them requires the idea of "informed" versus "uninformed" trading.

An uninformed trader comes to the market simply because they want to trade for some external goal unrelated to trading. Maybe they're selling stock for a house downpayment, or buying agricultural futures because they make potato chips and don't want to deal with the price shocks of a sudden drought. They're willing to cross the spread, and they don't particularly care if they lose a few pennies on the transaction, because that's not their goal. These traders are the meat and potatoes for market makers, because they don't move the fundamental price of the instrument; they're effectively noise. In a market of nothing but uninformed traders, you would expect your position as a market maker to fluctuate around zero, because you're buying roughly as much as you're selling.

An informed trader, on the other hand, "knows something". They're aware of some material fact (or at least a strong hypothesis) that indicates the price of the instrument is going to move dramatically in the near future. They're willing to cross the spread, because they know the spread is going to move with them anyway. These are a danger for market makers, because they will all pile in on one side of the trade, all buying, or all selling, and now the market maker will end up in a losing position--short when the price is going up, or long when the price is going down.

Imagine running a Gamestop store: on a normal day, you might see half your customers buying a PS4 and half selling a PS4, but on the day that the PS5 is announced, suddenly everyone wants to sell their PS4 at the same time before you lower what you're offering.

The classic market maker algorithm looks at "inventory", basically your absolute outstanding position, and tries to keep inventory as low as possible. When uninformed trading is taking place, your inventory is around zero, and you can stay very close to the minimum spread. As your inventory grows, and you become either increasingly more long or short, you start pulling your bids or asks away from the best bid/ask to try and bias future trades back into a 50/50 ratio. All market makers doing this simultaneously means the bid/ask spread starts to widen as there's increased uncertainty about the price.
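The inventory-control idea above can be sketched like this. This is a hedged toy model, not a production algorithm; the `skew` coefficient and the prices are invented for illustration:

```python
# Toy inventory-skewed quoting: as inventory grows long, both quotes shift
# down, making our ask more attractive (we sell off inventory) and our bid
# less attractive (we stop accumulating). Short inventory shifts quotes up.

def quotes(mid, half_spread, inventory, skew=0.01):
    shift = -skew * inventory            # long inventory -> negative shift
    bid = mid - half_spread + shift
    ask = mid + half_spread + shift
    return round(bid, 2), round(ask, 2)

print(quotes(3333.00, 0.05, 0))      # flat: symmetric quotes around mid
print(quotes(3333.00, 0.05, 100))    # long 100: both quotes pulled down
```

With every market maker skewing this way at once, quotes migrate and the effective spread widens, which is the uncertainty effect described above.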

Another key element to market making comes down to trade volumes. You could, today, start market making, all you need to do is put in limit orders at the bid and ask and wait. However, you'd probably not make that much, because you're losing money to various trading commissions, exchange fees, roundtrip network latency, etc. Professional market makers make tens of thousands of automated trades in a day, and as a result, are able to negotiate substantially lower costs that make it worth doing. Many exchanges even have "designated market makers" that have special trading permissions in exchange for guaranteeing that they will _always_ provide some best bid/ask offer even in the worst case conditions, because otherwise, in a sufficiently large event, you could get a "liquidity crisis" (i.e. there's no one willing to buy or sell that instrument at any price).

That ended up being more text than I thought it would--apologies.


> Now, a market maker is a participant that is _solely_ interested in making money off that bid/ask spread, basically like a sports bookie. They're willing to always be in the market, on both sides, and take the spread whenever someone crosses over. So if say AMZN is trading at 3332.95 x 3333.05, they'll be offering to buy at 3332.95, and sell at 3333.05,

I guess my question is: how is what they're doing different from someone crossing over where the money goes directly to the other party? I notice you said the market maker is listing the same prices; I'm trying to visualize how their action is any different from the exact same spread/scenario where the buyer crosses over to the seller and the same trade happens. What is actually different?

> you start pulling your bids or asks away from the best bid/ask to try and bias future trades back into a 50/50 ratio

Also, are you saying that the market maker dictates the bid/ask spread, and not the highest bidder/lowest seller?


> What is actually different?

They are the other party--a market maker isn't (outside of the "designated market makers" I referenced earlier) a special participant in trading, they're just like you or me.

If I put a limit order in to buy at 3332.95, and someone takes it, I now have one stock. If I put in a limit order to sell at 3333.05, I sell that stock and make a dime. In aggregate, if I'm doing that many many times, and the price stays roughly around 3333, I'm making a dime on every round trip.

A "market maker" just means that I don't really care about investing or speculating, all I'm really in for is to collect that dime on the round-trip and sit at the bid/ask spread.
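The round-trip economics above can be put into numbers. The dime-per-round-trip comes from the example; the per-trade fee is a made-up figure just to show how costs eat into the gross:

```python
# Toy round-trip economics: earn the spread many times, pay a (hypothetical)
# fee on each of the two trades in every buy-then-sell round trip.

spread = 0.10        # a dime per round trip, as in the example above
fee = 0.002          # assumed cost per trade (illustrative, not a real rate)
round_trips = 10_000

gross = spread * round_trips
net = gross - 2 * fee * round_trips   # two trades per round trip
print(f"gross ${gross:,.2f}, net ${net:,.2f}")
```

This is why negotiated fee schedules matter: at retail commission levels the same volume of round trips could easily net out negative.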

> Also are you saying that the market maker dictates the bid ask spread and not the highest bidder/lowest seller

No, as you've said, the highest bidder/lowest seller set the bid/ask spread. It's just, in any high volume market, chances are the incidental traders that want to improve the best offer clear very quickly--at any given point the market is probably going to clear until you hit the market makers. By definition, they're the folks willing to wait it out.

That said, market makers can compete with each other--if you are more ambitious than your competition, you might be willing to improve (narrow the spread) on your competitors. You'll make money by filling trades that they miss out on, but on the other hand, you're getting less spread and less profit per trade. If that lower profit doesn't cover the statistical risk of losses from price movements, then you won't be profitable. The bid/ask spread narrows or widens based on the interactions of all market participants; it's just that if a particular instrument looks very risky, the market makers, acting as backstops, might want more money in the form of spread to warrant trading.

In practice, the most liquid instruments in the market these days trade pretty close to the minimum spread all of the time--high-frequency market makers are very efficient and so you rarely have to pay more than a penny to cross the spread. As a result, it's also not terribly profitable to make markets, since you're only earning a penny per round-trip for the risk you have to take.

(Compared to say, real estate, where the "bid/ask spread" is basically unknown and has to be discovered through the very expensive agent mechanism.)


Really detailed explanation! This is essentially it. While there's obviously a bunch more complexity, the essence of market making is just that: not trying to own a stock, but buying and selling immediately.

Think about how when you go on holiday and want to change money. Admittedly it's becoming more online now, but when you go to a foreign exchange shop they will have a "we buy" and "we sell" price. They are essentially a market maker. They don't care about having a load of pounds, or dollars or rupees. They just want to buy low and sell high to you, and make the difference!


Brilliant explanation! Thank you. This is exactly the level of detail I love. You explained that very well


I wonder if this problem flows both ways. I suspect that at least housing and schooling (or the overlap, housing in good school districts) are to some degree a zero-sum game whose costs are set at the margins.

In tech metros, where those marginal market-clearing prices are set by those tech salaries, it makes sense that they would settle at precisely the point where they're no longer quite comfortable to the average tech worker. Otherwise, what's to stop you from paying a bit more for that house than the second best offer?

