> It's far from perfect, but using a simple application with no built-in ads, AI, bloat, crap, etc is wonderful.
I think there are three main reasons it's not perfect yet:
1. Building both a decentralised open standard (Matrix) at the same time as a flagship implementation (Element) is playing on hard mode: everything has to be specified under an open governance process (https://spec.matrix.org/proposals) so that the broader ecosystem can benefit from it - while in the early years we could move fast and JFDI, the ecosystem grew much faster than we anticipated and very enthusiastically demanded a better spec process. While Matrix is built extensibly with protocol agility to let you experiment at basically every level of the stack (e.g. right now we're changing the format of user IDs in MSC4243, and the shape of room DAGs in MSC4242) in practice changes take at least ~10x longer to land than in a typical proprietary/centralised product. On the plus side, hopefully the end result ends up being more durable than some proprietary thing, but it's certainly a fun challenge.
2. As Matrix project lead, I took the "Element" use case pretty much for granted from 2019-2022: it felt like Matrix had critical mass and usage was exploding; COVID was highlighting the need for secure comms; it almost felt like we'd done most of the hard bits and finishing building out the app was a given. As a result, I started looking at the N-year horizon instead - spending Element's time working on P2P Matrix (arewep2pyet.com) as a long-term solution to Matrix's metadata footprint and to futureproof Matrix against Chat Control style dystopias... or projects like Third Room (https://thirdroom.io) to try to ensure that spatial collaboration apps didn't get centralised and vendorlocked to Meta, or bluesky on Matrix (https://matrix.org/blog/2020/12/18/introducing-cerulean/, before Jay & Paul got the gig and did atproto).
I maintain that if things had continued on the 2019-2022 trajectory then we would have been able to ship a polished Element and do the various "scifi" long-term projects too. But in practice that didn't happen, and I kinda wish that we'd spent the time focusing on polishing the core Element use case instead. Still, better late than never, in 2023 we did the necessary handbrake turn focusing exclusively on the core Element apps (Element X, Web, Call) and Element Server Suite as an excellent helm-based distro. Hopefully the results speak for themselves now (although Element Web is still being upgraded to use the same engine as Element X).
3. Finally, the thing which went wrong in 2022/2023 was not just the impact of the end of ZIRP, but the horrible realisation that the more successful Matrix got... the more incentive there would be for 3rd parties to commercialise the Apache-licensed code that Element had built (e.g. Synapse) without routing any funds to us as the upstream project. We obviously knew this would happen to some extent - we'd deliberately picked Apache to try to get as much uptake as possible. However, I hadn't realised that the % of projects willing to fund the upstream would shrink as the project got more successful - and the larger the available funds (e.g. governments offering million-dollar deals to deploy Matrix for healthcare, education etc), the more certain it was that the % of upstream funding would go to zero.
So, we addressed this in 2023 by having to switch Element's work to AGPL, massively shrinking the company, and then doing an open-core distribution in the form of ESS Pro (https://element.io/server-suite/pro) which puts scalability (but not performance), HA, and enterprise features like antivirus, onboarding/offboarding, audit, border gateways etc behind the paywall. The rule of thumb is that if a feature empowers the end-user it goes FOSS; if it empowers the enterprise over the end-user it goes Pro. Thankfully the model seems to be working - e.g. EC is using ESS for this deployment. There's a lot more gory detail in last year's FOSDEM main-stage talk on this: https://www.youtube.com/watch?v=lkCKhP1jxdk
Either way, the good news is that we think we've figured out how to make this work, things are going cautiously well, and these days all of Element is laser-focused on making the Element apps & servers as good as we possibly can - while also continuing to improve Matrix, both because we believe the world needs Matrix more than ever, and because without Matrix, Element is just another boring silo'd chat app.
The bad news is that it took us a while to figure it all out (and there are still some things to solve - e.g. abuse on the public Matrix network, finishing Hydra (see https://www.youtube.com/watch?v=-Keu8aE8t08), finishing the Element Web rework, and cough custom emoji). I'm hopeful we'll get there in the end :)
This is a much more reasonable take than the cursor-browser thing. A few things that make it pretty impressive:
> This was a clean-room implementation (Claude did not have internet access at any point during its development); it depends only on the Rust standard library. The 100,000-line compiler can build Linux 6.9 on x86, ARM, and RISC-V. It can also compile QEMU, FFmpeg, SQlite, postgres, redis
> I started by drafting what I wanted: a from-scratch optimizing compiler with no dependencies, GCC-compatible, able to compile the Linux kernel, and designed to support multiple backends. While I specified some aspects of the design (e.g., that it should have an SSA IR to enable multiple optimization passes) I did not go into any detail on how to do so.
> Previous Opus 4 models were barely capable of producing a functional compiler. Opus 4.5 was the first to cross a threshold that allowed it to produce a functional compiler which could pass large test suites, but it was still incapable of compiling any real large projects.
And the very open points about limitations (and hacks, as cc loves hacks):
> It lacks the 16-bit x86 compiler that is necessary to boot [...] Opus was unable to implement a 16-bit x86 code generator needed to boot into 16-bit real mode. While the compiler can output correct 16-bit x86 via the 66/67 opcode prefixes, the resulting compiled output is over 60kb, far exceeding the 32k code limit enforced by Linux. Instead, Claude simply cheats here and calls out to GCC for this phase
> It does not have its own assembler and linker;
> Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.
Ending with a very down to earth take:
> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
All in all, I'd say it's a cool little experiment, impressive even with the limitations, and a good test case; as the author says, "The resulting compiler has nearly reached the limits of Opus’s abilities". Yeah, that's fair, but still highly impressive IMO.
Things change. The barrier to entry decreased, meaning more things will get created, more people will participate in communal efforts, and quality will depend on AI capabilities and figuring out how to curate well - better tools, less friction between idea and reality, and things get better for everyone.
Just because some things suck, for now, doesn't mean open source is being killed. It means software development is changing. It'll be harder to distinguish a good-faith, quality effort that meets the expectations of quality control from everything else without sifting through more contributions.
Anonymous participation will decrease, communities will have to create a minimal hierarchy of curation, and the web of trust built up in these communities will have to become more pragmatic. The relationships and the tools already exist, it's just the shape of the culture that results in good FOSS that will have to update and adapt to the technology.
Someone tried to shake our company down once. They posted a bunch of stock imagery on the web under an ambiguously worded attribution policy, waited for someone to use it, then had a third party chase you down and demand $100k but settle for $5k.
It turns out we did attribute the right way (in our terms of use) and could prove it with logs of when we added the language and when it was removed after we removed the image, but I am sure they nail people all the time with this strategy. This didn't stop them from sending 20 emails, demanding lawyers get on the phone, etc.
There are a couple of similar scams like this out there.
I was curious so I looked it up. Your description of the events isn't quite accurate IMHO. There was an objection to a Meta datacenter, but then state lawmakers passed new laws after losing the business to NM. It doesn't look like anyone was "fooled" by the anonymous bid but rather they simply changed their minds/laws.
> In 2016, West Jordan City sought to land a Facebook data center by offering large tax incentives to the social media giant. That deal ultimately fell through amid opposition by Salt Lake County Mayor Ben McAdams and a vote of conditional support by the Utah Board of Education that sought to cap the company’s tax benefits.
> That project went to New Mexico, which was offering even richer incentives.
> Three months after the Utah negotiations ended, state lawmakers voted in a special session to approve a sales tax exemption for data centers. The move was seen by many as another attempt to woo Facebook to the Beehive State.
So basically they first said "No", lost the bid, got FOMO, and passed new laws to attract the business.
>Asked about the identity of the company, Foxley said only that it is “a major technology company that wants to bring a data center to Utah.”
>And that vision could soon be a reality, after members of the Utah County Commission voted Tuesday to approve roughly $150 million in property tax incentives to lure an as-yet-unnamed company — that sounds an awful lot like Facebook — to the southern end of Pony Express Parkway.
Let me give you an anecdote that illustrates why it was needed in Eagle Mountain, Utah. One of my friends works for the city there and he told me about how the development went down.
When the city council first heard that Facebook wanted to build a data center, they shot it down solely because of Facebook's reputation. A year or two later, Facebook proposed the exact same project to the city council, while keeping their name secret under an NDA. Then, when the city council was only considering the economics of it, they jumped at the chance for the tax revenue and infrastructure investment. With essentially the same exact plan as before, one of the council members who rejected it before the NDA said "this is exactly the kind of deal a city should take."
I think in many ways, these companies are fighting their own reputations.
If you are looking for some reasoning behind the "hype": one piece of it is that humans have relatively good contextual spatial memory and using one very large "space" that has a sort of "physicality" to it (you can scroll it; things generally stay where you put them; etc) can feel really good. It goes back to some of the early ideals of "spatial navigation" of the original "desktop metaphor". (Many of which have been somewhat lost to time, with a lot less emphasis on things like windows opening in the same appearance as when they were last closed.)
I think where scrolling WMs start to feel like they scratch peculiar itches the most is when you have a complex multi-workspace config in a more traditional tiling WM. Each workspace is a different place. In some of the best cases the WM may use a metaphor in which each workspace sits on a cube or other polyhedron whose faces you switch between. Scrolling WMs avoid needing 3D compositing if you want to visualize that "space" at a distance or have nice flips between workspaces that provide spatial cues to your brain, because scrolling is a thing we do a lot. We have many apps with "infinite scrolling" today; applying that to one large workspace can feel like a nice space to arrange your windows in, and other common computer gestures, like zooming out and then back in to a different part of it, feel "natural". Navigating your "desktop" becomes just like navigating a large Excel file or a large code file.
I don't know how it would work for people who can travel visa free, but for people on K-1, F-, M- and J-visas, as well as for people on work visas, you're required to set your social media visibility to public between the time when you apply for the visa and the time when a decision is made on it.
> bureaucracy and stagnation while the same staff ends up flourishing and producing top notch tech when under a US company like Apple
Apple orders a widget from them which can be sold in an established product with existing customers. The magic was creating the Apple brand and iPhone product.
I think the problem with old European "conglomerates" is that no one has the mandate/legitimacy to make a multi-year/decade tech investment equivalent to the iphone. "Decision makers" are likely to have been promoted from other professions than engineering/products, and people promoted through management/sales lack competence and legitimacy. Their job is to apply old and approved templates for decision-making, while paying dues and respect to appropriate people.
It fails when templates for decision-making don't exist. Spin-offs or acquisitions based on new tech, rather than existing products/markets, can't work with this type of management.
I have also gotten the sense that management positions are given as rewards for "long and loyal service". It is effectively an incentives program, with the implicit assumption that management decisions don't really matter. This is not far from the truth in old industrial companies with few but huge returning "captive" customers, which is typical in Europe (e.g. Siemens), or in high-value luxury brands in fashion/jewellery/liquor/etc.
The meaning of the German word "Verwaltung" is different from the American "management". Verwaltung implies "preservation of stability", whereas "business management" implies something like "figure out how to sell more stuff".
Not OP, but I'm currently making a city-builder computer game with a large procedurally generated world. The terrain height at any point in the world is defined by a function that takes a small number of constant parameters, plus the horizontal position in the world, and gives the height of the terrain at that position.
I need the heights on the GPU so I can modify the terrain meshes to fit the terrain. I need the heights on the CPU so I can know when the player is clicking the terrain and where to place things.
Rather than generating a heightmap on the CPU and passing a large heightmap texture to the GPU, I have implemented identical height-generating functions in Rust (CPU) and WebGL (GPU). As you might imagine, it's very easy for these to diverge, so I have to maintain a large set of tests that verify the generated heights are identical between implementations.
Being able to write this implementation once and run it on both the CPU and GPU would give me much better guarantees that the results are the same. (Although, because of architecture differences and floating-point handling, the results will never match exactly; I just need them to be within an acceptable tolerance.)
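As a minimal sketch of that dual-implementation pattern (the height formula, names, and parameters here are illustrative, not the actual game's code): the CPU side is a small deterministic Rust function that the GLSL shader would mirror line-for-line, and agreement with values read back from the GPU is checked within a tolerance rather than exactly.

```rust
// Sketch of the "write it twice, test for agreement" pattern.
// terrain_height is the CPU (Rust) copy; the WebGL shader would
// reproduce the same formula line-for-line.
fn terrain_height(x: f32, z: f32, base_amp: f32, base_freq: f32) -> f32 {
    let mut h = 0.0_f32;
    let mut amp = base_amp;
    let mut freq = base_freq;
    for _ in 0..4 {
        // Each octave doubles the frequency and halves the amplitude.
        h += amp * (x * freq).sin() * (z * freq).cos();
        amp *= 0.5;
        freq *= 2.0;
    }
    h
}

// CPU and GPU values drift slightly (fast-math, fma contraction,
// different transcendental implementations), so compare against a
// tolerance instead of using exact equality.
fn heights_match(cpu: f32, gpu: f32, tol: f32) -> bool {
    (cpu - gpu).abs() <= tol
}

fn main() {
    let cpu = terrain_height(10.0, 20.0, 8.0, 0.05);
    // Pretend this value was read back from the GPU with some drift.
    let gpu = cpu + 3.0e-4;
    assert!(heights_match(cpu, gpu, 1.0e-3));
    println!("CPU {cpu} vs GPU {gpu}: within tolerance");
}
```

The tolerance test is the important part: exact bitwise comparison would fail intermittently across GPU vendors even when both implementations are textually identical.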
This isn't quite true. The Google Pixels allow me to unlock the bootloader, install my own system, and relock the bootloader. As a result, I run an alternative OS called GrapheneOS which is more secure than Android.
The fact that I can unlock and relock the bootloader is not a security issue or a risk. People who don't know what that means cannot possibly do it by mistake.
Now allowing root access to users on Android, that's a security risk because a user can be tricked into giving root access to some evil app. I don't have root access on my GrapheneOS, even though I chose to install it myself. Because it is more secure like this.
So it sounds like a fair compromise to me: they make Android the way they want, and if I don't like it I can install an alternative OS, just like I can install Linux if I don't like Windows. What I don't like is that most Android manufacturers actively try to prevent me from doing that.
Nice work! On a complete tangent, Git is the only SCM I know of that supports the recursive merge strategy [1] (instead of a plain 3-way merge): by merging multiple merge bases into a virtual ancestor, it essentially remembers previously resolved conflicts without you needing to do anything. This is a very underrated feature of Git, and somehow people still manage to choose rebase over it. If you ever get to implementing merges, please make sure you have a mechanism for remembering the conflict resolution history :).
You already trust third parties, and there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be filled by an open source developer; it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it; distros should use it by default, however, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out, and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation-related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like Secure Boot isn't configurable at install time, with custom certs, distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.
I'm completely blind. I like Linux. I've started to love Android since getting a Samsung and getting rid of OnePlus, cause accessibility. Termux is cool, but its accessibility wasn't. So, I had Gemini wrangle it up a bit into my fork of Termux [1].
Now it reads (usually) only newly incoming text, I can feel around the screen to read a line at a time, and cursor tracking works well enough. Then I got Emacs and Emacspeak working, having Gemini build DecTalk (a TTS engine) for Termux and get the Emacspeak DecTalk speech server working with that. I'm still amazed that, with a Bluetooth keyboard, I have Linux, and Emacs, in my pocket. I can write Org and Markdown, read EPUB books in Emacs with Nov.el, look at an actual calendar not just a list of events, and even use Gemini CLI and Claude Code, all on my phone! This is proof that phones, with enough freedom, can be workstations. It'll be even better if I can get Orca working on a desktop environment in Termux-GUI. But even with just Emacs and the shell, I can do quite a bit.
Then I decided to go wild and make an MUD client for Emacs/Emacspeak, since accessible ones for Android are limited, and I didn't trust my hacks to Termux to handle Tintin++ very well. So, Emacs with Emacspeak it was, and Elmud [2] was born.
Elmud has a few cool features. First of all, since Emacspeak has voice-lock, like font-lock but for TTS, ANSI colors can be "heard", like red being a deeper voice. Also, a few MUD clients on Windows have sound packs, which make them sound more like a modern video game while still being text-based. I got a few of those working with Elmud. You just load one of the supported MUDs, and the sound pack is downloaded and installed for you. It's easy and simple. And honestly, that's what I want my tools to provide: something that I, or anyone else who chooses to use them, can easily get the most out of.
None of this would have been possible without AI. None of it would have been done. It would have remained a dream. And yes, it was all vibe-coded, mostly with Codex 5.2 on high thinking. And yes, the code may look awful. But honestly, how many closed-source programs look just as bad or even worse under the covers of compilation?
I have DID and am also curious how it would affect the measurement. I'm just waking up so I've only skimmed the paper so far, but I suspect the results would differ depending on which of us was fronting.
We've noticed that each of us integrates not just sensory information differently, but we also seem to be "wired" differently.
For instance, we are AuDHD, and I, the primary host, lean strongly to the autism behavioral side, my co-host is somewhere between, and a secondary host leans strongly to the ADHD behavioral side. Things that are easy for me can be hard for another.
We also experience senses very differently. There have been many times where one of us can smell something strongly, switch, and the other can't smell it at all.
This affects other senses as well. When I watch a 24 fps movie at a theater, for about the first 10 minutes, all I see is a strobing of still images before I finally adapt and see motion. My co-host sees continuous motion right from the start. This may relate to the temporal binding window discussed in the paper as a motivation for their research.
Our working hypothesis since we were finally diagnosed has been that identity is, at least in part, an integration of both sensory information as well as how strongly various brain regions are activated by whichever identity or identities are most active at a particular time.
Lastly, we have the ability to "take control" of just part of the body. For example, for whatever reason, the motion of stirring a sauce is difficult for me, but it's trivial for another, so sometimes they'll take control of our arms to stir the pot while cooking. To me it feels like my arms have disappeared and someone else's arms are now attached and stirring the pot. This may be related to the temporal binding window, because we do seem to experience sensory information at different speeds, and this might cause that alien-hand feeling, which is sort of the opposite of the rubber hand illusion.
So, I suspect that each of us would react differently to the rubber hand illusion test.
I once read “The Joy of Living” by Yongey Mingyur Rinpoche. It should come with a warning. It broke me for a year. I’m actually grateful for the existential crisis it caused me. But it was a brutal experience at first.
>Basically, as an employer it simply isn't fun to be forced to be in the "providing access to healthcare" business when that's not your core business.
It is for most large employers as it helps depress salaries and reduce competition from startups. Employees will want to work for a large employer that lets them pay for health insurance with pre-tax dollars, among other tax advantaged benefits that having a well funded HR department can provide. And employees cannot easily compare compensation at other employers so they are more likely to stick around than shop around, reducing the need to increase pay to keep up with the market.
Employers can also tweak compensation by modifying deductibles/out of pocket maximums/healthcare provider networks, and most people's eyes will glaze over before they can figure out if they got an increase or decrease in their total compensation.
Many, many years ago I was the first non-founder employee at a UK startup. That meant hearing quite a lot (but not all) of the corporate discussions about finance, company structure etc. Fairly early on in the fundraising process we settled on "Delaware parent company, 100% owner of UK private limited company". This meant that US investors could invest in the Delaware entity happily, while the tax and employment aspects were handled locally in the UK. Stock grants were made from the US entity (under US law), but also eligible for UK tax relief (since the options were being assigned to UK residents).
While I think this was obviously more complicated than a single entity and probably required two sets of specialists rather than just one, it certainly worked and I would expect something similar is possible with Canada?
The founders were not required to move to the US, but ended up doing so anyway.
> A good manager is more like a transparent umbrella. They protect the team from unnecessary stress and pressure, but don’t hide reality from them.
I'm absolutely going to steal this metaphor going forward.
Being a "transparent umbrella" does require knowing the personalities of your reports, some people do get distracted when they think higher-up decisions or unhappiness are going to affect their team. Most people, however, really appreciate the transparency. It helps them feel more in control when they know what is happening around them, and when things do change they can tie it back to something that was said previously.
Seems like China is entering every industry. This week they just launched their attempt to take over the ice cream market in the US. It's amazing to see how overinflated every product in the US has become: everything from cars, to computers, to now even freakin coffee or ice cream.
Imagine if China has a foothold in every industry. Sure, the US can tariff itself, but the rest of the world is not really competing in most of those industries, and so consumers will see that they don't have to settle for overpriced junk anymore. What will American, European, or even other Asian companies do? In America, most companies have financialized so much that the underlying products that made them famous have rotted in quality.
I was recently blown away by Laifen's P3 Pro electric razor. I always thought I would be a loyal Panasonic customer for life (since I had family who worked for the company), but here comes this Chinese company from nowhere producing such an amazing device at an amazing price. I never thought a CNC-milled pocket razor using some sort of tiny linear motor would be something I would want, but now I can't see life without it.
They are doing it in every industry. I had already accepted that markets like 3D printers were gone thanks to Bambu, but now I have to consider every industry at risk.
This reminds me of when I tried to let Claude port an Android libgdx-based game to a WASM-based libgdx version, so I can play the game in the browser.
No matter how much I tried to force it to stick to a mostly line-by-line port, it kept trying to "improve" the code. At some point it had to undo everything as it introduced a number of bugs. I asked it: "What should I add to your prompt so you won't do this again?" and it gave me this:
### CRITICAL LESSON: Don't "Improve" During Porting
- **BIGGEST MISTAKE: Reorganizing working code**
- **What I did wrong:** Tried to "simplify" by splitting `createStartButton()` into separate creation and layout methods
- **Why it failed:** Introduced THREE bugs:
1. Layout overlap (getY() vs getY() - getHeight())
2. Children not sized (Group.setSize() doesn't affect children)
3. Origins not updated (scaling animations broken)
- **The fix:** Deleted my "improvements" and copied the original Android pattern faithfully
- **Root cause:** Arrogance - assuming I could improve production-tested code without understanding all the constraints
- **Solution:** **FOLLOW THE PORTING PRINCIPLES ABOVE** - copy first, don't reorganize
- **Time wasted:** ~1 hour debugging self-inflicted bugs that wouldn't exist if I'd just copied the original
- **Key insight:** The original Android code is correct and battle-tested. Your "improvements" are bugs waiting to happen.
I like the self-reflection of Claude, unfortunately even adding this to CLAUDE.md didn't fix it and it kept taking wrong turns so I had to abandon the effort.
Correct. In the US, the TSA is just a government jobs program for the low-skilled or unskilled. It's all security theater.
TSA Chief Out After Agents Fail 95 Percent of Airport Breach Tests
"In one case, an alarm sounded, but even during a pat-down, the screening officer failed to detect a fake plastic explosive taped to an undercover agent's back. In all, so-called "Red Teams" of Homeland Security agents posing as passengers were able get weapons past TSA agents in 67 out of 70 tests — a 95 percent failure rate, according to agency officials."
This just adds confusion as to the purpose of all this.
The motivation behind the liquid limits is that there are extremely powerful explosives that are stable, water-like liquids. Average people have never heard of them because they aren't in popular lore; there has never been an industrial or military use, since solids are simpler. Nonetheless, these explosives are easily accessible to a knowledgeable chemist like me.
These explosives can be detected via infrared spectroscopy but that isn’t going to be happening to liquids in your bag. This reminds me of the chemical swipes done on your bags to detect explosives. Those swipes can only detect a narrow set of explosive chemistries and everyone knows it. Some explosives notoriously popular with terror organizations can’t be detected. Everyone, including the bad guys, knows all of this.
It would be great if governments were more explicit about precisely what all of this theater is intended to prevent.
Nice Russian talking point. Ukraine designed, developed, and maintained most top-tier Soviet nuclear weapons. The largest nuclear weapons plant in the USSR was Yuzhmash in Dnipro, and the largest design bureau, KB Yuzhnoe, was also in Dnipro, Ukraine. Ukraine had to help maintain Russian nukes after the collapse of the USSR because Russia lacked the technical capability.
So, a couple years ago Microsoft was the first large, public-facing software organization to make LLM-assisted coding a big part of their production. If LLM's really delivered 10x productivity improvements, as claimed by some, then we should by now be seeing an explosion of productivity out of Microsoft. It's been a couple years, so if it really helps then we should see it by now.
So, either LLM-assisted coding is not delivering the benefits some thought it would, or Microsoft, despite being an early investor in OpenAI, is not using it much internally on things that really matter to them (like Windows). Either way, I'm not impressed.
From the essay - not presented in agreement (I'm still undecided), but Dario's opinion is probably the most relevant here:
> My co-founders at Anthropic and I were among the first to document and track the “scaling laws” of AI systems—the observation that as we add more compute and training tasks, AI systems get predictably better at essentially every cognitive skill we are able to measure. Every few months, public sentiment either becomes convinced that AI is “hitting a wall” or becomes excited about some new breakthrough that will “fundamentally change the game,” but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.
> We are now at the point where AI models are beginning to make progress in solving unsolved mathematical problems, and are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code. Similar rates of improvement are occurring across biological science, finance, physics, and a variety of agentic tasks. If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.
> In fact, that picture probably underestimates the likely rate of progress. Because AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems. This feedback loop is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next. This loop has already started, and will accelerate rapidly in the coming months and years. Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down.
https://www.shadertoy.com/view/4dsXzM