I really like it. API-first git repos without the limitations of a git service like GitHub that is built primarily for humans. Looks like a competitor to code.storage by Pierre.
Zig is a great choice. I spent the last three years working on my own git implementation in Zig (see my profile), and it's really the perfect language for this. It gives precise low-level control and heavily emphasizes eliminating dependencies (like libc), which makes it perfect for WebAssembly.
>At the same time, the larger tech companies (Meta and Google, specifically) ended up building off of hg and not git because (at the time, especially) git cannot scale up to their use cases.
Fun story: I don't really know what Microsoft's server-side infra looked like when they migrated the OS repo to git (which, contrary to the name, contains more than just stuff related to the Windows OS), but after a few years they started to hit some object scaling limitations where the easiest solution was to just freeze the "os" repo and roll everyone over to "os2".
>> It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules
This is the sort of thing that was done before in a NUMA world, but that part is easy: just taskset and mbind your way around it to keep your copies in both places.
The crazy part of what she's done is determining that the two copies don't get hit by refresh cycles at the same time.
Particularly by experimenting on something proprietary like Graviton.
People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.
The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).
This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.
I've actually heard a plausible theory about why the TUI is janky: they avoid the alternate-screen feature of ANSI (and onwards) terminals.
The theory states that Anthropic avoids using the alternate screen (which gives consuming applications access to a clear buffer with no shell prompt that they can do what they want with and drop at their leisure) because the alternate screen has no scrollback buffer.
So for example, terminal-based editors -- neovim, emacs, nano -- all use the alternate screen because not fighting for ownership of the screen with the shell is a clear benefit over having scrollback.
The calculus is different when you have an LLM that you have a conversational history with, and while you can't bolt scrollback onto the alternate screen (easily), you can kinda bolt an alternate screen-like behaviour onto a regular terminal screen.
I don't personally use LLMs if I can avoid it, so I don't know how janky this thing is, really, but having had to recently deal with ANSI terminal alternate screen bullshit, I think this explanation's plausible.
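For the curious, the alternate screen really is just a pair of escape sequences. A minimal Python sketch (the `with_alt_screen` helper is my own illustration; the sequences are xterm-style DECSET 1049, supported by most modern terminals):

```python
import sys

# xterm-style escape sequences for the alternate screen buffer
ENTER_ALT = "\x1b[?1049h"  # switch to the alternate screen (no scrollback)
LEAVE_ALT = "\x1b[?1049l"  # restore the normal buffer and its scrollback

def with_alt_screen(draw):
    """Run `draw` on a blank alternate screen, restoring the shell after."""
    sys.stdout.write(ENTER_ALT + "\x1b[2J\x1b[H")  # clear and home the cursor
    sys.stdout.flush()
    try:
        draw()
    finally:
        sys.stdout.write(LEAVE_ALT)
        sys.stdout.flush()
```

Editors like neovim effectively bracket their whole session in that ENTER/LEAVE pair; a chat-style tool that skips it leaves its output in the normal buffer, where the terminal's own scrollback keeps working.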
I wrap Opus 4.5 in a consumer product with zero economic utility and people pay for it; I'm sure plenty of end users are willing to pay for it in their software.
Edit: I'm not using the term of art, I mean it literally cannot make them money.
I'd encourage devs to use MiniMax, Kimi, etc. for real-world tasks that require intelligence. The downsides emerge pretty fast: much higher reasoning-token use, slower outputs, and palpable degradation. Sadly, you do get what you pay for right now. However, that doesn't prevent you from saving a ton through smart model routing, being smart about reasoning budgets, and using max output tokens wisely. And optimize your apps and prompts to reduce output tokens.
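The routing idea can be sketched very simply. This is a toy illustration, not any real API: the model names, prices, and the complexity score are all made-up placeholders.

```python
# Hypothetical model catalog: per-million-token cost and output caps.
MODELS = {
    "cheap": {"cost_per_mtok": 0.3, "max_output": 1024},
    "frontier": {"cost_per_mtok": 15.0, "max_output": 4096},
}

def route(task_complexity: float, budget_tokens: int) -> dict:
    """Send only hard tasks to the expensive model, and always cap output.

    `task_complexity` is assumed to be a 0..1 score from some upstream
    classifier; how you produce it is the hard part in practice.
    """
    name = "frontier" if task_complexity > 0.7 else "cheap"
    model = MODELS[name]
    return {"model": name,
            "max_tokens": min(budget_tokens, model["max_output"])}
```

Even a crude router like this shifts the bulk of requests onto the cheap tier while keeping a hard cap on output tokens, which is where the savings actually come from.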
> [T]he overwhelming thrust of the available evidence is that there is no difference in the legibility of serif typefaces and sans serif typefaces either when reading from paper or when reading from screens. Typographers and software designers should feel able to make full use of both serif typefaces and sans serif typefaces, even if legibility is a key criterion in their choice.
As someone who's been pushing for renewables for quite a while now it's dismaying that it's taken a war to accelerate this push, but I'm glad to see that it's happening at least.
It's doubly dismaying that my own country (US) is still doubling down on fossil fuels despite everything.
The concern about a new dependency on China is real, but renewables do have the advantage that once you have the infrastructure in place it keeps working without continuously importing fuel. Nonetheless, China has done a good job building up their PV/battery manufacturing capacity (including via subsidies for a while if I'm not mistaken) and to the extent the rest of the world wants to avoid a dependency on them we should do that too.
I'll do you one better: a method that requires no special cameras (most have IR filters anyway), no double cameras, and no prisms.
Shoot the scene in 48 or 96 fps. Sync the set lighting to odd frames. Every odd frame, the set lights are on. Every even frame, set lights are off.
For the backing screen, do the reverse. Even frames, the backing screen is on. Odd frames, backing screen is off.
There you go: mask / normal shot / mask / normal shot / mask ... you get the idea.
Of course, motion will cause the normal image and the mask to drift out of sync, but I bet that can be remedied by interpolating a new frame between every mask frame. Plus, when you mix it down to 24 fps you can introduce as much motion blur and shutter-angle "emulation" as you want.
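The splitting step is trivial once capture and lighting are genlocked. A toy sketch (string labels stand in for real image buffers, and I'm assuming frame 0 is a backing-screen frame):

```python
def deinterleave(frames):
    """Split an interleaved 48/96 fps capture into mask and beauty streams.

    Assumes even-indexed frames were shot with only the backing screen lit
    (matte source) and odd-indexed frames with only the set lights on
    (normal shot), per the alternating-lights scheme described above.
    """
    masks = frames[0::2]   # backing screen on -> matte source
    beauty = frames[1::2]  # set lights on -> the shot you keep
    return masks, beauty
```

The motion-compensation step would then interpolate each mask toward the timestamp of its neighboring beauty frame before pulling the key.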
The sad thing about this is that the problems encountered in post come from the production team saying "fix it in post" during the shoot. I've been on set for green-screen shoots where the lighting was not done properly. I watched the gaffer walk across the set taking readings from his meter before saying the lighting was good. I flipped on the waveform monitor and told him it was not even (which never goes down well when the camera department tells the gaffer it's not right). He put up an argument, went back, and took measurements again before repeating that it was good. I flipped the screen around and showed him where it was obviously not even. After a third set of meter readings he started adjusting lights. Once the footage was in post, the FX team commented on how easy the keys were because of the even lighting.
The problem is that the vast majority of people on set have no clue what is going on in post. To the point that, when the budget is big enough, a post supervisor is present on production days to give input so that "fixing it in post" is minimized. When there is no budget, you'll see situations just like the first 30 seconds of TFA's video: a single lamp lighting the background, so you can easily see the light falling off and the shadows from wrinkles where the screen was pulled out of the bag 10 minutes before shooting. People just don't realize how much light a green screen takes. They also fail to have enough space to pull the talent far enough off the wall to avoid the green reflecting back onto the talent's skin.
TL;DR They solved something to make post less expensive because they cut corners during production.
I'm not so familiar with the C64, but Monkey Island did indeed use graphics mode on all the 16/32 bit systems it supported - PC graphics cards had their video memory on the card (same as they do today), so saving memory by using text mode didn't make sense. The only problem with that was that the CPU had to be used for any processing of the video memory, so especially scrolling the whole screen was sometimes a bit slow with weaker CPUs. The Amiga and Atari ST didn't have a dedicated text mode.
XML is notoriously expensive to parse properly in many languages. Basically, the entire world centers on three open-source implementations (libxml2, Expat, and Xerces) if you want to get anywhere close to actual compliance. Even with them, you might hit challenges (libxml2 was until recently largely unmaintained, yet it is the basis for many bindings in other languages).
The main property of SGML-derived languages is that they make "list" a first class object, and nesting second class (by requiring "end" tags), and have two axes for adding metadata: one being the tag name, another being attributes.
So while it is a suitable DSL for many things (it is also seeing new life in web components definition), we are mostly only talking about XML-lookalike language, and not XML proper. If you go XML proper, you need to throw "cheap" out the window.
Another comment to make here is that you can have an imperative-looking DSL that is interpreted as a declarative one: nothing really stops you from defining an imperative-looking syntax that means exactly the same thing as the XML-alike DSL you've got.
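As one possible illustration (a made-up builder, not any particular library): the calls below look imperative, but each one just records structure, so the program is really a declaration of the same tree an XML document would spell out.

```python
class Node:
    """A tree node; `add` looks like a statement but only declares a child."""
    def __init__(self, tag, **attrs):
        self.tag, self.attrs, self.children = tag, attrs, []

    def add(self, tag, **attrs):
        child = Node(tag, **attrs)
        self.children.append(child)
        return child  # allow nesting: root.add("a").add("b")

    def to_xml(self):
        inner = "".join(c.to_xml() for c in self.children)
        attrs = "".join(f' {k}="{v}"' for k, v in self.attrs.items())
        return f"<{self.tag}{attrs}>{inner}</{self.tag}>"
```

Since `add` has no effects beyond recording the child, independent siblings can be declared in any order, which is exactly the declarative reading.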
One declarative language that looks imperative but really uses "equations", which I know about, is METAFONT. See, e.g., https://en.wikipedia.org/wiki/Metafont#Example (the example might not demonstrate it well, but you can reorder all the equations and it should produce exactly the same result).
I work with a lot of audio in a professional capacity. You're correct if you're saying that neither tech is universally "teh best".
And you're correct that wired phones have a lot of advantages.
Add that wired phones don't have latency, though I've never really tried to track vocals on wireless cans. I have a pretty nice collection of what I consider quality mid-tier stuff for my studio (hd280, dt770, mdr7506, k240), and I think they mostly sound better, and I can use them longer than the various wireless stuff I use.
And the "real" UHF wireless audio I use professionally (well, to collect rather than listen to audio) is very reliable and good sounding but also, like, $1000/ch once it's cased and cabled and properly accessorized.
However, for almost all of my day-to-day listening I use either AirPods or some Bluetooth'd 3M ear muffs. I even went back to AirPods after going through both wired and other wireless solutions.
I don't enjoy having my in-ears ripped out along with my pocket. And universally, the cord ends and the physical connector on my phone were the weak spots that had me replacing stuff. I haven't bought a phone in the five years since I got one that could charge wirelessly and never has headphones plugged into it, and I don't intend to get another any time soon (knock on wood that my case keeps the screen from breaking and needing repair).
I have a bluetooth receiver with an analog out that I keep in my workbox, which I used for program music at a show tonight. It's nice to start my truck and my podcast just starts playing, too, without having to get out my phone and plug it in.
You're right that wired stuff is better for some things. I still find wireless stuff to be superior in a lot of situations.
It's two different problems. People who run review sites and blogs and such care about traffic, and not getting attribution will kill their desire to participate. People who post here and on Reddit etc. care about talking with other human beings, and feeling ignored in a sea of botspam will kill *their* desire to participate.
> But once you start adding mouse clickable tabs, buttons, checkboxes etc. you left the UX for TUIs behind and applied the UX expected for GUIs, it has become a GUI larping as a TUI.
Hard disagree. Borland Turbo Vision [0] was one of the greatest TUI toolkits of the DOS era, and it had all of these:
> Turbo Vision applications replicate the look and feel of these IDEs, including edit controls, list boxes, check boxes, radio buttons and menus, all of which have built-in mouse support.
JS-rendered websites are sometimes even better: they usually have some sort of internal API that you can access directly instead of relying on the website styling, which may change at any moment.
I'm a fellow reporter who needs to keep tabs on some websites. I used various tools, including running my own Klaxon[1] instance, but these days I find it easier to just quickly vibe-code a crawler and use GitHub Actions to run it periodically. You can make it output an RSS feed, email you, archive it with archive.today, take a screenshot, or trigger whatever action you want.
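The crawler part really can be tiny. A hedged sketch of the shape of it (the JSON endpoint URL is a placeholder you'd swap for the site's internal API; the scheduling would live in a GitHub Actions workflow, not shown):

```python
import json
import urllib.request
from xml.sax.saxutils import escape

def fetch_items(url):
    """Poll a site's internal JSON API; assumes it returns a list of
    objects with `title` and `link` keys (adjust for the real site)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def to_rss(items, title="Watched page"):
    """Render fetched items as a minimal RSS 2.0 feed string."""
    entries = "".join(
        f"<item><title>{escape(i['title'])}</title>"
        f"<link>{escape(i['link'])}</link></item>"
        for i in items
    )
    return (f'<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>{escape(title)}</title>{entries}</channel></rss>")
```

Commit the feed file from the Actions run and any RSS reader can subscribe to the raw file URL; swapping `to_rss` for an email or screenshot step works the same way.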
I think this long post is saying that if you are afraid that moving code behind a function call will slow it down, you can look at the machine code and run a benchmark to convince yourself that it is fine?
This suffers from the same problem that so so so many alternative social, federated, self-hosted ideas suffer from. Matrix, keybase, pgp, etc.
It's too dependent on encryption. Yes, it's a cool technical feat that stuff can be out in the open but also private, but:
1. I want to be able to follow my friends if my phone dies and I have to get a new one.
2. I am very technical, and even I don't know exactly what an X25519 keypair is.
I would like people to come up with more stuff like this that is designed for small communities but not for very secure communication. I want something secured by a username and password that I give to a server I am registered with, and that server handles the encryption business. If the server rotates keys, that's for the admin to figure out and exchange keys with sibling servers.
I'm just making up specifics, but this is the kind of ethos I think is needed to make things that can succeed with non-technical people in a way that can unseat big tech.
In case I sound too critical: this is cool. It just isn't something I can use with family and friends to replace Facebook or even email.
In general, I find that minimax approximation is an underappreciated tool, especially the quite simple Remez algorithm to generate an optimal polynomial approximation [0]. With some modifications, you can adapt it to optimize for either absolute or relative error within an interval, or even come up with rational-function approximations. (Though unfortunately, many presentations of the algorithm use overly-simple forms of sample point selection that can break down on nontrivial input curves, especially if they contain small oscillations.)
While I'm glad to see the OP got a good minimax solution at the end, it seems like the article missed clarifying one of the key points: the error waveform over the specified interval is critical, and if you don't see the characteristic minimax equioscillation wiggle, you're wasting an easy opportunity for improvement.
Taylor series in general are a poor choice, and Pade approximants of Taylor series are equally poor. If you're going to use Pade approximants, they should be of the original function.
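For a sense of how little code the basic idea takes, here's a bare-bones sketch of the Remez exchange loop for absolute error. It uses exactly the naive dense-grid extremum selection the parenthetical above warns about, so it's only safe for smooth targets whose error curve has exactly deg+2 sign-constant segments:

```python
import numpy as np

def remez_poly(f, deg, a, b, iters=10, gridsize=2000):
    """Minimax degree-`deg` polynomial for f on [a, b] via Remez exchange."""
    k = np.arange(deg + 2)
    # initial reference: Chebyshev-style points mapped into [a, b]
    x = np.sort((a + b) / 2 + (b - a) / 2 * np.cos(np.pi * k / (deg + 1)))
    grid = np.linspace(a, b, gridsize)
    for _ in range(iters):
        # solve  sum_j c_j x_i^j + (-1)^i E = f(x_i)  for coefficients c
        # and the levelled error E
        A = np.vander(x, deg + 1, increasing=True)
        A = np.hstack([A, ((-1.0) ** k)[:, None]])
        c = np.linalg.solve(A, f(x))[:-1]
        # exchange step: new reference = max-|error| point in each
        # sign-constant segment of the dense-grid error curve
        err = f(grid) - np.polyval(c[::-1], grid)
        cuts = [0] + list(np.where(np.diff(np.sign(err)) != 0)[0] + 1) + [len(grid)]
        x = np.array([grid[s + np.argmax(np.abs(err[s:e]))]
                      for s, e in zip(cuts[:-1], cuts[1:])])
    return c  # coefficients in increasing-power order
```

On exp over [0, 1] at degree 2, this lands well under a hundredth of the Taylor polynomial's worst-case error, which is the gap the article is chasing.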
It's not loneliness; it's solitude. Time to find yourself and your creativity again.
You were codependent and need to learn how to be independent again. Living for you, just for yourself. You've spent most of your life living for others. Now it's time for you.
Try not to drink alcohol. Focus on your physical health: gym, tennis, saunas, running, golf, any physical activity you can.
I'm still paranoid about keeping things securely sandboxed.