When there’s feature parity, what’s the next differentiator for you? For me, performance.
Though I admit another important aspect is community adoption. If your 3rd-party dependency uses zod internally, well now you’re bundling in both, and the added network latency probably negates any performance improvement you were getting in a web app. That’s why I wish libraries would use something more generic that allows you to dependency-inject what you’re already using, like https://github.com/decs/typeschema
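A minimal sketch of what that dependency injection could look like; the `Validator` interface and `parseUser` function below are hypothetical illustrations of the pattern, not typeschema's actual API:

```typescript
// Sketch: a library accepts any validator the caller already bundles,
// instead of hard-coding a dependency on zod.

interface Validator<T> {
  parse(input: unknown): T; // throws on invalid input
}

interface User {
  id: number;
  name: string;
}

// "Library" code: validator is injected by the caller.
function parseUser(raw: unknown, validator: Validator<User>): User {
  return validator.parse(raw);
}

// Caller side: adapt whatever schema library you already use
// (zod, valibot, ...) or hand-roll one, as here.
const userValidator: Validator<User> = {
  parse(input: unknown): User {
    const o = input as { id?: unknown; name?: unknown } | null;
    if (o == null || typeof o.id !== "number" || typeof o.name !== "string") {
      throw new Error("invalid User");
    }
    return { id: o.id, name: o.name };
  },
};

console.log(parseUser({ id: 1, name: "Ada" }, userValidator));
```

The library then ships zero validation dependencies of its own, and the app bundles exactly one schema library.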
> In an afternoon, we extended our data service library to perform large-scale data migrations. It reads token ranges from a database, checkpoints them locally via SQLite, and then firehoses them into ScyllaDB. We hook up our new and improved migrator and get a new estimate: nine days!
How many machines was this migrator running on? One? :D Sounds absurdly amazing!
Here is the Makefile "starter" I use: https://github.com/awinecki/magicfile
People call this a "self-documenting makefile".
It has migrated with me from company to company, from project to project. Through Node, PHP, AWS, Docker, server management, cert updates, file processing, and many many more.
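The core trick is small enough to quote; a minimal self-documenting `help` target looks roughly like this (the `##` comments double as the documentation; magicfile layers niceties on top of the same idea):

```make
.DEFAULT_GOAL := help

.PHONY: help build
help:  ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "  %-15s %s\n", $$1, $$2}'

build:  ## Build the project
	@echo "building..."
```

Running plain `make` then prints every target next to its `##` description.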
As a person who has been doing "react" since 2016, I would say that it removes so much of React's complex BS that I am surprised it is not 100x more popular.
We've recently moved one service from Next to Astro, and it mostly meant removing a ton of boilerplate and "dance around" code.
Oh, but you are moving away from React. And Svelte IMO is waaay friendlier and "saner" than "typical" React. Svelte's reactive model (observables and computed values) is very friendly and simple to use.
It has one of the highest densities of information and ideas per paragraph of anything I've read.
And it is full of good ideas. Honestly, reading any discussion or reasoning about AGI / safe AGI feels kinda pointless after it.
The author has described all the possible decisions and paths, and we will follow just one of them. I couldn't find a single flaw in his reasoning in any branch of history.
His reasoning is very flawed and this book is responsible for a lot of needless consternation.
We don’t know how human intelligence works. We don’t have designs or even a philosophy for AGI.
Yet, the Bostrom view is that our greatest invention will just suddenly "emerge" (unlike every other human invention). And that you can have "AGI" (hand-wavy) without having all the other stuff that human intelligence comes along with, namely consciousness, motivation, qualia, and creativity.
This is how you get a "paperclip maximizer" – an entity able to create new knowledge at astounding rates yet completely lacking in other human qualities.
What leads us to believe such a form of AGI can exist? Simply that we can imagine it? That's not an argument rooted in anything.
It's very unlikely that consciousness, motivation, qualia, and creativity are just "cosmic waste" hitching a ride alongside human intelligence. Evolution doesn't breed inefficiencies like that. They are more likely than not a part of the picture, which disintegrates the idea of a paperclip maximizer. He's anthropomorphizing a `while` loop.
> His reasoning is very flawed and this book is responsible for a lot of needless consternation.
Is it? I feel there is a stark difference between what you say and what I remember being written in the book.
> We don’t know how human intelligence works.
I think this was addressed in the first half of the book, which covers research and progress in the field: tissue-scanning resolution, emulation attempts like the Human Brain Project, and advances in 1:1 simulations of primitive nervous systems like worms, which simulate 1 second in 1 real hour or something.
While these are primitive, the progress is exponential.
> Yet, the Bostrom view is that our greatest invention will just suddenly "emerge"
I think it is quite the contrary. There was nothing sudden in the reasoning. It was all about slow progress in various areas that gets us closer to advanced intelligence.
The path from a worm to a village idiot is a million times longer than the path from a village idiot to the smartest person on Earth.
> an entity able to create new knowledge at astounding rates yet completely lacking in other human qualities.
This subject was also explored in depth, IMO...
Maybe my memory is cloudy, I read the book 5+ years ago, but it feels like we've understood it very (very) differently.
That said, for anyone reading along: I am not convinced by the argument presented above, and I suggest reading the book.
Correct me if I am wrong. My understanding is that hydrogen is an efficient way of storing energy only when used for larger-scale storage, instead of batteries. Is that right?
The round-trip efficiency of electricity-to-hydrogen-to-electricity is much lower than that of batteries, even at large scale.
What hydrogen has going for it is extremely low capex per unit of energy storage capacity, perhaps $1/kWh for a system that stores compressed hydrogen in solution-mined salt caverns.
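To make that tradeoff concrete, here's a rough back-of-envelope sketch; every number in it (battery capex, efficiencies, cycle counts) is an illustrative assumption, not a sourced figure:

```typescript
// Back-of-envelope: storage capex per kWh actually *delivered* over a
// system's lifetime. All numbers are illustrative assumptions.

function capexPerDeliveredKWh(
  capexPerKWhCapacity: number, // $ per kWh of storage capacity
  roundTripEfficiency: number, // fraction of stored energy recovered
  lifetimeCycles: number       // full charge/discharge cycles over the life
): number {
  return capexPerKWhCapacity / (roundTripEfficiency * lifetimeCycles);
}

// Seasonal storage: roughly one full cycle per year over a 15-year life.
// Hypothetical battery: $150/kWh capacity, ~90% round trip.
const batterySeasonal = capexPerDeliveredKWh(150, 0.9, 15);

// Hydrogen in a salt cavern: ~$1/kWh capacity, ~40% round trip.
const hydrogenSeasonal = capexPerDeliveredKWh(1, 0.4, 15);

console.log(batterySeasonal.toFixed(2));  // ≈ 11.11 $/kWh delivered
console.log(hydrogenSeasonal.toFixed(2)); // ≈ 0.17 $/kWh delivered
```

Cycled rarely, as in seasonal storage, hydrogen's rock-bottom capex swamps its poor round-trip efficiency; cycled daily, the efficiency penalty and the cost of the wasted electricity dominate instead.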
According to various sites, the MediaTek Dimensity 6100+ is a 6nm update of a core released 3 years ago (the Dimensity 700, on 7nm). It's 5-10% faster, likely due to the move from 7nm to 6nm, as the cores are the same and run at the same speed. It adds an updated Bluetooth chipset (5.1 to 5.2) and supports a larger maximum camera resolution. The camera on the A15 is well below the previous chipset's maximum; however, the increased camera bandwidth should make the camera feel snappier (a common complaint about low-end phones). The process improvement should increase efficiency as well, though there are no benchmarks yet that can test this.