So, it’s interesting. You know how with RAM, it’s a good idea for it to be “fully utilized”, in a sense that anything apps aren’t using should be used for file system cache? And then when apps do need it, the least-recently-used cache can be freed to make room? It’s actually similar for the file system itself!
If macOS is using 153GB for iCloud cache, that’s only a bad thing if it’s not giving it back automatically when your filesystem starts getting full. Otherwise, it means you have local copies of things that live in iCloud, making the general experience faster. In that sense, you want your filesystem to be “fully utilized”. The disk viewer in macOS that shows you filesystem utilization should even differentiate this sort of cache from “real” utilization… this cache should (if everything is working right) logically be considered “free space”.
Now of course, if there are bugs where the OS isn’t giving that storage back when you need it, that all goes out the window. And yeah… bugs like these happen too damned often. But I still say, the idea is actually a good one, at least in theory.
What would the alternative be? Simply not cache anything you get from iCloud? Because even if you delete it more eagerly, that’s a write cycle.
In fact, avoiding deleting it in case the user needs it again is going to put fewer write cycles on the SSD, assuming you’re going to write it to the SSD at all. The only alternative I can think of is keeping everything from iCloud in RAM, but that is a pretty insane idea. (Also, then the first thing you’d get is people complaining that iCloud eats up all their 5G data caps, etc.)
Of course, but then iCloud might want to cache a reasonable amount of data, say, the 10% the user uses the most. Seeing iCloud caches in the 100+GB arena makes no sense to me, especially if the system isn’t rapidly releasing that storage when needed.
If the ability to release the storage on-demand works correctly (and this is a big if), there’s no reason to limit it to 10%. What benefit would that have? If the system works well, deleting the data eagerly accomplishes nothing.
I think the actual system uses filesystem utilization as a form of “disk pressure”, to the point where once it’s above a certain threshold (say, 90% used), it should start evicting least-recently-used data. It doesn’t wait for 100%, because it takes some nonzero amount of time to free the cache. But limiting the cache size arbitrarily doesn’t seem useful.
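The policy described above can be sketched in a few lines of Ruby. This is purely illustrative — the class, constant, and method names are all invented, not Apple’s actual mechanism: once utilization crosses a threshold, evict least-recently-used entries until pressure drops back under it.

```ruby
# Illustrative "disk pressure" cache eviction (all names are hypothetical).
class CacheEvictor
  EVICTION_THRESHOLD = 0.90 # start evicting at 90% utilization, not 100%

  def initialize(disk_capacity)
    @disk_capacity = disk_capacity
    @used = 0
    @cache = {} # path => size; Ruby hashes keep insertion order, so the
                # first entry is always the least recently used one
  end

  # Record an access (or download) of a cached file.
  def touch(path, size)
    @used -= @cache.delete(path).to_i # re-inserting moves it to the "recent" end
    @cache[path] = size
    @used += size
    evict_lru! while pressure > EVICTION_THRESHOLD && @cache.any?
  end

  def pressure
    @used.to_f / @disk_capacity
  end

  private

  # Drop the oldest (least recently used) entry.
  def evict_lru!
    path, size = @cache.first
    @cache.delete(path)
    @used -= size
  end
end
```

With a 100GB disk, touching a 50GB file and then a 45GB file pushes pressure to 0.95, so the older file gets evicted and pressure falls back to 0.45 — eviction kicks in before the disk is actually full.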
It gets more complicated when there are multiple caches (maybe some third-party apps have their own caches) and you need to prioritize who gets evicted, but it’s still the same thing in theory.
But yeah, if the system isn’t working right and cache isn’t seen as cache, or if it can’t evict it for some reason, then this all goes out the window. I’m only claiming it’s good in theory.
Fair. Another thing I really like is that you can do `nix flake show templates`, then try `nix flake init -t templates#trivial` and go “ahh, so that’s how it’s supposed to be done”, or init `full` and realize this flake thing can do a lot you didn’t know it could. You can then edit, delete, and experiment.
The flakes were the main UX/DX improvement for me. Before them I honestly could not do anything. The learning curve was so incredibly steep it almost felt like the people behind Nix were being malicious or intentionally gatekeeping. You finally stumble onto something you can at least partly understand, but then the powers that be throw two last obstacles at you:
First, flakes are "experimental", so you have to enable them. Back then there were like three slightly different CLI commands to do it, and none of them seemed to work in any of the 5 tutorial tabs I had open. And putting `experimental-features =` into the flake you're trying to switch to obviously does not work.
Then you hit the classic situation where your flake is not committed or staged, so Nix refuses to see it. And instead of telling you that, it prints this abomination of an error message: "error: path '/nix/store/0ccnxa25whszw7mgbgyzdm4nqc0zwnm8-source/flake.nix' does not exist" (https://determinate.systems/blog/changelog-determinate-nix-3...)
I would not wish learning Nix from zero on my worst enemy, and I say that as someone who uses nix-darwin, devShells, deploy-rs and so on every day. The UX/DX is really bad, but nothing else comes close to its capabilities.
Sorry for the rant, but without flakes I would not have made it.
> The flakes were the main UX/DX improvement for me. Before them I honestly could not do anything.
Agreed. I think flakes are far more intuitive than channels. In a flake everything is declared in the repo it's used in. I still don't understand channels.
For someone who's used to thinking in channels, I suppose flakes would be jarring. For someone (like me) who came from the world of Project.toml and package.json, flakes make a lot of sense.
I think a lot of people come to Nix and NixOS from Linux and similar environments, where having "repositories" or "registries" is a fairly common way of distributing indexes of software in their distributions. So moving in that direction feels quite natural.
But for someone coming from OSX/macOS or Windows, where there is basically just one index (provided by the companies maintaining the OSes) and you can't really add/remove others, it's a completely new concept, so it makes sense that there's at least a bit of friction as those people wrap their heads around it.
I'm not sure I understand your point. You're saying that channels are like apt/sources.list or yum.repos.d, and flakes are like the Apple App Store? Or the other way around?
One thing that probably didn't help my understanding of channels was that I run Nix on non-NixOS systems (primarily MacOS and Fedora). If I'd stuck to NixOS, then thinking of a channel in the same terms as apt/sources.list or yum.repos.d would have been an easier mental model.
That has nothing to do with flakes. When I add a "module" to my repos it's the same. I have to add it to the git repo or Nix does not "see" it. And yes, it's pretty unintuitive.
It actually is specific to flakes. Classic nix commands can see untracked files just fine. Flake evaluation behaves differently because of how it decides which "scheme" to use:
> If the directory is part of a Git repository, then the input will be treated as a `git+file:` URL, otherwise it will be treated as a `path:` url;
This is why untracked or unstaged files disappear when using flakes.
What's more interesting is how confident your original comment read while turning out not to be correct at all. Of course this has always been true, but it's an excellent reminder that even humans hallucinate.
I saw exactly the same infinite-arms thing, with zero prior interest in religion. It took me to a place I “was once before and should know well”; other entities protested, because why bother when he needs to go back so soon. Then I came back to my room and had no idea what to do with that experience.
These stories never fail to astonish me. Why the same deity? It’s so interesting.
The fact that the mind is able to create these powerful visions and patterns and other realities is really incredible. We have this machinery for perceiving the world and moving through it, but that machinery is capable of so many other insane and beautiful and terrifying things - capabilities which are inaccessible except in rare instances.
It’s really quite remarkable. Underneath our prosaic experience of consciousness is something that can generate infinite fractals, awe-inspiring visions of otherworldly creatures, dream landscapes of colour and shape. Why? Where does it all come from? Is this what life would be like all the time without us filtering the information coming into our senses?
May I suggest "Man And His Symbols" by Carl Jung? It was his final writing and, I believe, his only one that focused on the common(ish) reader as the audience. The basis of the book (and generally his studies and beliefs) is that the subconscious is as meaningful as the conscious, it just communicates in ways that are harder to access in modern society, and therefore it's been pushed away and ignored.
Yup, which is why I never understand why people keep making this criticism that could have been avoided by just reading the docs a little bit more or even asking on the htmx Discord.
Serious question: do people actually enjoy writing Ruby? I feel like I’m writing in something like Bash. I never felt this way until I picked up other languages like Rust, Zig, and C#, and learned a tiny bit of programming language theory. After that, the loose and squishy feel of Ruby really started to bug me. Also, it seems like every Ruby programmer I know only ever uses other dynamic languages like Python. It’s never that they’re experts in C++ or something and then decided to start programming in Ruby.
I had a good background in C++ programming before switching to Ruby. At first, I was terrified of the lack of strict typing, but after using it for a while, I realized my concern wasn't that warranted. For me it is about the tradeoff of dealing with types vs productivity. Sure, I occasionally get bitten by a random "method not defined for nil" error, but it is usually very easy to fix, and I don't run into the issue very often. With Ruby, and especially Rails, it is about the productivity gains. I can simply accomplish much more in less time and fewer lines of code than I would in other languages/frameworks. Not only am I writing fewer lines of code (usually), the language is very expressive without being overly cryptic. The code is more readable, and to me that results in better maintainability. The strong emphasis the community and ecosystem put on testing also leads to more resilient and much more maintainable code.
I disagree; I think weak typing significantly lowers developer productivity, because your IDE gets lobotomized. Types aren't just for people, they're for programs. If I can't go to a definition or step through the control flow, that's a problem for me. I program in PHP - I get it. I have to live in the debugger because my IDE is worthless when I'm using bespoke arrays for everything.
Also, most statically typed languages have very robust type inference. If you don't like writing types that's fine - the language can just infer them 95% of the time. A lot of times you can open up a C# file and find next to no types explicitly written. But if you hover over something in your IDE, you can see the type.
Absolutely. I enjoy it so much that I wonder "do people actually NOT enjoy writing Ruby?" It's usually the first tool I pull out of the toolbox for DSLs, scripts, spikes, one-offs and the like. A lot of the time, the project will happily stay in Ruby unless there's a good reason to use something else. And then I move it - horses for courses.
I programmed professionally in C, C++, C#, Delphi, and a few other languages well before I had even heard of Ruby.
Yes, love it. I've rewritten large parts of my stack in it (editor, shell, font renderer, terminal, window manager, file manager).
I started from a background of heavy C++ use, including a lot of template metaprogramming. Convincing me to even give Ruby a chance took a lot, but once I'd tried it I abandoned C++ pretty much immediately, and don't miss it.
You don't miss things like enums, exhaustive switch, or other basic language features? And what about `method_missing`? It's such a crazy idea to me that something like this exists. I know why it exists, but I'm still like: why, why such bloat and complexity?
Ruby inheritance is a list of class names. When you call a method on an object, Ruby goes up that list, looking for the first class that defines that method.
If it doesn't find any class defining that method, it calls `method_missing` (and again, goes up the list). The Ruby base object class defines `method_missing`, so if no other classes in the inheritance list do, you get that one (which then throws the usual error).
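That fall-through is also what makes delegation patterns so cheap in Ruby. A minimal sketch, using nothing beyond core Ruby (the `Proxy` class here is just an invented example):

```ruby
# A delegating proxy: any call the proxy itself doesn't define falls
# through the lookup chain and lands in method_missing.
class Proxy
  def initialize(target)
    @target = target
  end

  # Reached only after Proxy, Object, etc. fail to define the method.
  def method_missing(name, *args, &block)
    if @target.respond_to?(name)
      @target.public_send(name, *args, &block)
    else
      super # BasicObject#method_missing raises the usual NoMethodError
    end
  end

  # Keep respond_to? honest about what we forward.
  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private) || super
  end
end

p Proxy.new("hello").upcase # forwarded to String#upcase => "HELLO"
```

Note how the "error" case isn't special-cased at all: `super` just resumes the normal lookup, which is exactly the bootstrapping argument above.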
IMO, there is zero bloat or complexity added by this; it's super simple language bootstrapping (allowing more of Ruby to be written in Ruby, vs the C interpreter).
What do you see as the bloat and complexity added by this?
> rely less on a compiler and more on myself with automated tests
Just my experience, but I think this is a muscle a lot of people haven't developed if they came from a language/toolset/IDE that does automatic type checking and autocomplete reliably, etc.
Personally, method_missing goes against both of mine. It makes programs harder to reason about, more difficult to debug, and nearly impossible to `grep`. That said, I understand that this kind of flexibility is what some people like. I just don’t.
That's not a serious question. Of course people do. Your inability to understand the language does not impact anyone else other than yourself. This should go without saying.
I'm also an expert in C, Go and JavaScript. Ruby is an excellent language, and the Smalltalk paradigm has some real strengths, especially for duck-typed systems. The only reason I don't use it more often is because it is slow for the type of work I'm doing recently.
It was amazing for web work and it's fantastic for writing small little utility scripts.
An open distaste for things does not make you sophisticated or smart. You're not in any category of high repute when you do this.
I love Rails; it's been my go-to framework for reference. But I could never get as comfortable with Ruby as with writing JS or PHP. I do not know the reason.
If debugging Ruby is hard for you because of monkey patching, it's an issue of not knowing the debugging tools. Attach pry or Ruby debug and show the source location of a method, or log it. This isn't surprising: debugging Ruby is different from debugging most static languages, and more tutorials on how to do this well would be nice...
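For example, core Ruby's `Method#owner` and `Method#source_location` will tell you exactly where any method, monkey-patched or not, was defined. The `shout` patch below is a made-up stand-in for something a library might add:

```ruby
# Locating a monkey patch at runtime with plain Ruby introspection.
class String
  def shout # pretend a third-party gem added this
    upcase + "!"
  end
end

m = "hi".method(:shout)
puts m.owner                    # the class/module that defines the method
puts m.source_location.inspect  # [file, line] where the patch lives

# C-implemented core methods have no Ruby source, so this returns nil:
puts "hi".method(:upcase).source_location.inspect
```

That [file, line] pair is usually all you need to jump straight to whichever gem patched the method.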
Also, the use of monkey patching in Ruby peaked something like a decade and a half ago. Outside of Rails it's generally frowned on, and introducing new methods is usually done by opting in through included modules these days.
Can you give an example of where monkey patching made debugging hard? I have a decade of Ruby experience and can't think of a single time it was an issue
This is one of those things that sounds like it'd be a problem but it really isn't
I spent more of my life than I would like to admit learning and writing Rust. I still build all of my web applications in almost pure Ruby these days. Speed from thought to action is simply unparalleled, and it turns out that in most situations that was the most important factor.
I do. It's a whole thing that gets you down to writing your business logic in an expressive way very easily. The framework (Rails) helps, yes, but even pure Ruby can be nice. I've written a second-accurate simulator for cars and chargers in an EV charging station in pure Ruby; it was fast to iterate on and pleasant to write.
The ecosystem, toolchain and all do a lot. I really miss them when I work in other languages, and I wish I could find the same way of developing elsewhere. I currently do C for embedded in a horrible IDE, and I want to bang my head against the table each time I have to click on something in the interface.
> Also, it seems like every Ruby programmer I know only ever uses other dynamic languages like Python. It’s never like they’re experts in C++ or something and then decided to start programming in Ruby.
Can you expand on what you’re saying here, or why you’re raising this as an issue with Ruby the language or Rails the library?
Just a personal observation that made my communication with Ruby developers hard, as I cannot use concepts from strongly typed languages because they live in a world without them. But I guess that's more of an issue with me than with them.
Yes, many people love programming in Ruby. It’s a matter of preference, not some lack of technical merit. There are plenty of people who are well equipped in strongly typed languages who write in both. You might not know them, but you really don’t have to look very far.
yes, I have used a lot of languages, both static and dynamic, and ruby is one of the ones I love. maintaining large code bases is certainly not its forte, but in terms of expressing what you want in code it is like a tool that fits really well into my hand.
A lot of these kinds of skills aren't always applicable or comparable to a salaried position.
Many do odd contract jobs that are extremely high value, i.e. "come in and fix this super big bug" or "add this super important feature" on a COBOL system, at an extremely high day rate, because it's hard to find people with the appropriate skills.
FWIW, when I worked at a finance company with a lot of COBOL + JCL + DB2 devs (including in management, so I could see more info), their salaries were on average similar to full-stack devs', possibly lower, especially as we put more emphasis on AWS, where those people started getting more premium salaries. Some banks, I hear, do give a COBOL premium, but it seems to be tied to very specific mainframe systems experience plus COBOL.
But what do those full-stack engineers make? Salaries are hugely variable across the industry. There are “senior” engineers making $60k/yr, and new grads starting at $200k/yr.