Docker performance on current Intel Macs is much worse than on Linux. So if you're building software to run on an x86 Linux server, dealing with ARM-based macOS is going to be an even bigger PITA, as both the arch and the OS will be different. I am definitely in the camp of replicating server setups on my laptops, so as sweet and incredible as the M1 is, I won't be able to use it.
But for new projects, I honestly do not know what people will do.
If Amazon continues to evolve their Graviton chips, and other cloud providers follow, it will be quite tempting to try a 100% ARM setup...
> Because macOS containers run inside a Linux VM, not on the native host OS
My comment was sarcastic.
> same with Windows
Significantly less so as Microsoft themselves provide the virtualisation glue, with a custom kernel and a dedicated / special-cased VM, in the form of WSL2.
Not only are this and the parent comment correct, but this is the lesson that civilization will continually have to deal with, every generation, ad nauseam. This is our time and our struggle, and how we handle this and all the issues afoot will set the course for humanity.
We as a species, and our societies by proxy, are self-actualizing kicking and screaming.
It might be counterproductive to think it'll get better if people can just assume social media sites will ban/flag misinformation correctly. Maybe only lower-profile stuff will slip through, but I suspect the volume of misinformation will barely change, and users who haven't learned how to train their own filter might be more likely to believe "unflagged" content.
The languages/terms differed, but that's how software has been constructed since approximately forever, and we've always had debates on optimal size/length of lines/functions/objects/modules, etc. We've also had numerous reincarnations of binaries talking to each other over some form of RPC. What happened here was a marketing knife-war between companies in the container management space. Then someone (we'll never remember who) tried to differentiate by coining this term, which basically means "binaries with RPC".
Every binary in your /usr/bin/ is a microservice. Just type `watch date` and enjoy two microservices running, no need for containers/kubernetes :)
I have found that this applies all the way up to the loftiest levels of abstraction. For instance, once you get to a certain altitude in the architecture, we partition our problem space in terms of logical business processes. But the most obvious way to represent these discrete process domains was with... objects. So yeah. It's objects all the way up. The only difference between good and bad code is how the developers handle namespacing, drawing abstraction hierarchies and modeling business facts.
Microservices are about organization, not about requests being sent on wires throughout some abomination of a cloud infrastructure. Developing a UserService class and having the audacity to just directly inject it into your application for usage is probably one of the most rebellious things you could do in 2020. Extra jail time if you also decide to use SQLite where appropriate.
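To make the heresy concrete, here's a minimal sketch in Python of that directly-injected service backed by SQLite. Everything here (the UserService, the table, the file name) is made up for illustration; the point is just that the "service" boundary is a class, not a wire:

    import sqlite3

    # A "service" that is just a class: no network, no container, no mesh.
    class UserService:
        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

        def create_user(self, name: str) -> int:
            cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
            self.conn.commit()
            return cur.lastrowid

    class Application:
        # The audacious part: the dependency is handed over in-process.
        def __init__(self, users: UserService):
            self.users = users

    if __name__ == "__main__":
        app = Application(UserService(sqlite3.connect("users.db")))
        print(app.users.create_user("alice"))

Swapping the SQLite connection for a remote one later is a refactor, not a rewrite; the organizational boundary survives either way.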
> The only difference between good and bad code is how the developers handle namespacing, drawing abstraction hierarchies and modeling business facts.
I wish those were the only indicators of bad code, but illegible aesthetics, over-engineered complexity, under-engineered fragile modules, etc. are to be found everywhere.
I’d also add “not a shred of documentation” to the list. Not a comment, no issue-tracker IDs in the commit messages; just a confabulating pile of generically named “things doing things”.
Yes, programming keeps me straddling the line between futurism and going full luddite. Come on people, nobody is a mind reader, document your intentions in comments at the very least!
Things were so much easier 15 years ago. I see very little gain for all the complexity we have added since then. Ok we can scale easily on demand, but the vast majority of places won't need to.
Actually, we still can't scale easily on demand. Let's say you get 2x the traffic. Sure, your front-end instances in the cloud can scale to 2x the capacity, but that just moves the bottleneck over to your database.
And then you noticed that scaling an Amazon RDS database to a larger instance requires downtime...
I don't think that definition of microservice is useful, and I think promoting it will add to the confusion around the issue. You can comment on the similarities in the way modern microservices and `watch date` are architected without calling the latter microservices.
Although both words have the same last two letters, "le", the third-to-last letters are different. This produces a different syllable when pronounced.
Yes, the benefits are substantial. I used to have a static web site served out of a Dallas data center, with a great network connection, from a powerful bare metal machine that did nothing else. Sitting in Austin, it felt instant, equal to localhost speed, until I tried accessing it while traveling, especially overseas.
It's not just geographic latency you're addressing with a CDN; you're also reducing the number of network hops. It's not uncommon to experience higher latency going from SF to a San Jose datacenter just because you're on the "wrong" ISP. A good CDN usually has a POP on the same network as you.
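If you'd rather see this than take it on faith, here's a rough sketch that times TCP handshakes from your machine as a crude proxy for round-trip latency. The hostnames are placeholders; substitute your bare origin and your CDN-fronted domain:

    import socket
    import time

    def handshake_ms(host: str, port: int = 443, samples: int = 5) -> float:
        """Median TCP connect time in milliseconds."""
        timings = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=5):
                timings.append((time.monotonic() - start) * 1000)
        return sorted(timings)[len(timings) // 2]

    # Placeholder hostnames; compare the bare origin against the CDN edge.
    for host in ("origin.example.com", "cdn.example.com"):
        print(f"{host}: {handshake_ms(host):.1f} ms")

For the hop count itself, traceroute tells the rest of the story.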
I live in Vietnam and frequently give tech advice to people regarding websites they are developing. Most sites work fine, but I can tell straight away when someone has hosted their site like you have described, because it will take >5 minutes to load! And they'll be telling me that it's fine, and I have to explain: yes, it's fine if you only want people in the same country as you to access the site. If you want it to work worldwide, you'll have to do better.
Fortunately, these days that just means creating a free Cloudflare account.
In your case, are all CDNs equal? Do I just have to throw my content at the biggest provider?
Don't get me wrong. Disenfranchising non-Western visitors is the last thing I want to do, but the issue is not CDN or not, it's caching content closer to people whose ISPs don't provide sufficient service outside their own borders.
I dislike that CDNs are the only way around this and I feel like it is centralizing Internet access in an unfavorable way.
The big data centers in this region are in Singapore and I expect all CDNs have a presence there, so probably. I haven't exactly done any benchmarking though.
> the issue is not CDN or not, it's caching content closer to people whose ISPs don't provide sufficient service outside their own borders.
How would you do this without a CDN (on a small budget)?
Isn't this how it's always been? New tech shows up in enthusiast/consumer segment first, bugs are ironed out, manufacturing ramps up & yields go up, then new server parts are announced?
Besides, isn't EPYC2 the best server CPU already? There's no time pressure on AMD, they're comfortably in the lead.
Regular EPYC CPUs have too high a TDP for smaller server installations, such as so-called edge servers. EPYC Embedded is still stuck on Zen 1. Intel hasn't upgraded their comparable line of mid-range server CPUs (e.g. the Xeon D line) either, but AMD won't win on performance alone. Intel has a huge SKU lineup, much more volume, and a much richer vendor and platform ecosystem. If the vendor demand isn't there to push AMD on EPYC Embedded, Ryzen Embedded, and other market segments, AMD should build demand; otherwise they'll just recapitulate their rise & fall during the 2000s.
Cloudflare recently announced that they would be building out some data centers with EPYC CPUs. Do you believe that the situation has changed since February? They did a pretty exhaustive analysis [0] of price, performance, and power where they saw an advantage in switching away from Intel for that generation at least.
If the existing server hardware is already in a decent spot, then maybe they need to spend more resources on sales rather than making changes to keep it cutting edge.
I think Cloudflare is a different kind of "edge" service than what EPYC Embedded targets, and I would think Cloudflare uses regular EPYC chips. The "edge" that EPYC Embedded, Xeon D, etc target is, I think, more about the hardware configurations (smaller enclosures, minimal number of drives and other devices) and the type of facility (usually not a colocation facility, so power and heat are of significantly more concern). But the workloads are still very much server-class.
EPYC Embedded chips are competitive with Intel's offerings (Xeon D, etc), but as I said Intel's ecosystem is richer--for example, more, better motherboards. It's not enough that AMD's chips are competitive with Intel's. AMD has a huge ecosystem handicap, and so either they need to improve the ecosystem or sell chips that are dramatically better than Intel's, and for EPYC Embedded neither is the case. Long term a better ecosystem is necessary for AMD to survive because a broader product and customer base brings consistent income and mindshare--staying power. I would hope and assume AMD is working closely with cloud vendors on their proprietary hardware, but the results are opaque; judging by traditional channels (Supermicro, etc), AMD hasn't even begun to close the gap.
I don’t think survival is at stake. AMD powers both Xbox and PlayStation for the second generation in a row, and that ought to help keep them alive. It’s more a matter of whether they can capture enough of the market so they’re not merely “surviving”.
Fair enough. Survive was a poor choice. What I had in mind was surviving as a contender at the high end, so we can continue to benefit from competition for server-class chips. Alpha, SPARC, and POWER all lost the high-end market (their only market) to Intel at a time when Intel was inferior. AMD previously failed because Intel surpassed them, but that's because AMD couldn't leverage their initial advantage to secure their markets and thus their ability to keep investing. Without volume and mindshare, failure is inevitable. Providing the best high-end chips is insufficient to remain competitive long-term. The reasons for previous failures were complex (ISA, operating system, sales channels, etc), but ultimately it comes down to something like diversity--customer and product diversity provide buffers in terms of sales as well as changes in market direction. AMD's chips are indisputably better than Intel's right now, but even with Intel's mind-bogglingly massive fumbles they're barely sweating in terms of current and prospective revenue.
> Alpha, SPARC, and POWER all lost the high-end market (their only market) to Intel at a time when Intel was inferior
What's interesting is that the industry was so sure that Itanium / Windows NT was going to crush everything that many of them just gave up. Compaq and Silicon Graphics specifically tried to switch to Itanium, which was years late and never a huge success. They probably could have gotten another generation or two in. Might have saved SGI.
> Providing the best high-end chips is insufficient to remain competitive long-term
The reason Alpha, SPARC, and POWER failed, I believe, was that they did not support the de facto standard x86 instruction set. So the fastest, most power-efficient chip is not enough unless it can also run the most common software.
In this case "most common software" meant proprietary and nonportable operating systems and applications.
Would you buy a server that will run the software you want to use either never (Windows and Windows applications developed on Windows) or maybe, eventually, as long as the new platform is a success (only slightly more portable proprietary Unix variants)?
Maybe it is different this time because, for one, Intel and AMD are on the same architecture. But more importantly for ARM or other platforms, maybe this time is different because servers mostly run on open-source software. Given the potential revenue, I'd imagine it should be pretty straightforward to port common server software to any architecture, if it isn't already?
With the kind of market share Ryzen has gotten in custom builds, plus a several-percent increase in server (which has large margins), I don't think AMD has financial issues right now. They'll probably target GPUs next, go for the low-hanging fruit, and then slowly iterate on the rest of their products.
ARM just needs to be a little bit more competitive on performance per watt and a few other metrics. I think when Apple inevitably proves ARM can be a very powerful platform via their migration to completely ARM-based computing, we'll see a lot more movement and investment here; ~5 years is a good timeframe to start seeing serious long-term shifts to ARM. If Apple fails to do this within that timeframe, that may push things out longer. They're already pushing the limits of ARM as a platform with their custom chipsets.
I think Nvidia's long-term strategy is going to be built around this, among other things; otherwise I believe they overpaid for Arm Holdings. They'll need to make sure they can expand the market for ARM CPUs to drive ever more GPU sales in the future. They haven't been as successful getting their embedded GPUs into the ARM market, if I recall correctly.
A huge SKU lineup, according to every marketing book ever made, is a terrible idea here. I hope they don't view it as a strength. I'd be slashing like hell.
It's not a given at all, even if that's common. As a fresh counter-example, Nvidia released Ampere on TSMC 7nm for the enterprise many months before they released chips of the same architecture for consumer devices on Samsung's cheaper and significantly less dense 8nm node.
AVX512 is garbage. It incurs a massive performance hit from both frequency and mode-switching penalties. Apart from a few niche HPC and ML applications which you'll never encounter, AVX512's most compelling use cases are to drive Intel's shady market fragmentation, and to create more bullshit FP benchmarks that Intel can claim to win.
Now that we've covered both ends of hyperbole, it's maybe worth noting that a lot of CPU parallel tasks can be well accelerated by ISPC, which can make reasonably effective use of AVX512 (with the aforementioned clock speed caveat): https://www.mail-archive.com/ispc-users@googlegroups.com/msg...
Also, AVX512 is a much nicer (orthogonal) ISA than SSE or AVX2.
While there are some niche applications that need larger amounts of memory than GPUs can offer, it's worth noting that this speedup comes from making the CPU act more like a GPU, and they aren't as fast as GPUs acting like GPUs (which are essentially many 32-wide vector units, rather than AVX512's 16-wide).
Which applications, and why? Some computational ones will go significantly slower on the same number of cores if they could have kept AVX512 fed (perhaps small data in cache), but most don't spend all their time in something like GEMM. The new UK "tier 1" HPC system is all EPYC.
Automatic vectorization for AVX512 works only in the simplest of cases, and using intrinsics or writing inline assembly is beyond the scope of 99% of software projects.
There's a very small set of applications that benefit from AVX512 specifically, but more cores also speed them up.
A much more important set of applications is those sped up with GPUs or other ML accelerators. Notably the high performance of these CPUs is useful with those too, because they are great at data pipeline crunching prior to the GPU part.
Come here to build & own the gravitational.com web site and the other web properties we have. We need someone who appreciates the beauty of semantic HTML markup, elegant CSS, and minimal JS, and who appreciates good UI when they see it.
First thing I did when I clicked on comments is Cmd+F for "Electron". I was so relieved to find your comment. Electron apps are fine, but you have to pay me to use them :-)
Let's see... you're proposing that a company should just hand out their competitive advantage because some parents are unable to curb their consumerism / keeping up with the Joneses and deal with "psychology in teens"?
Nobody forces you to use iMessage. Use Snap, WhatsApp, Telegram, FB Messenger, whatever. You don't need to be upgrading with every cycle (or even every other cycle) either. I am on my 4th smartphone since 2008; replacing a battery is cheaper than replacing phones. And if your teen insists on the latest & greatest instead, that should be your problem, not Apple's.
Without their walled garden Apple is nothing but a commodity hardware maker and those do not survive for long on this side of the Pacific.
>Nobody forces you to use iMessage. Use Snap, Whatsapp, Telegram, FB messenger, whatever.
This isn't a case of fair play among competitors. iMessage, unlike the other options here, is baked into the iOS SMS client, and it's not obvious to a non-technical user how to opt out of using iMessage services. It's akin to if Android, as a default, inspected your contacts list and hijacked outgoing SMS messages for delivery via Hangouts.
Apple isn't a commodity hardware maker, as the software isn't sufficiently distinctive vs Android. Their hardware is what makes them so hot.
Also, integrating SMS with online messaging and giving people free chatting are Very Standard terms of competition for the hot and overcrowded chat app market, and it's precisely this which has gotten some people so bothered.
More to the point, Apple is not responsible for the visibly poisonous effects of classism on children, and Apple is not morally culpable merely because they indicated to you what's free and what's not. The families whose children are hurting each other should gather as a community, look each other in the eye, and ask what's happening.
Nike also serves as a symbol of class — should we blame Nike when kids laugh at no-brand shoes?
The opposite is true. In rural places, you often only have Wi-Fi by way of AT&T DSL or WildBlue/HughesNet satellite broadband. Your iMessage messages always deliver instantly. Meanwhile, the SS7 SMS messages always fail to send, because you have 0 bars of cellular service. Even in cases where there is cellular service, iMessage and other IP-based messaging is far more reliable.
iMessage is years behind any of those other chat clients. It’s their competitive advantage and yet it’s atrocious. There’s still no unsend feature ffs.
Someone just needs to meme us teenagers to switch to a better chat client.
I still use iMessage if someone sends it to me, but I’m glad to have switched to other chat clients, because I have far more affinity toward my friends than I do toward this half-forgotten messaging service.
They hosted, vetted, and distributed publicly, and provided support and developer services, including a single easy-to-use and safe purchase system, and that built the market over a decade. They have been so successful that iOS developers make nine times as much as Android developers per device sold.
Walmart built, vetted, distributed, and provided support to Procter & Gamble products over many decades. But saying that Walmart is paying Procter & Gamble would also be a gross mischaracterization.
The App Store model is as old as civilization: a middleman reselling stuff. Some middlemen are more effective than others, and true, Apple is one of them. But that doesn't change the characterization of the relationship. They don't pay developers for their service; they resell their stuff while keeping hefty profit margins.
Grocery stores were an enormous business before Walmart; it just made them more efficient. And it opened doors for lots of smaller products to reach far more customers.
The entire mobile app market was not even one hundredth the size of the App Store before it was created.
The walled garden Apple created made for the safest and easiest app-purchase environment ever, which is why it attracts the highest spending by app customers and the highest-spending app customers.
If Apple had taken the Google play route that $35B a year would be closer to $4B a year.
Why is this always added as if it's relevant? If I'm selling an app on both stores I care about how much money I can make in total, how many Android phones have been sold globally is just trivia.
Because it directly reflects how successful the walled garden Apple has built is for developers. It’s attracted great customers, and made them feel very safe and comfortable with purchasing apps.
If you don’t like the ratio, just say App Store sales are 50% higher than Android.
That $35B didn’t exist until Apple created the iPhone and built its walled garden.
"Pandering to Apple" on the matter of whether your messaging is free and colored blue or green? As opposed to the fundamental moral question of how families and children treat each other in poisonous, classist ways?
Next we'll say that accepting Nike's swoosh is pandering to Nike, and look at all the poisonous effects Nike's swoosh is having on kids with no-brand shoes.
Not much else? Other than consistently building some of the best consumer tech products in history that are widely used and loved as evidenced by this thread?
We shouldn't. Buying Apple products is not mandatory.
Also, apologies for getting a bit political, but I came from a country that got screwed up for 100 years because a small group of people who loved "corporation making a few people wealthy beyond all imagination" language got into power in 1917, so I'm quite sensitive to it. What if instead we say:
"a group of 47,000 Americans who make anyone a little bit wealthier if they buy a bit of AAPL"?
Regulating monopolies is not full communism. One of the problems with Soviet immigrants like yourself is the tendency to have extremist, radical capitalist views, the most famous example being Ayn Rand. But history has shown extreme unregulated capitalism doesn't work either; the best system is well-regulated capitalism with democratic oversight of the regulators.
Frankly, this makes my blood boil. Full disclosure: I used to work at Sendgrid's competitor (Mailgun) and I'm quite familiar with what's happening under the hood. You allowed spam to be sent from your account. You poisoned the IP space (not just one address you were sending from, but the IP block, affecting others) with spammy reputation. These blocks are expensive and hard to acquire (due to IPv4 shortage) and at this point you have caused more damage to Sendgrid and their customers, than your account value can probably compensate for. What kind of "support" do you expect at this point? You rented a hotel room, gave the key to someone who erected a meth lab there, and you're unhappy with the hotel's reaction?
Sending email on behalf of other people is hard work. The margins are not great, the ratio of spam-to-ham for new accounts is insane, and they're constantly under pressure between their customers who pay very little but want to spam the entire universe (I am sorry but most of you do!) and ESPs not wanting that traffic.
I don't think I did a great job of making clear that my main gripe with their support (taking months) was from a separate, personal account which — from day one — was unable to send non-spam, transactional email to anyone with an Outlook address. In my first example, the account's (for previous employer — not "me") right to support I guess is more debatable...
But I don't think your analogy of the hotel room is totally representative here. Without knowing how someone has had their credentials hacked, it's much more prudent to assume, in your analogy, that the room key was pickpocketed / stolen. And then it becomes more of a grey-area as to how much support one can expect.
That being said, I do appreciate your insight on account value, as compromised accounts clearly do constitute a burden and don't end up paying for themselves (even if they aren't "to blame").
> And then it becomes more of a grey-area as to how much support one can expect.
Lol no it doesn't. The comment you replied to uses an excellent example: this isn't someone sneaking into your room with a stolen key and messing it up; they burned the room down.
Their support isn't coming after you for the value of the room, but they're also politely telling you they aren't about to replace your belongings or give you a new room.
But notice how that doesn't happen very often. Because hotel keys almost never have the room numbers on them. This is basic security the hotel provides to handle the inevitable fact someone is going to lose their credentials. The Sendgrid equivalent would be some mechanism to prevent lost credentials leading to abuses - such as 2FA.
I'm actually intrigued now that I think about this.
What do you think typical hotel keys actually are? They could be arbitrary one-shot random tokens authorised for your stay. When you check out, your token, even if you've cloned it, is now useless. This would superficially match the UX you see used, in which each mag-stripe card is rewritten before it's handed to you when you check in.
But from what I can tell, the typical practice is actually that the card doesn't have a random token; it encodes the room number and period of stay. If you wrote a new card with a different room number and period, it would work, although of course that doesn't make such a thing legal to do.
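The difference between those two schemes is easy to sketch in Python. This is purely illustrative of the designs described above, not any real lock vendor's format:

    import secrets
    from datetime import date

    # Scheme 1: one-shot random token. The front desk keeps the lookup table;
    # the card itself reveals nothing, and checkout simply deletes the entry.
    valid_tokens: dict[str, int] = {}

    def issue_random_key(room: int) -> str:
        token = secrets.token_hex(16)  # unguessable
        valid_tokens[token] = room
        return token

    # Scheme 2: self-describing card. The lock trusts what's on the stripe,
    # so anyone who can write "room|dates" can mint a working key.
    def issue_encoded_key(room: int, checkout: date) -> str:
        return f"{room}|{checkout.isoformat()}"  # guessable and forgeable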
I think the lack of a human-readable number on your typical hotel keycard is because it was easier/cheaper, not because of some security insight. I would be happy to be proved wrong.
Certainly when I've stayed at very small hotels with actual keys, the keys were marked with a room number. These hotels also really wanted the keys back when you check out of course, not because they think you'll come back later and enter a room that's now empty or has a different guest (at such a small hotel that would not be subtle) but because they need it for the next guest.
Anyway. Sendgrid's 2FA doesn't actually block lost credentials. If you have Sendgrid 2FA and use it to get a token for their API, and then the new guy puts it on Pastebin, your token will now be abused to send spam.
The main benefit is that the random tokens aren't guessable, whereas your brilliant choice of Sendgrid password, "sendgrid", is very guessable. Yes, this is some very weak sauce.
Maybe the analogy is getting a little long in the tooth here, but it's been 4 years since you needed basic auth on Sendgrid, so hopefully your username and password can't be found together without a targeted attack (kind of like how your room key getting stolen should only lead to your room with a targeted attack).
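For what it's worth, the post-basic-auth flow is just a scoped API key pulled from the environment. A minimal sketch with SendGrid's official Python library (the addresses here are placeholders):

    import os
    from sendgrid import SendGridAPIClient
    from sendgrid.helpers.mail import Mail

    # The key lives in the environment, not in source control or a paste.
    message = Mail(
        from_email="you@example.com",       # placeholder
        to_emails="recipient@example.com",  # placeholder
        subject="Hello",
        plain_text_content="Sent with an API key, not a username/password.")
    sg = SendGridAPIClient(os.environ["SENDGRID_API_KEY"])
    response = sg.send(message)
    print(response.status_code)

A leaked key is still abusable, as noted above, but at least it can be scoped and revoked without resetting the account password.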
On the other hand, many hotels will give you a room number on the sleeve of the card. It's up to you to do the right thing and get rid of it properly.
Kind of like Sendgrid has 2FA and it's up to you to set it up.
I mean, I get it, default behaviors don't rely on users doing the right thing, phone 2FA doesn't count even though it would have probably saved OP just fine, etc. If Sendgrid was trying to come after OP for losing their credentials those would all be big factors in me wanting to boycott Sendgrid...
But they're not. They're pretty much just telling OP "it is what it is after you let your account get stolen, eventually you might recover from this but you don't get a free second chance"
I really don’t understand this. I’ve read the parent comment 3 times and can’t see how you come to the “gave the key to someone” analogy.
Are we interpreting OP differently? From all I can read of available information, they seem to have been reasonable. Sounds more like their neighbor or the previous tenant had a meth lab.
Especially given that they haven't had working 2FA for the longest time, it's not fair to say that OP allowed it or gave someone the keys.
Stretching the analogy, it's more like OP had the room keys lying on a table at a cafe, someone managed to copy the key after snapping a photo and then abused the room while OP was out for the night, and then OP notified the hotel staff once they got back and realized what was going on.
In both cases, OP was unaware of anyone having unauthorized access, and it wouldn't have happened (as easily) if the business had better security.
Apparently they're doing pretty well. Don't expect them to change course. But this created the opportunity for others. I have migrated to Affinity photo + Capture One. No regrets.
How did you find the migration? I’ve been resisting Lightroom upgrades as I don’t want the cloud stuff. But as raw support for new cameras falls off, it’s getting harder, since the two-step DNG import is such a pain.
Moving to Capture One or something else seems worthwhile. Tried Darktable but wasn’t a big fan.
How’s its catalog? One thing I like about Lightroom is the catalog is just files on disk vs a giant database in say Aperture. I like having my own file layout and would hate to give that up.
Lightroom’s is. Not sure about Capture One. In Aperture’s case, the photos and metadata are in one giant blob. I’d want to avoid that: manage the files my way, then have a separate metadata store.
The downside of the SQLite DB is no network editing. I’d love to be able to edit from multiple machines without a cloud subscription or super-slow NFS.
Yeah, I use sessions, which basically means there’s no catalog. I just keep the files where I like and open those folders with Capture One when editing.
I also like the EIP package format. You can package an image into a zip of the original image and all Capture One data. Then it’s just one file for your archives but everything you need is there for the future. Even if you can’t open the Capture One bits at some point, the original is still in there.
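Since an EIP is described above as a zip of the original plus the Capture One data, recovering the original shouldn't even need C1. A rough sketch with Python's zipfile, where the .eip filename is hypothetical:

    import zipfile

    # Hypothetical archive name; an .eip is a zip containing the original
    # raw file alongside Capture One's settings and adjustments.
    with zipfile.ZipFile("IMG_0001.eip") as eip:
        for name in eip.namelist():
            print(name)
        # The original raw is among the members, so it survives even if
        # the Capture One metadata becomes unreadable some day.
        eip.extractall("recovered/")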
I bought Affinity Photo and I am not enjoying the migration. Having used Photoshop for 20 years, there is so much muscle memory for a lot of tasks that were quick and are much slower now. It's really hard to learn and easy to feel disgruntled when you know you could have been done in a few seconds in Photoshop. I'm considering running CS6 in a VM or paying for the damn subscription.
> I bought Affinity Photo and I am not enjoying the migration.
How long have you been working with Affinity? It's not Adobe's software and it's not trying to be, so there is always going to be some period of adjustment while you get used to a new way of doing things. IME, that passed very quickly, and within a few hours I was doing most basic tasks as readily in Affinity products as I would have done in CS/CC ones, but my work is usually more in Designer than Photo.
There is still quite a bit of useful functionality present in CC but missing in Affinity, which is a more serious problem for power users who have workflows and presets and plug-ins and so on set up with the Adobe products. But the Affinity suite seems to be developing rapidly, so perhaps in time this gap will close too.
Lightroom -> Capture One was an upgrade. Even when the learning curve is taken into account, I found C1 to be a superior piece of software. It's faster, its color editing is better, and the default look it gives me was just better, at least for my camera.
Photoshop -> Affinity Photo was painful. I'd estimate the pain to be "moderate". It's missing some features I want (support for color calibration targets, some layer blending modes) and some others are not as great (no smart sharpening). Affinity's UX for working with layers is surprisingly painful, and the attention to detail in the UI is poor. But since 90% of my time is spent in C1 anyway, I found it tolerable.
Capture One is more annoying in a bunch of user interface ways (for example it doesn't default to importing RAWs for some unknown reason) but it is learnable. Expect some time watching how-to videos on YouTube if you want to achieve proficiency.
Not OP, but I recently switched to the same combo after trying darktable, RawTherapee, and digiKam. The open-source solutions had some random problems and strange UIs, and they lock up when you change slider values, at least on my laptop. Capture One doesn't, and the overall polish is decent. Affinity Photo lacks some Photoshop features like content-aware fill AFAICT, but it's good enough for my amateur needs. Not going back; screw you, Adobe.
Yeah, in my opinion. The catalog gives you loads of power: it eases the culling process, lets you easily export batches of photos with recipes, and lets you do searches against metadata.
It also has 'sessions', which are aimed at individual shoots or projects. I suspect most pros use them instead of catalogues. It's essentially a self-contained directory structure that C1 understands.