In the early 2000s, when we were still in high school, a friend of mine told me it was pointless to choose a career in SWE because eventually all the software would be written and there would be nothing left to do.
Well, we both ended up in CS. It's a funny take nowadays, but back then I wasn't so sure, and it made me worry a bit.
I worked on Mono a lot back in the early 2000s (back in the SVN days, even before it moved to Git). This move makes a lot of sense. Things have evolved a lot over the years, and Mono's legacy goal of being a portable CLR (.NET) runtime for platforms Microsoft didn't care about doesn't make much sense today.
Mono made a lot of sense for running in places where the full .NET didn't: full AOT environments like the iPhone, where you can't JIT, or random architectures that once mattered for Linux but don't anymore (Alpha, Itanium, PPC, MIPS, etc.). When Microsoft bought Xamarin (which itself was born out of the ashes of Novell's shutdown of the Mono effort) and started the .NET Core effort to make .NET itself more portable and less of a system-provided framework, merging in a lot of what Mono did, a single, more focused project made more sense.
Mono was still left out there to support the edge cases where .NET Core didn't make sense: mostly being a backend for Wine in some cases, some GNOME desktop stuff (via GTK#, which is pretty dead now), and older niche use cases (Second Life and Unity still embed Mono as a runtime for their systems). The project was limping along, though, sharing a standard library with .NET but keeping a separate runtime after much merging. Mono's runtime was always a little more portable (C instead of C++) and more accessible to experiment with, but we need that less and less. It's still perfect for Wine, though, so having it live on in Wine makes sense. It's a natural fit.
Is there somewhere where someone new to the ecosystem can get a simple introduction to all of these different terms and which ones are still relevant today? I looked into .NET somewhat recently and came away with the apparently mistaken impression that Mono was how .NET did cross-platform. I guess I must have been reading old docs, but I'm pretty sure they were at least semi-official.
Is there good documentation somewhere for getting set up to develop with modern .NET on Linux?
For modern .NET, you don't need to know anything about legacy terms like Mono, .NET Core, .NET Framework, or .NET Standard. All you need is the .NET 8 SDK. It's fully cross-platform and installs support for both C# and F#.
For example, just download the .NET 8 SDK (usually very easy on most platforms) and then run `dotnet fsi` to get into an F# REPL.
This is from Mint 22. MS does have its own PPA though.
$ apt search dotnet
p dotnet-apphost-pack-6.0 - Internal - targeting pack for Microsoft.NET
p dotnet-apphost-pack-7.0 - Internal - targeting pack for Microsoft.NET
p dotnet-apphost-pack-8.0 - Internal - targeting pack for Microsoft.NET
p dotnet-host - dotNET host command line
p dotnet-host-7.0 - dotNET host command line
p dotnet-host-8.0 - .NET host command line
p dotnet-hostfxr-6.0 - dotNET host resolver
p dotnet-hostfxr-7.0 - dotNET host resolver
p dotnet-hostfxr-8.0 - .NET host resolver
p dotnet-runtime-6.0 - dotNET runtime
p dotnet-runtime-7.0 - dotNET runtime
p dotnet-runtime-8.0 - .NET runtime
p dotnet-runtime-dbg-8.0 - .NET Runtime debug symbols.
p dotnet-sdk-6.0 - dotNET 6.0 Software Development Kit
p dotnet-sdk-6.0-source-built-arti - Internal package for building dotNet 6.0 So
p dotnet-sdk-7.0 - dotNET 7.0 Software Development Kit
p dotnet-sdk-7.0-source-built-arti - Internal package for building dotNet 7.0 So
p dotnet-sdk-8.0 - .NET 8.0 Software Development Kit
p dotnet-sdk-8.0-source-built-arti - Internal package for building the .NET 8.0
p dotnet-sdk-dbg-8.0 - .NET SDK debug symbols.
p dotnet-targeting-pack-6.0 - Internal - targeting pack for Microsoft.NET
p dotnet-targeting-pack-7.0 - Internal - targeting pack for Microsoft.NET
p dotnet-targeting-pack-8.0 - Internal - targeting pack for Microsoft.NET
p dotnet-templates-6.0 - dotNET 6.0 templates
p dotnet-templates-7.0 - dotNET 7.0 templates
p dotnet-templates-8.0 - .NET 8.0 templates
p dotnet6 - dotNET CLI tools and runtime
p dotnet7 - dotNET CLI tools and runtime
p dotnet8 - .NET CLI tools and runtime
p libgtk-dotnet3.0-cil - GTK.NET library
p libgtk-dotnet3.0-cil-dev - GTK.NET library - development files
dotnet-sdk-8.0 should have the rest of what you need downstream from there. For other libraries and versions, you should be able to use NuGet with your project directly.
I've been using the script installer intended for CI/CD, as I actually like that installer more; it's the only one that really supports multiple versions correctly.
What's unfriendly about just clicking through the options? Anytime I want to install .NET, I just go to that exact documentation, click on the distribution I want (usually Ubuntu), and then just click on the version (https://learn.microsoft.com/en-us/dotnet/core/install/linux-...). I almost always use Microsoft's feeds, though, so as not to rely on the middleman of the Ubuntu package manager feeds.
Ubuntu is a subpar package maintainer, but in well-run distros that middleman who does the packaging makes an effort to ensure you are getting a stable, performant package, and tries to catch errata or abusive practices that upstream starts pushing (say, Microsoft opening Edge when you run wget or curl in the terminal, rather than calling the real wget or curl).
To a point. Making cross-platform native desktop apps is still in the hands of third-party vendors such as Avalonia and Uno. MAUI was supposed to fix that oversight, to less than stellar results.
If there were an old version of C that only worked on one platform but had a graphical toolkit in its standard library, and a new version of C that is cross-platform but whose graphical toolkit is now ambiguously still sort of part of the standard library yet still not cross-platform (with no realistic alternative), then yes, it would be valid to object that C is not really cross-platform.
Back when .NET was first launched, it was advertised as the new way of making desktop applications on Windows.
Visual C# made it very easy to design GUI interfaces.
So this "it's all for backend now" notion is surprising.
.NET is "Microsoft Java". Like Java, it was designed to do everything, but as desktop development died (and mobile development was locked down by Apple and Google, limiting it to their corporate languages), it pivoted towards networked applications.
They were legally forbidden from going the Embrace-Extend-Extinguish route there, so they had to build their own version from scratch. C# exists because J++ couldn't.
Is Kotlin the most "active", "hot", or "up-and-coming" competitor? Possibly. But the "largest"? Its deployed footprint and popularity are nowhere close to Java's at this point in time.
No, and it's not even close. Kotlin only has JetBrains Compose (I presume Kotlin Multiplatform is the same thing). It is also subject to the quirks and specifics of JVM implementations, build systems, and package management. Kotlin/Native partially bypasses this, but its performance is a factor of 0.1-0.01x vs. OpenJDK (if there is newer data, please let me know). This is very unlike NativeAOT, which is on average within 90% of CoreCLR JIT and is even a performance improvement in a variety of scenarios.
C# and F# get to enjoy integration that is much closer to the metal, as well as a much richer cross-platform GUI framework ecosystem with a longer history.
There are more than 10 sibling and gp comments that exhaustively address the GUI and other questions :)
> That's a massive advantage over the arcane package management and build systems of .NET.
Very few languages ever achieve a build and package management system as mature and usable as the Java ecosystem.
I've been waiting for 12 years for .NET to match Java's ecosystem, and it's still not there yet.
If you want to sell me on "advantages" of invoking Gradle or Maven over
dotnet new web
dotnet run
curl localhost:port
or
dotnet new console --aot
echo 'Console.WriteLine($"Right now is {DateTime.Now}");' > Program.cs
dotnet publish -o {here goes the executable}
or
dotnet add package {my favourite package}
I suppose you would actually need 12 years of improvements, given how slowly (if ever) these things get resolved in Java land.
Also, what's up with Oracle suing companies for using an incorrect JDK distribution that happens to come with hidden license strings attached?
Well, that's where the problem lies, isn't it? The ecosystem for .NET is extremely limited compared to what's available for the JVM
And the way JVM packages are distributed, with native libraries, BOMs and platforms allows more versatility than any other platform.
The build system may be better in dotnet, but that only really matters for the first 10 minutes. Afterwards, the other tradeoffs become much more important.
I don't think the "JVM is more popular" argument does justice to Java's (and Kotlin's) strengths. With this reasoning, you could also say "C++ is more popular for systems programming", but that doesn't stop developers from switching to Rust, Zig, or even C# as a wider-scope and easier-to-use language that has gotten good at it.
Nonetheless, you could make this argument for select Apache products, but that's Apache for you. It does not hold true for the larger ecosystem and, at the end of the day, quantity is not quality, otherwise we would've all been swept by Node.js :)
Same applies to "packages that bundle native libraries".
First, they are always maintenance-heavy to manage, with an ever-growing matrix of platforms and architectures. Just x86 alone is problem enough, as all kinds of codecs perform wildly differently depending on whether AVX2 or AVX-512 is available vs. SSE4.2, or even SSE2 without EVEX. Now add ARM64 with and without SVE2 to the mix. Multiply this by 2 or 3 (if you care about macOS or FreeBSD). Multiply Linux targets again by musl and glibc. You get the idea. This is a worst-case scenario, but it's something Java is not going to help you with; it will only make your life more difficult, for the reason below.
There is also the exercise of writing JNI bindings. Or maybe using the Java FFM API now, which still requires you to go through separate tooling and a build stage, deal with the off-heap memory management API, and still does not change the performance profile significantly. There's a reason it is recommended to avoid native dependencies in Java and port them instead (even with performance sacrifices).* Green threads will only exacerbate this problem.
Meanwhile
using System.Runtime.InteropServices;

// Bind libc's putchar via P/Invoke; no wrapper codegen or separate build step needed.
[DllImport("libc", EntryPoint = "putchar")]
static extern int PutChar(int c);

var text = "Hello, World!\n";
foreach (var c in text) PutChar(c);
since C# 2 or maybe 1? No setup required. You can echo this snippet into Program.cs and it will work as is.
(I'm not sure if the binding process on old Mono was any different? In any case, the above has been a thing on Linux for at least 8 years.)
* Now applies to C# too, but for a completely different reason: you can usually replace data-crunching C++ code with a portable, pure C# implementation that retains 95% of the original performance while reducing LOC count and complexity. A huge maintenance burden reduction, and "it just works" without having to ship extra binaries or require users to pull extra dependencies.
> There is also the exercise of writing JNI bindings. Or maybe using the Java FFM API now, which still requires you to go through separate tooling and a build stage, deal with the off-heap memory management API, and still does not change the performance profile significantly. There's a reason it is recommended to avoid native dependencies in Java and port them instead (even with performance sacrifices).* Green threads will only exacerbate this problem.
import com.sun.jna.Library;
import com.sun.jna.Native;

public interface MSVCRT extends Library {
    MSVCRT Instance = (MSVCRT) Native.load("msvcrt", MSVCRT.class);
    void printf(String format, Object... args);
}

public class HelloWorld {
    public static void main(String[] args) {
        MSVCRT.Instance.printf("Hello, World\n");
        for (int i = 0; i < args.length; i++) {
            MSVCRT.Instance.printf("Argument %d: %s\n", i, args[i]);
        }
    }
}
> "C++ is more popular for systems programming"
Sure, and it's got many great libraries – but actually using those is horrible.
You're absolutely right about Rust though. crates.io and cargo are amazing tools with a great ecosystem.
The primary issue I've got with the .NET ecosystem is actually closely related to that. Because it's so easy to import native libraries, often there's no .NET version of a library and everyone uses the native one instead. But if I actually want to build the native one I've got to work with ancient C++ build systems and all the arcane trouble they bring with them.
> Same applies to "packages that bundle native libraries".
You seem to have misunderstood. The fun part of the maven ecosystem is that a dependency doesn't have to be a jar, it can also be an XML that resolves to one or multiple dependencies depending on the environment.
> The primary issue I've got with the .NET ecosystem is actually closely related to that. Because it's so easy to import native libraries, often there's no .NET version of a library and everyone uses the native one instead. But if I actually want to build the native one I've got to work with ancient C++ build systems and all the arcane trouble they bring with them.
What is the reason to continue making statements like this one? Surely we could discuss this without making accusations out of thin air? As the previous conversation indicates, you are not familiar with C# and its toolchain, and were wrong on previous points, as demonstrated. It's nice to have back-and-forth banter on HN; I get to learn about all kinds of cool things! But this happens through looking into the details, verifying whether prior assumptions are still relevant, reading documentation, and actually trying out and dissecting the tools being discussed to understand how they work - Golang, Elixir, Swift, Clojure, etc.
> You seem to have misunderstood. The fun part of the maven ecosystem is that a dependency doesn't have to be a jar, it can also be an XML that resolves to one or multiple dependencies depending on the environment.
Same as above.
> JNA
I was not aware of it, thanks. It looks like the closest (even if a bit more involved) alternative to .NET's P/Invoke. A quick search indicates that it comes with a huge explicit performance tradeoff, however.
This uses the Win32 API. I will post numbers in a bit. .NET interop overhead in this scenario usually comes in at 0.3-2 ns (i.e., the single CPU cycle it takes to retire the call and branch instructions), depending on the presence or absence of a GC frame transition, which library loader was chosen, and dynamic vs. static linking (albeit with JIT and dynamic linking, the static address can be baked into codegen once the code reaches Tier 1 compilation). Of course, the numbers could be presented in a much more .NET-favored way by including the allocations that Java has to do in the absence of structs and other C primitives.
> A quick search indicates that it comes with a huge explicit performance tradeoff, however.
That's definitely true, but it should be possible to reimplement JNA on top of the new FFM APIs for convenient imports and high performance at the same time.
> Of course the numbers can be presented in a much more .NET-favored way by including the allocations that Java has to do in the absence of structs and other C primitives.
Hopefully Project Valhalla will allow fixing that, the current workarounds aren't pretty.
I fully agree though that .NET is far superior in terms of native interop.
> As the previous conversation indicates, you are not familiar with C# and its toolchain,
I've been using .NET for far over a decade now. I even was at one of the hackathons for Windows Phone developers back in the day.
Sure, I haven't kept up with all the changes in the last 2-3 years because I've been so busy with work (which is Kotlin & Typescript).
That said, it doesn't seem like most of these changes have made it that far into real world projects either. Most of the .NET projects I see in the real world are years behind, a handful even still targeting .NET Framework.
> were wrong on previous points as demonstrated.
So far all we've got is a back and forth argument over the same few points, you haven't actually shown any of my points to be "wrong".
> I've been using .NET for far over a decade now. I even was at one of the hackathons for Windows Phone developers back in the day.
This conversation comes up from time to time. It is sometimes difficult to talk to developers whose perception of .NET predates .NET Core 3.1 or so; Windows Phone and its tooling are older still. I am sad UWP has died, the ecosystem needs something better than what we have today, and the way Apple does portability with Mac Catalyst is absolutely pathetic. In a better timeline there exists an open, multi-platform, UWP-like abstraction adopted by everything. But those were other times, and I digress.
The package distribution did not change significantly, besides small things like not having to write a .nuspec by hand in most situations. NuGet was already good and far ahead of the industry at the time it was introduced.
The main change was the switch to SDK-style project files. Kind of like Cargo.toml, but XML.
Adding a file to a NuGet package (or anything else you build) is just adding a <Content ... /> item to an <ItemGroup>.
As you can see, it is possible to make definitions conditional and use arbitrary information provided by the build system. It is very powerful. I don't know what made you think that I assume anything about .jar files.
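A minimal sketch of what that looks like in practice (the runtime identifier and file name here are invented for illustration; Pack/PackagePath are the standard SDK-style metadata for including content in a NuGet package):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <!-- Only include the Linux build of a native helper when publishing for linux-x64 -->
  <ItemGroup Condition="'$(RuntimeIdentifier)' == 'linux-x64'">
    <Content Include="runtimes/linux-x64/native/libfoo.so"
             CopyToOutputDirectory="PreserveNewest"
             Pack="true"
             PackagePath="runtimes/linux-x64/native" />
  </ItemGroup>
</Project>
```

The same item then flows into the package produced by `dotnet pack`, no separate packaging manifest required.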
Together with the <PublishAot> property, invoking 'dotnet publish -o .' calls into Cargo to build a static library from Rust, then compiles the C# project, then compiles the produced .NET assemblies to native object files with ILC (the IL AOT Compiler), and then calls the system linker to statically link the .NET object files and the Rust object file into a final native binary. The calls across interop, as annotated, become direct C ABI calls plus a GC poll (a boolean check; multiple checks may be merged, so less than a branch per call).
This produces just a single executable that you can ship to users. If you open it with Ghidra, it will look like weird C++. This is a new feature (.NET 7+), but even without NativeAOT it was already possible to trim and bundle CIL assemblies into a single executable together with the JIT and GC. As far as I'm aware, the closest thing Java has is GraalVM Native Image, which is even more limited than NativeAOT at the present moment (the IL linker has improved a lot and needs far fewer annotations, most of which can be added as attributes in code, and the analyzer will guide you so you don't need trial and error). And the project that allows embedding bytecode in the .NET trimmed single-file style in Java is still very far from completion (if I understood it right).
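For reference, the NativeAOT side of such a setup is driven almost entirely by the project file. A minimal sketch (library and path names are hypothetical; the Rust build itself would hang off a custom pre-build target, omitted here):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <!-- Compile IL ahead of time with ILC instead of shipping a JIT -->
    <PublishAot>true</PublishAot>
  </PropertyGroup>
  <ItemGroup>
    <!-- Resolve [DllImport("mylib")] calls at link time as direct C ABI calls... -->
    <DirectPInvoke Include="mylib" />
    <!-- ...against a static library produced by cargo -->
    <NativeLibrary Include="rust/target/release/libmylib.a" />
  </ItemGroup>
</Project>
```

With that in place, `dotnet publish -r linux-x64 -o out` emits one statically linked native executable.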
I think https://two-wrongs.com/dotnet-on-linux-update is more or less representative of unbiased conclusions one makes when judging .NET by its merits today. You can always say "it used to be bad". Sure. It does not mean it still is, and the argument is irrelevant for greenfield projects, which is what I advocate C# is the better choice for anyway.
> I fully agree though that .NET is far superior in terms of native interop.
This is not limited to native interop. At its design inception, C# was supposed to replace C++ components at MS. Then, in C# 2, a focus group including Don Syme, if I'm not mistaken, pushed for generics and other features. Someone posted a history bit here on HN.
This, plus influence from projects like Midori (spans, struct improvements) and the subsequent evolution (including the existence of Mono), especially after it stopped being .NET Framework and became .NET, resulted in a language with a much wider scope of application than most other GC-based languages, including Java, particularly around low-level tasks (which is also why it's popular in the gaming industry).
Unfortunately, the perception of "another Java" hurts the ecosystem and discourse significantly, as the language and the platform are very unlike this claim.
> .NET Multi-platform App UI (.NET MAUI) apps can be written for the following platforms:
> - Android 5.0 (API 21) or higher is required.
> - iOS 11 or higher is required
> - macOS 11 or higher, using Mac Catalyst.
> - Windows 11 and Windows 10 version 1809 or higher, using Windows UI Library (WinUI) 3.
Okay, where's Linux? That's what Mono was originally made for and where Mono really shines.
Also, the development experience isn't great either:
> - If you are working on Linux, you can build and deploy Android apps only
> - You need a valid Visual Studio or IntelliCode subscription
The getting started guide only exists for Windows and macOS and the forum post announcing experimental Linux support is full of caveats.
I don't think you and I would agree on what "cross-platform" means, especially in the context of Mono being donated to Wine, which is a heavily Linux-centric discussion topic.
> - If you are working on Linux, you can build and deploy Android apps only
> - You need a valid Visual Studio or IntelliCode subscription
You don't: https://marketplace.visualstudio.com/items?itemName=ms-dotne... (DevKit, which is the licensed one, is completely optional - it gives you VS-style solution explorer. You can already get it with e.g. F#'s Ionide that works for any .NET file in the solution, though I use neither)
While I don't have much direct experience with it (it was easy to migrate my personal projects), the idea seemed sound. It seemed like a way to encourage people to write libraries against the new .NET Core (at the time) while still allowing those libraries to be used in .NET Framework: a sort of bridge for people stuck on .NET Framework.
Any support for open source or cross-platform stuff was a bulwark against claims of monopoly abuse, but none of it worked well enough to be a true replacement. Mono worked for some purposes, but it was far from the first party support cross-platform .NET gets today. Nowadays it sounds like .NET Core + third-party GUI libraries is the way to go.
> Nowadays it sounds like .NET Core + third-party GUI libraries is the way to go.
For reference for those unfamiliar with the terms:
.NET Core was the name given to the cross-platform fork of the .NET runtime.
It was forked from .NET Framework 4.x and dropped support for a lot of things in its first versions.
It ran on various distributions of Linux and on macOS.
At the same time, there were forks of other libraries/frameworks in the .NET ecosystem to create 'Core' variants. Often these dropped support for legacy parts of their code so that they could run on Core.
Later versions of .NET Core brought back support for many of the things that had been dropped.
.NET Core and .NET Framework had stand-alone versions until .NET Core was renamed to .NET and became .NET 5.
So, if you want to do the most modern cross-platform C# you would use .NET 9.
More or less. Any version of .NET >= 5 is cross-platform and is a direct descendant of the "Core" side of the fork, so it has no "full framework, Windows only" variant.
It is "Core" in a lineage sense, but there's no need to make that distinction any more. The term "Core" is out of date, because the experimental "Core" fork succeeded, and became the mainstream.
I've been away from Windows development for a while, so I missed that shift. I knew it was coming, since moving functionality to the open-source side seemed to be Microsoft's target (with some skeptics doubting it, understandably). I didn't know it had already happened.
The shift is slow, but it has been ongoing for years, and it is pretty much wrapping up now. .NET 5 was released in November 2020, and that was the "beginning of the end" of the shift.
For what I do, it's not really "Windows development" in any meaningful way. It is business functionality with HTTP, message queues etc, developed on mostly Windows laptops, and deployed to mostly Linux instances on the cloud. Not that the host OS is something that we have to think about often.
For this, .NET 3.x ("the full framework, Windows-only version") services are regarded as very much legacy, and I wouldn't go near one without a plan to migrate to a modern .NET version.
However, YMMV and people are also making windows desktop apps and everything else.
Quantity of languages might be less important than: how many needs are served by those languages, whether the ecosystem is dynamic enough to keep expanding served niches, and whether the culture and community is likely to produce language support for a niche that matters to you ever or on a realistic timeline. The JVM does appear to have a lot more niches covered, but you can still do all the things those languages do in what's available for the CLI.
I don't know much about the current state of CLI and .NET beyond what I've read here, but it sounds like it's dynamic enough to keep expanding. I also don't know enough about the long tail of niche languages supported by each to know which direction they're headed.
That's the situation with the tools used for music production. In theory, any DAW (Digital Audio Workstation) can make any kind of music. In practice, they all move toward different kinds of music, and you'll run into increasing friction as you do weirder or more complex stuff if you pick the wrong DAW. Cubase can do electronic music, but you're better off with FL Studio or Live. Live and FL Studio can do orchestral, but you're better off with Cubase.
And I'd guess there's a similar dynamic with CLI and JVM and the languages that target them.
It's a fork with a lot of modifications (mostly removing deprecated stuff and making it cross-platform). You can still see a lot of ancient stuff in the sources, such as referring to the base Object class as a "COM+ object" (.NET was originally envisioned as a successor to COM).
>An early name for the .NET platform, back when it was envisioned as a successor to the COM platform (hence, "COM+"). Used in various places in the CLR infrastructure, most prominently as a common prefix for the names of internal configuration settings. Note that this is different from the product that eventually ended up being named COM+.
Correct, the bytecode wasn't even 1:1 compatible. They then brought over missing pieces, and consolidated .NET Framework features into .NET Core, thus becoming just .NET to end the dumb naming war, since everyone calls it .NET anyway...
Good write-up that wonderfully encapsulates how stupid Microsoft's naming is; you didn't even mention .NET Standard.
I love .NET. It’s a great stack, especially for backend web apps. Blazor is a great SPA framework too. But I loathe how Microsoft continue to handle just about everything that isn’t the framework and C# / F#. It’s laughable.
Oh don’t get me wrong - I wasn’t criticising your write up. It was concise and still relevant.
It’s just funny for newcomers to peel back the onion more. Writing a source generator? Target .NET standard 2.0 (not even 2.1) for a whole host of reasons.
The ".NET" label was applied to a bunch of things at Microsoft.
It was also an early name given to their social networking / IM things.
But for the last 20-ish years it's really only been applied to things related to the .NET Framework.
So, yes - Visual Basic .NET is a language: it's the language that replaced Visual Basic 6. It compiles to the Intermediate Language (IL) that the Common Language Runtime (CLR) executes. There are other languages that compile to IL too, like C# and F#.
The .NET Framework is really a bunch of libraries and tools that are packaged together.
The .NET Standard is a specification that allows you to build a library against a known set of supported libraries and IL/CLR features.
So, yes, depending on which specific part you're referring to - it's all of those.
The "Xbox Series X" is such a nonsensical name that only a marketing department could come with it. And this entire line of names exists solely because someone thought that nobody would buy a "Xbox 2" instead of a "PlayStation 3".
Because X's mean moar marketing power... Like the Extreme X870E X motherboard... There's multiple X's and Extremes and the X's mean extreme... so it's moar extreme!!!
Among the other small nits in your otherwise concise post: the Windows-only versions of .NET (1-4) were known as the .NET Framework. So, Framework is the only Windows-only version, followed by Core, with a limited feature set but cross-platform, and then .NET 5 (no suffix), a full-featured version that is cross-platform.
I'd argue that the dominance of Linux in the cloud, and Azure's growing business, is what's causing Microsoft's ongoing interest in Linux support.
A factoid that's shared sometimes (no idea if true) is that Microsoft now employs more Linux kernel engineers than Windows kernel engineers due to Azure.
That came after. Linux wasn't even on 2.6 with its famous stability yet when this kicked off. What you see now is a result. They softened on open source as they realized it actually has some benefits for a company like Microsoft.
The Microsoft of the Halloween Documents[0] is a different Microsoft from the one we see today that understands open source as something good rather than as a threat, and it started with Microsoft being forced to play nice.
After having gouged Red Hat and Suse for years with their bogus Linux patent racket and bankrolling the infamous SCO Unix lawsuit. Make no mistake M$ coming over all We Love Linux was like Donald Trump turning up at the DNC.
I do remain skeptical, though: the node on the Microsoft org chart that usually strangles anything good the company does is surely waiting to strike. It used to be the Windows node, but now it seems like the ad node comes in for the kill most of the time. The company is slowly morphing into Google as Google morphs into Amazon, while Amazon is morphing into UPS.
Off-topic, but to join in the general good vibes this announcement emanates: I have to say that my experience using the Azure cloud has been stellar. Their Copilot integration works well, IME. Azure Shell is simple and good. The dashboard UI is always good.
Bona fides: I have used GCP for 3 years, AWS for 3 years, and Azure for ~ 1 year. As well as the more "bare-metal" types of cloud providers like Linode/Akamai, and Vultr -- all the latter of which are great for self managing your infra.
I also really find the ability to spin up Windows Server and Windows 10/11 etc super useful for builds, testing, Hyper-V.
I really like Azure for huge projects with many moving parts.
More like it was shoring up support for developers who use and/or target Mac and Linux. Many devs are using Macs and targeting Linux for deployments. MS wants Azure to be a first-class option for developers, and that's the focus for making money going forward. It makes sense for their developer tools to support that.
Azure didn't exist. OS X had just come out and almost no one took Macs seriously as a development target yet. Windows was the only user-facing thing anyone developed for aside from little Java games on flip phones. The Web 2.0 takeover was still years off and Internet Explorer ran the show.
Is "historical context" not as clear as I thought? You're the second person to challenge this by pointing out the current situation when I'm talking about how we got here.
Then you're not talking about what I was talking about in the post you replied to with a framing that suggested you were disagreeing. Did you click the wrong reply link?
Mono implemented GUI stuff like Windows Forms; does the latest cross-platform .NET support that? Can you run a .NET GUI Windows program on Linux without Mono, using the latest .NET? I know it was not possible in the past.
The whole point of .NET Core was to remove all the (largely desktop-oriented) platform-specific dependencies that tied it to Windows, so you could run server-oriented .NET programs on Linux. So no, AFAIK you can't simply run GUI apps built with .NET on Linux desktops - that's the reason Mono wasn't simply killed: it covers that niche (which wouldn't even exist were it not for Mono/Xamarin's efforts back then. But I digress...). Nowadays there are a few other attempts at providing that UI layer.
.NET Core still has Windows Forms though? At least I (for kicks) migrated one of my old .NET 4.something projects to .NET Core and it still works and shows the classic Windows Forms GUI.
.Net Core on Windows has support for loading assemblies that reference COM interfaces and the win32 API, along with other things that aren’t supported elsewhere like C++/CLI.
That’s why loading System.Windows.Forms still works, it’s not part of .Net 5+, but it can still load the assemblies on Windows (they still use GDI, etc under the hood).
Sure, nobody wants to write new WinForms applications today.
My point is about running existing applications on Linux;
there are still issues with running .NET GUI stuff under Wine, and Mono was not a perfect implementation.
I read in other comments that the newer .NET cross-platform stuff is not a replacement for Mono for running these old applications. (Nobody will rewrite them to use the current GUI stuff from MS, since they are old apps.)
No, Microsoft's .NET only supports WinForms on Windows. They do have an official cross platform GUI toolkit in MAUI, but it strangely does not support Linux.
Last I knew, it is also considered pretty lackluster. Every time I read up on it, it feels like, even beyond the lack of Linux support, people just don't care for it.
If I was building a cross platform native app with .NET I'd probably use Avalonia right now.
Yeah, they took an age delivering it, then it came out and most of the early reports were "It's still not ready," and then I think Microsoft just gave up.
I think not supporting Linux was a tactical error, though. Some people will put up with a lot for Linux GUI support, and some of those people are the types who can resolve problems with your half-baked GUI framework.
Does it really need help? I struggle to imagine a scenario where one would consider MAUI not supporting Linux to be an issue (if we discard superficial bad faith concern) when Avalonia, Uno or, if you care about Linux as the main target, Gir.Core exist.
And, at the end of the day, you have a tool with an extremely rich FFI capability so whatever is available from C you can use as well.
Sorry I clearly was not clear enough. I mean specifically an issue with MAUI itself. I agree dotnet/c# have some solid UI options cross platform at this point. MAUI however seems to be at best a mess and at worst dead in the water.
> "The future is already here – it's just not evenly distributed."
Where I live and work (IT and consulting in central south-east Norway) it has been the year of the Linux Desktop on and off since 2009.
That was the first time I worked full time at a place that deployed Linux for everyone and everything that didn't have a verified reason for needing Windows.
I think we had one 3rd party trading software running on a Windows machine and maybe the CEO and someone in accounting got Windows.
Everyone else was upgraded to Linux and it worked beautifully. It was my job to support the sales department with desktop-related issues, and it was absolutely no problem to do that while also being a productive developer.
Since then I have not worked at a place that required Linux, but I think most of the places I have worked since have had Linux as an option as long as you supported it yourself. Some places have also been very active writing how-tos and working with me to troubleshoot Linux-related issues, since many of the people there were also Linux users.
At the moment I use Mac, but at my current job I'm also allowed to use Linux.
Open Source Support reasons. If Linux developers want better MAUI support there is a "Community Repo" to contribute to and help move things further along. The impression is that if things were further along it might get formally "adopted" (by the Dotnet Foundation) for "official" out-of-the-box "support", but it isn't far enough along and doesn't seem to have enough contributors with enough momentum. It currently seems that the Venn Diagram of "Developers that say they want MAUI support for Linux" and "Developers that would contribute to Linux support for MAUI" has too small of an intersection.
Sure, Microsoft could pay more employees to work on it faster, but Linux loves and prefers open source from Linux devs "untainted by Microsoft", right?
Contribute to the MAUI backend for GTK and/or Qt; nothing is stopping you.
Alternatively, just because you're on .NET doesn't mean you need to use Microsoft-sanctioned UI toolkits, just as C++ has no "official" UI toolkit. You're free to pick up some GTK or Qt bindings if you want a native feel and your application is already architected correctly. Alternatively, throw ImGui at it if you just need dev tooling, or maybe try other cross-platform toolkits in the ecosystem like Avalonia or Uno.
It is not perfect; there are issues depending on whether you need 32- or 64-bit, or whether you need .NET 4 or greater. Games work, but I have issues running tools made with .NET, like mod managers and game-save cleaners. In my case Sims 3 works fine, but not the Sims 3 launcher (this tool does more than just launch the game, like importing custom content/mods).
Sadly, some Java tools stopped working if you run the latest Java runtime, because for some reason some crap was removed from Java and nobody made an easy way to add it back with some package install.
With commercial applications that want to just take their existing code and have it run on Linux with only a couple lines changed, Avalonia XPF will do that
You are expected to use Avalonia or Uno for multi-platform targeting or Gir.Core (GTK4) or one of the many other binding libraries for Linux-specific GUI.
Also very easy to throw something together on top of SDL2 with Silk.NET.
Practically speaking it is in a much better place than many languages considered by parts of Linux community to be more """linux-oriented""".
My personal use case is running old GUI apps; I am not planning on writing GUI apps with .NET. MS had the opportunity to open source .NET/Silverlight and make money from tools, but they bet on Windows, and today most apps are Node and JavaScript (a much inferior platform), but MS opened things up too late.
No, they pretty much gave up on WinForms when .NET Core morphed into "the" .NET that is cross-platform. There are some nice cross-platform GUI libs now, though.
If true this would be huge. I got burned on the whole Silverlight, Universal Windows Platform, WPF, etc. cycle. All these new and improved solutions had all sorts of issues: no designer, no or weaker accessibility stories, bloat, slowness, etc.
C# + WinForms would be appealing. Some of the performance with larger datasets in the new solutions (tables etc.) was just surprising. I really feel like Microsoft got so distracted chasing phones, tablets, touch, etc. that they forgot just basic line-of-business application development, which they could and should have owned.
.net Core doesn't supply WinForms, but WPF is the far more common paradigm for Windows apps now. WPF is supported by projects like Avalonia on Linux. There are also a few other major alternative UI toolkits, more commonly used by cross-platform (vs Windows-exclusive) developers.
This is the "virtual monorepo", if you want to clone one repo and build the entire SDK product then this is the correct thing to checkout - but development work right now still happens in the separate project repos, of which there are ~20
No, it's way better than Flutter. Avalonia really works on desktop :). Also, the model is WPF, so whoever knows a little bit of the legacy .NET Framework will be able to write Avalonia apps in no time.
I don't know any .net, and have never heard of this until now. Only stories with comments on HN are from eight years ago. Although I liked the screenshots on the linked site, it doesn't seem to have much buzz around it.
And unfortunately, the only stench I can't stand more than Google's is Microsoft's.
I do not follow buzz. I am an engineer by education and attitude and always try to investigate my options based on my needs and requirements. I use buzz only to guide my investigations. In my case I had a desktop application that had to run on Windows and macOS and needed support for rich text format and rendering of custom graphs.
Following the buzz, I started a prototype with Flutter and stopped after a few days, as I found that most of the open source controls I was using had bugs on Windows Desktop. Then I moved to MAUI and discovered that in order to have decent rich text support my only option was Blazor Hybrid. Needless to say, I found bugs that prevented my prototype from working correctly. Then I moved to Uno and found that it doesn't have full rich text format support. I was able to find some .NET open source libraries for doing text layout on Skia, and with that I managed a partial solution that was, however, pretty complicated. Out of curiosity I investigated Avalonia and found that everything I needed had full support. Being fluent in WPF, I built the prototype in 3 days and never looked back.
Your experience might vary depending on your fluency in WPF, but I found that, considering Windows Desktop as a target platform, Flutter and MAUI are absolutely the worst options.
In my opinion Uno is better than Avalonia when considering web application support, but Avalonia has more coverage of the WPF API than Uno does of WinUI. And for sure marketing is the worst part of Avalonia, while it is the BEST part of MAUI and Flutter.
BUT
That's now officially unsupported, as all of Xamarin.Forms is no longer supported and the MAUI replacement doesn't cover Linux, nor does that look likely (MAUI is mired deep in problems due to over-ambition, a failure to resource it, and what seems to be a significant push in MS to use MAUI Hybrid, aka web UIs within native apps).
Yes. There are multiple UI projects that build on the WinUI 3 components in the Win App SDK.
There's the first party MAUI which is an updated version of Xamarin Forms. The two best-known third-party implementations are AvaloniaUI and Uno. I prefer Uno, it has more cross-platform targets.
Which lets you run Blazor (web framework) like a desktop UI across all major desktop platforms. Microsoft has MAUI/Blazor as a thing, but only targets Mac and Windows ATM, so Photino bridges the gap for Linux.
Photino lets you use anything other than just .NET but has pretty decent .NET support.
(i hardly know what i'm talking about so somebody else may have a better idea, but i'm here now so)
MinGW is GNU's header/library environment (tools too, maybe?) for creating Windows-compatible applications. So I'd look into searching for mingw .net and/or mingw mono.
also, ask your favorite AI, they're good at this type of question so long as it's not up to the minute news
> I looked into .NET somewhat recently and came away with the apparently mistaken impression that Mono was how .NET did cross-platform. I guess I must have been reading old docs,
.NET Core 1.0 (2016) was the first cross platform prototype. It got good in a release in 2018 or 2019, I even forgot which now. And took over steadily after that.
We don't even think about it any more. "which OS is the prod env on" isn't a factor that causes any support worries at all.
I would say I'm not 'new', and I even developed against .NET 4.5 for a number of years. I'm just as stumped by the naming mess that Microsoft made across the board in that space.
Edit: I say 4.5 because I mean the original thick .NET, which is not dotnet core (which I think is the way to differentiate between versions), but also all the sub-libraries like the ORM were, IIRC, named the same but did different things.
They should have rebadged everything with a new name that didn't involve a word ('core') that is fairly painful to google and is used in development generally as well as being the name of a framework.
It's even worse, since they dropped the core now and just call it .NET.
So searching has become even more of a pain.
It's also pretty much a mess, because many things were different between the versions.
So let's say you google how to do something and the result could be:
I think Microsoft is completely allergic to naming anything with a unique name or term; in fact, it's almost like they pick names that will be hardest to find with a google search.
If you just want to get into .NET (C# or F#) on non-Windows platforms, the latest .NET release (at the time of writing, 8.0) is what you want. The development experience is good these days.
Aside from following the default 'start here' documentation, there are various timelines made for fun and profit that visualize the full history, for example:
This is quite overwhelming, but it can still be useful when reading an article about .NET that is either older or refers to history as you can quickly see where in time it is located.
> Is there somewhere where someone new to the ecosystem can get a simple introduction to all of these different terms and which ones are still relevant today?
Not really. It's legacy cruft all the way down.
But the good news is that if you stay on the beaten path, using the latest SDK and targeting the latest Runtime, everything Just Works™.
i want to love dotnet-core, especially since godot switched from mono in godot 3 to dotnet-core in godot 4, but so far i haven't been able to
currently debian has a mono package but no dotnet-core package. i'm not sure why this is; usually when debian lacks a popular nominally open-source package like this, it's either because it fails to build from source, or because it has some kind of tricky licensing pitfall that most people haven't noticed, but diligent debian developers have
does anyone know why this problem exists for dotnet-core?
also, does dotnet-core have a reasonable aot story for things like esp32 and ch32v003?
.NET Core is available for Debian, you just have to add Microsoft's APT source [1].
Fedora [2], Ubuntu [3], and FreeBSD [4] build .NET from source themselves. A lot of work has been done to make it possible to build .NET from source [5] without closed source components, so it might just be a matter of someone being motivated to create the package for Debian.
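For reference, Microsoft's documented route on Debian is to register their package feed first and then install from it; a sketch (the Debian release and SDK version in the URL/package name are examples and should be adjusted for your system):

```shell
# Register Microsoft's APT feed (Debian 12 shown; adjust for your release),
# then install the .NET SDK from it.
wget https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install -y dotnet-sdk-8.0
```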
When using Microsoft repositories you need to explicitly opt out of telemetry collection.
I think telemetry collection alone should be a good reason for Debian to consider repackaging it. I don’t want telemetry to be collected on my GNU/Linux machine, thanks Microsoft, but you already have so much telemetry from my Windows machine, please leave my other machines alone.
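The opt-out itself is just an environment variable; DOTNET_CLI_TELEMETRY_OPTOUT is the switch Microsoft documents for the SDK:

```shell
# Disable .NET SDK telemetry for this shell and anything launched from it.
# Put this in ~/.profile (or similar) to make it permanent.
export DOTNET_CLI_TELEMETRY_OPTOUT=1
```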
In any case, Debian would use https://github.com/dotnet/source-build and dotnet/dotnet, and could easily include the argument or a patch for this. It's unlikely to be an issue. My bet is that it's not in Debian because no one has taken the initiative yet, or someone did but faced backlash from people in Debian similar to the vocal minority here that posts FUD because of their little personal crusade.
it doesn't seem to have come up on debian-legal in the last year or so https://lists.debian.org/debian-legal/ but debian-legal is also kind of a shadow of its former self
you could easily imagine fedora distributing their own build of software whose licensing fails to comply with the debian free software guidelines; bundling proprietary software used to be common in linux distributions in fact
Does Debian require packages to work on all of its architectures? If so, that could be the issue. .NET Core only supports x86, x64, and Arm64 (I think Arm32 has been discontinued and RISC-V is experimental at this point).
It's possible that they object to .NET Core having certain license restrictions on the Windows port (https://github.com/dotnet/core/blob/main/license-information...). .NET Core is mostly MIT or Apache licensed, but the Windows SDK has some additional terms. Skimming the third party licenses, that doesn't seem like an issue (mostly MIT/BSD/Apache or similar).
I think the licensing situation is an interesting question: if you have software that's 100% open source when compiled for your OS, but requires non-free stuff to run on Windows, is it ok to include in Debian? It looks like none of the non-free stuff (like WPF) gets distributed with the non-Windows SDK builds. Binaries created from your code only depend on MIT-licensed stuff on macOS and Linux, but might depend on something closed-source when targeting Windows - though it looks like almost all of that stuff is either WPF (so you wouldn't be able to develop on Linux/Mac anyway since those libraries wouldn't be in the SDK on those platforms) or were removed as a runtime dependency in .NET 7. It looks like `Microsoft.DiaSymReader.Native` might be the only thing left. Maybe that's what is holding it back?
> also, does dotnet-core have a reasonable aot story for things like esp32 and ch32v003?
"Reasonable" can be a lot of things to a lot of different people. People have been working on RISC-V support. Samsung seems interested in it. But I probably wouldn't recommend it at the moment - and Mono doesn't really have RISC-V support either.
to be clear, my question about debian is not about whether i can install dotnet-core in debian; it's about why it isn't in debian's repositories rather than microsoft's. microsoft, to understate the case somewhat, doesn't provide the stringent protections for users that debian does
> Specifying a specific list of architectures indicates that the source will build an architecture-dependent package only on architectures included in the list. Specifying a list of architecture wildcards indicates that the source will build an architecture-dependent package on only those architectures that match any of the specified architecture wildcards. Specifying a list of architectures or architecture wildcards other than any is for the minority of cases where a program is not portable or is not useful on some architectures. Where possible, the program should be made portable instead.
i don't think the license you link to would be a problem in itself, because it only applies to certain files which are not useful for running dotnet-core on debian anyway. debian has lots of packages from which non-free-software files have been removed. i don't know anything about diasymreader?
with respect to esp32 and ch32v003, what i meant to point to was not the risc-v architecture (some esp32s are tensilica!) but the limited memory space; jit compilation is not a good fit for 2 kibibytes of ram or even 520 kilobytes of ram
if you want your package to be in debian, you are going to have to find a debian developer who is willing to take responsibility for maintaining it. microsoft is already providing .deb packages on their website, at least binaries
getting one of your people to become a debian developer is similar in difficulty to getting one of your people to become a senator or a citizen of switzerland
It sure wouldn't hurt if they hired a Debian Developer to do it right, or maybe work through the process of turning an employee into a Debian Developer.
Debian developers can do it right because they're not affiliated with the vendor, so they can disable user-hostile features and settings that the vendor enables by default.
i don't think debian developers are actually prohibited from becoming employees of the vendor, but i think that if they get caught pushing malware, their dd status is likely to be revoked, and the process that allowed them to become dds is likely to be reviewed. any dd can generally push a change to any debian package to the archive; it's a major level of trust. that's why it's generally not realistic to try to get one of your employees to become a dd
There's a large segment of Debian Developers that are also the upstream maintainers/owners of various projects. I can't think of any paid examples, but volunteer ones are plentiful.
Perhaps the Debian project would force a .NET package to come with telemetry disabled by default, but for as long as said employee can abide by the rules of Debian, I don't really see any reason it can't be done.
Even with AOT compilation, as someone who loves C# and also does embedded development in C I would personally say a garbage collected language like C# has no place there.
not everything running on a 20-mips 32-bit microcontroller with 2 kibibytes of sram needs to be hard real time and failure-free, and of course the esp32 has hundreds of kibibytes
and, correct me if i'm wrong here, but doesn't c# allow you to statically allocate structs just as much as c does? i'd think you'd be able to avoid garbage collection about as much as you want, but i've never written much beyond 'hello, world' in c#
C# has the concept of value types (which structs are), which can be stack-allocated. Generics have seen more and more value-type variants, like ValueTask for stack-allocated async state. But if you add a class as a member of the struct, that member is going straight to the heap, with all the GC stuff that entails.
what about global or static variables of value types? i mean in theory you could stack-allocate whatever you want in your main() method and pass pointers to everything, but that sounds unusably clumsy. but with global variables and/or class variables there would be no problem except for things that inherently require heap allocation by the nature of the problem
Static fields may be placed on the Frozen Object Heap. The values of static readonly fields may not exist at all if ILC's static constructor interpreter can pre-initialize them at compile time and bake the value into the binary or codegen. Tiered Compilation does a similar optimization, but for all cases; that path relies on the JIT, though, which is not usable in such an environment.
Otherwise, statics are placed in a static-values array "rooted" by the respective assembly. I believe each value will be contained in a respective box if it's not already an object. This will usually be located in the Gen2 GC heap. My memory is a bit hazy on this specific part.
There is no concept of globals in .NET the way you describe it - you simply access static properties and fields.
In practice, you will not be running .NET on microcontrollers with existing mainline runtime flavours - very different tradeoffs, much like no-std in Rust. As mentioned, there is NanoFramework. Another one is Meadow: https://www.wildernesslabs.co which my friend is using for an automated lab for his PhD thesis.
Last mention goes to https://github.com/bflattened/bflat which supports a few interesting targets like UEFI. From the same author there's an example of completely runtime-less C# as well: https://github.com/MichalStrehovsky/zerosharp. It remains a usable language because C# has a large subset of C and features for manual memory management so writing code that completely bypasses allocations is very doable, unlike with other GC-based alternatives.
There are ways (`ref`, I think?) to pass references to stack variables around. And statics: it depends. A `const`, even with stuff like strings, would just compile directly into the binary, while a regular static still has to end up on the heap.
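To make the stack-vs-heap distinction concrete, here's a small sketch of the cases discussed above (my example, not from the thread; the names are made up):

```csharp
// Sketch: how C# structs, statics, and stackalloc relate to the GC.
using System;

struct Point            // value type: no object header, not GC-tracked by itself
{
    public int X, Y;
}

class Demo
{
    // A static field of a value type lives in runtime-managed static storage,
    // not as an individually collected heap object.
    static Point Origin;                        // zero-initialized, never "garbage"

    static void Main()
    {
        Point p = new Point { X = 1, Y = 2 };   // stack-allocated local, no GC
        Span<int> buf = stackalloc int[16];     // explicit stack allocation
        buf[0] = p.X + p.Y;
        Console.WriteLine(buf[0]);              // prints 3
        // But: boxing (object o = p;) or putting a class reference inside the
        // struct pulls in the heap and the GC again.
    }
}
```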
I believe that you would use dotnet nano for something like that. I used it (or some previous version of it) once many years ago and was very impressed with the productivity and ease of use it offered. Ultimately the lack of community surrounding it drove me to other technologies. Might have changed since then though, who knows!
It never made any sense, and it never had any future. We told Miguel he would be playing a chase game with Microsoft, always behind, and never sure whether MS would play the patent card if Mono actually became dangerous (and they can get quite nasty when pissed off - see the accusations against ReactOS).
But he was in love with COM/DCOM, registry, and many other things that MS shipped. Some of these things made Gnome much slower than it could be.
As someone who used GitHub for years (2009-2014), then got acquired into a big tech company for 8 years where I didn't get to use it much, and then went back to a startup that used it again, it feels like a totally different product. It's a starkly different offering from that perspective.
GitHub Actions are a major part of that. Codespaces, a solution for Mac CI, how much you can automate, storing secrets, driving not just CI but CD straight from GitHub now, etc. It's evolved a ton. Then you have the other things they are doing as a company with Copilot and VS Code (and formerly Atom).
Sure, code search is only moderately better, and maybe some of the core git features aren't getting as much razzle-dazzle in the UI, but it's evolved a ton.
Technical caveat: To work effectively, it needs to be able to compile (parse into an AST) the code, which for many GitHub projects is challenging. I'd bet they could easily get to 80% compilation rate though and then use dumb text search on the rest.
Nah. The XDG base directory spec was supposed to solve a ton of things 18 years ago, except none of them are hard-hitting user problems most folks really care about most of the time (unless you are really anal about hidden folders cluttering up your home directory, and most people aren't), so it limps along with only partial support, thanks to apathy from some package devs on Linux.
Honestly, we should probably abandon the hope that we will ever get full universal adoption. It stinks even worse in the half-adoption state we are in now, with some apps writing to those directories and other apps not. If it wasn't for the Arch Linux folks trying to drive apps to adopt it, I don't think it would have gotten even this far. Hell, default Debian/Ubuntu still doesn't set the XDG_* directory ENVs by default (some flavors do), and some apps ignore the prescribed "if not set, use this default" nonsense in the spec and do what they have always done, because it doesn't break compatibility for users.
Part of the spec stinks, too, like the split between cache/config/data, where the spec has no written rules on when anything is expected to be cleaned up, or what can or should be backed up by the user.
Let's move to containerizing most apps, putting everything in little jails so we don't have to deal with where they want to write. They can have their own consistent and expected behaviors on their own island. Snap, Flatpak, or any of a bunch of other solutions are already available to solve this problem. Then you don't have to worry about what some app left behind or wrote to your system, because it's all contained.
> Hell, default Debian/Ubuntu still doesn't set XDG_* directory ENVs by default (some flavors do)
You are supposed to leave them unset if they match the defaults.
Programs ignoring the spec is another matter and should be fixed in those programs not by needlessly bloating the environment of every process with meaningless data.
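For what it's worth, the spec's "if not set or empty, use the default" rule maps directly onto shell parameter expansion, which is why the variables don't need to be exported for programs to resolve the directories correctly:

```shell
# Resolve XDG base dirs per the spec: use the variable if set and non-empty,
# otherwise fall back to the documented default.
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}"
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}"
echo "$cache_dir"
```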
> It stinks even worse with the half-adoption state we are in now, with some apps writing to those directories and other apps not.
It stinks about half as much as before.
> Part of the spec stinks, too, like the split between cache/config/data, where the spec has no written rules on when anything is expected to be cleaned up, or what can or should be backed up by the user.
Caches don't have to be backed up and can be deleted if you need space. Most people will want to back up configs. What other data should be backed up is a question different people will answer differently. And containers don't solve this.
I actually back up my Gradle cache in particular because things have a tendency to disappear from the internet sometimes, and permanently break builds of older stuff.
The build dependencies are the problem. The build process downloads them from hardcoded internet URLs and they are cached for 30 days. By backing up the cache, I can (probably) rummage around it later to find anything that's been deleted from the internet.
That's bad too, but at least it's not right in my home folder! When I have to use Windows, I basically just avoid using Documents for files. (But my daily driver is an ancient version of OS X, which has ~/Library for these things.)
Oh, Mac has the same issue, and for the same reason: the same programs will create the same garbage.
~/Library also isn't used the way it was meant to be; it was a good idea, but mostly ignored. Basically the same idea as XDG_(CONFIG|CACHE|DATA)_HOME.
What's worse is that many projects ignore any effort to move them to the XDG schema. They take the approach that they always did it this way, it works, so why change it.
My home directory is much cleaner because I've made a concerted effort to keep it that way, in a couple of cases even recompiling software. I found this much more difficult on Linux.
Or even go take a look at Nix/NixOS and how they pull it off in another way. They have hermetic isolation down to a science.
Or heck, just look at what Android does, running each app under its own uid/gid, sandboxing 3rd party code, and keeping each app from reading and writing outside their little jails. Can't pollute a user directory or even write to /tmp if your user can't even enumerate it.
Hell, we even built a whole sandboxing, capability-based security model inside of Fuchsia at Google, which I worked on for 5+ years.
I've been building OSs for 20+ years, between Fuchsia and Android at Google and mobile/embedded products at Texas Instruments, so I hope I know what I'm talking about.
Nah. Your objections are rooted in a very limited definition of what an OS is and what user application model fits it. There is no reason why each process/app can't be sandboxed. In fact, it should be, for security (we did it in Fuchsia). It's actually the way things work in a lot of ways with apps from the App Store on macOS, where you can't escape your jail except through what is explicitly entitled.
General purpose just means that it's generally useful for a wide range of applications. Android is that. Hell, you can find Android running small-appliance server infrastructure and powering IoT devices. Even in 2024, iOS/iPadOS is general purpose at this point, and they have VERY different application models from the legacy app models you find on Windows and Linux. You wouldn't hesitate to call NixOS a general-purpose OS, and it's like Flatpak for literally every process, down to things like bash, grep, and vim.
Snaps are fine. The aversion comes only from how they were introduced to people in Ubuntu; conceptually, the idea is great. cgroups wrapping user processes to box them in is not only good for security but also for stability and for solving dependency-versioning issues. It's brilliant. It's similar to what we did in Fuchsia at Google (we took it to another level because we had no legacy to deal with).
And sure, maybe I have bad judgment on some things. I contributed a ton to GNOME in the early 2000s both code and ideas that were horrible in hindsight, but I'm not still stuck in an outmoded mental model for thinking about my user environment.
AppImages are not containers at all. They bundle up the program data into a single archive but do not do any sandboxing, and they leave programs to write their user/config/cache files wherever they would be written without AppImages, i.e. in xdg-basedir locations. As it should be.
> Or even go take a look at Nix/NixOS and how they pull it off in another way. They have hermetic isolation down to a science.
NixOS's packaging is also completely orthogonal to the xdg-basedir spec.
> I've been building OSs for 20+ years, between Fuchsia and Android at Google and mobile/embedded products at Texas Instruments, so I hope I know what I'm talking about.
None of those are desktop operating systems. Please stay away from those with your anti-user opinions.
Or Qubes, which goes further and features per-app VMs. Snaps was foisted on the community which was then unwelcoming of it, but per-app isolation isn't the worst idea.
Yeah, I had upvoted the comment, and retracted the upvote when I got to the last sentence. Nobody who has anything kind to say about snaps has any business anywhere near my desktop.
This would break a bunch of little things in annoying ways. For example, I have shell-script tools that store shell scrollback logs in .cache, and I want those to survive a reboot.
FWIW, I've been doing it for many years, and not noticed any problems.
If something is supposed to persist past reboot, I wouldn't put it in `.cache`. Though I don't know offhand what the official documented behavior of `.cache` is, and I can't immediately find that documentation (maybe some open desktop cabal thing?).
> $XDG_CACHE_HOME defines the base directory relative to which user-specific non-essential data files should be stored. If $XDG_CACHE_HOME is either not set or empty, a default equal to $HOME/.cache should be used.
It's fine to put ~/.cache on tmpfs, but doing it by default for the general case is going to cause a lot of hurt.
My ~/.cache could be rm -rf'd without too much worry right now, but that doesn't mean that persisting it isn't useful. For example on my system right now:
- Browser cache is useful to persist, especially with some larger sites.
- I put my Go module and build cache in ~/.cache, and while that can be deleted, it's useful to persist because it makes builds shorter and avoids having to (re)-download the same modules over and over again. Note that at the moment my internet is kind of crappy, so this can take quite a while.
- Some other download cache things in there, from luarocks, xlocate, and a few other things.
- I store psql history per-database in ~/.cache/psql-dbname. It's useful to keep this around.
- Vim backup files, persistent undo files, and swap files are stored in ~/.vim. The swap files especially are important because I want to keep them after an unexpected system crash.
Some of this is solvable by moving stuff to other directories. Others are inherently unsolvable.
I also have just 8G of RAM, which is fine but not fine for storing ~/.cache in RAM.
That's silly. Cache is there for preserving things across many runs of some application. Applications certainly use it in a way that assumes long term storage. It's not for short term temporary things. It's a cache.
It's not going to cause "problems". It's just going to massively slow down many use cases that rely on downloads.
I don't think XDG base dir will ever be fully and universally adopted. It's been years (21 years, to be exact). If it weren't for Arch Linux getting hostile to programs that don't follow it (like locking down permissions of the root user directory), it probably wouldn't have grown as much as it has recently, because, honestly, the spec is not actively solving real problems people have. Most of the adoption is happening because people get cranky enough about their $HOME directory layout to bully maintainers into supporting it, but 99% of users don't give a damn whether a program follows XDG base dir or not. It just makes things arguably "cleaner" according to freedesktop, per a spec that hasn't gained wide adoption in over 20 years.
I do, however, dislike the loosey-goosey config/data/cache split and its implied intent, because not everything falls into those buckets, and then you don't know how some system component will behave (is a daemon going to wipe my cache? are users going to copy the config folder around? what gets backed up, exactly?). There is no standard in the spec for how the folders are really treated. It's intentionally vague, but possibly vague in ways that will break things if people make assumptions.
It's like the /usr split from the root, or /usr/local, or /opt: leftovers from a bygone era that seem to stick around. And this is trying to create yet another one.
I reluctantly support it in our product, but only when those environment variables are explicitly set, and I 100% ignore what XDG says the default should be on every Linux system when they're not set, because it's not what other UNIX variants like macOS expect. Just enough to keep the Arch Linux zealots from flooding our bug tracker each week, literally demanding we adopt it, when they account for less than 1% of users. It's hard enough to document where users can find our product's files on their system, because users don't always know what XDG is, and for most of them XDG is not an established convention they already expect. Frankly, there is too much legacy to move everyone over to this spec, especially when it isn't driving real, compelling value for anyone except folks who want a pristine user directory.
Honestly, we should rethink it entirely and move to a container-based solution where our desktop apps can't escape their little jails (almost like snap apps). Maybe learn from Nix a bit. Admit defeat: we'll never get every Linux user app in the world to care about and adopt XDG base dir (or even user dirs). It's hard enough that we're supporting Mac and Windows ports that don't follow this behavior. Get away from the UNIX permission model of running programs under the user, and run each program under its own uid. It's how Android works.
> the loosey-goosey config/data/cache split and its implied intent, because not everything falls into those buckets, and then you don't know how some system component will behave (is a daemon going to wipe my cache? are users going to copy the config folder around? what gets backed up, exactly?). There is no standard in the spec for how the folders are really treated. It's intentionally vague, but possibly vague in ways that will break things if people make assumptions.
Can't say I've ever had issues with this regard, either in my own programs or in other programs I use. It always seems clear cut to me which should be used when. Can you give any examples?
> Get away from the UNIX permission model of running programs under the user, and run each program under its own uid. It's how Android works.
I don't blame other people if they want stuff like that, but personally I feel too old for it and will be sticking with what I've got (including XDG directories, because I find them easy and aesthetically pleasing). All the newfangled sandboxing and stateless stuff is too much for me to learn for only academic benefit; I've never been pwned because thus far I have had good judgment about what software to trust. Learning whole new systems that disrupt my established workflows because I'm meant to be paranoid of the programs I choose to run just doesn't seem like a good tradeoff for me, personally.
It would be like ripping down all the walls in my house to install fire proof barriers, when I've been living in this house for 20 years without a hint of fire. If the house were built like that when I got it, then great, but it wasn't so I'm just going to be careful with candles and replacing smoke alarm batteries.
- Arch Linux isn't some fringe operating system full of zealots; it's one of the most popular Linux distributions (it was the most commonly used Linux distribution in the Steam hardware survey in April 2024, beating out Ubuntu). I think many people like it precisely because it takes an opinionated stance on issues such as these.
- You characterize people who ask for these features (for example in bug tracker) as "bullying" maintainers - the fact that such requests are common should show you how widespread support is for these ideas. It seems odd to procure feedback about what users want and then say, "well, that's not what they really want, it must just be a vocal minority".
- Directories like /usr/local and /opt are not vestigial; they serve specific purposes. I don't think it's fair to call them a "holdover". For example, without /usr/local, how do you separate software compiled and installed by the user from software installed by a package manager?
- I agree completely that containerization of apps is the future.
And Go is wrong for doing that, at least on Linux. It bypasses optimizations in the vDSO in some cases. On Fuchsia, we made direct syscalls that didn't go through the vDSO illegal, and the hacks that required in Go were funny. The system ABI of Linux really isn't the syscall interface; it's the system libc. That's because the C ABI (and the behaviors of the triple it was compiled for) and its isms for that platform are the lingua franca of that system. Going around that to make direct syscalls, at least for the 90% of useful syscalls on the system that are wrapped by libc, is asinine: it creates odd bugs and makes crash reporters, heuristic unwinders, debuggers, etc. all more painful to write. It also prevents the system vendor from implementing user-mode optimizations that avoid mode and context switches where possible. We tried to solve these issues in Fuchsia, but on Linux, Darwin, and hell, even Windows, if you are making direct syscalls and it's not for something really special and bespoke, you are just flat-out wrong.
> The system ABI of Linux really isn't the syscall interface, its the system libc.
You might have reasons to prefer to use libc; some software has reason to not use libc. Those preferences are in conflict, but one of them is not automatically right and the other wrong in all circumstances.
Many UNIX systems did follow the premise that you must use libc and the syscall interface is unstable. Linux pointedly did not, and decided to have a stable syscall ABI instead. This means it's possible to have multiple C libraries, as well as other libraries, which have different needs or goals and interface with the system differently. That's a useful property of Linux.
There are a couple of established mechanisms on Linux for intercepting syscalls: ptrace and BPF. If you want to intercept all uses of a syscall, intercept the syscall. If you want to intercept a particular glibc function in programs using glibc, or for that matter a musl function in a program using musl, go ahead and use LD_PRELOAD. But the Linux syscall interface is a valid and stable interface to the system, and that's why LD_PRELOAD is not a complete solution.
It's true that Linux has a stable-ish syscall table. What's funny is that this caused a whole series of Samsung Android phones to reboot randomly with some apps: Samsung added a syscall at the same position someone else later used in upstream Linux, and apps statically linking their own libc (to avoid Bionic libc) were rebooting phones when calling certain functions, because the Samsung syscall caused kernel panics when called wrong. This goes back to it being a bad idea to subvert your system libc. Now, distro vendors do ship multiple versions of a libc that all work with your kernel. This generally works; when we had to fix ABI issues, this happened a few times. But I wouldn't trust building our own libc and assuming it's portable to any Linux machine we copy it to.
> It's true that Linux has a stable-ish syscall table.
It's not "stable-ish", it's fully stable. Once a syscall is added to the syscall table on a released version of the official Linux kernel, it might later be replaced by a "not implemented" stub (which always returns -ENOSYS), but it will never be reused for anything else. There's even reserved space on some architectures for the STREAMS syscalls, which were AFAIK never on any released version of the Linux kernel.
The exception is when creating a new architecture; for instance, the syscall table for 32-bit x86 and 64-bit x86 has a completely different order.
I think what they meant (judging by the example you ignored) is that the table changes (even if append-only), and you don't know which version you actually have when you statically compile your own version. Thus, your syscalls might assume a newer version of the table, but a given entry might a) not actually be implemented, or b) be implemented with something bespoke.
> Thus, your syscalls might be using a newer version of the table but it a) not actually be implemented,
That's the same case as when a syscall is later removed: it returns -ENOSYS. The correct way is to do the call normally as if it were implemented, and if it returns -ENOSYS, you know that this syscall does not exist in the currently running kernel, and you should try something else. That is the same no matter whether it's compiled statically or dynamically; even a dynamic glibc has fallback paths for some missing syscalls (glibc has a minimum required kernel version, so it does not need to have fallback paths for features introduced a long time ago).
> or b) implemented with something bespoke.
There's nothing you can do to protect against a modified kernel which does something different from the upstream Linux kernel. Even going through libc doesn't help, since whoever modified the Linux kernel to do something unexpected could also have modified the C library to do something unexpected, or libc could trip over the unexpected kernel changes.
One example of this happening is with seccomp filters. They can be used to make a syscall fail with an unexpected error code, and this can confuse the C library. More specifically, a seccomp filter which forces the clone3 syscall to always return -EPERM breaks newer libc versions which try the clone3 syscall first, and then fallback to the older clone syscall if clone3 returned -ENOSYS (which indicates an older kernel that does not have the clone3 syscall); this breaks for instance running newer Linux distributions within older Docker versions.
Every kernel I’ve ever used has been different from an upstream kernel, with custom patches applied. It’s literally open source, anyone can do anything to it that they want. If you are using libc, you’d have a reasonable expectation not to need to know the details of those changes. If you call the kernel directly via syscall, then yeah, there is nothing you can do about someone making modifications to open source software.
The complication with the Linux syscall interface is that it turns "worse is better" up to 11. For instance, setuid works on a per-thread basis, which is seriously not what you want, so every program/runtime must do this fun little stop-every-thread-and-thunk dance.
Yeah, agreed. One of the items on my long TODO list is adding `setuid_process` and `setgid_process` and similar, so that perhaps a decade later when new runtimes can count on the presence of those syscalls, they can stop duplicating that mechanism in userspace.
You seem to be saying 'it was incorrect on Fuchsia, so it's incorrect on Linux'. No, it's correct on Linux, and incorrect on every other platform, as each platform's documentation is very clear on. Go did it incorrectly on FreeBSD, but that's Go being Go; they did it in the first place because Go is a Linux-first system and it's correct on Linux. And glibc does not have any special privilege; the vDSO optimizations it takes advantage of are just as easily taken advantage of by the Go compiler. There's no reason to bucket Linux with Windows on the subject of syscalls when the Linux manpages are very clear that syscalls are there to be used and exhaustively document them, while MSDN is very clear that the system interface is kernel32.dll and ntdll.dll, and shuffles the syscall numbers every so often so you don't get any funny ideas.
> The system ABI of Linux really isn't the syscall interface, its the system libc.
Which one? The Linux Kernel doesn't provide a libc. What if you're a static executable?
Even on Operating Systems with a libc provided by the kernel, it's almost always allowed to upgrade the kernel without upgrading the userland (including libc); that works because the interface between userland and kernel is syscalls.
That certainly ties something that makes syscalls to a narrow range of kernel versions, but it's not as if dynamically linking libc means your program will be compatible forever either.
In the case where you're running an Operating System that provides a libc and is OK with removing older syscalls, there's a beginning and an end to support.
Looking at FreeBSD under /usr/include/sys/syscall.h, there's a good number of retired syscalls.
On Linux under /usr/include/x86_64-linux-gnu/asm/unistd_32.h I see a fair number of missing numbers --- not sure what those are about, but 222, 223, 251, 285, and 387-392 are missing. (on Debian 12.1 with linux-image-6.1.0-12-amd64 version 6.1.52-1, if it matters)
> And go is wrong for doing that, at least on Linux. It bypasses optimizations in the vDSO in some cases.
Go's runtime does go through the vDSO for syscalls that support it, though (e.g., [0]). Of course, it won't magically adapt to new functions added in later kernel versions, but neither will a statically-linked libc. And it's not like it's a regular occurrence for Linux to add new functions to the vDSO, in any case.
Linux doesn't even have consensus on what libc to use, and ABI breakage between glibc and musl is not unheard of. (Probably not for syscalls but for other things.)
The proliferation of Docker containers seems to go against that. Those really only work well since the kernel has a stable syscall ABI.
So much so that you see Microsoft switching to a stable syscall ABI with Windows 11.
"Decoupling the User/Kernel boundary in Windows is a monumental task and highly non-trivial, however, we have been working hard to stabilize this boundary across all of Windows to provide our customers the flexibility to run down-level containers"
It's not that much work; after all, every libc needs to have its own implementation. The kernel maps the vDSO into memory for you, and gives you the base address as an entry in the auxiliary vector.
But using it does require some basic knowledge of the ELF format on the current platform, in order to parse the symbol table. (Alongside knowledge of which functions are available in the first place.)
It's hard work to NOT have the damn vDSO invade your address space. It's the only kludgy part of Linux (well, apart from Nagle's algorithm, dlopen, and that weird zero-copy kernel patch that mmap'd -each- socket recv(!) for a while).
It's possible, but tedious: if you disable ASLR to put the stack at the top of virtual memory, then use ELF segments to fill up everything from the mmap base downward (or upward, if you've set that), then the kernel will have nowhere left to put the vDSO, and give up.
(I investigated vDSO placement quite a lot for my x86-64 tiny ELF project: I had to rule out the possibility of a tiny ELF placing its entry point in the vDSO, to bounce back out somewhere else in the address space. It can be done, but not in any ELF file shorter than one that enters its own mapping directly.)
Curiously, there are undocumented arch_prctl() commands to map any of the three vDSOs (32, 64, x32) into your address space, if they are not already mapped, or have been unmapped.