I'm not sure if there is any open source XSLT tool as complete as jq is for JSON. There is xsltproc but IIRC it does not support streaming scenarios (jq has some support for streaming processing)
Though, personally, I prefer JSON. Probably due to superior tools (thanks to its popularity) and less-bloated syntax (it is somewhat easier for me to read raw JSON file than raw XML file).
I do not see a license in either repository, and it seems that this tool only has a free 30-day evaluation tier. Anyway, using it means that you have a dependency on a single vendor and you accept their future pricing changes.
If XML tools aren't open enough for certain needs, then sure, I get it. But it's tragic to see highly-engineered, pro solutions just die out because younger devs don't like the learning challenge or because business owners are cheapskates.
And in the `result` directory you will have a droplet image which can be uploaded to DigitalOcean as a droplet template.
It's probably best to also create a separate configuration just for the image. As a bonus, it's already configured, so there is no need to run `nixos-rebuild switch` after droplet creation. Partitioning configuration is also handled by the builder; however, the filesystem choice (and partitioning schema) is limited.
(I'm not saying that this is easier than out-of-the-box support for NixOS: IMO if you are even aware of how to create a custom NixOS image then you are probably a somewhat experienced NixOS user)
The main issue with Java-based projects (possibly using Spring) is the amount of existing resources: there are thousands/millions of example projects, code snippets, answers on StackOverflow etc., and the majority of them are very old (as far as software development is concerned). Even fresh resources often use outdated techniques.
Modern Java is pretty good (although Kotlin is a bit cleaner IMO), but you should really use the Spring documentation (if you are using Spring) and avoid code snippets from SO/GitHub.
The issues that I see with Spring aren't really code snippets from SO, but the fact that annotations drop all the advantages that Java had as a language. Just two examples:
When you create an Object of the wrong class and try to pass it to a method, you get a compile-time error. When you use the wrong annotation, nothing happens during startup of your application... but you don't even know at which point in time something should happen. Or if.
When a method you call throws an exception, you can run the application in a debugger and single-step into the method, then single-step until the exception gets thrown. This doesn't always just solve the problem but more often than not it gives you a good indication. If an exception gets thrown due to an annotation, you get an enormous stacktrace from some code you have never seen and didn't even know was run, or why it was run, from a thread you have never seen, complaining about wrong parameters that you have never seen, passed from another method you have never seen.
Another one of my favorites. When you make an API for a library it's usually pretty clear what the user can do with it. When you make an annotation API for a Spring library it's evidently impossible to anticipate how the user will use it. A random feature of a library won't work because the context you're using its annotation in isn't compatible with whatever assumptions they made.
This is just (another) reason for avoiding Spring. Whenever I'm tempted by some module that looks good, I'm reminded by feedback like yours that things are still the same.
Oh well, I'll continue using Java far away from that framework.
I love Java, and especially modern Java, but I also love Ruby and Python, and the majority of my web dev has been in Rails.
I'm now working on a Java side project, and while typically in the past I've just done the backend in Java and then used Rails for the web UI and had them share a DB, I'm trying to see if I can use Java for the web part without going insane.
Part of the problem is that historically the big players in Java web are HUGE enterprises that want to be able to have 50 teams all do a small part of a backend in parallel and then just deploy them all together. Thus were born the servlet API and application servers.
But there's so many assumptions and bizarre requirements that come out of trying to do this perverse form of engineering that the whole thing ends up nigh unusable for someone who could otherwise just "rails g" 85% of their project.
Jetty has always struck me as a bit of a middle ground: if you want, you can do the servlet thing, but it's also a production-grade application framework that doesn't force you to do the Java EE dance if you don't want to. Though there is some leakage. But it's a lot less magical than Spring, and it's also just the HTTP server bits, not the rest of the DB and view layers etc.
But since virtual threads are pretty stable now, I really want to use them, and jetty is the first reasonably complete and robust option that seems to have included support for them.
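For context, the virtual-thread style being described is just the standard `java.util.concurrent` API (Java 21+): you write plain blocking code, and a blocking call parks the cheap virtual thread instead of tying up an OS thread. A minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal demonstration of virtual threads (Java 21+): one virtual
// thread per task, blocking-style code without reactive plumbing.
public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> response = executor.submit(() -> {
                Thread.sleep(100); // parks the virtual thread, not an OS thread
                return "handled on " + Thread.currentThread();
            });
            System.out.println(response.get());
        } // close() waits for submitted tasks to finish
    }
}
```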
So - you know how these things go. I'm currently writing an HTTP URL path router that supports the Rails syntax from routes.rb. And then I'm going to write a Handler implementation that wrangles all the database stuff and does convenient/terse parsing of params (like how Rails folds path params, URL params, and form params into a single params object) and rendering of responses (so I can render a JSON object without having to call Content.Sink.[...] and use Gson all over the place).
It's meant to all be very non magical and you can step through the code in a debugger and not see a billion reflective invocations of methods. And I'm hoping I can make the API of my Handler convenient enough that you don't regret that it's not annotation magic-based.
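As a rough illustration of the routing part (a toy sketch, not the parent's actual code): matching a Rails-style pattern like `/users/:id` against a request path and folding the captured segments into a params map only takes a few lines of non-magical Java.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy Rails-style route matcher: ":name" segments become captures,
// literal segments are quoted into the regex verbatim.
public class RouteMatcher {
    private final Pattern pattern;
    private final List<String> paramNames = new ArrayList<>();

    public RouteMatcher(String route) {
        StringBuilder regex = new StringBuilder("^");
        for (String segment : route.split("/")) {
            if (segment.isEmpty()) continue;
            regex.append("/");
            if (segment.startsWith(":")) {
                paramNames.add(segment.substring(1));
                regex.append("([^/]+)"); // capture exactly one path segment
            } else {
                regex.append(Pattern.quote(segment));
            }
        }
        pattern = Pattern.compile(regex.toString());
    }

    // Returns the extracted params, or null if the path doesn't match.
    public Map<String, String> match(String path) {
        Matcher m = pattern.matcher(path);
        if (!m.matches()) return null;
        Map<String, String> params = new LinkedHashMap<>();
        for (int i = 0; i < paramNames.size(); i++) {
            params.put(paramNames.get(i), m.group(i + 1));
        }
        return params;
    }

    public static void main(String[] args) {
        RouteMatcher users = new RouteMatcher("/users/:id/posts/:post_id");
        System.out.println(users.match("/users/42/posts/7")); // {id=42, post_id=7}
        System.out.println(users.match("/users/42"));         // null
    }
}
```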
Also - I know there are various attempts at easier/less annoying Java web things like vertx and such, but they a) don't support virtual threads yet, and b) many of them are small enough I'm worried they'll rot eventually. Jetty meanwhile isn't going anywhere.
> Also - I know there are various attempts at easier/less annoying Java web things like vertx and such, but they a) don't support virtual threads yet, and b) many of them are small enough I'm worried they'll rot eventually. Jetty meanwhile isn't going anywhere.
Vertx is arguably larger than Jetty or at least NOT smaller. Vertx is used by Quarkus and other Redhat/IBM projects e.g. if you use the newer Keycloak you're using Vertx. It's been around for a long time and has received constant updates.
Vertx is also one of the top performers in the TechEmpower benchmarks, keeping Java in the race for those that care.
From my personal experience, the first example doesn't really happen: thankfully I rarely see people randomly throwing annotations at methods/classes hoping one of them will stick. Most of the time, annotations are self-explanatory.
However, there are some real issues with annotations in Spring/Java:
- The application will sometimes run just fine without the annotation processor/interceptor. Think of `@EnableScheduling` in Spring: you won't know that `@Scheduled` is not working (because `@EnableScheduling` is missing) until you observe that the method is never executed. In this case static code is a clear win.
- Annotation order: not all annotation processors/interceptors in Spring support specifying order, and annotation order in the code doesn't matter: it is lost during compilation. Good luck figuring out what is applied first in a method with `@Retry`, `@Transactional` and `@Cached` - will the retry be executed within the transaction, or will each retry have its own transaction? This, too, is easily solved with static code instead of annotations.
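A minimal sketch (plain Java, no Spring) of what "static code instead of annotations" buys you for the ordering question: when retry and transaction are explicit wrappers, the nesting is visible right in the source. `withRetry` and `withTransaction` are hypothetical stand-ins, and the "transaction" here just prints markers instead of talking to a database.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Explicit composition: the source tells you unambiguously whether each
// retry gets its own transaction or the whole retry loop runs inside one.
public class Composition {
    static <T> Supplier<T> withRetry(int attempts, Supplier<T> body) {
        return () -> {
            RuntimeException last = null;
            for (int i = 0; i < attempts; i++) {
                try { return body.get(); }
                catch (RuntimeException e) { last = e; }
            }
            throw last;
        };
    }

    static <T> Supplier<T> withTransaction(Supplier<T> body) {
        return () -> {
            System.out.println("BEGIN");
            try {
                T result = body.get();
                System.out.println("COMMIT");
                return result;
            } catch (RuntimeException e) {
                System.out.println("ROLLBACK");
                throw e;
            }
        };
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Supplier<String> flaky = () -> {
            if (calls.incrementAndGet() < 2) throw new RuntimeException("boom");
            return "ok";
        };
        // Transaction inside retry: each attempt gets its own transaction.
        String result = withRetry(3, withTransaction(flaky)).get();
        System.out.println(result); // prints BEGIN, ROLLBACK, BEGIN, COMMIT, then ok
    }
}
```

Swapping the nesting to `withTransaction(withRetry(3, flaky))` gives one transaction around all retries, and the difference is visible at the call site rather than buried in interceptor ordering rules.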
As for compile-time vs. runtime errors: personally I don't really care, as long as there is any error (which is not always the case in the first example) during the build/test/init/assembly phase. When I'm writing SQL queries in the code, I get SQL parsing/compilation errors at application runtime - but that's fine, because I've written the SQL against the DB's execution engine. When I'm writing a Spark SQL job, I get errors during the query-planning phase - and that's also fine, because I'm writing code against Spark's execution engine. Writing annotations against an "annotation execution engine" (annotation processors/interceptors) doesn't seem any different or wrong in principle. Although, there are things that could be improved.
Stacktraces: there are a few additional interceptor method calls in the stacktrace when annotations are in use; however, most of the complexity comes from the library/framework structure and the developer's familiarity with it. Spring covers a lot of use cases, thus it has its share of complexity. I'm not sure if "Spring without annotations" would be noticeably easier to work with, although I assume feature parity with Spring (MVC) is not a goal of this project, so it will probably be easier to understand.
> From my personal experience, the first example doesn't really happen:
It happens every day, all day. Other developers don't see it, because it doesn't even compile.
> I rarely see people randomly throwing annotations at methods/classes hoping one of them will stick.
I see it, plenty, in other projects. I have done it, trying to work around an arbitrary 3rd party restriction. I can't get something to work and see some old SO and hope it applies. Look at the arcane combination and through trial-and-error figure out what is relevant...then work backwards. Sometimes you can figure out what's going on without a blog explaining the opaque behavior or the overly-simplistic documentation explaining what something does...or used to do or doesn't always in some specific set of conditions, mainly mine. All you have to know is every bit of how each annotation works and why, and how that interacts with every possible element of your compilation and runtime. This is the unrealistic state of "understanding" annotations for the vast majority of java developers...or maybe it's just me and everyone I work with.
>When you use the wrong annotation, nothing happens during startup of your application... but you don't even know at which point in time something should happen. Or if.
Probably worth noting that this depends entirely on the annotation - they can (and many do) run at compile time, and can provide very strong safety guarantees.
Many (most? all? I dunno) of the bloated server-side DI frameworks do not do this though, and I 100% agree that it can be a truly awful experience.
> you get an enormous stacktrace from some code you have never seen and didn't even know was run, or why it was run, from a thread you have never seen, complaining about wrong parameters that you have never seen, passed from another method you have never seen
That's more typical of a framework than a library. This is one of the areas where the difference becomes clear: with a library, you will see your own methods in the call stack somewhere, and be able to tell what got called and why; with a framework, it's often not so simple.
Spring WebClient's pseudo-streaming interface is atrocious. Java has switch expressions with pattern matching, but no, they had to create something completely perpendicular to the language. "Hey, let's reinvent the wheel, but it will be our better wheel" - and now we have a square egg.
The Java programming paradigm is dead in the modern day. You needlessly write way more code than is needed, all without any benefit, relying on third-party libraries for core functionality that may or may not have egregious vulnerabilities like Log4Shell.
The build system also takes needlessly long, because of how many hacks it involves (Groovy, a language designed to fix Java, being used in the Gradle build system, which runs a whole JVM just to compile your code).
Hardware is cheap these days, developer time is more expensive. Use Node or Python.
Gradle got WAY better with Kotlin DSL. Groovy is a great scripting language with incredibly powerful runtime meta-programming features. If you used Groovy for a project, you could get an experience close to Ruby or Python.
However, Groovy is also a great scripting language with incredibly powerful runtime meta-programming features. In short, that means you're stuck inside often stripped down (Jenkins) or ill-conceived DSLs that you either know by heart, know someone who knows it, or are in for a world of hurt trying to do basically anything.
With the Kotlin DSL - even though the stack got even more complex, including an embedded scripting host inside an embedded JVM and all that - you get auto-completion in the IDE and "go to definition". That resolves half the problem with Groovy, which is discoverability. The other half - the byzantine object model and Perl-like "there's more than one way to do it" - won't go away just by changing the scripting language. Still, it's better.
There's also the thing about versions and compatibility between Gradle versions, Kotlin versions, and plugin versions. Finding the right combination takes so much effort that seemingly nobody ever bothers to update the build tools in projects. Unless there's a "build engineer", which sounds kind of dystopian, but after working with Gradle for half a year I have to admit that managing builds properly is indeed a full-time job.
> Hardware is cheap these days, developer time is more expensive. Use Node or Python.
It's not that simple. Sometimes latency matters. Sometimes you really need something to happen in 20ms. But then you won't be using Java - or, as pjmpl would say, you wouldn't use the stock JVM, but something that provides AOT compilation.
There are so many dimensions you need to consider when choosing an implementation language for something, and Java can be an optimal choice in many situations. As unfortunate as I think it is, the problem is not technical, but cultural. People who started programming in Java will have this rigid, rule-based concept of what you can and cannot do. Their minds are semi-permanently stuck in Java-like mold. Since Java is so all-encompassing, they are never confronted on their beliefs. Echo chamber. Cargo culting. Prevalent in all monocultures. The problem is when they switch to another language and infect it with beliefs that stopped being well founded due to the changed circumstances. They do their best to fit the new language into Java-esque shape, no matter how pointless it seems.
Kotlin is the biggest victim of this. There's a clearly visible divide between the "make Java great again" crowd and the "it's actually a nice language, why not use it for what it is?" crowd. They produce drastically different libraries, have very different goals, and do very different projects. Different values, methods, and outcomes. In itself, that's maybe OK, but... try being stuck in the wrong camp... and staying sane.
I encounter this issue pretty often and it makes the code review experience miserable. It's not blocking any work, but it is frustrating enough that for any new project I would try to avoid using GitLab.
And what a typical issue that is. Opened a year ago, initially some requests for internal clarification that went nowhere, plans to be fixed by an upcoming release, then loads of metadata and bot comments with minimal actual work happening, followed by pushing back the release they want to fix it in, followed by reducing its priority, deciding that actually they don't need to fix it and can just throw it on the backlog, more comments about users being affected, and then today a comment that it got brought up in a discussion here:)
Hey - I'm the PM for our Code Review group. Sorry to see you're running in to that issue, it's been a real challenge for us. If you're running in to it pretty often, it'd be great if you could provide some details on what steps you're taking (maybe even record a video) to help us figure out what's going on. For lots of bugs, the hardest thing for us to do is reproduce them... so if you've got a reliable way to do that, it really helps us prioritize things to fix.
> However, I can still reproduce it under certain conditions:
Separately, that's not much of a defense; inconsistent errors are still problems for users. Although, I suppose gitlab only fixing bugs that are 100% reproducible would explain a lot about their product.
One major disadvantage of triggers is the inability to do canary deployments and the vastly increased complexity of rolling deployments. When the SQL code lives within the application, we can trivially run multiple variants of that code simultaneously. Running an alternate version of a trigger for e.g. 10% of traffic is way harder.
What I would recommend instead is making use of CTEs (Common Table Expressions): DML (modifying queries) inside `WITH` is allowed, and by taking advantage of the `RETURNING` keyword in both `UPDATE` and `INSERT` we can execute multiple inter-dependent updates within a single query.
With such an approach we can trivially run multiple versions of an application in parallel (during deployment, for canary deployments, etc.) and we keep a similar performance advantage: a single roundtrip to the database.
An additional advantage is the fact that there is only one statement, which means that our query will see a consistent database view (with the very common read-committed isolation level it is easy to introduce race conditions unless optimistic locking is used carefully).
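A hypothetical sketch of the pattern (PostgreSQL; table and column names invented for illustration): two inter-dependent updates plus an audit insert, wired together in one statement via writable CTEs and `RETURNING`.

```sql
-- Debit one account, credit another, and record the transfer,
-- all in a single statement / single roundtrip.
WITH debited AS (
    UPDATE accounts
       SET balance = balance - 100
     WHERE id = 1
 RETURNING id
), credited AS (
    UPDATE accounts
       SET balance = balance + 100
     WHERE id = 2
 RETURNING id
)
INSERT INTO transfers (from_account, to_account, amount)
SELECT debited.id, credited.id, 100
  FROM debited, credited;
```

Because it is one statement, all three writes see the same snapshot, which is exactly the consistency property described above.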
This is true for SSH keys, but not for all data on macOS, e.g. if you run `find ~/Library/"Application Support"/AddressBook` the OS will ask whether you want to give iTerm2/whatever access to your contacts (unless you have granted it before). I'm not aware of a way to create additional sandboxed "folders".
Also, some applications on macOS are sandboxed; IIRC Mail is one of them. So are some (all?) applications installed from the App Store. That's the reason I prefer installing applications from the App Store: they seem to be at least somewhat sandboxed.
For development, I try as much as possible to leverage remote development via [JetBrains Gateway](https://www.jetbrains.com/remote-development/gateway/) and [JetBrains Fleet](https://www.jetbrains.com/fleet/). VSCode also has remote development, but they explicitly assume that the remote machine is trusted (in the security note in the remote extension's readme). In the case of JetBrains tools I have not seen any explicit declaration of whether the remote host is trusted (as in: if the remote machine is pwned then we may as well let it pwn your personal machine), but at a glance it seems like there are minimal precautions (if you run a web application and open it in a browser, Gateway will ask if you want to be redirected to the browser, etc.)
Probably the best scenario for such remote-development clients on macOS would be to put them in the App Store: this way they could leverage sandboxing, and in the case of a thin client, the sandboxing likely won't limit functionality.
A major benefit of Airflow is the number of already implemented integrations: importing data from GCS to BigQuery, copying data from Postgres to GCS, KubernetesPodOperator and so on. IIUC, with Temporal you get only workflow management, which can be easily integrated with any application to implement business logic. And this is great, because implementing a business workflow in Airflow is even more awful than Airflow itself.
But for any ETL or plumbing job Airflow is IMO better due to existing integrations.
You are correct, that's the main difference. I wrote some more on the topic of
data workflow engines like Airflow vs general-purpose application development workflow engines like Temporal here: https://community.temporal.io/t/what-are-the-pros-and-cons-o...
I used Ansible in the past for the exact same purpose and it has one major flaw: Ansible is imperative. What I mean is: if I add a line in a config file and want to roll back, then I have to manually handle the revert (create playbooks with a `delete` flag, etc.). With Nix you get this for free.
Also, with Nix you can trivially create an image for your configuration (even with slightly different options, e.g. only enable ssh on 0.0.0.0 for a fresh install, but disable it after the first config apply), which I find useful when working with a cloud.
> A compromised remote could use the VS Code Remote connection to execute code on your local machine.
So I would say that it might be a bit harder for an attacker to gain access to your local machine, but you should not rely on that, because it's more like security by obscurity.
Darn. Maybe the solution is to use vs-code client in the browser? Like vscode.dev or https://github.com/coder/code-server ? It limits what keyboard shortcuts and extensions are available, but at least it's in a secure sandbox on the client side.
As long as you are creating web applications, browsers are pretty good at limiting the blast radius of a single attacked website. Well, at least until the attacker discovers that they can inject some fancy phishing into a trusted site.
With a local development environment it is a bit different, because unless you are running builds/tests etc. in a container/VM/sandbox, the attacker has access to all of your files, especially web browser data.