Hacker News | jgrant27's comments

Plenty of rationalization of how Java (the language) is "subjective for developers" in terms of productivity and happiness. From a non-engineer view, though, let's be real: nobody ever got fired for picking Java. This was just a decision made in favor of managers over the poor engineers who will have to maintain a large Java codebase over time.


As an engineer, Java would be one of the languages I would choose for maintaining a large codebase over time. It is far better for that than most other popular languages.


If that is because of static types then I might agree with you; however, once Spring and other reflection-heavy frameworks are added to the mix, I'm not so sure. Java's biggest problem is not the boilerplate but the culture of over-engineering.


Luckily, the culture of over-engineering has shifted significantly thanks to modern approaches for designing Java-based systems.

The "Triple Crown" of Java frameworks (Spring Boot, Quarkus, and Micronaut), along with smaller contenders (Dropwizard, Javalin, SparkJava, and a few others), has helped developers focus on business code rather than "design patterns", because these frameworks embed many of those over-engineered concepts into their opinionated approaches.

Developers are no longer thinking too much about layers, and just writing components by following the guidelines and best practices of these frameworks. The final result is pretty similar among all of them, especially for microservices.


I like the sound of that, but I would definitely note that this requires _significant_ technical leadership. If you try to hire Java developers, 95% of the resumes you're going to get are from people who learned Spring the bad way and have very little experience shipping efficient or reliable code.


Can I ask what you mean by learning Spring the bad way? I'm new to the industry and starting a Java role in Jan.


The general problem I’ve seen is complexity and action at a distance - basically many things where anyone trying to figure out how it works or make changes has to understand a lot of code in multiple places.

A mundane version of this problem is dependency management: every Spring app I know is constantly patching CVEs, almost always in optional code for features they aren’t even using, and sometimes upgrades are not trivial because you have to push around a lot of framework code. Node is similar but at least the tools are better.

None of that is specific to Spring, and I’m sure someone will pop up to say that’s doing it wrong but it’s been 100% of the work I’ve seen from multiple contractor teams so perhaps the way to characterize it is that the community doesn’t have enough culture of pushing back against support burden or complexity.


In my long-time Java shop we're doing pretty much the opposite: using fewer frameworks and libraries to minimize the time spent constantly upgrading those frameworks and their myriad dependencies for exposed vulnerabilities.


These things are changing - Quarkus and Micronaut are gaining a lot of popularity, and both use compile-time DI, not reflection, and Quarkus in particular is designed to target GraalVM, and Graal does not like reflection at all.
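To illustrate what compile-time DI amounts to, here is a framework-free sketch (all names invented for illustration) of roughly what a Micronaut/Quarkus-style framework emits: ordinary constructor wiring, resolvable at build time, with no reflection at runtime.

```java
// Hypothetical sketch: the shape of code a compile-time DI framework
// effectively generates -- plain constructor calls, no runtime reflection.
interface Greeter {
    String greet(String name);
}

class DefaultGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

class GreetingService {
    private final Greeter greeter;

    // The dependency arrives through the constructor; the "container"
    // is just generated code that calls it.
    GreetingService(Greeter greeter) { this.greeter = greeter; }

    String welcome(String name) { return greeter.greet(name) + "!"; }
}

public class GeneratedWiring {
    public static void main(String[] args) {
        // Equivalent of the generated bean graph: new calls in dependency order.
        GreetingService service = new GreetingService(new DefaultGreeter());
        System.out.println(service.welcome("world")); // prints: Hello, world!
    }
}
```

Because the whole graph is ordinary code, GraalVM can see and compile it without any reflection configuration.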

Spring is the fucking devil though.


Agreed. Even if we buy into the argument that Java is just wordy and slow to work with, once we're dealing with long-lived code that needs maintenance, that's less of a concern than code that's hard to decipher, so the trade-off seems like a good one.


Yep. The wordy nature of Java is just giving you more info to work with when you dive into code you haven't seen before. Java is also a lot less wordy than it used to be, and if you use the right tooling you don't need to actually type everything by hand anyway.
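For what it's worth, a small sketch of how much of the old ceremony is gone in recent Java (records since 16, `var` since 10, switch expressions since 14):

```java
// Sketch: a record replaces the getter/equals/hashCode/toString boilerplate
// of a classic bean with a single line.
record Point(int x, int y) {}

public class ModernJava {
    public static void main(String[] args) {
        var p = new Point(3, 4);                    // local type inference
        var sign = switch (Integer.signum(p.x())) { // switch as an expression
            case 1  -> "positive";
            case 0  -> "zero";
            default -> "negative";
        };
        System.out.println(sign); // prints: positive
    }
}
```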

What matters most for maintaining large codebases that a lot of people work on is the type system and the tooling. A large ecosystem of good tooling exists for working with Java, and it is empowered by the static type system to do some very impressive things. Some of the .net stuff is probably close to that with Visual Studio and the ecosystem around it. Everything else is pretty far behind, including Python and Ruby and Go.

I know people who work on a massive Java codebase that is almost 20 years old and has over 1000 people currently working on it -- and thousands more have passed through that codebase in prior years. It's not a lot of fun. But if it was a Python codebase, it would actually be impossible.


The type system isn’t impressive. You can find usages and thereby get navigation and refactoring via excellent Java-centric IDEs. But it seems like that’s it(?). On the other hand you have to constantly exercise your code because of lurking NPEs. Although hopefully we can get good annotations through third-party tooling (Facebook’s looks good).


I do like Kotlin's nullable types for that reason alone. At least Java has far better NPE messages now.

https://www.baeldung.com/java-14-nullpointerexception
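The feature in that link (JEP 358, Java 14, on by default since 15) is easy to see with a short sketch:

```java
// Sketch: with helpful NullPointerExceptions (JEP 358), the message
// pinpoints exactly which access in a call chain was null.
public class NpeDemo {
    record Address(String city) {}
    record User(Address address) {}

    public static void main(String[] args) {
        User u = new User(null); // no address set
        try {
            System.out.println(u.address().city().length());
        } catch (NullPointerException e) {
            // The message names the null link in the chain, along the lines of:
            //   Cannot invoke "NpeDemo$Address.city()" because the return
            //   value of "NpeDemo$User.address()" is null
            System.out.println(e.getMessage());
        }
    }
}
```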


That's a lot on its own, and being able to understand the argument and return types solely by signature is a big productivity boost compared to environments where that isn't possible.


Java can be hard to decipher in its own way. Mostly because of a culture of favoring just-in-case indirections, a propensity towards making deep call stacks as the codebase evolves (maybe because IDEs make that navigation tolerable, although it never makes it easy to see the whole context), and pretty imperative constructs outside of streams and lambdas.

But on the whole Java is not bad. I think it’s perfectly OK. I am more afraid of Java culture.


I disagree that it's a tradeoff, because those two are orthogonal. Java is often hard to decipher because there are many things that can happen dynamically outside the visible code path that affect execution logic, like annotation processing, classloading, dynamic bytecode manipulation, AOP, dynamic proxies, reflection, etc.


Most of those techniques are equally popular in Ruby, but without the guardrails that at least make them cumbersome enough to make you think "is this really how I want to approach this?" The pitch for Ruby has always been productivity because it's terser, so I'm trying to be charitable here and grant them their greenfield case.


In my experience (and our experience at TransFICC building middleware for financial trading systems, which is extremely throughput and latency sensitive), Java is the fastest language to work with simply because the tooling support is second to none.


> code that's hard to decipher

I've worked on a lot of codebases that appeared to be put together by developers who took that as a challenge - IME Java developers LOVE global variables (but they call them "public static" to make themselves forget they're using global variables).


I never understand java hate. The hate seems isomorphic to: "i hate types and i dont use a real ide"


Have you compared and contrasted Java and C# (especially with regards to let's say larger frameworks)?

Both of those put a strong emphasis on types, but only one of those (it's Java) is the sort of language that really attracts developers that love types like AbstractWidgetBoundaryFactoryWrapperFactory.

I can definitely understand the "Citizen Kane" effect of looking at a modern Java framework and going "yeah looks just like any other language, except with types and IDE support". Java didn't get there from nowhere. It got there after 20 years of being overly verbose and frustrating to work with.


Then you haven't seen some old .NET codebases. Both languages produced similar monstrosities around the height of the OOP hype, but neither does anymore.



My only real gripe with the language is JRE versioning and updating, and that's the only legitimate complaint I've seen from others. It seems you can sidestep that issue by baking the JRE into your executable.

Honestly, I think the only reason C# is winning at all right now is because MS controls Windows and can silently keep .NET infrastructure up to date all the time. There must surely be a parallel universe where Oracle created their own OS where they enjoy the same benefits as MS.


Search for the memo where Google employees were saying, "we the developers of Google are losing to YouTube, as YouTube engineers use Python and we use Java. And iteration in Java is slower than Python." That was before Google purchased YouTube.

Edit: Python. The point remains the same.

Link: https://news.ycombinator.com/item?id=16674628


YouTube was based on Python from the very beginning. And since then has migrated more and more pieces to Golang. PHP was never a major component of their stack, if it was ever used at all.

Search for the memo where Twitter migrated to the JVM. That one actually did happen.


The link from your linked page states that Google were using C++, not Java. Which is correct. I was there. Writing web servers in C++ is a good idea if your web server is 1% HTML rendering/UI and 99% complex algorithms over large binary data structures i.e. a search engine. It's not so great if your web server is 95% UI.



Are you reading these links carefully enough? The first link you posted was talking about Google's use of C++, but you presented it as an argument against Java and in favour of PHP (which YouTube weren't using). This second source also doesn't say Java vs Python was the problem. They say:

"We're constrained on UI/Java development resources", "we have 1.5 engineers working on UI things and that is slowing us down" and "I think if we had one more good Java/UI engineer we'd be kicking butt vs YouTube".

So the problem was a lack of people ("resources") assigned to the UI side, i.e. too much of their headcount is being consumed by the C++ infrastructure leaving very little time for UI-centric work like social features. Google Video's problem is stated here to be too little Java development, not too much.

As someone who was there at the time and who read the internal post-mortem written by the Video team, Google Video vs YouTube wasn't primarily about implementation language. It was pretty much as the emails you cite say:

"They're cranking interesting features a lot faster than we are, but don't likely have a backend that will scale or a plan to make money. We, otoh, have these"

The YouTube guys did the now-classic VC play of focusing on growth hacking without any idea of how to pay for it all beyond being acquired. At the time Google bought them, the site was close to total collapse; the project to stop it running out of bandwidth was literally called BandAid. The Google Video team was also small but focused more on stable and scalable infrastructure, and product-wise they'd been chasing professional content, as they couldn't see any obvious path to monetizing hobbyist-produced video. In turn that pushed them away from the Flash plugin towards a more HD-video-oriented custom plugin, which hurt adoption. These were clearly the wrong calls, but YouTube didn't really have a plan either. In the end both sides needed each other.

One of the first things done after the acquisition was to start moving core YT functions like video and thumbnail serving off Python and onto the Google C++/Java infrastructure. The web server UI, on the other hand, remained in Python for a long time so their (social) feature throughput wouldn't be disrupted. I think that codebase did eventually stop scaling and got rewritten, but my memory starts to fail there and I can't quite recall what the state was back when I left.


Youtube was first written in PHP.

> Before Google acquired Youtube, the majority of the code was initially written in PHP, but there were many restrictions and clutters in PHP at the time, so after acquiring Youtube by Google, they moved to Python as one of the core parts of its backend programming.

Python started after Google's acquisition. It's not that YouTube was "never written in PHP".

https://ourtechroom.com/tech/technologies-programming-langua...


Rate of iteration changes with the size of the codebase, and not in PHP's favor...


And yet, YouTube rewrote their stack in C++.


The more common and recent hate I hear is that so much has been layered on top of it that it's difficult to manage or really understand what's happening in the code. Mentioned in other comments, but things like loads of annotations, Spring abstractions, and so on.


Recently coming from a Python/Flask codebase to a Java/Spring codebase, I would say that the amount of "magic" is not all that different.

It's just that with Spring, I can go to an extremely well-written user manual, or to StackOverflow, and get my questions answered. With the Python/Flask codebase, I had to spelunk my way through all the layers of random libraries the original developers slapped together, in an attempt to reproduce something resembling out-of-the-box Spring.

I suspect that those original developers had fun making all of those custom choices back in the beginning. I don't know for sure though, since of course they've all left the company since then. The company chose to migrate because it wasn't maintainable once that tribal knowledge left.


That sounds more like a microframework vs a kitchen sink opinionated framework issue though than a language issue. eg Flask is a DIY collection of libraries with your own architecture vs say Rails or Django where you have 95% of those decisions made for you and baked in.

I'm sure there are Flask style frameworks in Java land too. And there was a port of Spring for Python a while back too :)


Exactly this.

Had to stand-up a server in Java recently.

At some point I realized I was doing more programming via XML files and Spring decorators than via actual Java code. Mostly because Java itself isn't a great abstraction for a CRUD server.


If you did it recently, you should not have needed XML. Even a decade ago, XML-based configuration was on its way out. The transition from Spring to Spring Boot enabled us to use regular code to configure our injectors, for example.

I haven't touched XML for nearly a decade as a full-time Java developer.
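For anyone curious what "regular code to configure our injectors" looks like, here is a framework-free sketch in the spirit of Spring's @Configuration/@Bean style (class and bean names invented for illustration):

```java
// Sketch: wiring expressed as ordinary, typed Java methods instead of XML.
// This is the shape of a Spring @Configuration class, minus the framework.
class DataSource {
    final String url;
    DataSource(String url) { this.url = url; }
}

class UserRepository {
    final DataSource dataSource;
    UserRepository(DataSource dataSource) { this.dataSource = dataSource; }
}

public class AppConfig {
    // Each factory method plays the role of a @Bean definition:
    // navigable, refactorable, and checked by the compiler.
    static DataSource dataSource() {
        return new DataSource("jdbc:h2:mem:demo"); // hypothetical URL
    }

    static UserRepository userRepository() {
        return new UserRepository(dataSource());
    }

    public static void main(String[] args) {
        System.out.println(AppConfig.userRepository().dataSource.url);
    }
}
```

Unlike an XML bean definition, a typo here is a compile error, and "find usages" works across the whole wiring.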


I've only ever developed Java at places that had their own infrastructure for everything, so this question may sound uninformed, but isn't Maven still the de-facto package manager used for most Java applications and isn't Maven configured with XML?


Well, while Maven may be a bit bigger, Gradle has plenty of users as well, and it uses Groovy or Kotlin for configuration.


Yes Maven uses XML.


Yes, it uses XML for all the wrong reasons: as a config and even scripting format (hello, ant plugin), when XML is meant for markup/text. Fscking Maven doesn't even allow basic XML/SGML features such as entities/text macros. And yes, Maven's pom.xml is used for package metadata on maven-central and elsewhere even if you don't use Maven directly.

BUT I have to say, every project using gradle as alternative so far has receded into bizarre ad-hoc deployment scripting. Maybe that's just because gradle can do stuff that was hidden away in jenkins build files, but still ...


Most of the pieces we were using allowed for configuration via XML or code.

Predictably, the team therefore used both.


XML is pretty rare in Java code written this side of 2010. Not that it doesn't exist, but the whole spring mess is not something you really have to touch to set something up in Java.

I usually enjoy Spark[1] for bootstrapping a simple REST-like interface. There are other options, but in general, you don't really need glue-languages at all if you stray away from old-fashioned EE-style frameworks.

[1] https://sparkjava.com/


[flagged]


Yikes! You can't attack someone like this on HN, regardless of how wrong they are or you feel they are. Obviously it's completely against the site guidelines.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


I can only tell you my own lived experience, and that was it.

It is possible there's a better way to do these things my team was unaware of (I was aping an existing server and replacing its RPC handlers with our own, not starting from scratch), but yep. XML and I can't be arsed to remember if they call their @-forms "decorators" or "annotations" because I use too many languages that have some variant of them to keep track.

Point is, I wrote more of both than actual Java, and the annotations make debugging much harder than slapping a tracing debugger on.

ETA I think the peer comments and this one underestimate the stickiness of methodologies in old languages. The fact that there are better ways to do it now is irrelevant... Because there was a previous best practice that is no longer a best practice, that best practice lives forever in the code bases and shared knowledge passed by peer review of existing institutions. Hell, I can go to my bookshelf and pull down two Java tutorials that show how to do an RPC server with Spring and XML... Once it's committed to paper, it lives forever. One can make the assertion that a team should be constantly developing their process, but management is a lot more comfortable with standing up a new server that looks exactly like the old one than with trying a new methodology that is unproven at this company. If for no other reason than so we don't have multiple ways to approach debugging servers depending on what team set it up.


Do those people have any understanding of what their computer is? It is layers upon layers of abstractions, plenty in hardware and even more in software. In fact, the only tool we have for actually solving complex problems is (good) abstraction. So those haters should probably find better arguments.


Large codebases in dynamic languages are the pits. Determining if some bit of code is used and how is a probabilistic exercise. Eventually you learn to give up or YOLO and see what happens.


Everyone always says that's fine if you have tests, but that's never actually true.


If I write the tests, maybe. Some tests make refactoring even harder. If I couldn't already refactor my own code we're also in pretty bad trouble. If I've written more than 10% of the tests overall, things have already gone badly.


And so we have a chicken and egg problem.

Thanks to verbosity, Java creates the very problem that it tries to help with.

Did you need to have that many programmers and that much code? Who knows. But once you have it, continuing with Java makes sense.


Verbosity is not a problem. The problem is being able to navigate and understand a large codebase. Terseness and dynamic typing are the enemy of that goal.


Attempts I've seen to quantify it have found that you hit a productivity peak in a team of 5-8 people. Then you need to add processes to avoid n^2 communication overhead. You don't get back to the same productivity until you have a team of 20-25 people.

If you've never worked on a small team, the productivity difference from staying small may not sink in. But they are real and large. And companies should not lightly cross that threshold.

I agree with you that, on a large team and in a large organization, terseness and dynamic typing are bad. But I don't agree that verbosity is not a problem. It absolutely is. It makes you have to go to large teams sooner.


Why is verbosity never mentioned as a negative for Go, when in fact Go is more verbose than Java?

Also, I really have a hard time believing that java would be significantly longer than any other mainstream language, it is at most longer by a small constant amount.



Modern Java isn't remotely as verbose at scale. The language is very different and the API+3rd party support is so vast you can usually build huge things with very little code.


20 years ago, the rule was that Java took an average of 10x as much code to say the same thing as a scripting language did.

In all the time since I keep hearing "modern Java this" and "modern Java that". But every time I venture into some Java, well, my limited experiences haven't fit what I was told about "modern Java".

Maybe I just haven't encountered modern Java?

As for the API+3rd party support argument, the productivity of third party libraries has been used as an argument for ages. I remember when it was being made about Perl with CPAN. My experience of it has always been that libraries make a great start to the extent you have a common problem. Which is wonderful for demos. But once you're in the weeds, you still have to write lots of code. Maybe there isn't a library for your unique needs. Maybe the standard library has bugs. Maybe you wrote code because there wasn't a library but now there is.

No matter how it comes about, you wind up writing code.


20x was always an exaggeration. It counted class boilerplate and compared things like hello world, which was always "decorated". At scale that overhead gets minimized. Java will always be a bit more verbose than some other languages; it's strict, and by definition that's more verbose. It also forces declarations (e.g. public). Again, a choice to keep things clear.

But that's not the code that takes time to write or read. In that department Java isn't more verbose by a significant amount. If you're the type of person who gets upset that every line ends with a semicolon, or that we need to repeat the word public for many methods, then sure, Java is verbose.

Looking at the body of a Java method I don't see something I can significantly cut down with TypeScript or Python.


> Maybe I just haven't encountered modern Java?

Even modern Java 18 codebases, with exclusive use of records instead of classes and with enhanced pattern matching enabled instead of old-school conditional blocks and switches, are still significantly more verbose than Scala codebases, especially if the latter use `cats` or `scalaz` for all traversal/mapping/folding logic. Java developers would just manually encode all these patterns as series of nested loops by default every time, because the required tooling, as well as the compiler support to aid it, is not available in the language.


Mapping/folding is possible with plain old java streams. So while scala can be smaller, the difference is not too significant in my opinion.
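As a concrete sketch of that point, a filter/map/fold pipeline in plain `java.util.stream`:

```java
import java.util.List;

// Sketch: filter, map, and fold with the standard Stream API -- no hand-rolled
// nested loops, and no third-party functional library required.
public class StreamFold {
    public static void main(String[] args) {
        int sumOfOddSquares = List.of(1, 2, 3, 4, 5).stream()
                .filter(n -> n % 2 == 1)   // keep the odds: 1, 3, 5
                .map(n -> n * n)           // square them: 1, 9, 25
                .reduce(0, Integer::sum);  // fold to a total: 35
        System.out.println(sumOfOddSquares); // prints: 35
    }
}
```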


Refactoring tools for Java are just so much better than what you can do in a plain text editor, or than what's possible when the language has no type annotations.


It's the (static) types, not the type annotations that matters in IDEs these days. Even in Java there are (very limited) situations where you can omit the type and the compiler infers it. And the ide handles it just fine. And in, say, Scala, which has more inference, the ide works fine (albeit not as well as Java but that's more down to maturity).


>This was just a decision made in favor of managers over the poor engineers that will have to deal with maintaining a large Java code base over time.

What's wrong with maintaining large Java code bases? Say what you want, but the language is built for that.

The worst code-bases I've seen tend to come from dynamic languages. Try to maintain hundreds of thousands of lines of python or ruby code over several years, and see how fun that is.

>From a non-engineer view though, let's be real : nobody ever got fired for picking java.

No. That's not a correct framing. Java is one of the few languages able to strike a great balance between performance, development productivity, debuggability, cross-platform support (i.e. it works just as well on Windows as on Linux), language safety, and library and framework ecosystem. Honestly, it's hard to come up with a use case where Java isn't a natural right answer.


I disagree. The alternative is some rockstar developer saying "let's pick Elm!" or "let's pick Elixir!" and then leaving the company after a few months. I'd take Java (or C#) over several other languages in a heartbeat. I'll leave the more esoteric stuff for the weekend, thanks.


I picked Elm once, and I managed to convince my boss to let me get rid of it 6+ months before I left. It turned out to be a good move! (Well... a not-that-bad move.)


Most of the flak that java gets is from people who have never used/maintained java codebase. I moved from java to python and I actually miss java. It’s much easier to navigate and refactor java code. Yeah, you can write code faster in loosely typed languages like python, but maintaining it is a different story.


At my workplace I would pick Java 8/10 times. I would only pick something else when Java is really not an option (e.g. Scala/Python for Apache Spark, Typescript for CDK).

Maintaining a large Java code base is very easy (I think much easier than e.g. Python). The tooling is awesome, there are countless libraries, and, last but not least, Java is the easiest to hire for.


I think nobody fires developers who pick a language the boss or whoever in power doesn't like. And the boss would just say: NO.


If you have used Rust in any real-world capacity on actual projects, you know that there are no "Little Books" when it comes to using Rust. It is an overly complex and unproductive language, as much as C++.


Absolutely not my experience. My experience is there is an initial large learning curve (2-3 weeks) and then you can be vastly more productive than any dynamic language, with better safety, a better library ecosystem, and better tooling.


Both are overly broad generalization. If you are working on a backend for a web app I would certainly not recommend Rust and it would be a big loss of productivity compared to Python or JavaScript, or even Go.

For a game it could fit in some part but you would be very limited by the ecosystem so I would also not recommend except if you really know what you are doing and have significant resources.

For low-level systems programming it is pretty nice, and maybe the best alternative right now.

For a compiler I would say it depends if performance is the main priority, in that case yes, otherwise no.

There are plenty of other cases of course and for each the answer would be different.

It is not really a general purpose language that you could use without worry for everything like Python, at least not yet.


Using libraries is so much easier in Rust compared to many other languages, thanks to rustdoc. I can trust the type signatures to show me the actual usage, I can trust the code snippets not to be outdated, and above all this is consistent for all code in the entire ecosystem.

Almost always when I use python libraries I have to get used to a new documentation format, learn how to navigate it and so on. And then it has to be detailed enough to make up for the lack of type signatures. I cannot tell you how often I have to skim a significant portion of my dependencies' source just to use them. (Though there are counterexamples, stdlib and numpy in particular actually have okay docs)


I would absolutely do web backends in Rust, with Go as the backup choice. Python doesn’t feature in my top 10, and nor does JavaScript without TypeScript to make it even moderately reasonable.


Wouldn't you say this about Go before Rust?


    Most applications I tend to see don’t even use concurrency because they are so small and simple that they don’t need it.
Sorry, but because of statements like this I just can't take your opinion of any language seriously, or believe that you have much practical experience at all.


Having once replaced a sixteen-core Go program with a single-process Perl script that completed the same task 4x faster, I think it's worth remembering that a lot of the time we (as a profession) manage to overengineer things.


    Have you debugged.it ?
Debugging seems to be the author's coding philosophy, which says it all.

    While I had over a decade of experience in other languages, such as Python, PHP, Java, etc. 
    I found it extremely difficult to wrap my head around Go.
This could be because the author's introduction to programming and most of their experience has been with some very problematic languages.

Maybe it's not that Go is a terrible language, but that the author is not a systems programmer who has worked with large code bases?


No, it doesn't get any better when the codebase is bigger. The way to figure out WTF went wrong in a large code base written in Go is:

1. Take the line number where the error was logged, likely somewhere very close to e.g. the main event handling loop, and completely disconnected from the place where the error was initially thrown. There is no stack trace.

2. Try to figure out which parts of the error string are constant (i.e. searchable), and pray for the developer to have been familiar with the issue so that the string is globally unique (corollary: When writing Go, make sure your error strings are unique, introduce some form of variation if you're throwing similar errors, even if it's just punctuation or synonyms, to make sure the throw site is identifiable).

3. Try to guess what the flow between those two sites could have been.

The lack of proper stack traces (or even the line number where the error was thrown) on errors is one of the most insane effects of this error-handling approach. (The other is hard-to-read code, because the actual logic is hidden in 3x as much error-handling boilerplate.)


I’m not sure if I buy that. Did you read the article? The author is expressing explicitly what’s wrong in many areas and why they’re useful. I don’t like rebuttals like this because it then becomes a self-fulfilling prophecy of Go-lovers avoiding criticism from outside because “They won’t get it due to their background”. Great, now we’ve just created an echo-chamber. It’s good to get outside perspective and debate. I’m sure many Go experts would like to understand where they could improve - without losing Go’s core minimalism.

Author’s tone could be better but they’re just being straightforward that they don’t like Go and specifically why. Totally ok in my view.


I think this criticism is very unfair; if there is one thing Java is (in)famous for, it's large codebases. Kafka is written in it. I myself don't use it, but you need to substantiate your argument.

Is there any reason you believe Python and Java are "very problematic"? Isn't every language problematic in its own way? Are they more problematic than C, which breeds footguns? Or what are we comparing to?


Maybe, but the author has over 20 years of professional Lisp experience, and there are interesting points made for anyone in that category. To be fair, though, did you read the post? The intended audience is much wider than you're assuming.


The irony of this is next level, after all Microsoft has been threatening innovation for decades.

This sounds more like their typical corporate propaganda to get workers back onto their campuses and under the whip of their middle-management class.


After using Rust for a few years professionally it's my take that people that really want to use it haven't had much experience with it on real world projects. It just doesn't live up to the hype that surrounds it.

The memory and CPU savings between Go and Rust are negligible in practice, no matter what people might claim in theory. Meanwhile, the side effect of making your team less productive by using Rust is a much higher price to pay than just running your Go service on more powerful hardware.

There are many other non-obvious problems with going to Rust that I won't get into here but they can be quite costly and invisible at first and impossible to fix later.

Simple is better. Stay with Go.


Agreed. My organization has been a great testing ground for comparing Go vs Rust service development. The teams that spun up web services in Rust have almost uniformly had poor experiences. In addition to Rust's steep learning curve, the relatively feature-poor standard library (you have to pull in a third-party package to create a SHA256!), and the instability of best practices/tools around service writing, in one case lengthy Rust compile times actually increased the time to resolve an incident. We've largely reached a consensus that all new services should be written in Go.

I don't see Rust having much of a place in web services development until there's years of improvements in place. There's plenty of other potentially appropriate places for Rust replacing systems code.


> (you have to pull in a third party package to create a SHA256!)

Nitpicking here, but this is by design; it's also true for datetimes and random numbers. It isn't a fault, it's a different packaging philosophy.

I agree with the rest: the good things about Rust just don't matter as much when developing bit-shoveling HTTP services, which is what 99% of backend work seems to be nowadays.


Just out of curiosity - what kind of service was it? My experience with web services (API and websockets) has been great with Rust and actix, so I'm curious if it might be a difference of the work that needed to be done.


Explicitly managed memory is useful for handling buffers. Everything else is peanuts anyway and could use a GC for ergonomics reasons. That being said, some really prefer the ergonomics of working with Result and combinators compared with the endless litany of "x, err := foo(); if err != nil". IMHO there is still room for significant progress in this space; neither Rust nor Go has hit the sweet spot yet.


My experience differs quite a bit. I did a bit of production code in Go and a bit of Rust as a hobby, plus one production Rust service. I guess it might depend on the kind of problems you work on, but for the most part I don't think my Rust code is that different from Go. Definitely more concise. I admit there are times when I have to spend more time thinking about how to implement a certain thing, but honestly, if you don't need raw performance you can almost always get away with one of the smart pointers and cloning (or just cloning?). So I don't feel I'm much slower writing Rust, and I'm happy to have more compile-time checks.

I don't think my experience is something isolated, either. Here is, for example, a quote from one of Microsoft's employees:

> "For the first week or so, we lost much of our time to learning how borrows worked. After about two weeks, we were back up to 50% efficiency compared to us writing in Go. After a month, we all were comfortable enough that we were back up to full efficiency (in terms of how much code we could write)," writes Thomas.

> "However, we noticed that we gained productivity in the sense that we didn't spend as much time manually checking specific conditions, like null pointers, or not having to debug as many problems."

https://www.zdnet.com/article/microsoft-why-we-used-programm...


This is highly project specific. Go is not suitable for everything. Rust is designed as a C++ replacement, not a language for writing backends, even though a whole lot of effort has been put into that space. Go is very good at writing backends; Rust is very good at replacing C++. Everywhere else the waters get much muddier.


Obviously this is a personal preference, but I prefer Rust for web services. And so I have a question: do you have experience writing web services in Go and/or Rust? I often wonder what people miss when writing Rust-based web services.

Recently I even took a shot at a todo-backend[1] implementation in Rust[2], and it honestly doesn't look that different from the Go versions.

Granted, the todo-backend spec is very, very simple. I would prefer to also include stuff like authentication/authorization and maybe even multi-tenancy for a better comparison. But when I'm writing this kind of Rust code I often wonder: what makes Rust so unergonomic for other people?

  1. https://todobackend.com/
  2. https://github.com/drogus/todo-backend/blob/main/src/main.rs


Rust async is not as simple to use, and the ecosystem is much smaller and segmented across async-std and tokio.

A good backend stack requires a rich ecosystem of various connectors to databases, cloud services, payment services, frontend stuff like server side rendering, graphql etc.


> the side effects of making your team less productive by using Rust is a much higher price to pay than just running you Go service on more powerful hardware.

This entirely depends on the ratio of development effort to deployed instances. At one end of the spectrum, lots of developers work for years on a system which is only deployed on one machine; obviously you optimize for developer effort and buy a single massive machine. At the other end of the spectrum, a few developers work for a short time on a system which is deployed at massive scale; obviously you optimize for performance.

At Pernosco we have a very small team deploying a relatively small number of instances, and after five years of Rust we're very happy.


My problem with Rust: I'm sure that if I used it as my primary language for a couple of years, I would be able to claw the productivity loss back. But I can't find any reason to justify using it at my current productivity level.

There is a vicious cycle: few projects use Rust because the productivity hit is large, and programmers do not get enough experience using Rust because few projects use it.


I don't think that 5 years is needed to feel productive. I started Rust a few years ago, but dropped it due to lack of time, and I remember that I had a really hard time with some of the stuff (most notably futures before async/await). I got back into it in 2019 and wrote maybe two small projects (under 300 lines of code each, I think) and read quite a lot. After that I got to implement a production web service and also mentor/teach two people. It went very smoothly and for the most part there were no major blockers.

Obviously it all depends on a lot of stuff, but I think that for most people a few weeks to a month of writing Rust at work (meaning full time, not an hour in the evening here or there) should be enough to feel decently productive.

Another thing: if you tried Rust a long time ago, check it out again. Both the language and the ecosystem have changed a lot in recent years; it's hard to compare how easy it is to do Rust now vs 2016 or 2017 when I first tried it.


I don't think that vicious cycle exists. I was immediately productive with Rust, but OK, that's just an anecdote. Surveys such as Stackoverflow's Developer Survey show Rust usage growing rapidly.

If I met someone who took years to be productive with Rust, I would conclude they lack aptitude for programming. Maybe harsh, but probably true.


I guess I wasn't clear. We were happy with Rust from almost day 1. After five years, we're still happy.


>> Simple is better. Stay with Go.

I've been feeling the same, but as someone who has only played with Go/Rust (and never professionally), it's nice to hear that professionals feel the same.


I mean, I'm a professional and I'd say "it depends" (as always), but for most of the stuff that I do I would choose Rust, especially if I care about maximum reliability. Go is statically typed, but I've had situations where there was a runtime exception in Go because of a mistake that wasn't caught by tests or code review. In Rust you almost never see runtime exceptions, especially with good linting rules. And thanks to no data races I feel so much more confident writing concurrent code.


Why do you say "less productive with Rust"? In my experience I'm more productive with Rust because its very strong type system catches so many bugs.


Can you name some non-obvious problems?


I wrote Pong in Clojure back in 2009 in <200 lines. https://imagine27.com/pong_in_clojure


They are still classes, still live on the heap and still need to be garbage collected. Compare with value types that live on the stack in other languages such as Swift, Go and Julia.


The heap is an implementation detail.

With escape analysis, the compiler can allocate the data on the heap, stack, or even stick it in registers.

https://www.beyondjava.net/escape-analysis-java

https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacemen...

https://www.javaadvent.com/2020/12/seeing-escape-analysis-wo...
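To make this concrete, here is a minimal sketch (mine, not from the linked articles) of the kind of code escape analysis targets. The Point below never leaves its method, so HotSpot's C2 may scalar-replace it — its fields can live in registers with no heap allocation at all. Whether that actually happens depends on the specific JVM and JIT tier; the language makes no guarantee.

```java
public class EscapeDemo {
    // Hypothetical value-like type, for illustration only.
    record Point(int x, int y) {
        int lengthSquared() { return x * x + y * y; }
    }

    static long sumLengths(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            // 'p' does not escape sumLengths, so the JIT may
            // scalar-replace it and allocate nothing on the heap.
            Point p = new Point(i, i + 1);
            total += p.lengthSquared();
        }
        return total;
    }

    public static void main(String[] args) {
        // The result is identical either way; only the allocation
        // behavior differs. Prints 666667000.
        System.out.println(sumLengths(1_000));
    }
}
```

Running with `-XX:+PrintEliminateAllocations` (HotSpot debug builds) or comparing `-XX:-DoEscapeAnalysis` against the default is one way to observe whether the allocation is actually eliminated.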


Java compilers are getting more and more advanced, but I don’t think they will ever become the magical “sufficiently advanced compiler” that produces code that’s as good as humans _could_ (but often won’t, because of time constraints) write.

I don’t think anybody fully disagrees with that. At least, I haven’t heard people claim int can be removed from the language because a good compiler can produce identical code for Integers.

And yes, that can also apply to instances that do escape. A sufficiently advanced compiler could in some/many cases figure out that an array of Integer can be compiled down to an array of int. However, it’s way easier for a compiler to check a programmer’s claim “we won’t use features of Integer on these ints” than to deduce that code won’t, so a little bit of programmer effort allows for a simpler compiler that can produce faster code.

For me, records and (future) value types are examples of such “little bits of programmer effort”
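As a small illustration of that "little bit of programmer effort" (my sketch, not from the parent comment): a record declares up front that a type is just transparent, immutable data — exactly the claim a future value-type-aware compiler can check and exploit, and the structural equals/hashCode/toString come for free today.

```java
public class RecordDemo {
    // A record is still a reference type today, but it declares the
    // intent ("just data") that Project Valhalla's value types could
    // later exploit for flat, pointer-free layouts.
    record Complex(double re, double im) {
        Complex plus(Complex o) { return new Complex(re + o.re, im + o.im); }
    }

    public static void main(String[] args) {
        Complex a = new Complex(1.0, 2.0);
        Complex b = new Complex(1.0, 2.0);
        // Structural equality is generated by the compiler; note that
        // a == b is still false, since these are distinct heap objects.
        System.out.println(a.equals(b)); // true
        System.out.println(a.plus(b));   // Complex[re=2.0, im=4.0]
    }
}
```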


I could be wrong, but I don't think Dart has ints; I think it only has objects.


https://api.dart.dev/stable/2.6.0/dart-core/int-class.html:

“Classes cannot extend, implement, or mix in int.”

https://api.dart.dev/stable/2.6.0/dart-core/num-class.html:

“It is a compile-time error for any type other than int or double to attempt to extend or implement num.”

⇒ it seems that, technically, you’re right. int is an object in Dart. At the same time, it’s a restricted type of object.

So restricted that I think it is an object only in name/at compile time.


Can you have an array of 1 million structs, not pointers to structs?


The first question is "through static analysis, can you guarantee that the structs do not leave the scope?"

The second question to look at is "which JVM are you using?"

Different JVMs may implement this differently. This isn't something that one can say about Java. It is something that one might be able to say about HotSpot, Zulu, or GraalVM.


You’re technically correct that this stuff is all possible in principle, but the answer in practice right now is “no”.


From the link about GraalVM:

> Something that Graal can do that C2 cannot, and a key advantage of GraalVM, is partial escape analysis. Instead of determining a binary value of whether an object escapes the compilation unit or not, this can determine on which branches an object escapes it, and move allocation of the object to only those branches where it escapes.

And from https://docs.oracle.com/en/java/javase/11/vm/java-hotspot-vi...

> The Java HotSpot Server Compiler implements the flow-insensitive escape analysis algorithm described in:

> ...

> After escape analysis, the server compiler eliminates the scalar replaceable object allocations and the associated locks from generated code. The server compiler also eliminates locks for objects that do not globally escape. It does not replace a heap allocation with a stack allocation for objects that do not globally escape.

----

So, some JVMs implement the full set, while others only do a limited subset of the optimizations available with escape analysis.

I would not say that the answer of "is it used in practice" is "no."


GraalVM is excellent at performing escape analysis on objects on the call stack, but it does not eliminate the pointer overhead that a JVM array-of-heap-object-references has vs the array-of-structs that e.g. .NET supports [2].

Theoretically it could do that, but that's just the classic "sufficiently smart compiler" strawman [1].

[1] https://wiki.c2.com/?SufficientlySmartCompiler

[2] https://stackoverflow.com/questions/29665748/memory-allocati...


My point wasn't so much "can GraalVM do {some optimization}" but rather that the Java Language Specification doesn't say anything about it and that different JVMs have a different set of optimizations.

So "does Java allocate a record in an array directly as some structure of values in the array or as a pointer to a record object?" isn't one that can be answered by looking at Java.

It is an interesting question, and I'd be curious to see someone do a deep dive into the internals of GraalVM to show what can be done.

The other part that trickled out in other comments from the person posing the question about the array of records:

> It's a global array of structs, let's say.

and

> No, because my competitors who are attempting to fill the same orders I am attempting to fill are not chasing pointers.

... which, I'd be curious to see how .NET supports an array of structs (that are presumably changing over the lifetime of the array) allocated as a global. That sort of use case and its implementation specifics could make escape analysis give up, and you'll see an array on the heap with pointers to records on the heap as they're passed off to different threads (which each have their own stack).


The point is those optimizations are not here now, and haven't been there for the last 25 years. Hand-waving them away as theoretically possible is dishonest. We're 25 years into the most popular programming language's lifetime and the most advanced VM available only recently learned good escape analysis. It isn't easy.

> which, I'd be curious to see how .NET supports an array of structs (that are presumably changing over the lifetime of the array) that is allocated as a global.

Very easy. An array-of-structs (which can still be on the heap, mind you) will just be a contiguous block of memory. This is totally independent of any locking and synchronization.

For example, take a class with two 32-bit fields and two instances a and b. An object-reference array will look like [p_a, p_b], where p_a is a pointer to [a0, a1] and p_b a pointer to [b0, b1]. A struct array will look like [a0, a1, b0, b1].
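For completeness, the usual Java-side workaround today is to flatten by hand into parallel primitive arrays (a structure-of-arrays layout), which recovers the contiguous, pointer-free reads a struct array gives you in .NET. A minimal sketch, with hypothetical field names standing in for a two-field "order" struct:

```java
public class SoaDemo {
    // Instead of Order[] (an array of pointers to heap objects), keep
    // each field in its own primitive array: one contiguous block per
    // field, no pointer chasing on the hot path.
    static final int N = 1_000_000;
    static final long[] price = new long[N];
    static final int[] qty = new int[N];

    static long notional() {
        long total = 0;
        for (int i = 0; i < N; i++) {
            total += price[i] * qty[i]; // sequential, cache-friendly reads
        }
        return total;
    }

    public static void main(String[] args) {
        for (int i = 0; i < N; i++) { price[i] = 100; qty[i] = 2; }
        System.out.println(notional()); // prints 200000000
    }
}
```

The obvious costs are ergonomics (no single "order" object to pass around) and the loss of per-element atomicity, which is exactly the gap Valhalla's value types are meant to close.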


Sure, but I think it's still important that it's possible. And if it doesn't get implemented, the reason may be that JVM developers have done the work to figure out that in the real world the optimization doesn't buy all that much.

Regardless, if you care about performance enough (via actual benchmarks) that you know that you really need some data to be guaranteed to be stack-allocated structs, then you probably shouldn't be using Java (or any GC'd language?) in the first place. Records don't change that calculus.


It's a global array of structs, let's say.


If it's a global, it's very likely allocated on the heap.

The question of "what is the representation of the object on the heap?" is then open.

However, the "this is global" part complicates it.

This isn't a question for Java to answer. You would need to dig into the specifics of the particular VM that you are using and how it allocates such a structure along with what optimizations it has available.


1 million is not a lot. I'd begin by asking myself "can I afford to chase those pointers?", because maybe you can.


That's a good question to ask when faced with a problem that could be solved that way, but a real answer to the question would be useful too.


No, because my competitors who are attempting to fill the same orders I am attempting to fill are not chasing pointers.


You are chasing nanoseconds with a garbage collected language?


Those nanoseconds tend to add up.


Are your competitors using Java?


Java is relatively common in HFT yes.


Isn’t the point that record classes will be able to be upgraded to value types easily once Valhalla is done? Or am I missing something?


No, they won't (or maybe they will be speculatively opted in and out?). Value types above a relatively small size are less efficient than references.


Yes, just add 'primitive' before record in the declaration.


Value classes, primitive types, and specialized generics will be able to live on the stack in future versions. There is also this related work: https://github.com/microsoft/openjdk-proposals/blob/main/sta...


I’ve not explored them much, how well does escape analysis work with them?


Flimsy counter-argument.

