vrnvu's comments | Hacker News

Why does introducing interfaces and vtables have to be the right abstraction? Given the example, the simplest solution is best. If you don’t need runtime semantics, just write simple code.

On the cons of a "simple" solution:

> As you add features, the class becomes more cluttered.

But if it's a feature, it shouldn't be considered clutter. Clutter usually comes from over-engineering, not necessary additions.

> Each method does one specific thing. That seems fine until you realize your interface is full of shallow, one-off methods. That makes it hard to maintain and extend. However, each method is efficient and tailored to solve a specific feature, which boosts performance. Plus, if there's a bug, it's easy to find and fix since each method has a clear path.

Also, to maintain a cleanly designed API, I suggest following some data-oriented recommendations, such as designing functions to take lists of elements instead of single elements.
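A minimal Python sketch of that recommendation (the function names here are illustrative, not from any real codebase): make the batch operation the primary API and let the single-element case fall out of it.

```python
def activate_users(user_ids: list[int]) -> int:
    """Activate a batch of users in one call.

    Taking a list lets the implementation amortize per-call costs
    (one query, one lock, one network round trip) instead of paying
    them once per element.
    """
    # Illustrative body: a real version would issue one bulk UPDATE.
    unique_ids = set(user_ids)
    return len(unique_ids)

def activate_user(user_id: int) -> int:
    # The single-element case is just a batch of one.
    return activate_users([user_id])
```

Callers with one element lose nothing, while callers with thousands get the batched path for free.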


I always tell people who equate design patterns with clean code and good practice to remember that design patterns often indicate a lack of language features.

For example, are you using a Builder? Would you still use the Builder pattern if the language had named arguments?

My favorite reference on the topic is Peter Norvig's "Design Patterns in Dynamic Languages" (1996!) https://www.norvig.com/design-patterns/
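To make that concrete, here is a hedged sketch in Python, where keyword arguments with defaults exist (all names here are hypothetical): the step-by-step Builder collapses into a single call.

```python
# Without named arguments, configuration tends to grow a Builder:
class RequestBuilder:
    def __init__(self):
        self._timeout = 30
        self._retries = 0

    def timeout(self, seconds):
        self._timeout = seconds
        return self  # returning self enables chaining

    def retries(self, count):
        self._retries = count
        return self

    def build(self):
        return {"timeout": self._timeout, "retries": self._retries}

# With keyword arguments and defaults, the pattern disappears:
def make_request(timeout=30, retries=0):
    return {"timeout": timeout, "retries": retries}

# Both spellings produce the same configuration:
built = RequestBuilder().timeout(5).retries(2).build()
called = make_request(timeout=5, retries=2)
```

The language feature does the work the pattern was invented to simulate.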


A builder can allow you to create different kinds of objects from a common stem. They don't have to be bags of properties.

SQLAlchemy, say, is built almost entirely on chained builder-like methods, which make for one of the finest DSLs that translate to SQL. In the end, you build a representation of a SQL statement, and in no way could that work with named arguments alone.
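A toy sketch of that idea (this is not SQLAlchemy's actual API, just the shape of it): each chained method accumulates state on a statement object, and nothing renders to SQL until the end.

```python
class Select:
    """A tiny statement builder: chained calls accumulate structure."""

    def __init__(self, table):
        self.table = table
        self.conditions = []
        self.order = None

    def where(self, condition):
        self.conditions.append(condition)
        return self  # returning self is what makes chaining work

    def order_by(self, column):
        self.order = column
        return self

    def to_sql(self):
        # Only here does the accumulated representation become SQL text.
        sql = f"SELECT * FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(self.conditions)
        if self.order:
            sql += f" ORDER BY {self.order}"
        return sql

stmt = Select("users").where("age > 18").where("active = 1").order_by("name")
```

Because the builder holds a structured representation rather than a flat bag of keyword values, conditions can be added in any number and any order, which named arguments alone can't express.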

Instead, consider named arguments as nice shortcuts over curried functions.


> Would you use the Builder pattern if the language had named variables in arguments?

Yes, absolutely. I see it all the time in the Ruby ecosystem and have used it myself in Ruby. Many times it gets called by a different name. I've seen it in Python and Elixir too.


I suppose any feature you need is a design pattern, and everything from functions on up is a way of implementing features your language doesn't natively support.


I always enjoy reading posts about optimization like this one.

Optimizing a running service is often underrated. Many engineers focus on scaling horizontally or vertically, adding more instances or using more powerful machines to solve problems. But there’s a third option: service optimization, which offers significant benefits.

Whether it's tuning TCP configurations or profiling to identify CPU and memory bottlenecks, optimizing the service itself can lead to better performance and cost savings. It’s a smart approach that shouldn’t be overlooked.
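As one concrete instance of profiling for CPU bottlenecks, Python's standard-library profiler can be pointed at a hot function (the workload here is a made-up stand-in):

```python
import cProfile
import io
import pstats

def hot_path(n):
    # Deliberately naive workload standing in for a real bottleneck.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_path(100_000)
profiler.disable()

# Render the top entries by cumulative time into a string.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

Reading such a report before reaching for more instances is exactly the "third option": often one function dominates and a targeted fix beats horizontal scaling.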


This can be a complex topic if you don’t set clear constraints on what constitutes a valid character in your URL or domain.

For instance, in query parameters, spaces are encoded as '+'. But what if '+' is also a valid character in your domain? You then need to disambiguate: does "name?foo+bar" mean "foo bar" or "foo+bar"? Which one is the user actually referring to?

In our case, we ended up requiring users to send the name in the body, and now we have to manage multiple encoding protocols (the URL, the query parameters, the body...).
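The '+' ambiguity is easy to reproduce with Python's stdlib URL helpers: a literal '+' must be percent-encoded as %2B, otherwise the query-parameter decoder reads it as a space.

```python
from urllib.parse import quote_plus, unquote_plus

# Encoding: a space becomes '+', a literal '+' becomes '%2B'.
encoded_space = quote_plus("foo bar")    # "foo+bar"
encoded_plus = quote_plus("foo+bar")     # "foo%2Bbar"

# Decoding: a bare '+' in a query parameter is read as a space,
# so "foo bar" and a naively-sent "foo+bar" collapse to the same name.
decoded = unquote_plus("foo+bar")        # "foo bar"
```

If clients don't all encode the literal '+' as %2B, the server cannot tell the two names apart.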


Wouldn't you just encode the "+" if you wanted it as a literal?


A better example is the name: "foo%20bar".

A user might have entities named "foo bar", "foo%20bar", and "foo%2520bar". Mistakes happened because users forgot to double-encode or used the wrong protocol, since these names appeared in URLs, query parameters, and the body, and each has its own encoding rules.
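The collision is straightforward to demonstrate with Python's stdlib: encoding once turns a space into %20, and encoding the already-encoded name again turns its '%' into %25.

```python
from urllib.parse import quote, unquote

once = quote("foo bar")        # "foo%20bar"  — one round of encoding
twice = quote("foo%20bar")     # "foo%2520bar" — encoding an encoded name

# A single decode of the doubly-encoded form yields the literal name
# "foo%20bar", not "foo bar" — so the three names stay distinct only
# if every client encodes the same number of times.
back = unquote("foo%2520bar")  # "foo%20bar"
```

One decode too few or too many and the server resolves a different entity than the user meant.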

As I mentioned, with clear constraints and rules, we can accomplish anything we need, it can get complex. My takeaway from this project is to limit the valid characters and make it simple for everyone.


Please, beginners, do not take "Best practices" too seriously. For example:

https://github.com/TheAlgorithms/Go/blob/master/strings/pali...

Can you tell how many extra memory allocations we are making to solve this problem? And how many are really needed? We could solve it with a for loop, directly comparing the characters and skipping the non-alphabetical ones. That would be simpler to read and more efficient.
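The allocation-free alternative is a two-pointer loop. A sketch in Python (the linked repository is Go, but the shape is the same in any language): no lowercased copy, no filtered copy, just index movement.

```python
def is_palindrome(s: str) -> bool:
    """Check a palindrome in place, skipping non-alphabetical characters."""
    i, j = 0, len(s) - 1
    while i < j:
        # Skip characters that don't count, without building a new string.
        if not s[i].isalpha():
            i += 1
        elif not s[j].isalpha():
            j -= 1
        elif s[i].lower() != s[j].lower():
            return False
        else:
            i += 1
            j -= 1
    return True
```

One pass, constant extra memory, and every branch has an obvious purpose, which also makes bugs easy to localize.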

We should be careful with what we call "best practices for beginners", as they can sometimes lead to unnecessary complexity and inefficiency.


The highly overused "best practices" marketing term should really be called "most-likely good practice for most but not all situations highly dependent on context".

I think the problem is people want to know "Hey what's the right way to do this?". Turns out IEEE does publish some actual standards[0][1] related to software, but being completely honest I've never actually read it. I'm assuming I'm not in the minority.

[0] https://standards.ieee.org/ieee/29148/5289/

[1] https://standards.ieee.org/ieee/1012/5609/


This is the problem with an experienced dev writing a "Best practices for beginners" article. The beginner will read it as literally the best practice and commit it to memory as dogma, not considering suitable applicability and variations.

I would say that beginners shouldn't be writing core algorithms as performance often matters. The post should be called beginner-friendly implementations of algorithms in Go, or something to that effect. The point being to learn about the algorithms rather than learning the right way to code algorithms.


> Can you tell how many extra memory allocations we are making to solve this problem?

What suggests that the "best practice" is sliced down the efficiency lane? The intent could be to "best" demonstrate how to structure a collection of algorithms and data structures into packages, for example.


Just an example. Why should complexity and over-engineering be considered best practices for a beginner? In the example I've given, it turns out that a for loop and an if conditional are the simplest and most efficient solutions.

Additionally, from a beginner's point of view, it's important to learn algorithms that are language-agnostic. In this case, Go handles the allocations, copies, and memory management for you. However, this code would be much harder to implement in C. In any language, the algorithm I've provided would still be the best solution.

As others have commented, the terms "best practices" and "beginner" are sometimes used as clickbait to attract interactions and GitHub stars.


> Why should complexity and over-engineering be considered best practices for a beginner?

I'm still not clear on where the idea that this is a "best practice" is coming from. Is there some context I am missing?

Aside from the editorialized title, all I can find is an assertion of "following best practices". That does not even imply the project is trying to establish best practices (of any sort, let alone around efficiency or complexity), merely that the author tried to follow some best practices established elsewhere. To follow something always leaves open the possibility of veering off course, so unless the beginner is also a beginner with spoken language, they would not take it to mean that this code demonstrates best practices.


Those who can't do, teach.


Yep this is why I refuse to read any text books or documentation. If they were any good, the author wouldn't have written it!


Also written as, "have done it, it's really boring, maybe teaching other people how to do it will be less boring"


I agree that there are some ill-conceived resources out there, but there are also people who walk the walk and talk the talk:

- Erich Gamma, contributor to Eclipse and JUnit, co-author of Design Patterns: Elements of Reusable Object-Oriented Software
- Kent Beck, key contributor to JUnit
- Joshua Bloch, key author of the Java collections API, author of Effective Java
- Donald Knuth
- James Gosling, creator of Java and co-author of The Java Programming Language


This saying is a virus.


And all of us went through education, taught by people who (in your words) couldn't do, so that we now can go ahead and do.


  Location: Spain
  Remote: Yes
  Willing to relocate: Yes. If full-time relocation isn't possible (e.g., unable to sponsor a US visa), I'm willing to travel for a couple of months per year.
  Technologies: Optimization, AI, Backend, SRE, Go, Java, Spring, HTTP, Distributed Systems, AWS, Datadog, Kafka, Docker, Rust, React, Tailwind
  Résumé/CV: https://drive.google.com/file/d/1GTS1t1fUx3PTit7t5pMvtnqsLXlg4zut/view?usp=sharing
I traveled a lot the past years. I have experience working fully remote across different time zones in distributed teams, from the US to East Asia.

This summer, I want to build a BitTorrent client and tracker web server in Rust to showcase as a project: https://github.com/vrnvu/rust-bittorrent


If a junior struggles with WFH, you have an engineering-culture problem. Which the junior is not responsible for, by the way... It's the staff and senior engineers' responsibility ^^


Personally would put the blame higher up than that.


The aforementioned senior employees and managers should find a way to talk with junior employees instead of assuming that no news is good news.


Even when no news is good news, the juniors would get screwed because the managers don't know what's going on.

The people who can just quietly do their job (and the one above it) don't get promoted. Only the ones who say "look at me", or whose managers are tracking them closely, do.


Well, nobody forces you to use the setters/getters pattern. For internal implementation, you could access everything directly. For example, Zig encourages this style of programming.

In my experience in Rust, the use of getters and setters is less common compared to some other languages like Java.
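The same spirit can be sketched in Python (a minimal illustration, not tied to Zig or Rust specifics): plain attribute access is the default, and a guarded accessor can be layered on later without changing call sites.

```python
class Point:
    """Plain attributes: no getter/setter ceremony for internal use."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Celsius:
    """A property added later keeps the same attribute-access syntax."""
    def __init__(self, degrees):
        self._degrees = degrees

    @property
    def degrees(self):
        return self._degrees

    @degrees.setter
    def degrees(self, value):
        # Validation added without breaking existing callers.
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._degrees = value

p = Point(1, 2)
p.x = 10            # direct access for internal implementation
t = Celsius(20.0)
t.degrees = 25.0    # same syntax as a plain attribute, but now guarded
```

Since the syntax never changes, starting with direct access costs nothing, which is why the pattern is only reached for when actually needed.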



Hmm, this reasoning is making a lot of really questionable assumptions:

> We’ll use the estimates of 100 billion neurons and 100 trillion synapses for this post. That’s 100 teraweights.

... or maybe actual synapses cannot be described with a single weight.

> The max firing rate seems to be 200hz [4]. I really want an estimate of “neuron lag” here, but let’s upper bound it at 5ms.

... but ANNs output float activations per pass. Biological neurons encode values with sequences of spikes which vary in timing, so the firing rate doesn't on its own tell you the rate at which neurons can communicate new values.

> Remember also, that the brain is always learning, so it needs to be doing forward and backward passes. I’m not exactly sure why they are different, but [6] and [7] point to the backward pass taking 2x more compute than the forward pass.

... but the brain probably isn't doing backprop, in part because it doesn't get to observe the 'correct' output, compute a loss, etc, and because the brain isn't a DAG.


Yeah, for sure. I just shared it as a fun read. I think it has been discussed on HN before.

