
Here's the thing about clean code. Is it really good? Or is it just something people get familiar with, and familiarity is all that actually matters?

You can't really run the experiment, because to do it you'd have to isolate a bunch of software engineers and carefully measure them as they go through parallel test careers. I mean, I guess you could measure it, but it's expensive, time consuming, and likely to have massive experimental issues.

Although now you can sort of run the experiment with an LLM. Clean code vs unclean code. Let's redefine clean code to mean this other thing. Rerun everything from a blank slate and then give it identical inputs. Evaluate on tokens used, time spent, propensity for unit tests to fail, and rework.

The history of science and technology is people coming up with simple but wrong untestable theories which topple over once someone invents a thingamajig that allows tests to be run.


I'm with you... personally, I always found Clean/Onion to be more annoying than helpful in practice... you're working on a feature or section of an application, only now you have to work across disconnected, mirrored trees of structure to touch any given feature.

I tend to prefer Feature-Centric Layout or even Vertical Slices, where related work is closer together based on what is being worked on as opposed to the type of work being done. I find it to be far more discoverable in practice while being simpler and easier to maintain over time... no need to add unnecessary complexity at all. In general, you don't need a lot of the patterns introduced by Clean or Onion structures, as you aren't creating multiple in-production implementations of interfaces and you don't need that type of inheritance for testing.

Just my own take... which of course has been swimming upstream, having done a lot of work in the .NET space.


Applause.

I am in .net as well. The clean code virus runs rampant.

Swimming in DTOs and ViewModels that are exact copies of Models; services that have two methods in them: a command method and then the actual command the command method calls, when the calling class already has access to the data the command method is executing; 3 layers of generic abstractions that ultimately boil down to a 3 method class.

Debugging anything is a nightmare with all the jumps through all the different classes. Hell, just learning the code base was a nightmare.

Now I'm balls deep in a warehouse migration, which means rewriting the ETL to accommodate both systems until we flip the switch. And the people who originally wrote the ETL apparently didn't read the documentation for any of it.


No, it's not really good.

It's a pain in the ass to work in, and it produces slow code.

https://www.computerenhance.com/p/clean-code-horrible-perfor...


Whenever the lawnmower thing comes up, I try to also mention dtrace. As far as things to be remembered for, they make some strange bedfellows... although it's better than anything I've managed so I guess congrats.

DTrace was absolutely a product of pre-Oracle Sun, not Oracle.

Hey friend, check the user name of the person I'm responding to (and perhaps check out the people responsible for DTrace and the Larry Ellison lawnmower comparisons). I might appear more coherent afterwards.

Yeah, I see what you mean now. Sorry.

It's not like Sun wasn't also lawyer heavy.

While the LLM Rust experiments I've been running make good use of ADTs, the model seems to have trouble understanding lifetimes and when it should be Rc/Arc-ing.

Perhaps these issues have known solutions? But so far the LLM just clones everything.
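For anyone unfamiliar with the clone-everything failure mode: here's a minimal sketch (the `Config` struct and field names are just illustrative) of the idiom the LLM tends to miss, where `Rc::clone` shares one allocation instead of deep-copying it.

```rust
use std::rc::Rc;

// A hypothetical config shared by several components. Deriving Clone and
// cloning the whole struct duplicates every allocation inside it; wrapping
// it in Rc shares a single allocation and "cloning" only bumps a refcount.
#[derive(Debug)]
struct Config {
    endpoints: Vec<String>,
}

fn main() {
    let config = Rc::new(Config {
        endpoints: vec!["a".into(), "b".into()],
    });

    // Cheap: increments the reference count, no deep copy of the Vec.
    let for_worker = Rc::clone(&config);
    assert_eq!(Rc::strong_count(&config), 2);

    println!("{}", for_worker.endpoints.len()); // prints "2"

    drop(for_worker);
    assert_eq!(Rc::strong_count(&config), 1);
}
```

(Swap `Rc` for `Arc` when the data crosses threads; the API is the same.)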

So I'm not convinced just using rust for a tool built by an LLM is going to lead to the outcome that you're hoping for.

[Also, just in general, abstractions in Rust feel needlessly complicated by needing to know the size of everything. I've gotten so much mileage by just writing what I need without abstraction and then hoping I don't have to do it twice. For something (read: claude code et al) that is kind of new to everyone, I'm not sure that Rust is the best target language, even when you take the LLM-generated nature of the beast out of the equation.]
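For context on the "size of everything" complaint: the usual escape hatch is a boxed trait object, which hides the concrete type's size behind a pointer. A minimal sketch (the `Sink` trait and `VecSink` type are made up for illustration):

```rust
// A trait object (`dyn Sink`) erases the concrete type's size behind a
// pointer, so callers don't need generics or compile-time size knowledge.
trait Sink {
    fn write(&mut self, msg: &str);
}

// One concrete implementation: collects messages into a Vec.
struct VecSink(Vec<String>);

impl Sink for VecSink {
    fn write(&mut self, msg: &str) {
        self.0.push(msg.to_string());
    }
}

// Accepts any Sink without knowing its size at compile time.
fn log_all(sink: &mut dyn Sink, msgs: &[&str]) {
    for m in msgs {
        sink.write(m);
    }
}

fn main() {
    // Box<dyn Sink> is itself sized (it's a pointer), so it can live in
    // structs, Vecs, etc., regardless of the implementor's size.
    let mut sink: Box<dyn Sink> = Box::new(VecSink(Vec::new()));
    log_all(sink.as_mut(), &["hello", "world"]);
}
```

The tradeoff is dynamic dispatch and an allocation, which is often exactly the kind of cost the generics-everywhere style exists to avoid.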


It's also less frustrating to organize worldwide RAM production and logistics than to deal with a single mathematician.

Constantly sitting around trying to solve problems that nobody has made headway on for hundreds of years. Or inventing theorems around 15th century mysticism that won't be applicable for hundreds of years.

Now if you'll excuse me I need to multiply some numbers by 3 and divide them by 2 ... I'm so close guys.


The comment feels a bit like Verdex may have dated a mathematician at some point and it went sour.

Known in the business as 'pulling a jevons'

I suppose that you could have the phone listening in real time and generating profiles that are hidden and embarrassing but not illegal.

So when they ask for the real profile, the next unlock shows a profile that makes it very clear you have a deeply embarrassing ASMR addiction.

It could cross-reference your local laws to make sure it doesn't spill the beans on anything locally illegal.


Yeah, Church-Turing suggests that a computer can compute any computable function. Or the universality of a computable substrate. Maybe there's a confusion where computational universality is taken to imply universality over everything?


> In theory a computer should be able to model any physical process

Wait, which theory is that? In Church-Turing terms, a computer can compute any computable function.

Why do we think that the computer can model any physical process?

Or are we suggesting that you can build a computer out of whatever physical process you want to model?


> > In theory a computer should be able to model any physical process

> Wait, which theory is that?

The Church-Turing-Deutsch Principle. (Which isn’t a theory in the empirical sense, but somewhat more speculative.)

> Or are we suggesting that you can build a computer out of whatever physical process you want to model?

Well, you obviously can do that. Whether that computer is Turing equivalent, more limited, or potentially a hypercomputer is...well, Church-Turing-Deutsch says the last is always false, but good luck proving it.


Church-Turing is about computable functions. Uncomputable functions exist.

For example, how much rain is going to be in the rain gauge after a storm is uncomputable. You can hook up a sensor to perform some action when the rain gets so high. This rain algorithm is outside of anything Church-Turing has to say.

There are many other natural processes that are outside the realm of what is computable. People are bathed in them.

Church-Turing suggests only what people can do when constrained to a bunch of symbols and squares.


That example is completely false: how much rain will fall is absolutely a computable function, just a very difficult and expensive function to evaluate with absurdly large boundary conditions.

This is in the same sense that while it is technically correct to describe all physically instantiated computer programs, and by extension all AI, as being in the set of "things which are just Markov chains", it comes with a massive cost that may or may not be physically realisable within this universe.

Rainfall to the exact number of molecules is computable. Just hard. A quantum simulation of every protein folding and every electron energy level of every atom inside every cell of your brain on a classical computer is computable, in the Church-Turing sense, just with an exponential slowdown.

The busy beaver function, however, is actually un-computable.


The busy beaver function isn't uncomputable.

You just compute the brains of a bunch of immortal mathematicians. At which point it's a "very difficult and expensive function to evaluate with absurdly large boundary conditions."


> The busy beaver function isn't uncomputable.

False.

To quote:

  One of the most consequential aspects of the busy beaver game is that, if it were possible to compute the functions Σ(n) and S(n) for all n, then this would resolve all mathematical conjectures which can be encoded in the form "does ⟨this Turing machine⟩ halt".[5] For example, there is a 27-state Turing machine that checks Goldbach's conjecture for each number and halts on a counterexample; if this machine did not halt after running for S(27) steps, then it must run forever, resolving the conjecture.[5][7] Many other problems, including the Riemann hypothesis (744 states) and the consistency of ZF set theory (745 states[8][9]), can be expressed in a similar form, where at most a countably infinite number of cases need to be checked.[5]
"Uncomputable" has a very specific meaning, and the busy beaver function is one of those things, it is not merely "hard".

> You just compute the brains of a bunch of immortal mathematicians. At which point it's a "very difficult and expensive function to evaluate with absurdly large boundary conditions."

Humans are not magic, humans cannot solve it either, just as they cannot magically solve the halting problem for all inputs.


My understanding is that Waymo has gone on the record saying they have human operators who remotely drive the vehicle in scenarios where the automated system is confused.

Which I assert is semantically equivalent to saying: Human drivers (even when operating at the diminished capacity of not even being present in the car) are less likely to make errors driving a car than AIs.


This is getting off topic, but they did not say the remote humans drive the cars. The cars always drive themselves; the remote humans provide guidance when the car is not confident in any of the decisions it could make. The humans define a new route or tell the car it's OK to proceed forward.

