alex3305's comments | Hacker News


I think you meant this version ;) https://m.youtube.com/watch?v=jmaUIyvy8E8


nice :D - huge fan of the show


In the Netherlands the allowed voltage tolerance is 10% by law. So everything between 207V and 253V should be fine. And we also had 220V until 1989.


I'm unsure what 'TrueCharts' is, but if it's anything Docker based you probably need to mount the Intel GPU device. You can do this pretty easily with a device mount: `--device=/dev/dri`. Other issues that may occur are that `/dev/dri` has insufficient permissions for Docker, or that the device driver hasn't been initialized.


Not for Intel, but if you use an NVIDIA card, you can also use `--gpus all` with the NVIDIA container runtime. https://wiki.archlinux.org/title/docker#With_NVIDIA_Containe...

The official Jellyfin docs suggest using `--device` for Docker/Intel cases: https://jellyfin.org/docs/general/administration/hardware-ac...


Thank you.

On TrueNAS Scale you install applications that run on Docker/Kubernetes, and TrueCharts is a registry of these applications. They work with just a few clicks; it's very impressive and still new to me.

I think you're right on this, I'll investigate further today.


In my experience, just a little bit of insider knowledge goes a long way to making better code. Arrays are fun things, especially when you do a deep dive into the System.arraycopy() function. But the same goes for all Collections in Java. For instance, most of them have a default size (often 10), and growing them is a costly operation. So knowing beforehand how large your collection can or may get can benefit your code. I was able to use this effectively when parsing large XML documents.

I recommend that everyone who uses a managed language (Java, C#, or others) at least gets a basic understanding of these fundamentals, and also knows which collection type to use when.
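A minimal sketch of what presizing looks like in practice (the 10,000 element count is a hypothetical stand-in for a size you'd know from your input):

```java
import java.util.ArrayList;
import java.util.List;

public class PresizeExample {
    public static void main(String[] args) {
        int expected = 10_000; // known upper bound, e.g. from input metadata

        // Default: backing array starts small (capacity 10, lazily allocated)
        // and grows ~1.5x whenever it fills up, copying the array each time.
        List<Integer> grown = new ArrayList<>();

        // Pre-sized: one backing array allocation, no resizes while filling.
        List<Integer> presized = new ArrayList<>(expected);

        for (int i = 0; i < expected; i++) {
            grown.add(i);
            presized.add(i);
        }
        System.out.println(grown.size() == presized.size()); // true
    }
}
```

Both lists end up identical; the difference is only in how many intermediate array allocations and copies happened along the way.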


> For instance, most of them have a default size (mostly 10), and growing them is a costly operation. So knowing beforehand how large your collection can or may be, can benefit code.

It's really a tricky balance. Over-allocating collections "just in case" can quite often be very expensive as well, since large array allocations tend to be fairly slow (e.g. they typically won't fit in the TLAB).


It's one of those things where you usually have to let profiling and other observations guide your approach. 99.9% of the time it doesn't really matter and the default behavior is fine. But I can think of a few times where this has been a big deal.

One in particular - I was profiling an application with low-latency needs and GC was taking up a ton of time. Mission control showed tons of allocations of arrays - at one point it was creating a bunch of lists in a loop and adding stuff to them, which triggered creating a new underlying array. We found that a) Many of the arrays were just over the first resizing size, and b) There was a good heuristic that we could use to give them an initial size that would never have to be expanded and wouldn't result in huge amounts of waste.

This had a pretty dramatic effect on our GC times and the overall latency. I think this is where the JVM really shines - tons of tooling to help you profile and observe these kinds of details to help you figure out when you actually need to care about stuff like the initial array capacity.


Depends a lot on what you're doing too. I do a fair bit of heavy data processing work with my search engine (tokenizing something like a billion documents into arrays of words etc), and allocator contention has a pretty huge performance impact for that type of work.

My intuition is that the best thing is to aim for the expected median size, rather than the maximum as one might assume would be the most performant. The maximum strategy minimizes re-allocations, but at the expense of always making costlier allocations.


I think it depends a lot on the other details, especially how expensive the extra GC will be vs the wasted space. Hard to give a rule that will work in all contexts.

In our case, it wasn't a single hard-coded number - the input data gave us the upper bound, and the difference between the upper bound and the median case was so small that going with the upper bound worked out best.


> It's really a tricky balance. Over-allocating collections "just in case" can quite often be very expensive as well

It is sometimes really tricky. When I worked with streaming XML documents that were gigabytes in size, there was a really fine margin to work with.

However, some general knowledge can be pretty useful. I saw colleagues just write `new ArrayList<>(1000);` without considering the collection type or possible size. Besides being a bit ignorant, it can also be really confusing for other developers taking a first look at such code.


> TLAB

TLB?


The TLAB is the Thread Local Allocation Buffer.

In short and a bit simplified, normally when you allocate memory, the allocator needs to synchronize between threads because RAM is a shared resource. This means that a thread that allocates a lot can disrupt the performance of other threads, among other weird effects. But there's a small buffer called the TLAB owned by each thread where this isn't true: Allocation in the TLAB doesn't require synchronization. The TLAB makes allocating small ephemeral objects much faster.
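A hypothetical sketch of the kind of workload this helps: several threads each allocating many small, short-lived objects. Because each thread bump-allocates from its own TLAB, these allocations don't contend on a shared lock. (Not a benchmark; the effect can be explored with JVM flags such as -XX:-UseTLAB.)

```java
public class TlabSketch {
    public static void main(String[] args) throws InterruptedException {
        Runnable churn = () -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                // Small ephemeral allocation, typically served from the TLAB
                // with a simple pointer bump, no cross-thread synchronization.
                sum += new int[4].length;
            }
            if (sum != 4_000_000L) throw new AssertionError();
        };

        // Four threads allocating concurrently, each in its own TLAB.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(churn);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("done");
    }
}
```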


This is a good explanation. See also Shipilev's JVM Anatomy {Park|Quark} episode: https://shipilev.net/jvm/anatomy-quarks/4-tlab-allocation/


Thread Local Allocation Buffer


I cringe every single time I see a for loop for what System.arraycopy() has been providing since the early days.

For better or worse, it shows me that the author isn't that into Java.


I cannot for my life remember the argument order, so I write the manual code and let IntelliJ convert it.


Doesn't autocomplete show the arguments? I usually use NetBeans when I write Java, so no idea if IntelliJ is just that bad.


IntelliJ definitely shows the arguments lol


It shows the argument names and even highlights the current one where the cursor is. But sometimes my thought process is just different.


In Java, it's src array, src offset, dest array, dest offset, length. It's a natural order of from, to.

It's C memcpy() that's the odd one out by putting the destination before the source.
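For illustration, a minimal example of that argument order:

```java
import java.util.Arrays;

public class ArrayCopyOrder {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};
        int[] dest = new int[5];

        // Argument order: src, srcPos, dest, destPos, length —
        // copies src[1..3] into dest[0..2].
        System.arraycopy(src, 1, dest, 0, 3);

        System.out.println(Arrays.toString(dest)); // [2, 3, 4, 0, 0]
    }
}
```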


memcpy's argument order matches the left-to-right arrangement of assignment: `lhs = rhs` means rhs is copied to lhs, and `memcpy(lhs, rhs)` means the contents at rhs are copied to lhs.


British people generally don't, but Americans very often use "to...from".


They do, and it's jarring!

Even though I find memcpy and friends to be perfectly logical using the assignment analogy suggested by thwarted, I often need to re-read English sentences written that way.


> I cringe every single time I see a for loop for what System.arraycopy () has been providing since early days.

The worst thing is that System.arraycopy() is an optimized native method (intrinsified by the JVM) which is much faster than copying by hand [1].

> For better or worse, it shows me that the author isn't that into Java.

The thing is though, most of the time arrays in Java are used because of performance. Or maybe ignorance. Because why would anyone voluntarily give up all the comforts of a List<T>? It's not that Collections are hard to find in the documentation. And most of the time IntelliJ suggests switching to a Collection anyway.

1. https://www.javaspecialists.eu/archive/Issue124-Copying-Arra...


Or it might be that the person has used multiple programming languages, across which the order/meaning of copy arguments varies a lot, and thus prefers not to remember each language's decision (if not for writing (at which point the IDE could help), then for reading). Whereas a loop is always equally easy to read and write in all languages, and it's really not unreasonable to expect it to perform well enough (if not as well as System.arraycopy, then at least well enough to be insignificant compared to the actual important logic in the code).


Given that I have programmed in dozens of languages since 1986, and have to jump between C#, Java, C++, TypeScript, Transact-SQL and PL/SQL for work, plus whatever is needed to keep the customer happy, that isn't an argument I would sympathise with in code reviews.


Yet, somehow I doubt you write perfect code in all those languages. Do you cringe at yourself and conclude you just don’t care also?


No I don't, and when someone cringes looking at my code, I shut up, apply the fix and get to improve my skills on the language, instead of excusing myself.


So you agree you should be judged as someone who doesn’t care in those cases also, right? You didn’t mention anything about people excusing themselves initially, just that you judged them. I just hope you hold yourself to the same standard.


Such is life: made up of judgments, not always fair.


There are more options than "write perfect code" and "not even attempt to learn how to write idiomatic code".


Yeah, of course. But maybe you should just be happy to share knowledge with those lacking it, rather than cringing and making some kind of personal judgement of them.


I agree. Understanding the inner workings of languages and their runtimes is IMHO what gets you one step closer to being a senior. Luckily, early in my career I had a few seniors on the team who knew a lot about Java and shared their knowledge of its behavior.


Does anyone have any good book recommendations or links for insider knowledge of the JVM/Java? If Clojure focused all the better :)


There is JVM Anatomy Quarks.

I can also recommend reading the JVM specification itself. It is surprisingly not as dry as one might think; it's no novel, but it's a good read. Oh, and of course anything written by Brian Goetz, usually about some new feature.


Maybe it's not really up your alley, but I learned Java with Java in Action with BlueJ [1]. Although it's pretty basic, the textbook really explains all the Java (and OOP) basics in a pretty clear way. The book is called Objects First [2].

In addition I really enjoyed exploring the JDK documentation. Especially Java 7 and earlier is extremely manageable. Java 8 introduced lambdas and streams, which make Java way more fun, but also a tad harder to learn.

It's not exactly JVM, but just wanted to share anyway :).

1. https://www.java.com/en/java_in_action/bluej.jsp

2. https://www.bluej.org/objects-first/


The default size of an ArrayList has been 0 for a while. On the first insertion, it is initialized to 10.


That's a bit semantic, isn't it? Because in practice it's still 10, but lazily initialized [1]. And an empty ArrayList is useless anyway.

1. https://stackoverflow.com/a/34250231


I used their trial for a bit to test it out with Vorta [1] in a container. Vorta (and Borg) seemed to work fine, until I wanted to restore an archive and I noticed that my recent snapshots were completely empty. Probably because of a misconfiguration on my end though. But it made me look elsewhere. For me backups should be a fire, test and forget solution.

Recently I made the switch to Kopia [2], which seems to have feature parity with Borg (and Restic [3]). It also has a web UI which is way easier to work with than Vorta. And I can easily view, extract and restore individual files or folders from there. This gave me way more confidence in this solution. The only thing I really miss is that I cannot choose different targets for different paths. For instance, with Borg I was able to back up part of my Docker appdata to an external source, and I haven't found a way to do this with Kopia. Besides that I'm pretty happy with this solution and I would recommend it.

1. https://vorta.borgbase.com/

2. https://kopia.io/

3. https://restic.net/


> 5-10 years ago I would think this is perfectly fine. I believe I was not alone in this, but maybe I was. The energy would have cost pennies too and why whine about it?

When I moved into my own home (8 years ago) I brought my 'homeserver' with me, which was just a simple i5-2600 build with some shucked drives in it. I never thought about electricity prices when I lived with my parents, but that changed rather fast. With the server gobbling up a constant 90W, I quickly realized that, even back then, it would cost me 15 euros a month on electricity alone.

I then proceeded to put a Pi next to it that would listen for incoming Plex requests and start up and shut down the server with WoL. That only reduced costs by about a third. The next couple of years I would move on to a NUC with a NAS that would only consume about 29W on average. Which was decent, but not great considering the poor performance of both machines (J4105 and J1800).

Last month I went back to the DIY route, now with an i5-13500. I'm still in the process of optimizing it for power efficiency. Although much more stressful than the prebuilts, I love the hunt for the last watt.

Anyway, what I wanted to say is that I notice that family and friends don't really care about saving power in general. They mostly just pay for it and that's that. Meanwhile my house runs 100% on electricity and I'm really proud if I can get to 9kWh/day on average. Especially when I see that (for example) the 8-bit guy uses 100kWh/day on average [1], which seems out of this world to me.

1. https://www.youtube.com/watch?v=bXd-aP06lug&t=45s


>I then proceeded to put a Pi next to it that would listen for incoming Plex requests and start up and shut down the server with WoL.

How hard is this to configure? I have a server at home that I use to run a database and computationally heavy code. However, I am the only user, so realistically it is only in use 8 hours a day and some weekends. But for fear of forgetting to turn it on before I go to work (or in case I suddenly find time to work while away), I find that I default to leaving it on. Being able to control it would be fantastic.


> How hard is this to configure?

Not at all. Just ensure that you have WoL enabled on the host machine and then proceed to send a magic packet. You could even do this with Home Assistant [1] if you are into that. I did this with a script that used tcpdump to monitor for incoming Plex traffic [2], with an additional (dummy) Plex server on the Pi. I also faintly remember that I had to add one library and one video file to make this work though.

Powering down - or sleep - is a bit harder. I built a 'Sleep on LAN' app [3] for myself years ago that could power down (or sleep) a system on demand using a REST API. I used this and Tautulli [4] with Home Assistant, which would check if there were any active streams, and if there wasn't any activity for a specified amount of time I would send an SoL request to my service.

As you can see it isn't super hard or complicated, but a bit cumbersome to find all the moving bits and make it work. But when it does, it's IMHO fantastic.

1. https://www.home-assistant.io/integrations/wake_on_lan/

2. https://gist.github.com/alex3305/8cc73ddd2c8ca6328f20235480a...

3. https://github.com/alex3305/sleep-on-lan

4. https://tautulli.com/
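For anyone wanting to roll their own sender, a minimal sketch of building the standard WoL magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP. The MAC and broadcast address below are placeholders, not real values.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class WakeOnLan {
    // Builds the WoL "magic packet": 6 bytes of 0xFF followed by the
    // target MAC address repeated 16 times (102 bytes total).
    static byte[] magicPacket(String mac) {
        String[] parts = mac.split("[:-]");
        byte[] macBytes = new byte[6];
        for (int i = 0; i < 6; i++) {
            macBytes[i] = (byte) Integer.parseInt(parts[i], 16);
        }
        byte[] packet = new byte[6 + 16 * 6];
        for (int i = 0; i < 6; i++) packet[i] = (byte) 0xFF;
        for (int i = 6; i < packet.length; i += 6) {
            System.arraycopy(macBytes, 0, packet, i, 6);
        }
        return packet;
    }

    public static void main(String[] args) throws Exception {
        byte[] packet = magicPacket("aa:bb:cc:dd:ee:ff"); // placeholder MAC
        System.out.println(packet.length); // 102

        // To actually wake the machine, broadcast it on UDP port 9
        // (uncomment and use your own subnet's broadcast address):
        // try (DatagramSocket socket = new DatagramSocket()) {
        //     socket.setBroadcast(true);
        //     socket.send(new DatagramPacket(packet, packet.length,
        //             InetAddress.getByName("192.168.1.255"), 9));
        // }
    }
}
```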


If it's just that, you can have a Pi next to it and just ssh in to send a WoL command. Basically nothing to configure.

You can make it simpler to use by making an alias in your shell, or a button on your phone (with one of the countless "ssh button" apps). Or even make a web page for it (some php or python that just calls the WoL function).

OP describes a more transparent (and complex) setup where the Pi presumably acts as a reverse proxy. I'd be curious to know the exact setup too, one of the simplest ways would be to use wake on unicast: https://news.ycombinator.com/item?id=35627107

Other ways include wrapping some scripts around socat, writing your own proxy, systemd socket activation, etc.


I used tcpdump with a dummy Plex server that listened for incoming requests [1], because those requests are automatically generated when a user opens up the Plex app. And I then proceeded to send a WoL request.

A reverse proxy would probably have worked too, but I didn't want to be limited by the 100Mbps network interface of my Pi 3B.

1. https://gist.github.com/alex3305/8cc73ddd2c8ca6328f20235480a...


Hold on a second - how large of a household are we talking? House or apartment? What consumes all that power? Electric vehicle? Work from home?

I'm environmentally conscious (0 cars, 0 pets) but I haven't spent any time measuring and optimising electricity usage.

I've had a look at my own electricity usage every once in a while, now averaging 48 kWh/month (between 38 and 68) for a larger than average apartment for two people.


48 kWh / month is amazing! That's 4 kWh / day. Just cooking on our induction burner and instant pot can use up 1 kWh on a day with lots of meal prep.


Isn't 48 kWh/month = ~1.6 kWh/day? That seems impossible. Even 4kWh / day seems impossible for me. I exclusively use electricity, except for heating. My water heater alone consumes about 2kWh/day. Not even talking about cooking, using my computer or watching tv.


Interesting. Maybe our brand new kitchen appliances are more energy efficient?


> Which has even been “weaponised” before; there were stories some years ago about stores blasting high-frequency noise outside to deter loitering teenagers without affecting desirable adult shopper

Yeah, they're supposed to only affect teenagers. Except for when you are visually impaired like me. I'm almost 35 now and still hear those high-frequency noises. It is extremely annoying and even borderline painful. Last year I even got the cops involved when a parked car in front of my apartment constantly emitted a high-frequency noise for deterring stone martens and I didn't know who the car belonged to. It was a pretty funny interaction in the end, because none of the parties involved (except for me) could even hear the noise. But the car owner admitted they had such a device installed, resulting in them just moving their car.


> I appreciate the intent of this project, but it is not a sustainable approach.

Self-hosting anything isn't sustainable. Even your modem or router isn't sustainable. But having additional backups and privacy is worth something to me. However, since I'm in Europe, electricity prices here are through the roof, so I try to minimize power draw where possible. I self-host Immich on my Intel Celeron J1800 NAS, which draws 19.1W on average. I cannot run any of the ML stuff that it comes with though. But for me the carbon intensity should be about 250 gCO₂eq/kWh [1]. So running my NAS produces about 42 kg of CO₂ per year for 2 users [2], including 10 or so additional applications besides Immich. That sounds pretty reasonable to me.

1. https://www.eea.europa.eu/ims/greenhouse-gas-emission-intens...

2. ((0.0191 * 250 * 365.25 * 24) / 1000)


What you are saying makes sense. If you are running other valuable apps, then it isn't a waste.

I am curious though, how did you come to 1.75kg per user? I calculated 20.9kg / user / year.

250/1000/1000 kgCO₂eq/Wh x 19.1 Wh x 8760 hours in a year / 2 users = 20.9 kg / user / year


> What you are saying makes sense. If you are running other valuable apps, then it isn't a waste.

'Valuable' is pretty subjective though. Most of the services are for entertainment purposes. But I also host quite a few useful things such as Bitwarden and CouchDB for Obsidian Sync.

> I calculated 20.9kg / user / year.

You are right. I forgot the 24 hours in my calculation:

    (0.0191 * 250 * 365.25 * 24) / 1000
I also noticed that my comment was incorrectly formatted, and I've since edited my original comment. If I factor in my complete homelab I draw about 40W, which corresponds to about 90kg of CO₂ yearly. Which doesn't sound all that bad to me.
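A quick sketch of that arithmetic, using the figures from this thread (the 40W homelab draw and the 250 gCO₂eq/kWh grid intensity are the numbers quoted above):

```java
import java.util.Locale;

public class CarbonEstimate {
    public static void main(String[] args) {
        double watts = 40.0;          // average homelab draw in W
        double intensity = 250.0;     // grid carbon intensity in gCO2eq/kWh
        double hoursPerYear = 365.25 * 24;

        // W -> kW, times g/kWh, times hours/year, g -> kg
        double kgPerYear = (watts / 1000.0) * intensity * hoursPerYear / 1000.0;
        System.out.printf(Locale.ROOT, "%.1f kg CO2eq/year%n", kgPerYear);
    }
}
```

Which comes out to roughly 88 kg/year, matching the "about 90kg" figure above.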


> All you've done is told your customers that their data isn't safe with your service.

And not only Influx's current customers, but also their future customers. I really like Influx for my homelab. But with this attitude, I would be really hesitant for a real world production deployment.


What future customers? After seeing this astoundingly terrible behaviour for a company with "DB" in their main product's name, I can't imagine anyone ever making the decision to trust InfluxData again. I know I certainly won't, nor will any company I work for.


> Reddit has granted a pass for some apps on this basis though.

But with two major caveats. This is only for non-profit apps, which excludes almost all of them. For Android there is only RedReader now and for iOS there are Luna and Dystopia left.

And still without any NSFW content.

