Terribledactyl's comments

Wow, thank you. I had toyed with this idea a few months ago but didn't have time to investigate it.


Correct, the number of devices in a raidz1/2/3 vdev cannot change. However, you can replace all of the drives one at a time with higher-capacity drives, and most ZFS installs will then grow your vdev.
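
For reference, the replace-and-grow procedure looks roughly like this. This is a minimal sketch, assuming a pool named "tank" and illustrative device names (sda through sdg); the zpool commands are the substance, and Go just sequences them:

    // Sketch: replace each disk in turn, letting ZFS grow the vdev once
    // every member is bigger. Pool and device names are assumptions.
    package main

    import (
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func run(args ...string) string {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        if err != nil {
            log.Fatalf("%v: %s", err, out)
        }
        return string(out)
    }

    func main() {
        // grow automatically once all disks in the vdev are larger
        run("zpool", "set", "autoexpand=on", "tank")
        swaps := [][2]string{{"sda", "sde"}, {"sdb", "sdf"}, {"sdc", "sdg"}}
        for _, s := range swaps {
            run("zpool", "replace", "tank", s[0], s[1])
            // wait for the resilver to finish before touching the next
            // disk, so the vdev never loses redundancy
            for strings.Contains(run("zpool", "status", "tank"), "resilver in progress") {
                time.Sleep(time.Minute)
            }
        }
    }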

You could also add another raidz vdev if you have the space in the server; eventually the vdevs should level out, depending on workload.

"Best" solution, read most expensive, would be to copy the data over to a new server you've configured for the new load/capacity.


Adding 3 drives at a time is expensive and a non-option for a home NAS, as is "just copy it to a new server".

Both have high upfront costs for something plain mdRAID, SnapRAID, or LVM RAID can achieve far more simply, with better results.


I add two at a time without much problem. Mind you, I have a rather extreme setup with an external 12-bay enclosure, but I feel most people building a NAS instead of buying an off-the-shelf system from Synology/Drobo aren't looking to cut costs or corners.

A 4TB Seagate IronWolf drive costs $129 on Amazon; buying two to add as a new mirrored vdev to my TrueNAS box isn't outrageous.


You're limited then to only mirroring; you can't do RAID6 or similar on a massive array, where you give up only 2 of 10 drives' worth of space but can survive a 2-disk failure. That seems like a lot of wasted disks when the only thing I depend on the NAS for is not losing my TV collection. Right now I know that if I go out and buy 2 more 4TB disks, I get 8TB more in my NAS, at the price of only a slightly increased risk of having more than 2 drives fail at once. That's probably my favorite feature of RAID6 for a home NAS.
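
To make the math concrete, here's a quick sketch of the parity overhead in a RAID6/raidz2-style layout; the drive counts and sizes mirror the numbers above:

    // RAID6/raidz2-style capacity: n drives, 2 of them "spent" on
    // parity, and any 2 can fail without data loss.
    package main

    import "fmt"

    func usableTB(drives int, driveTB float64) float64 {
        const parity = 2 // RAID6/raidz2
        return float64(drives-parity) * driveTB
    }

    func main() {
        fmt.Println(usableTB(10, 4))                   // 32 TB usable from 10x4TB; 2/10 of raw goes to parity
        fmt.Println(usableTB(12, 4) - usableTB(10, 4)) // growing by 2x4TB disks adds 8 TB, as above
    }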

I actually went custom because those off-the-shelf boxes are either very expensive or have CPUs too weak for video transcoding. It's cheaper to build it yourself, and much cheaper if you already have old hardware to dedicate to the task.


Different use cases, I suppose. My FreeNAS box stores my video collection and all the usual stuff, but I've also got all the VMs that run my home network stored on it. Performance and resilver times are a lot more important to me than storage efficiency.


And your network-running VMs have high disk IO requirements?


All of my virtual disks are hosted off my FreeNAS; I've got a direct 10GbE link between it and my oVirt host. Raw throughput isn't so much the issue most of the time as IOPS: I've got a local GitLab instance, OpenShift, some PostgreSQL databases, etc., and they like to hammer the crap out of my storage when in use.

Having an L2ARC helps out quite a bit, but with only 32GB of memory, and wanting to keep most of it for the L1ARC, I still hit my spinning disks regularly (and mirrored vdevs help read IOPS tremendously here, since reads can be served from either side of a mirror).


Reducing the cost of a custom system isn't irrational.

My own personal budget is very limited; buying two IronWolf HDDs would easily eat a good chunk of my monthly income. If I can build a NAS that expands with single drives as needed, that's far more cost-effective for me.

And I imagine a lot of others have the same problem.

In the end, buying 2 drives when a single drive could have solved the problem equally well (expanding your space by 4TB) is wasting money. Period.


I do something very similar: my oldest build gets synced once a year and sits in cold storage. I hope I never need it. The hard drives from my previous build hold cold copies of the most valuable volumes (monthly sync), and the current system has plenty of redundancy and snapshots (insurance, not really backups). I use ZeroTier to stay in sync when away from home.


I was under the impression that each scroll through Facebook could show different content in arbitrary orderings.


For vosper's technique to work, you have to select the Most Recent ordering for your news feed.


But, does Walmart actually care?

I'm not super familiar with the details of the acquisition, but I thought it was more a land grab to extend their online presence than an acqui-hire.


The general consensus, I think, is that the acquisition of Jet is Walmart stepping into the online retail space to fight off Amazon.


The E5-1xxx core count is much lower: only 8 in v4 (the latest shipping version).


That might be, but the main point here was that it's way faster. Do you want to destroy a few MBs per drive or a few TBs?


I'm a hobby coffee roaster, and anytime I have an off batch (by espresso or pour-over standards), I add it to my cold-brew bean bin and get very good results. In cold brew you can't really distinguish the bean beyond the degree of roast, and mixing the beans achieves some consistency. The only times this has failed me were when exploring the very darkest or lightest roasts.


On a tangent, but where do you source your beans from?


Usually from: https://www.sweetmarias.com (or their more commercial counterpart http://www.coffeeshrub.com)


> regex-redux (edge case?)

It looks like the JS version is using the runtime's built-in regex engine.

I don't know exactly why (maybe differences or missing features in Go's regexp), but the Go version is actually using the C FFI to call pcre.h, and without profiling it I'd guess a huge hit is that alone.
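
For what it's worth, the native approach would look something like this. This is a toy sketch with made-up input and one of the benchmark's variant patterns, just to show the standard regexp API rather than the benchmark's actual program:

    // Illustrative regex-redux-style work with Go's standard regexp
    // package (an RE2-style engine) instead of cgo+PCRE; the input is a
    // toy stand-in for the benchmark's FASTA data.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        seq := []byte(">id desc\nggtattttaaa\ntttaccca\n")
        // strip FASTA headers and newlines, as regex-redux does first
        seq = regexp.MustCompile(`(>.*)|\n`).ReplaceAll(seq, nil)
        // count matches of one of the benchmark's variant patterns
        n := len(regexp.MustCompile(`[cgt]gggtaaa|tttaccc[acg]`).FindAll(seq, -1))
        fmt.Println(n) // 1
    }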


> I don't know exactly why … and without profiling it I'd guess a huge hit is that alone.

Don't guess. Look at the measurements. Look at the source code. That Go PCRE program is faster than --

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

Maybe faster still to use some of the techniques from this other Go PCRE program --

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...


I think Go's regex lib is native Go, but simply naive. In any case, this particular benchmark isn't useful unless your application is regex as a service.


That's a little disingenuous: they have a minimum order size and consolidate delivery dates within an area to maximize route efficiency.

A fully packed delivery truck in a densely subscribed area would use less fuel than all of those individuals driving to the store. But yes, a few customers spread over a big area, each buying only a few meals, would be worse.
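
As a rough illustration of that trade-off (every number below is a made-up assumption for illustration, not data):

    // Back-of-the-envelope fuel comparison; all constants are assumed.
    package main

    import "fmt"

    func main() {
        const (
            households  = 40.0 // stops on one consolidated delivery route
            storeTripKm = 8.0  // round trip per household driving to the store
            routeKm     = 60.0 // total truck route covering all the stops
            carKmPerL   = 12.0
            truckKmPerL = 4.0
        )
        fmt.Printf("individual car trips: %.1f L\n", households*storeTripKm/carKmPerL) // ~26.7 L
        fmt.Printf("one delivery truck:   %.1f L\n", routeKm/truckKmPerL)              // 15.0 L
    }

Flip those assumptions (few stops, a long route) and the truck loses, which is exactly the caveat above.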


I don't buy the assumption that delivery saves fuel. You're assuming everyone makes one dedicated round trip from home to the grocery store every week, when most people combine it with their other errands, commute, etc.

