The article talks about eye safety a bit in section 4.
> a stuck mirror
This is one of the advantages of using an array of low-power lasers rather than steering a single high-power laser: the array physically doesn't have a failure mode where the power gets concentrated in a single direction. And in theory, a Class 1 eye-safe lidar should be eye safe even at point-blank range, meaning that even if the beam gets stuck pointing into your eye, it would still be more or less safe.
> 20 cars at an intersection = 20 overlapping scanners, meaning even if each meets single-device Class 1, linear addition could offer your retina a 20x dose enough to push into Class 3B territory.
In the article, I point out a small nuance: If you have many lidars around, the beams from each 905 nm lidar will be focused to a different spot on your retina, and you are no worse off than if there was a single lidar. But if there are many 1550 nm lidars around, their beams will have a cumulative effect at heating up your cornea, potentially exceeding the safety threshold.
Also, if a lidar is eye-safe at point-blank range, then with multiple cars tens of meters away, beam divergence has already reduced the intensity, not to mention that when the lidars are scanning properly, the chance of all of them hitting the same spot at the same moment is vanishingly small.
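To put rough numbers on the divergence point, here's a toy sketch (illustrative figures, not taken from any real lidar datasheet) of how quickly on-axis intensity falls once the beam has spread:

```python
import math

def irradiance(power_w, beam_diam_m, divergence_rad, range_m):
    # Approximate on-axis irradiance, treating the beam as a uniform
    # disk whose diameter grows linearly with range.
    diam = beam_diam_m + divergence_rad * range_m
    area = math.pi * (diam / 2) ** 2
    return power_w / area

# Illustrative numbers only: 5 mW average power, 5 mm exit aperture,
# 1 mrad full-angle divergence.
at_contact = irradiance(5e-3, 5e-3, 1e-3, 0.0)
at_30m = irradiance(5e-3, 5e-3, 1e-3, 30.0)
print(at_contact / at_30m)  # 49.0 -- the beam is 7x wider, so 49x dimmer
```

So even before accounting for scanning, a handful of lidars tens of meters away each contribute far less at your pupil than one unit at point-blank range.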
By the way, the Waymo Laser Bear Honeycomb is the bumper lidar (940 nm iirc) and not the big 1550 nm unit that was on the Chrysler Pacificas. The newer Jaguar I-Pace cars don't have the 1550 nm lidar at all but have a much bigger and higher performance spinning lidar.
Beamforming with a phased array is talked about in the article, but you are conflating two very different types of arrays. You can't form beams with the types of macroscopic arrays I was referring to, since they consist of macroscopic array elements whose phase cannot be controlled, and reside behind a fixed lens that ensures that they all point in different directions.
Pressure switches, flow sensors, mechanical flame detectors, power supply monitoring, watchdog timers, and XX years of Honeywell or whoever knowing what they are doing.
So yes, a mirror trip reset is probably a good start. But would I trust someone's vision to this alone?
> Pressure switches, flow sensors, mechanical flame detectors, power supply monitoring, watchdog timers, and XX years of Honeywell or whoever knowing what they are doing.
Nope, nothing as complicated as that. You're close with the watchdog timer.
The solenoid is driven by a charge pump, which is capacitively coupled to the output of the controller. The controller toggles the gas-grant output on and off a couple of times a second, and it doesn't matter whether it sticks high or low - if there are no pulses, the charge pump will "go flat" after about a second and drop the solenoid out.
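The beauty of that scheme is that it fails safe on any static fault. A toy software model of the same edge-triggered idea (made-up constants, just to show the behavior):

```python
class ChargePumpWatchdog:
    # Toy model of an AC-coupled watchdog: "charge" only increases on
    # input *transitions* and leaks away over time, so a stuck-high or
    # stuck-low controller output drops the solenoid.
    def __init__(self, leak_per_tick=0.2, kick=0.5, threshold=0.3):
        self.charge = 0.0
        self.last_level = 0
        self.leak = leak_per_tick
        self.kick = kick
        self.threshold = threshold

    def tick(self, level):
        if level != self.last_level:  # the capacitor only passes edges
            self.charge = min(1.0, self.charge + self.kick)
        self.last_level = level
        self.charge = max(0.0, self.charge - self.leak)
        return self.charge > self.threshold  # solenoid energized?

wd = ChargePumpWatchdog()
toggling = [wd.tick(i % 2) for i in range(10)]  # healthy: pulses present
stuck = [wd.tick(1) for i in range(10)]         # fault: output stuck high
print(toggling[-1], stuck[-1])  # True False
```

If the controller hangs, the edges stop, the charge leaks away, and the output drops without any logic having to detect the fault.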
Do the same thing. If a sensor at the edge of the LIDAR's scan misses a scan, kill the beam.
Same way we used to do for electron beam scanning.
>> if there are no pulses, the charge pump will "go flat" after about a second and drop the solenoid out.
>> Do the same thing. If a sensor at the edge of the LIDAR's scan misses a scan, kill the beam.
Sounds like a great plan, but I question the "about a second" timing; the GP post calculates that "about a second" is between 4X and 10X the time required to cause damage. So, how fast do these things scan/cycle across their field of view? Could this be solved by speeding up the cycle, or would that overly compromise the image? Change the scan pattern, or insert more check-points in the pattern?
A flash lidar is simply a 2D array of detectors plus a light source that's not imaged. It's mentioned super briefly at the start of section 3 but you're right, I should have gone into more detail given how common and important they are.
For pulsed emitters, indeed adding random jitter to the timing avoids the problem of multiple lidars being synced up and firing at the same time. Some SPAD sensors emit a train of multiple pulses to make a single measurement, and adding random jitter between the pulses is a known and useful trick to mitigate interference. That said, it isn't really accurate to say that interference is a problem for continuous-wave emitters either: coherent FMCW lidars are typically quite robust against interference, e.g. by using randomized chirp patterns.
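A quick simulation of the pulsed case (made-up period and window widths) shows why jitter helps: two free-running lidars with identical timing collide on every shot, while even modest random jitter makes collisions rare.

```python
import random

random.seed(0)
PULSE_NS = 10   # detection window width around our expected return
N = 1_000       # number of shots simulated

def collisions(jitter_ns):
    # Count shots where the other lidar's pulse lands inside our
    # detection window, with both units nominally firing in sync.
    hits = 0
    for _ in range(N):
        ours = random.uniform(-jitter_ns, jitter_ns)
        theirs = random.uniform(-jitter_ns, jitter_ns)
        if abs(ours - theirs) < PULSE_NS:
            hits += 1
    return hits

print(collisions(0))    # phase-locked: every one of the 1000 shots collides
print(collisions(500))  # +/-500 ns jitter: only a few percent collide
```

With per-shot jitter, collisions become uncorrelated outliers that a median or histogram filter can reject, instead of a systematic bias.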
Very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10/TB [1]. That seems like a pretty decent price, even though the Barracudas are only rated for 2400 hours per year [2]; then again, that's the same spec the refurbished Exos drives are rated for.
By the way, it's interesting that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the much cheaper Thermalright TL-B12 performs just as well (although the Thermalright may be slightly louder and perhaps have a slightly more annoying spectrum).
Also, it is mildly sad that there aren't many cheap low-power (< 500 W) power supplies in the SFX form factor. The SilverStone SX500-G 500 W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I've heard good things about getting Delta flex 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them myself. On Taobao they're like $30.
This is Black Friday pricing at least, if you're willing to shuck. Seagate drives are still sub-$10/TB which... a single 24-26TB is enough for all my photos (ever), media and some dataset backup for work. I'm planning to backup photos and other "glacier"-tier media like YouTube channels to BluRay (a disk or two per year). It's at the point where I'd rather just pay the money and forget about it for 5-10 years.
I built the case from MakerBeam and printed panels, with an old Corsair SF600 and a 4-year-old ITX system using one of SilverStone's backplanes (they make ones that hold up to 5 drives in a 3x 5.25" bay footprint). It's a little overpowered (a 5950X), but I also use it as a generic server at home and run a shared ZFS pool with 2x mirrored vdevs. Even with the space inefficiency of mirrors, it's more than I need. I put in a 1080 Ti for transcoding and odd jobs that need a little CUDA (like photo tagging) - it runs ResNet50-class models easily enough. I've also wondered about treating it as a single-node SLURM server.
That's only for ZFS deduplication which you should never enable unless you have very, very specific use cases.
For normal use, 2GB of RAM for that setup would be fine. But more RAM is more readily available cache, so more is better. It is certainly not even close to a requirement.
There is a lot of old, often repeated ZFS lore which has a kernel of truth but misleads people into thinking it's a requirement.
ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.
There are a couple of recent developments in ZFS dedup that help to partially mitigate the memory issue: fast dedup, and the ability to use a special vdev to hold the dedup table if it spills out of RAM.
But yes, there's almost no instance where home users should enable it. Even the traditional 5 GB per 1 TB rule can fall over completely on systems with a lot of small files.
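To see why small files break the rule of thumb, here's a back-of-the-envelope sketch using the commonly quoted ~320 bytes per in-core DDT entry (the actual entry size varies by ZFS version): RAM need scales with block count, so shrinking the average block size from 128K to 4K multiplies the requirement by 32x.

```python
def ddt_ram_bytes(pool_bytes, avg_block_bytes=128 * 1024, entry_bytes=320):
    # One DDT entry per unique block; ~320 B per in-core entry is the
    # commonly quoted figure (it depends on the ZFS version).
    return (pool_bytes / avg_block_bytes) * entry_bytes

tb = 1024 ** 4
print(ddt_ram_bytes(10 * tb) / 1024 ** 3)            # 25.0 GiB at 128K blocks
print(ddt_ram_bytes(10 * tb, 4 * 1024) / 1024 ** 3)  # 800.0 GiB at 4K blocks
```

The same 10 TiB pool goes from "plausible on a beefy home box" to "hopeless" purely on block-size distribution, which is exactly where the 5 GB/TB folklore falls over.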
Yes, that's the reason why a dedup vdev that has lower redundancy than your main pool will fail with "mismatched replication level" unless you use the -f (force) flag.
I'm not sure whether an L2ARC vdev can offload the DDT, but my guess is no, given the built-in logic warning against mismatched replication levels.
Well, the warning makes sense with respect to the dedup vdev since the DDT would actually be stored there. On the other hand, the L2ARC would simply serve as a read cache, similar to the DDT residing in RAM.
It's a good price but the Barracuda line isn't intended for NAS use so it's unclear how reliable they are. But it's still tempting to roll the dice given how expensive drive prices are right now.
Don't take manufacturers' recommendations so literally. I've been running 6 WD Greens in raidz for 81000 hours with no issues. Home media library etc. At the end of the day, any CMR disk is basically the same as any other, excluding manufacturing defects.
I won that lottery with 3x 26TB Exos shipped. I decided to try to get two more, but they turned out to be HAMR (returned). Then I managed to find two more with earlier manufacturing dates in store stock at a somewhat-far Best Buy that I was driving past anyway.
It felt like an unnecessary purchase at the time (I'm still waiting to CAD a CPU cooler mounting solution for the build in a new case that has room for the drives). But it seems like that deal is going to be the high water mark for a few years, at least.
I don't know; apparently the helium levels stay at 99% for many years, and if we extrapolate, the drives will probably last a few decades. So my concern over helium might be a bit overblown.
For those who are interested, Will Evans's course on graph drawing [1] covers a lot of cool graph drawing algorithms. Graph drawing is very interesting, and many applications get it wrong. I once contributed [2] some small bugfixes to the Dot lexer of the Open Graph Drawing Framework, which has fast implementations of some amazing graph drawing algorithms, and in my experience OGDF draws graphs vastly better than the various algorithms in GraphViz (fewer crossings, faster, etc.).
One day I'll muster up the motivation to bring my setup to Roaring Camp to scan those Shay geared locomotives but those moving parts will indeed appear weird and distorted.
I'm mostly curious whether those distortions capture a sense of movement in the pistons, with their regular sinusoidal beats. And I have no idea how steam clouds would come out. Our minds also visualize the moving parts differently from how a regular camera captures them or the eye sees them.
Yeah they have a whopping 300mm f/2.0 lens for photo finish! I have been using various primes including a Samyang 135mm f/2, Voigtländer Apo Lanthar 125mm f/2.5, Voigtländer Nokton 58mm f/1.4, Voigtländer Ultron 35mm f/1.7, Myutron 50mm f/2.6, etc. The problem with a really large aperture is that it's hard to nail focus.
Sorry for the purple trees. The camera is sensitive to near infrared, in which trees are highly reflective, and I haven't taken any trains since buying an IR cut filter. Some of these also have dropped frames and other artifacts.
Yeah I actually have it disabled by default since it makes the horizontal stripes more obvious and it's also extremely slow. Also, I found that my vertical stripe correction doesn't work in all cases and sometimes introduces more stripes. Lots more work to do.
As for RCD demosaicing, that's my next step. The color fringing is due to the naive linear interpolation of the red and blue channels. With the RCD strategy, since the green channel has full coverage of the image, we can use it as a guide to improve the interpolation.
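As a sketch of the guided idea (a hypothetical 1-D toy, not actual RCD): interpolating the color difference R - G instead of R itself lets the full-resolution green channel carry the edges, so the red reconstruction doesn't smear across them.

```python
import numpy as np

def green_guided_red(red_sparse, red_mask, green_full):
    # Interpolate the difference R - G (smooth across edges that both
    # channels share), then add the full-resolution G back.
    diff = np.where(red_mask, red_sparse - green_full, 0.0)
    xs = np.nonzero(red_mask)[0]
    interp = np.interp(np.arange(len(red_sparse)), xs, diff[xs])
    return interp + green_full

# Toy scanline: a hard edge that green sees at full resolution.
green = np.array([10., 10., 10., 10., 50., 50., 50., 50.])
red_true = green + 2.0                        # red tracks green, offset by 2
mask = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
red = green_guided_red(red_true * mask, mask, green)
print(red)  # [12. 12. 12. 12. 52. 52. 52. 52.] -- edge recovered exactly
```

Naive linear interpolation of red alone would put ~32 at the sample next to the edge, which is exactly the fringing you see; interpolating the difference sidesteps it whenever R and G are locally correlated.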
When you do the demosaicing, and perhaps other steps, did you ever consider declaring the x-positions, spline parameters, ... as latent variables to estimate?
Consider a color histogram: the logo (showing color oscillations) would have a wider, lower-peaked histogram, whereas a correctly mapped region (just the few colors, plus or minus some noise) would show a very thin but strong peak in colorspace. A high-variance color occupation has higher entropy than a low-variance, strongly centered peak (or multi-peak) distribution.
So it seems colorspace entropy could be a strong term in a loss function for optimization (using RMAD, i.e. reverse-mode automatic differentiation).
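A minimal sketch of such a term (my own toy formulation, not from the article): compute the Shannon entropy of a per-channel color histogram, and note that a garbled, oscillating patch scores much higher than a clean flat-color patch.

```python
import numpy as np

def color_entropy(pixels, bins=32):
    # Shannon entropy of a color histogram: a mis-mapped region with
    # oscillating colors spreads mass across many bins (high entropy);
    # a clean flat-color region concentrates it (low entropy).
    h, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
clean = 0.5 + 0.01 * rng.standard_normal(10_000)  # tight single peak
garbled = rng.uniform(0, 1, 10_000)               # spread everywhere
print(color_entropy(clean), color_entropy(garbled))  # low vs ~5 bits
```

For gradient-based optimization you'd need a soft (e.g. kernel-weighted) histogram, since the hard binning above isn't differentiable.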
What's your FPS/LPS in this setup? I've experimented with similar imaging with an ordinary camera, but LPS was limiting, and I know line-scan machine vision cameras can output some amazing numbers, like 50k+ LPS.
Absolutely fascinating stuff! Thank you so much for adding detailed explanations of the math involved and your process. Always wondered how it worked but never bothered to look it up until today. Reading your page pushed it beyond idle curiosity for me. Thanks for that. And thanks also to HN for always surfacing truly interesting reading material on a daily basis!