
What "op" exactly? git push/pull? git push/pull are bandwidth efficient and atomic. A failed pull/push wont destroy your local/remote copy.


It tends to drop your connection on large packs.

That's more of a server configuration thing; many git hosts disconnect slow clients.
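For what it's worth, git's own transfer timeouts can also be tuned so a slow-but-alive connection isn't given up on too early. A minimal sketch (the thresholds here are assumptions; tune them for your link):

```shell
# Only abort an HTTP transfer after a sustained stall: anything slower
# than 1000 bytes/s for 60 consecutive seconds counts as dead.
git config http.lowSpeedLimit 1000
git config http.lowSpeedTime 60

# A shallow fetch keeps the pack small on a lossy link; history can be
# deepened later by re-fetching with a larger --depth.
git fetch --depth=1 origin master
```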

I was in Botswana for a few weeks in November and I can relate to that sentiment: git was unusable down there.


Yuck, been there, done that, bought the T-shirt. Satellite latency is eye-stabbing, almost as bad as Mountain View's utterly worthless Google Wi-Fi. Local internet elsewhere, in random countries: good luck... bring your own or start a service (a friend made some serious cash putting up a service on some Greek island).

Anyone on OS X can feel some of this pain by enabling the Network Link Conditioner prefpane and creating a profile as follows:

  Downlink Bandwidth: 256 Kbps
  Downlink Packets Dropped: 90%
  Downlink Delay: 1000 ms

  Uplink Bandwidth: 256 Kbps
  Uplink Packets Dropped: 90%
  Uplink Delay: 1000 ms

  DNS Delay: 2000 ms

Setup instructions:

http://mattgemmell.com/network-link-conditioner-in-lion/
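On Linux, a roughly equivalent impairment can be simulated with tc/netem. A sketch assuming the interface is eth0 and you have root (netem has no direct DNS-delay knob, so that part of the profile is omitted):

```shell
# Shape egress on eth0: 1000 ms added delay, 90% packet loss, 256 kbit/s.
tc qdisc add dev eth0 root netem delay 1000ms loss 90% rate 256kbit

# Inspect the active qdisc, then remove the impairment when done.
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

Note that netem shapes only the egress side of one interface; to impair both directions you would apply it on both endpoints or use an intermediate bridge.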


The issue is never being able to complete the fetch or push because of a spotty network.

It goes without saying that source control should not screw up my local or remote copies.
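One workaround on links that keep dropping mid-transfer is to move the repository as a single file over a resumable transport. A sketch using `git bundle` plus rsync (the host and file names are placeholders):

```shell
# On a machine that can reach the remote: pack all refs into one file.
git bundle create repo.bundle --all

# Move it with a transport that can resume after a dropped connection.
rsync --partial --progress repo.bundle user@spotty-host:~/

# On the other end, a bundle file is a valid clone/fetch source.
git clone repo.bundle myrepo
```

Unlike a direct fetch, a half-transferred bundle isn't lost when the link drops; rsync picks up where it left off.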


What gets me the most about LASIK naysayers is that you get the same or even much worse artifacts when using glasses or contacts. I had Femto-LASIK 2 years ago, after wearing glasses for 6 years and contacts for 6 more. I have very slight haloing around bright lights at night (not as bad as described in the post, though). The freedom of not having glasses or contacts is FANTASTIC!


I have both myopia and astigmatism and I see no halos at night.

You are not supposed to see halos if your prescription is correct. You might get some artifacts if you look through your glasses at an angle, but that's it.

Halos are an artifact of LASIK, usually when the pupil is large enough that it expands past the area of the cornea which was "corrected".


Just to make it clear: RAID-5/6 mdadm arrays do the right thing when repairing/checking/scrubbing data. They write the correct data if one of the drives has a corrupted block.

https://raid.wiki.kernel.org/index.php/RAID_Administration
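For reference, md scrubbing is driven through sysfs. A sketch assuming the array is md0 and you have root (note the caveat: on RAID-5, `repair` recomputes parity from the data blocks, so it makes the array internally consistent but cannot tell which block was actually the bad one):

```shell
# Kick off a scrub: read every stripe and count parity mismatches.
echo check > /sys/block/md0/md/sync_action

# Watch progress, then read the mismatch counter once the array is idle.
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt

# Rewrite redundancy so the array is internally consistent again.
echo repair > /sys/block/md0/md/sync_action
```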

  How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines get roughly one corrupt bit per day. In other words, that could be a few corrupt files per week.
This is complete nonsense without more data to back it up.


> Just to make it clear. raid-5/6 mdadm arrays does the right thing when repairing/checking/scrubbing data.

This is inherent to RAID-5/6. Doesn't really have anything to do with mdadm other than mdadm implements RAID-5/6. And now you probably have a write hole.


Here is one of the sources: http://linas.org/linux/raid.html


Just to make it clear: on RAID-5/6, parity isn't checked on reads, so to get your "right thing when repairing/checking/scrubbing data" you'd have to do a full parity rebuild. This isn't anything like what ZFS does.


Orbital welding on this scale and much larger is already done in the offshore oil industry.


You can set the port in the configuration:

http://labs.bittorrent.com/experiments/sync/get-started.html...

The closed-source nature of BitTorrent Sync is its biggest drawback IMHO. But I guess FOSS clones are on their way; the BitTorrent protocol itself is very well known, after all.


Dropbox does not upload your whole TrueCrypt volume every time you change it; it only uploads the blocks that changed. It's pretty efficient.

The real problem with keeping a TrueCrypt volume on Dropbox is concurrent changes to the same volume: if you mount it on 2 machines at the same time, it will get messed up badly.


That is really interesting. I had to look it up.

https://en.bitcoin.it/wiki/Mining_hardware_comparison

The fastest Nvidia entry is the Tesla S2070, which is an $18k server with 8 GPUs! It can just barely keep up with a slightly overclocked single-GPU HD 7970.


I'm not a miner, so correct me if I'm wrong, but that seems like a bit of an apples-to-oranges comparison. Tesla cards (and the servers designed around them) are intended for specific use cases: mission-critical enterprise work and scientific HPC. As a result, they run lower processor and memory clocks than nVidia's own consumer products, use ECC memory, and are optimized for double-precision rather than single-precision performance. Mining with a Tesla is like gaming with a Quadro card.


There is nothing mission-critical or 'enterprise' about Tesla/Fermi cards. You can crash them and lock up your whole machine. Even if you can reboot the OS, the card may not respond and the rebooted OS won't see it; we sometimes have to physically power the machine down to reset the Nvidia card. Nvidia is still a gaming company at heart, and it's going to take a while for them to adjust to providing equipment that is meant to be reliable and not just fast.


Here is another very similar one: https://github.com/thoj/go-ircevent

