Lately, I have one rule: any call from an unknown number I'm not expecting goes to voicemail, where my greeting says something along the lines of: leave your message and your number, and if it's important I'll call you back. The only time I pick up is when I'm expecting, say, a delivery or a doctor's call, and in those cases that's all I'm expecting to hear about. Hoping that can filter things and help on this front.
I guess I’d expect them to be in a country where it’d be difficult to be apprehended and extradited. Being in the UK seems like a stupid move to me, but what do I know!
Was it a professional operation? It says they were 17. Some kids playing around with their Commodore 64, except it's connected to the internet and a pretty big company.
Let's not pretend these kids were trying to hack the Gibson just for the lulz. Calling the help desk, requesting password resets via social engineering, getting into the network, installing ransomware: that's all well beyond playing around. I know there are smart teens, but I wouldn't be surprised to find out there was someone more experienced in the background who got the kids going, if not someone they were acting on behalf of.
There are plenty of teens selling dope, stealing cars, and breaking into homes, yet nobody thinks they're just knuckleheads playing around. Why do we think "but on a computer" makes it different?
This doesn’t surprise me. I work for a company that hires a substantial headcount from TCS, probably one of their biggest clients, and the quality of the work is astonishingly bad.
I’d recommend avoiding them at all costs, but we all know these companies are brought in by non-technical people.
> In 3 of 4 calls, the service desk reset passwords and re-enrolled MFA with zero resistance. The caller simply gave a name – no validation, no callback, no check. On the 4th call, the attacker requested access to a privileged group. The TCS agent asked for an employee ID. The ID given didn’t even match our company’s format; and yet, the access was granted anyway.
In a proper capitalist system, those who build low-quality e-commerce services, including hackable ones, should go out of business and be replaced by more competent companies. The same goes for those who buy services from bad suppliers.
This Reddit post hints that many shortcuts were taken and security wasn't taken seriously when it should have been, and now they reap what they sow.
There has been no reaping. MKS shares were largely unimpacted (despite this costing at least £300m). Management have tried to deflect, saying this was a highly sophisticated attack and that other firms had been hacked but just didn't report it; endless amounts of lying.
The reality is that decreasing costs is a far easier lever to pull than increasing revenue so managers will be heavily incentivised to do this if you give them profit-based incentives. This happens every few years with listed companies in the UK now, no-one ever changes their behaviour (retail, in particular, is ground zero for bluffers in the UK, managers are exceptionally bad, and even worse are comp committees that set targets that cannot be achieved without damaging long-term value).
There is no efficient market here. It is as simple as managers understanding the world we now live in, and that is unlikely because all these companies view IT as a cost and their managers are people who rotate through executive roles and politics despite leaving a flaming wreck in their wake. Things will stay the same.
> In a proper capitalistic system, those who build low quality e-commerce services, including hackable ones, should go out of business
If the impact is large enough, they do.
This is not a case where binary thinking works, though. The costs associated with the attack will hurt them by damaging their balance sheet a little, taking capital away from more productive opportunities, and distracting their employees from more fruitful tasks.
There’s always a public thirst for immediate blood in these situations, but the damage is more subtle and manifests more as opportunity cost than a sudden collapse of the company. The demand for sudden stock market collapse of companies is ironic, given all of the criticisms thrown at companies for putting too much emphasis on short term stock results.
They do. Security is about risk management. It’s all very actuarial. If the damages from an attack are severe enough (i.e. they make a company go bankrupt), that’s capitalism working.
"Proper capitalist system", aka fantasy capitalism: a utopian capitalism with no operations or tasks where deceiving is cheaper than doing things correctly. Yes, I am one of those who don't believe such a thing is compatible with human nature.
That's a very naive view of capitalism; there is nothing inherently preventing companies from being negligent in infosec, no matter how "proper" the system is. Also, wouldn't defunding the ICO make it more "proper"?
I own a Garmin InReach that I regularly take with me on hiking expeditions (mostly Scotland, Iceland and South Africa) and it’s super handy in remote areas with no coverage. The ability to send an SOS is the main feature, but the two-way messaging system is just amazing for peace of mind, for me and my family.
However, the main thing about the InReach is its ruggedness and battery life, which I consider essential. I always carry an iPhone with me that I use when and if I get reception, but I need to carry a battery pack for it and it’s always in the back of my mind that an iPhone is a relatively fragile device and it’s one misstep away from cracking/breaking/etc, hence the inReach.
Moving forward, having both will be great, but I think having to rely only on an iPhone would make me a bit nervous, so I’m not sure how much of a threat this is for the inReach devices (at least for now).
I think this drastically underestimates, or at least undersells, the impact of convenience, or in this case the maximum possible level of convenience, which is already having the feature even if you don't know it. Similar arguments could have been, and indeed were, made for every other small electronic device the smartphone has replaced. It's not that the advantages you listed don't exist, it's just that they won't hold a candle to the explosion of smartphones with satellite messaging built in by default. I feel like even just one year (and 200 million satellite-enabled iPhones) from today, pointing out these advantages is going to look like people pointing out that landlines have better audio quality and lower latency than cell phones.
I always carry my InReach with periodic location sharing enabled when I'm on backpacking trips. I also almost always carry it during international travel and road trips, but not usually with periodic location sharing. If iPhones start offering plans with periodic location sharing, I'm fairly confident that I'd stop carrying the InReach unless I was on a particularly remote trip that was outside of my comfort zone (which isn't something I really do anyway).
I may be taking trips you’d be uncomfortable with, but the InReach is invaluable for two-way messaging and weather updates. I haven’t needed the emergency service (knock on wood), but I’ve used most of the other things it does. My phone is a much better GPS, though, and if Apple makes it possible to do those two things I mentioned at first without it using much of the battery life I’d consider only keeping my InReach for extended trips.
There’s something to be said about convenience that worries me a bit here: the number of people that may be “misusing” this feature and the strain it’s going to put on, say, mountain rescue. If people start feeling more confident than their capabilities warrant, or ignore weather reports just because they can be rescued with an iPhone, I don’t know what effect that’ll have on those resources.
It’s a reasonable question to ask what the effect will be of millions of people in cell-less areas who can now send an SOS with (AFAIK) no other context. I assume Apple has discussed this with the various authorities, however.
Some of these effects are already seen with cell phones. OTOH while no panacea, cell phones can help when someone is in genuine danger.
Genies and bottles and all that. Of course authorities may get more liberal with levying significant fines for people who get rescued because they were unprepared.
I'm a Paramedic in a California county that isn't even that far from San Francisco and we have a LOT of mountain range area with zero cell service at all. Even worse, our Motorola radios don't work up in the same hills either. We frequently get very vague and difficult to pinpoint locations for car accidents, medical emergencies, etc.
Now someone that is in a wreck or sees one can stay put and contact emergency services with a way more accurate location, and I think that's awesome.
I’m thinking of backcountry hunting applications personally. When I’ve been completely away from cellular service and put my phone on airplane mode and low brightness it goes for several days without a charge as a (not always on) gps and camera. Since these trips have usually been in groups of 3-4 the fragility of the phone is a smaller issue.
But I get what you’re saying: if you have two you have one, and if you have one you have none. Some redundancy is nice, and the iPhone as a backup to the inReach will be a likely setup.
The problem with rsync is that it has to traverse the whole FS tree and check every single file on both sides for timestamp and size. In a relatively small FS tree that's just fine, but when you start having GBs and GBs and tens of thousands of files, it becomes somewhat impractical.
Then, you need to come up with other creative solutions, like deep syncing inside the FS tree, etc. Fun times.
Add checksumming, and you probably would like to take a holiday whilst it copies the data :-)
if you want to copy everything and there's nothing at the target:
rsync --whole-file --ignore-times
that should turn off the metadata checks and the rsync block checksum algorithm entirely and transfer all of the bits at the source to the dest without any rsync CPU penalty.
also for this purpose it looks like -H is also required to preserve hard links which the man page notes:
"Note that -a does not preserve hardlinks, because finding multiply-linked files is expensive. You must separately specify -H."
it'd be mildly interesting to see a speed test between rsync with these options and cp.
there are also utilities out there to "post-process" and re-hardlink everything that is identical, so that a fast copy without preserving hardlinks and then a slow de-duplication step would get you to the same endpoint, but at an obvious expense.
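A quick sketch of what `-H` buys you (paths made up for the demo); without it, the two names below would arrive as two independent files on the destination:

```shell
mkdir -p /tmp/hl_src /tmp/hl_dst
echo "data" > /tmp/hl_src/one
ln /tmp/hl_src/one /tmp/hl_src/two    # two directory entries, one inode

# -H asks rsync to detect multiply-linked files and recreate the link.
rsync -aH /tmp/hl_src/ /tmp/hl_dst/

# On the destination, the inode numbers should match (GNU stat syntax):
stat -c %i /tmp/hl_dst/one /tmp/hl_dst/two
```

The expense the man page warns about is exactly that detection step: rsync has to track inode numbers across the whole transfer to find the link pairs.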
I use rsync to replicate 10TB+ volumes without any problems. It's also very fast to catch up on repeat runs, so you can ^C it without being too paranoid.
It sounds like OP had large hard-link counts, since he was using rsnapshot. It's likely that he simply wouldn't have had the space to copy everything over to the new storage without hard-links.
It's hard to overstate what rsync can do... I use it to back up my almost-full 120GB Kubuntu system disk daily, and it goes through 700k files in ~2-3 minutes. Oh, and it does it live, while I'm working.
To be fair: the case described here is two orders of magnitude larger than your use case in both dimensions.
And the critical aspect from a performance perspective was where the hash table became too large to fit into memory. Performance of all sorts goes pear-shaped when that happens.
rsync is pretty incredible, but, well, quantity has a quality all its own, and the scale involved here (plus the possible in-process disk failure) likely wasn't helping much.
Yep, my answer when cp or scp is not working well is always to break out rsync, even if I'm just copying files over to an empty location.
I've had good luck with the -H option, though it is slower than without the option. I have never copied a filesystem with nearly as many files as the OP with -H; the most I've done is probably a couple million, consisting of 20 or so hardlinked snapshots with probably 100,000 or 200,000 files each. rsync -H works fine for that kind of workload.
> In a relatively small FS tree that's just fine, but when you start having GBs and GBs and tens of thousands of files, it becomes somewhat impractical.
I regularly rsync machines with millions of files. On a slow portable 2.5" 25MB/sec USB2 connection it's never taken me more than an hour on completely cold caches to verify that no file needs to be copied. With hot caches, it's a few minutes. And on faster drives it's faster still.
Unless you are doing something weird and force it to checksum, checksumming mostly kicks in on files that have actually changed and can be transferred efficiently in parts (i.e. over a network; NOT between local disks). In other cases, it's just a straight copy.
Have you actually used rsync in the setting you describe, or at least read its manual?
> The problem with rsync is that it has to traverse the whole FS tree and check every single file on both sides for timestamp and size.
- In this scenario, the receiving side is empty, so there is no need to check every single file.
- Since 3.0, rsync walks the dirtree while copying (unless you use some special options). So, in some sense, rsync is already "deep syncing inside the FS tree", as you put it.
rsync since version 3 has stopped doing the full tree traversal up front. It's not in RHEL/CentOS 5 but is in 6, and is really trivial to compile by hand anyway. That's made life a lot easier for me in the past.
Some of the other features in rsync would seem to make it a more logical fit for this task anyway: e.g. it's resumable, it supports on-the-fly compression (though the value of that depends on the types of files being transferred), native logging, etc.
Damn, I could use some advice there. Right now I have a system which uses rsync to back up some files. It takes about 8 hours to go over 7+ million files (~3.5TB). I'd love to speed this up. I should mention that the copy is done over curlftpfs :(.