"Share" is one of the worst inventions of all. What it does on phones varies unpredictably across apps and platforms, and usually has nothing to do with what the word "share" means in any other context.
You're sharing data between apps. It's an app->app API, essentially. You can easily send an app store listing to your Reminders "Wishlist" section if you want, for example.
I wasn't even thinking social. Problem is, the actual operation being done is one of:
- Give the other app a temporary/transient copy of a document or a file
- Give the other app the actual file (R/W)
- Give the other app the actual file but some other way (there are at least two such ways in Android now, I believe?)
- Give the other app some weird-ass read-only lens into the actual file
- Re-encode the thing into something else and somehow stream it to the other app
- Re-encode the thing into something else and give it that (a lossy variant of the transient-copy case: for example, contact info being encoded into a textual "[Name] Blah\n[Mobile] +55 555 555 555" text/plain message).
- Upload it to cloud, give the other app a link
- Upload it to cloud, download it back, and give the other app a transient downloaded copy (?! pretty sure Microsoft apps do that, or at least that's what it feels when I try to "Share" stuff from that; totally WTF)
- Probably something else I'm missing.
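The lossy re-encoding case from the list above can be sketched in a few lines. This is a toy illustration, not any real share-sheet API; the contact structure and `share_as_text` helper are made up:

```python
# Sketch of the lossy "re-encode and hand over" share mechanism: structured
# contact data is flattened into a text/plain payload, and the receiving app
# cannot recover the original structure from it.

contact = {"name": "Blah", "mobile": "+55 555 555 555"}  # hypothetical structured source

def share_as_text(c: dict) -> str:
    """Re-encode the contact as a plain-text message, as some share sheets do."""
    return f"[Name] {c['name']}\n[Mobile] {c['mobile']}"

payload = share_as_text(contact)
print(payload)
# The receiver gets text/plain only; field semantics, photos, and any other
# structured data are gone -- an independent but mangled copy.
```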
You never really know which of these mechanisms will be used by a given app, until you try to "Share" something from it for the first time.
Now, I'm not saying the UI needs to expose the exact details of the process involved. But it should distinguish between:
1. Giving the other app access to the resource
2. Giving the other app an independent copy of the resource (and spell out whether it's an exact or mangled copy)
3. Giving the other app a pointer to the resource
In desktop terminology, this is the difference between Save As, Export and copying the file path/URL.
Also, desktop software usually gives you all three options. Mobile apps usually implement only one of them as "Share", so when you need one of the other options, you're SOL.
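The three-way distinction can be sketched in desktop-filesystem terms (a toy illustration using temp files, not any mobile API):

```python
import os
import shutil
import tempfile

# A source file standing in for "the resource".
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "doc.txt")
with open(src, "w") as f:
    f.write("original")

# 1. Access to the resource: the other app gets a live handle to the same file;
#    its writes would be visible to everyone else.
handle = open(src, "r+")

# 2. Independent copy (Export / Save As): edits to the copy don't touch the original.
copy_path = os.path.join(src_dir, "doc_copy.txt")
shutil.copy(src, copy_path)
with open(copy_path, "a") as f:
    f.write(" + edits")

# 3. Pointer to the resource: just the path/URL; whether it's still valid later
#    is the receiver's problem.
pointer = src

handle.close()
```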
I've joked that on some services, when you're clicking buttons, you're actually opening tickets that a human needs to action.
That scenario is an example. You complete an action on a web page and nothing works. You make no further changes and hours later it works perfectly. Your human wasn't fast enough that day.
That's the "digital escort" process mentioned in the very long OP. Understandably, the US government got mad when they found out that cheap Chinese tech support staff were being used for direct intervention on "secure" VMs.
That's not what the "problem" was. It's that cheap American support people were "escorting" foreign Microsoft SWEs, so those SWEs could manage and fix the services they wrote and were the subject-matter experts for, in sovereign cloud instances they otherwise would have no access to.
And this was NOT for the government clouds we have that hold classified data. Those are air-gapped clouds that physically cannot be accessed by anyone who doesn't hold a TS clearance and physically go into a SCIF.
source: I work in a team very closely related to the team that designed digital escort.
Yeah, it’s not a great name. But it originates from the government. When somebody without a security clearance needs to go to a secure area, they must be escorted by somebody.
When the blog post mentioned Hegseth and “digital escort” in the same sentence, I was surprised to learn it wasn’t about his OnlyFans habit at his work desktop.
Yes, but this misses the underlying point: this is the same software. It suffers from the same defects. If your management stack keeps crashing and leaking VMs, you are seeing a reduction in the operational capacity of the fleet. If you are still there, just tour Azure Watson and tell me if you'd want the military to rely on that system in wartime. Don't forget things like IVAS and God knows what else that are used during operations while Azure node agents happily crash and restart on the hosts. The system should be no-touch and run like an appliance, which is predicated on zero crashes or 100% crash resiliency. In Windows Core we pursued a single Watson bucket with a single hit until it was fixed. Different standards.
I'm only commenting on parent comment's understanding of what digital escort process is specifically. Escort is used by all kinds of teams that are just doing day-to-day crap for various resource providers across azure. I've never worked anywhere close to Azure Core so I don't know about these more low-level concerns. Overall I agree and sympathize with your assessment of the engineering culture.
You also make it sound like getting a JIT approved is getting keys to the kingdom. It's not -- every team has its own JIT policies for their resources. Should there be far fewer manual touches? Ideally. But JIT is better than persistent access at least, and JIT policies should be scoped according to the principle of least privilege. If that is not happening, it's a failure at the level of that specific org.
> when you're clicking buttons, you're actually opening tickets that a human needs to action
I had one public cloud vendor's sales rep literally admit this was the case with their platform. But they were now selling "the new one", which is supposed to be better.
I recently changed ISPs and have IPv6 for the first time. I mostly felt the same way, but have learned to get over it. Some things took some getting used to.
An "ip address show" is messy with so many addresses.
Those public IPs are randomized on most devices, so a more static one is also created but goes mostly unused. The randomly generated IPs aren't useful inbound for long: I don't think you could brute-force scan that kind of address space, and the address used to connect to the Internet will be different in a few hours.
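Some rough arithmetic on why scanning even a single /64 is impractical (the probe rate is my assumption, and generously high for one scanner):

```python
import ipaddress

# A single /64 subnet -- the standard size handed to one LAN -- holds 2**64
# possible interface IDs, and hosts pick theirs at random.
subnet = ipaddress.ip_network("2001:db8:1234:5678::/64")  # documentation prefix
hosts = subnet.num_addresses                # 2**64

probes_per_second = 1_000_000               # assumed scan rate
seconds = hosts / probes_per_second
years = seconds / (365 * 24 * 3600)
print(f"{hosts} addresses; ~{years:,.0f} years to sweep at 1M probes/s")
```

And that is one subnet; a residential /56 contains 256 of them. Compare this with a temporary address that rotates within hours.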
Having a public address doesn't worry me. At home I have a firewall at the edge. It is set to block everything incoming. Hosts have firewalls too. They also block everything. Back in the day, my PC got a real public IP too.
NAT really is nice for keeping internal/external separate mentally.
I'm lucky enough my current ISP does not rotate my IPv6 range. This, ironically, means I no longer need dynamic DNS. My IPv4 address changes daily.
A residential account usually gets a /56, what are you talking about? Nowhere near a /48! (I'm just being funny here...)
There are reasons to need direct connectivity that aren't hosting a server. Voice and video calls no longer need TURN/STUN. A bunch of workarounds required for online gaming become unnecessary. Be creative.
I'm not confused about the NAT / firewall distinction, but it might be nice if my ISP didn't have a constant, precise idea of exactly how many connected devices I owned. Can that be _inferred_ with IPv4? Yes, but it's fuzzier.
The ISP still doesn't know how many devices are connected, because a lot of those devices are using randomized and rotating IPs for their outbound connections.
Okay, but why does this matter? They're your ISP: they also have your address and credit card number, and in the common case a technician has been in your home and supplied the router.
This vague theoretical problem is being used to defend a status quo that has led to the complete centralization of Internet traffic, because NAT makes P2P connectivity so difficult.
On Linux, I think the defaults are left up to the distros so there is a chance of a privacy footgun there. Hopefully most distros follow the example set by Apple and Microsoft (a sentence I never thought I would write...)
All desktop/mobile OSes today use "stable privacy addresses" for inbound traffic (relevant only if you are hosting something long-term) and "temporary addresses" for outbound traffic and P2P (video/voice calls, multiplayer games...). The temporary ones change quickly; old ones stay assigned so long-lived connections don't break, but they aren't used for new ones.
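The two address kinds can be sketched like this, assuming a /64 prefix from the ISP. The stable interface ID below is a simplified stand-in for the RFC 7217 scheme (real implementations use a specific PRF and inputs), and the temporary one is just random bits, roughly as in RFC 8981:

```python
import hashlib
import ipaddress
import secrets

prefix = ipaddress.ip_network("2001:db8:abcd:12::/64")  # example prefix from the ISP

def stable_address(net: ipaddress.IPv6Network, secret_key: bytes, iface: str) -> ipaddress.IPv6Address:
    """Stable privacy address: the same prefix + host secret + interface always
    yields the same interface ID, so it survives reboots but doesn't embed the MAC.
    (Simplified stand-in for RFC 7217.)"""
    digest = hashlib.sha256(secret_key + iface.encode() + net.network_address.packed).digest()
    iid = int.from_bytes(digest[:8], "big")
    return net.network_address + iid

def temporary_address(net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Temporary address: a fresh random 64-bit interface ID, rotated periodically
    and used for new outbound connections."""
    return net.network_address + secrets.randbits(64)

stable = stable_address(prefix, b"host-secret", "eth0")  # hypothetical key/interface
temp = temporary_address(prefix)
```

The useful property: the stable one is deterministic per (prefix, host, interface), so a DNS record can point at it, while each temporary one reveals nothing about the host or about its other temporary addresses.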