
I can remember using an AiW card to play PS2 on my computer screen when my TV died. The latency wasn’t great but we still had fun.

As somebody who started teaching himself guitar as a teenager, put it down for 10+ years and started back up, this resonates.

Everything takes twice as long to learn because I first have to unlearn the old habits.


I'm also a big believer that "head up, voice down" will reduce your likelihood of becoming a target.

People don't usually bring trouble to themselves for no reason. Don't give them a reason.


The first time I used iOS I noticed a lot of things it considers "normal" are completely undiscoverable unless you know.

Swipe down from the top. No, the other top.

Click share, now click "find in page". Wait, that doesn't share at all?


"Share" is one of the worst inventions of all. What it does in phones is random across apps and platforms, and usually has nothing to do with what the word "share" means in any other context.

You're sharing data between apps. It's an app->app API, essentially. You can easily send an app store listing to your Reminders "Wishlist" section if you want, for example.

It's definitely not only social sharing.


I wasn't even thinking social. Problem is, the actual operation being done is one of:

- Give the other app a temporary/transient copy of a document or a file

- Give the other app the actual file (R/W)

- Give the other app the actual file but some other way (there's at least two in Android now, I believe?)

- Give the other app some weird-ass read-only lens into the actual file

- Re-encode the thing into something else and somehow stream it to the other app

- Re-encode the thing into something else and give it that (a lossy variant of the transient-copy case - for example, contact info encoded into a textual "[Name] Blah\n[Mobile] +55 555 555 555" text/plain message).

- Upload it to cloud, give the other app a link

- Upload it to cloud, download it back, and give the other app a transient downloaded copy (?! pretty sure Microsoft apps do that, or at least that's what it feels like when I try to "Share" stuff from them; totally WTF)

- Probably something else I'm missing.

You never really know which of these mechanisms will be used by a given app, until you try to "Share" something from it for the first time.

Now, I'm not saying the UI needs to expose the exact details of the process involved. But it should distinguish between:

1. Giving the other app access to the resource

2. Giving the other app an independent copy of the resource (and spell out if it's an exact or mangled copy)

3. Giving the other app a pointer to the resource

In desktop terminology, this is the difference between Save As, Export and copying the file path/URL.

Also, desktop software usually gives you all three options. Mobile apps usually implement only one of them as "Share", so when you need one of the not chosen options, you're SOL.
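The access/copy/pointer distinction above is easy to make concrete. A toy Python sketch (all names are illustrative, not any platform's actual share API) of the three desktop-style operations:

```python
import shutil
import tempfile
from pathlib import Path

def give_access(path: Path):
    # 1. Access: hand over a live handle to the same underlying resource.
    return open(path, "r+")

def give_copy(path: Path) -> Path:
    # 2. Independent copy: the receiver's edits don't touch the original ("Export").
    dst = Path(tempfile.mkdtemp()) / path.name
    shutil.copy(path, dst)
    return dst

def give_pointer(path: Path) -> str:
    # 3. Pointer: just a reference; only useful if the receiver can resolve it.
    return str(path.resolve())

src = Path(tempfile.mkdtemp()) / "note.txt"
src.write_text("hello")

copy = give_copy(src)
copy.write_text("edited")   # mutate the copy...
print(src.read_text())      # ...the original is unchanged: "hello"
```

The "Share" menus complained about above collapse these three very different operations into one button, which is exactly why the result is unpredictable per app.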


If you think of it as "send" rather than "share" it makes a lot more conceptual sense. Don't get caught up on the word.

It's almost always to send the content somewhere, whether it's a platform, an app, the clipboard, etc.

Not always always, but almost always.


I still despise whoever decided that swipe-from-top needs 2 versions somehow

Apparently bookmarks and self-hosting a read it later web app on my home server but only having 5 tabs open at a time makes me a filthy casual.

I think you failed to correctly apply De Morgan's laws to the statement you're reacting to.

I am somewhat surprised by how long 1080p has stuck around as the standard.

People often seem shocked that I use mostly 4K screens, but I've had one of them for almost 10 years now.

It also seems that 8K has died for now. I think we still have time.


The dnf autorefresh service bit me running a small Alma instance for the first time.

Debian and apt seemed to require less RAM then, but that was a few years ago now.


I've joked that on some services, when you're clicking buttons, you're actually opening tickets that a human needs to action.

That scenario is an example. You complete an action on a web page and nothing works. You make no further changes and hours later it works perfectly. Your human wasn't fast enough that day.


That's the "digital escort" process mentioned in the very long OP. Understandably, the US government got mad when they found out that cheap Chinese tech support staff were being used for direct intervention on "secure" VMs.

That's not what the "problem" was. It's that cheap American support people were "escorting" foreign Microsoft SWEs so they could manage and fix services they wrote and were the subject-matter experts for, in sovereign cloud instances they otherwise would have no access to.

And this was NOT for the government clouds we have that hold classified data. Those are air-gapped clouds that can only be accessed by someone who has a TS clearance and physically goes into a SCIF.

source: I work in a team very closely related to the team who designed digital escort.


I would definitely fight against calling anything I work on „digital escort”.

Yeah, it’s not a great name. But it originates from the government. When somebody without a security clearance needs to go to a secure area, they must be escorted by somebody.

When the blog post mentioned Hegseth and “digital escort” in the same sentence, I was surprised to learn it wasn’t about his OnlyFans habit at his work desktop.

Yes, but this misses the underlying point: this is the same software. It suffers from the same defects. If your management stack keeps crashing and leaking VMs, you are seeing a reduction in the operational capacity of the fleet. If you're still there, just tour Azure Watson and tell me if you'd want the military to rely on that system in wartime. Don't forget things like IVAS and God knows what else that are used during operations while Azure node agents happily crash and restart on the hosts. The system should be no-touch and run like an appliance, which is predicated on zero crashes or 100% crash resiliency. In Windows Core we pursued a single Watson bucket with a single hit until it was fixed. Different standards.

I'm only commenting on parent comment's understanding of what digital escort process is specifically. Escort is used by all kinds of teams that are just doing day-to-day crap for various resource providers across azure. I've never worked anywhere close to Azure Core so I don't know about these more low-level concerns. Overall I agree and sympathize with your assessment of the engineering culture.

You also make it sound like getting a JIT approved is getting keys to the kingdom. It's not -- every team has its own JIT policies for their resources. Should there be far fewer manual touches? Ideally. But JIT is better than persistent access at least, and JIT policies should be scoped according to the principle of least privilege. If that is not happening, it's a failure at the level of that specific org.

Policies vary. The node folks get access to the nodes and the fabric controller by necessity.

I guess we agree on the point where it should not be necessary, which echoes Cutler’s original intent of “no operational intervention.”

This is not an impossible task, after all it’s just user-mode code calling into platform APIs.


200 requests a day, lol

on average :)

> I've joked that on some services, when you're clicking buttons, you're actually opening tickets that a human needs to action.

I just experienced one startup where the buttons just happen to only work during business hours on the US west coast.


Infrastructure-as-a-ServiceNow Ticket

> when you're clicking buttons, you're actually opening tickets that a human needs to action

I had one public cloud vendor's sales rep literally admit this was the case with their platform. But they were now selling "the new one," which is supposed to be better.

It was, a lot. But only compared to the old one.


That sounds pretty nice. The same mini PC I paid $195 for in 2023 is now $450. Seems to be life in Canada sometimes.

It had caused me to look around though. I have found the Pi Zero 2W to be surprisingly capable for Pi sized jobs.


I recently changed ISPs and have IPv6 for the first time. I mostly felt the same way, but have learned to get over it. Some things took some getting used to.

An "ip address show" is messy with so many addresses.

Those public IPs are randomized on most devices, so a stable one is created too but goes mostly unused. The randomly generated IPs aren't useful for inbound connections for long. I don't think you could brute-force scan that kind of address space, and the address used to connect to the Internet will be different in a few hours.

Having a public address doesn't worry me. At home I have a firewall at the edge. It is set to block everything incoming. Hosts have firewalls too. They also block everything. Back in the day, my PC got a real public IP too.

NAT really is nice for keeping internal/external separate mentally.

I'm lucky enough my current ISP does not rotate my IPv6 range. This, ironically, means I no longer need dynamic DNS. My IPv4 address changes daily.

A residential account usually gets a /56, what are you talking about? Nowhere near a /48! (I'm just being funny here...)

There are reasons to need direct connectivity that aren't hosting a server. Voice and video calls no longer need TURN/STUN. A bunch of workarounds required for online gaming become unnecessary. Be creative.
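The address-space claims above are easy to sanity-check. A quick Python calculation (the 1M probes/second scan rate is an arbitrary assumption for illustration):

```python
# A /56 delegation contains 2^(64-56) = 256 /64 subnets; a /48 contains 65,536.
subnets_in_56 = 2 ** (64 - 56)
subnets_in_48 = 2 ** (64 - 48)
print(subnets_in_56, subnets_in_48)  # 256 65536

# Brute-force scanning a single /64 means probing 2^64 host addresses.
# At an (assumed) 1 million probes per second:
seconds = 2 ** 64 / 1_000_000
years = seconds / (3600 * 24 * 365)
print(f"{years:,.0f} years")  # 584,942 years
```

Which is why naive sequential scanning of a /64 is a non-starter, and attackers instead rely on leaked addresses (DNS, logs, passive observation).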


> Having a public address doesn't worry me. At home I have a firewall at the edge. It is set to block everything incoming.

The concern is privacy, not security. A publicly addressable machine is a bit worse for security (IoT, anyone?), but it is a lot worse for privacy.


I'm not confused about the NAT / firewall distinction, but it might be nice if my ISP didn't have a constant, precise idea of exactly how many connected devices I owned. Can that be _inferred_ with IPv4? Yes, but it's fuzzier.

Is this solved by the device having between 1 and X randomly generated IPv6 addresses?

Some of my devices have 1, some 2, and some even more. Takes some precision out, at least.


The ISP still doesn't know how many devices are connected, because a lot of those devices are using randomized and rotating IPs for their outbound connections.

Aren't your home addresses assigned by your local router?

The ISP can see 58 different IPv6 addresses sending packets in the last hour.

With IPv4 it can see one IPv4 address.

Now, sure, those 58 could all be one device with 58 different IPs, using a different one for each connection.

In reality that's not the case.


Okay but why does this matter? They're your ISP they also have your address, credit card number and a technician has been in your home and also supplied the router in the common case.

The theoretical vague problem here is being used to defend a status quo which has led to complete centralization of Internet traffic because of the difficulty of P2P connectivity due to NAT.


No device on my ipv6 vlans can establish P2P tunnels outside with random clients.

Firewalls and good old monetisation prevented your p2p connectivity utopia, not nat.


With SLAAC and a random IPv6 you would get at least the same level of privacy. One public IPv4 isn't different from a /48 IPv6 network.

You already have a public IP address; the only difference is whether you have a rotating IP address, which is orthogonal to IPv6.

The only difference is most ISPs rotate IPv4 but not IPv6.

Heck, IPv6 allows more rotation of IPs since it has a larger address space.


IPv6 can "leak" MAC addresses of connected devices "behind the firewall" if you don't have the privacy extensions / random addresses in use.

There are a number of footguns for privacy with IPv6 that you need to know enough to avoid.
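The MAC leak is concrete: without privacy extensions, classic SLAAC derives the interface ID from the hardware MAC via modified EUI-64, so anyone who sees the address can read the MAC back out. A minimal sketch of the derivation (the MAC used is a made-up example):

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64: flip the universal/local bit of the first octet
    and insert ff:fe in the middle of the 48-bit MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                      # flip the U/L bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    # Render as four 16-bit IPv6 groups, leading zeros dropped
    groups = [f"{(iid[i] << 8) | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# With a /64 prefix, the full address is just <prefix> + this interface ID,
# so the original MAC is trivially recoverable from the address.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```

The telltale ff:fe in the middle of the interface ID is how you spot an EUI-64 address in the wild.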


Privacy extensions are enabled by default on OSX, windows, android, and iOS: https://ipv6.net/guide/mastering-ipv6-a-complete-guide-chapt...

On Linux, I think the defaults are left up to the distros so there is a chance of a privacy footgun there. Hopefully most distros follow the example set by Apple and Microsoft (a sentence I never thought I would write...)
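On Linux the relevant knob is the `use_tempaddr` sysctl (a config fragment; per the kernel's ip-sysctl docs, 0 disables temporary addresses, 1 generates but doesn't prefer them, 2 generates and prefers them):

```shell
# Check the current setting for all interfaces
sysctl net.ipv6.conf.all.use_tempaddr

# Prefer temporary (privacy) addresses for outbound connections;
# persist the settings under /etc/sysctl.d/ to survive reboots
sudo sysctl -w net.ipv6.conf.all.use_tempaddr=2
sudo sysctl -w net.ipv6.conf.default.use_tempaddr=2
```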


They are now - I'm not sure when they implemented them but I know Windows at least would do some really stupid stuff very early on.

Aren't we talking about now?

No one is saying we should have activated IPv6 in its first iteration.


All desktop/mobile OSes today use "stable privacy addresses" for inbound traffic (only relevant if you are hosting something long-term) and "temporary addresses" for outbound traffic and P2P (video/voice calls, multiplayer games...) that change quickly (old ones stay assigned so as not to break long-lived connections, but are not used for new ones).
