With some of the newest 3.x changes like copy-on-write, I find pandas getting quite verbose now as well.
In a world where AI is writing the code, I guess I shouldn't complain, but when I'm discovering something the AI of choice yet again missed, both pandas and polars still feel verbose and lacking in sugar.
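For what it's worth, the copy-on-write verbosity I mean is things like this (a minimal sketch; the DataFrame and column names are made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# Chained assignment such as df[df["a"] > 1]["b"] = 0 no longer writes
# through under copy-on-write, so the explicit .loc spelling is required:
df.loc[df["a"] > 1, "b"] = 0
```

Not a huge burden, but it is one more spelled-out incantation where shorter sugar used to (appear to) work.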
And the Beagles only do digital-domain stuff, they won't get you an eye diagram or a bit error rate. Even for USB 2.0 that's quite expensive, the test fixture from Keysight¹ is about $800, the differential probe they recommend is the $11,000 N2752A², the oscilloscope in the minimum configuration that meets the >1.5GHz bandwidth requirement they sell is the DSOX6002A³ for $24,000, and you'll also need a software option license for the eye diagram testing option. Not to mention a PC to run the USB-IF compliance test software, but that's pocket change at this point in the spending.
Keep in mind that's just for 480 Mbit/s USB 2.0. If you want to test USB 5Gbps, 10Gbps, 20Gbps, 40Gbps, or 80Gbps, you'll be spending much more money. The USB4 V2 GEN4 Electrical Compliance Test Specification⁴ requires a 25GHz-bandwidth realtime oscilloscope with a ≥80 GSa/s sample rate or better. Not to mention a ≥25.6Gbps pattern generator/BERT, a 20GHz network analyzer, and an RF signal generator, all of which are "call for quote" but likely to be in the 6-digit price range. You're easily looking at the better part of half a million dollars to test the 80Gbps cables fully.
Much like X, it's what you choose to use it for. Papers are posted, approaches are debated, and loose groups form to align. It's easy to scroll past the pandering, but there is useful stuff in the dross.
But agreed, it is getting harder and harder to dig to the gems.
I never understood the rationale of giving out /64s and /48s like candy after what happened with IPv4. I know it's still a massive increase in capacity, and I know it makes networking easier, but it seems like we went from something that definitely won't run out (IPv6 addresses) to something that probably won't (the number of /48 ranges).
I can think of at least two reasons why this isn't worth worrying about.
One is quantitative: you have to remember that 2^48 is a much, much bigger number than 2^32. With 2^32 IPv4 addresses, you have about 0.5 addresses per human being on the planet, so right away you can tell that stringent allocation policies will be needed. On the other hand, with 2^48 /48 ranges, there are about 35,000 ranges per human being.
So even if you hand a few /48s out free to literally everyone who asks, the vast majority will still be unallocated. A /48 is only about 0.003% of what could be said to be a "fair" allocation. (And yet, a /48 is so huge in absolute terms that the vast majority of organizations would never need more than one of them, let alone individuals.)
The other is that unlike, say, the crude oil we pump out of the ground, IP address ranges are a renewable resource. If you hand out a free /48 to every person at birth, then long before you start running out of ranges, people will start dying and you can just reclaim the addresses they were using.
/48s are "small" enough that we could give ~8 billion people each 35,000 of them and we'd still have ~1.5 trillion (over 300x the size of the IPv4 space) left over. Addresses are basically infinite, but routing table entries (which fragmentation necessitates) have a cost.
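A quick back-of-envelope check of those numbers:

```python
total_48s = 2 ** 48                 # number of possible /48 prefixes
used = 8_000_000_000 * 35_000       # 8 billion people x 35,000 /48s each
left = total_48s - used
print(left)                         # ~1.5 trillion /48s to spare
print(left // 2 ** 32)              # leftover measured in whole IPv4 spaces
```

The leftover alone is several hundred times the entire IPv4 address space.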
In IPv6 the smallest 'subnet' is /64 if I recall correctly.
It's weird having a subnet size equal to an entire IPv4 Internet's worth of IPv4 Internets, but I believe the rationale was that you would never in practice run out of IPs in your subnet. A lot of enterprise IPv4 headaches come from managing subnets that are not correctly sized (organic growth, etc.). IPv6 is always routable for the same reason (companies reusing RFC1918 makes connecting networks a pain).
There are different headaches with IPv6 - such as re-IPing devices if they move subnet - i.e. move physical location, or during a failover etc.
I'm not sure what the best practice there is, as many enterprises don't use IPv6 internally. In my experience, anyway.
The big issue I see is every enterprise has a solid numbering plan for RFC1918 networks. Unfortunately, many of them have the SAME plan, and when peering networking between SaaS vendors and customers was more popular (now, of course, privatelink seems to be the move) we constantly ran into conflicts. There's still the risk of conflict with IPv6, but I think if numbering decisions are made thoughtfully, they can be avoided.
There's no risk at all if you're using your own allocated prefix, because those are managed by IANA/RIRs/LIRs to not overlap.
Incidentally, if you find yourself experiencing an RFC1918 clash, one simple way of fixing it is to use NAT64 to map the remote side's RFC1918 into a /96 from your v6 allocation. You can write the last 32 bits of a v6 address in v4 format, so this leads to addresses like 2001:db8:abc:6401::192.168.0.10 and 2001:db8:abc:6402::192.168.0.10, which don't overlap from your perspective.
(If you wanted something simpler to type you could put them at e.g. fd01::192.168.0.10... but then you do start running the risk of collisions with other people who also thought they could just use a simple ULA prefix.)
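One way to compute those embedded addresses is with Python's `ipaddress` module (a sketch; the /96 prefixes are the same documentation-range examples as above, and `embed_v4` is just an illustrative helper name):

```python
import ipaddress

def embed_v4(prefix96: str, v4: str) -> ipaddress.IPv6Address:
    """Map an IPv4 address into the low 32 bits of an IPv6 /96."""
    net = ipaddress.IPv6Network(prefix96)
    return ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(v4)))

# Same RFC1918 address on both remote sides, but distinct v6 addresses locally:
a = embed_v4("2001:db8:abc:6401::/96", "192.168.0.10")
b = embed_v4("2001:db8:abc:6402::/96", "192.168.0.10")
```

Python prints the low 32 bits in hex (`…::c0a8:a`) rather than dotted-quad, but it's the same address either way.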
Yes. I wish they had simply used a more sane address length instead, and maybe given everyone 65535 addresses at most. More than enough for the craziest home lab ever.
Really, just adding 2 bytes to IPv4 would have fixed everything and made it a lot simpler to move over. IPv6 is overkill and I think that really hurt its adoption. I remember being at uni and being told "this is the next big thing". In 1993. And it's not even a big thing now. Not on the user side anyway, I can still access everything from IPv4.
Adding two bytes would have been just as much work as adding 12 bytes, and would have left us with too few addresses rather than too many. The MAC address space is now 64 bits and L3 is necessarily less dense than L2, so 128 bits is the smallest power of 2 where we can be reasonably sure we won't end up with too few addresses.
Considering how hard deploying a new L3 protocol is, we're only going to get one shot at it so it's a lot better to end up with too many addresses rather than too few.
Ehm, but IPv6 packets still have the L2 layer as well, right? Which already includes the MAC address. So that 64-bit MAC space is duplicated; it's not like you're saving anything. It was a pretty arbitrary decision to accommodate the MAC address inside the IPv6 address, and these days it's usually randomised anyway for privacy purposes, so the interface-ID part of an IPv6 address doesn't have to be the size of a MAC address.
L3 has nothing to do with MAC addresses, so I've always found that a pretty weird decision. Sure, it avoids having to implement ARP, but we need that again now with the randomisation. And ARP is a once-every-few-minutes kind of thing anyway.
I'm pretty sure that if we'd just gone for "a couple bytes extra", we'd have long since been done with the transition. It's the whole L3 transition itself that suffers from the complexity. I remember it well in the 2000s; nobody in telecoms wanted to touch it. And when IPv6 was invented in '93 or so, the installed base was extremely small. It would have been a piece of cake to get it over with then.
The point of L3 is to aggregate hosts into networks, so that routing only has to keep track of network prefixes instead of the individual MAC address of every machine. (The amount of routing updates needed for the latter would scale as something like O(hosts²) which just wouldn't work for large networks, let alone the Internet.)
The aggregation necessarily "wastes" L3 addresses, so if you think you'll have enough machines to justify an L2 address size of n bits then that also implies needing an L3 address size of n+m bits, where m is a number that represents how densely packed your L3 address space is. Anything smaller than that will be too small to handle the full extent of your L2 address space.
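Aggregation in miniature, using Python's `ipaddress` module on a documentation range:

```python
import ipaddress

# 256 individual host routes collapse into a single /24 prefix,
# which is the one entry a router actually needs to announce.
hosts = [ipaddress.ip_network(f"203.0.113.{i}/32") for i in range(256)]
summary = list(ipaddress.collapse_addresses(hosts))
print(summary)  # [IPv4Network('203.0.113.0/24')]
```

That 256-to-1 reduction is exactly the "waste" being bought: the /24 claims all 256 addresses whether or not every host exists.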
> It was a pretty arbitrary decision to accommodate the MAC address inside the IPv6 address [...] Sure, it avoids having to implement ARP
You're thinking of SLAAC, which picks the address by slapping the MAC/EUI-64 into the right-hand 64 bits, but this is just a convenient way of picking the address bits. There's no special significance to those bits, and you still need to do neighbour resolution (NDP, IPv6's equivalent of ARP).
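For reference, the modified-EUI-64 trick SLAAC uses is just this bit of bit-twiddling (the MAC below is a made-up example):

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64: flip the universal/local bit, insert ff:fe in the middle."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```

Which is also why privacy extensions exist: without randomisation, your hardware MAC follows you into every network's addresses.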
> I'm pretty sure that if we'd just gone for "a couple bytes extra" we'd have long been completely over.
We still can't get people to stop hardcoding socket(AF_INET, ...) or manually crafting sockaddr_in structures. This is the minimum amount of work that will always be needed, regardless of how many extra bits are involved, and even this part hasn't been quick.
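The family-agnostic version is barely longer, which makes the hardcoding all the more frustrating. A sketch of the standard `getaddrinfo` loop (the `connect` helper name is mine):

```python
import socket

def connect(host: str, port: int) -> socket.socket:
    # Try every address/family getaddrinfo returns instead of assuming AF_INET;
    # this transparently handles IPv6, IPv4, and whatever comes back first.
    last_err = None
    for family, type_, proto, _, addr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, type_, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError("no addresses found")
```

The same pattern exists in C as the `getaddrinfo(3)` loop; the extra bits in the address change nothing about its shape.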
If we actually get to the point of address shortages, either NATv6 would become a thing, or (I hope) SLAAC would get deprecated and DHCPv6 would become mandatory, so we could hand out smaller than /64s.
2^64 is 18,446,744,073,709,551,616. That's 18 quintillion, about 10^19. There are ~10^10 people on the planet. Each person could have 10^9 networks (not even devices) before we ran out of /64s.
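Checking that arithmetic:

```python
total_64s = 2 ** 64
people = 10 ** 10              # generous round-up of world population
print(total_64s)               # 18446744073709551616
print(total_64s // people)     # ~1.8 billion /64 networks per person
```

So "a 10^9 networks each" actually understates it slightly even with the population rounded up.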
> able to run ~340 undecillion devices on my home network
Now you can have these devices connected to the network called the Internet. Unlike IPv4, where the number of devices from your home network actually on the Internet is one (the main router) or zero (in the case of CGNAT), and the others just pretend.
I find this court response confusing, but an alt pov: Amazon wants to block _other people_ from using my ID (say, to watch prime video or access prime shipping or discounts). They can extend that logic to suggest that an "agent", with agency, is the same issue: it's just "another person".
And we've seen the continual push against "Bots" who act without agency, but are seen to abuse systems as they act as "more than a person".
But I think we'll see the usual outcome: the barrier has to be upheld, so the folks behind the barrier can exert a toll on access.
> If joins are necessary, we learned to consider breaking down the query and move complex join logic to the application layer instead.
We often try to leverage the power of the DB to optimize joins on our behalf, rather than hand-tuning them ourselves. At a certain point, I guess you wind up having to pull this back into your layer to optimize "the one job" of the database.
I jest, but only slightly. We don't just want to persist data, but link it for different purposes, the "relational" part of RDBMS. Good to know there's still room to grow here, for PostgreSQL and the DB industry.
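As a toy illustration of the "move the join to the application" idea (schema and data invented for the example, using SQLite for self-containment):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders(id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 9.5), (11, 1, 3.0), (12, 2, 7.25);
""")

# Instead of: SELECT u.name, o.total FROM orders o JOIN users u ON u.id = o.user_id
# fetch each table separately and do the lookup in application code:
users = dict(con.execute("SELECT id, name FROM users"))
rows = [(users[uid], total) for uid, total in con.execute("SELECT user_id, total FROM orders")]
print(rows)  # [('ada', 9.5), ('ada', 3.0), ('bob', 7.25)]
```

You trade the planner's smarts for predictability: two simple indexed scans and a hash lookup you control, at the cost of an extra round trip and more application code.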
These are great books. I had them in paper, and they were great both for understanding how the 6502 worked and as metaphors for managing higher-level constructs in machine language.
This helps you see how your browser tries to block or deflect fingerprinting and trackers. I miss their "You are one of x,000 users" from the old site, but it still gives a nice summary of the bits of info your browser leaks and how fingerprinting basically works.
Watch makers have moved to this model over the last few years as well. The Swatch Group has restricted access to movements and parts by acquiring major movement manufacturers, forcing the rest of the industry to produce duplicate parts, some in violation of patents (depending on region). The patents attempt to protect proprietary approaches that add complexity through unique pieces, which rarely add functionality or new value.
Having just bought a Mini Cooper, I'm in the initial free-services period of ownership, and honestly, if it's the $10 a month I suspect, I'm okay with it. Primarily for the navigation... it really is superior to the CarPlay experience (a square in the middle of the round screen, vs a map that takes up nearly the entirety of the screen). IIRC, traffic, weather, remote start, and voice control come along for the ride.
I won't pay for 5G (my phone does that) or SiriusXM (for all the channels, I'm really not jazzed about the offerings)... but $10 for the above features seems reasonable.
> it really is superior to the Carplay experience (a square in the middle of the round screen, vs a map that takes up nearly the entirety of the screen
I haven't used CarPlay since I'm an Android user, but this reeks to me of a manufacturer developing a problem so they could seek rent for the solution.
If there were no financial incentive otherwise, they certainly would have ensured the CarPlay experience was as nice as their own solution as a selling perk.
At the time the code was written, I'm betting CarPlay offered up a square viewport. Apple has since wanted to offer multi-display solutions to be the AV/GUI for cars... I think Aston Martin took them up on that... but a number of other manufacturers are backing off (GM), I suspect because they want to distance themselves from a look and feel you could get in other cars.
I'm not a fan of subscriptions, but in this case, the $10 seemed like good value for features. the map is better, and it's much better integrated into the HUD.
It felt like you were getting more, unlike having buttons that don't work unless you pay... a weird psychological difference between a subscription and being held hostage.
I have a 2016 VW Beetle (which is a pleasure to drive, the 150PS diesel), but it has buttons on the steering wheel that do absolutely nothing without me paying £200 for the privilege of VW unlocking them.
Want to press that phone button to make a call? Sorry please visit your VW dealership.
Want to have CarPlay or Android Auto? Sorry please visit your VW dealership.
Want to speak to your car to make a call? Sorry please visit your VW dealership.
I also have a 1972 VW Beetle which doesn't require any intervention from anyone else on Earth to use. Guess which is the classic between these 2 models?
No, it bloody well isn't, and I kindly request you stop with that nonsense. Remote start is a glorified CAN message paired with a TOTP- or HOTP-style code. That's it. There should be zero room for a manufacturer to justify inserting themselves in the middle, except greed. Goddamn tired of solving problems only to see companies keeping the problem around for the sake of market segmentation.
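To underline how small the mechanism is, here's a sketch: a TOTP code (per RFC 6238) packed into an 8-byte payload. The frame layout, command byte, and key are entirely hypothetical; real buses and key provisioning are manufacturer-specific.

```python
import hashlib, hmac, struct, time

def totp(key: bytes, t: int, step: int = 30, digits: int = 8) -> int:
    """RFC 6238-style code: HMAC-SHA1 over the time counter, dynamically truncated."""
    counter = struct.pack(">Q", t // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % (10 ** digits)

def remote_start_frame(key: bytes, t: int) -> bytes:
    # Hypothetical payload: 1 command byte, 4-byte code, 3 bytes padding.
    return struct.pack(">BI3x", 0x01, totp(key, t))

frame = remote_start_frame(b"secret-provisioned-at-pairing", int(time.time()))
assert len(frame) == 8  # fits in a classic CAN data frame
```

The whole thing fits in a classic 8-byte CAN frame with room to spare; no cloud round-trip required.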
Bundling. You get 5 things, one of which makes the other ones worthwhile. Yeah, I get it...
By the same token, all ranges of Mini Cooper now have the B48 turbo 4...the top of the list had a bigger turbo and improved intake...the bottom two I suspect are identical with differing software.
I will happily take advantage of that when the car's out of warranty. (I have the base motor)
You probably should be getting the automatic serviced every 10 years. That basically involves cleaning it, replacing the mainspring, and applying new lubricant. If it's a dive watch, they would also replace the gaskets which dry out.
The last time I persuaded him to give it for service he complained that the service cost was 3x what he had paid for the watch in the first place. For months.
He had fashioned a strap for the watch using a piece of rubber when the bracelet broke and was perfectly happy with that unsightly arrangement.
Well, that at least establishes how much he values the watch.
Luckily for him, the market for watches is big and when he finally needs something new he has a lot to choose from. The watches on the low end are better than ever and the watches on the high end are excellent but crazy expensive.
"Wait, Boeing made life-saving features an expensive option, and almost got away with it??? Get marketing on the phone! We need to double down on our evil."
It's been a few years since the "Expert Mouse" was released, and there's still no "Magic Trackpad" equivalent on the Windows side. Still, the earlier Expert Mouse trackball has been pretty good for folks wanting an HID besides the keyboard. But it's been years since an updated design.
This version improves the scroll ring, but adds many more buttons and knobs, many of which are probably extraneous (imo).
Early users point to an odd "off-axis" placement of the main sensor, which results in unexpected cursor movement when rotating at the north end of the ball, so it may not fit your working style. But maybe worth an eyeball if you like your trackball.