Given the number of people now relying on Cloudflare 1.1.1.1 to "get Internet" (i.e. using 1.1.1.1 as a recursive name server), I can't imagine APNIC deciding to stop Cloudflare using this range.
It seems "too late" to revert this decision. Otherwise, people will experience "the Internet stopped working" and blame their ISP.
APNIC may decide to keep a working DNS server on 1.1.1.1, but ethically, routing traffic to someone other than Cloudflare is not great.
If the range was only temporarily assigned to Cloudflare, APNIC shouldn't care if it sees a better use for it. Supporting unintended uses only encourages various types of abuse. And changing DNS settings is easy enough.
That said, if a lot of people rely on 1.1.1.1 as DNS, it's worth considering whether reassignment qualifies as 'better' use of this resource. Not to mention the hassle caused by making changes to popular [anything].
Fixed IP addresses change and are deprecated all the time. It's of zero concern to APNIC that customers of Cloudflare or various ISPs can no longer access the internet because they relied on a temporary IP assignment, after the service was gracefully terminated and deprecated, with ample lead time.
That being said, the use by Cloudflare is an excellent way to reclaim this part of the IP space, so I don't see why they would terminate this collaboration.
The SEO mess was indirectly caused by Google PageRank and other related optimizations.
Maybe we want 2007 Internet instead?
The AI world will inevitably lead to "content optimization", so the chatbots that get asked "questions about life" (today, people typically search Google for "I have a fever/depression/a turbulent child/tomatoes in my fridge, what should I do?") will more and more frequently answer with specific products.
Instead of promoting a single website (the current SEO strategy), content producers will be tempted to produce many texts across a great variety of sources, to reinforce the model on a specific subject.
Time will tell if in 2035, we will want 2023 ChatGPT.
SSO does not contribute to the core product USP; it is a pure money-extraction mechanism. If a company can add enough value on the enterprise plan, it could easily offer SSO on a less expensive tier. If a company cannot add enough value to the core product, it uses SSO and reachable customer support to justify a more expensive subscription. This may disincentivize customers from buying more, or reduce overall security if the customer fails to implement processes for standalone logins and manual provisioning of accounts.
Yes, and you act like this is a bad thing. You hold back and charge for the features customers want enough to pay for. Have you never noticed that whenever there's a Free/Pro split of an app, the one feature you need is always in the Pro version?
> add enough value on enterprise plan, they could easily drop SSO
That really isn’t how it works. You find some small set of features that enterprises must have like SSO, auditing, and compliance and charge them out the ass for it. This is where the real money for every B2B SaaS comes from and subsidizes the low cost tiers which they hope will translate to an enterprise sale when you ask for it at work.
It makes more sense to them to add that other value to the non-enterprise plans or licensing to attract more users, then charge the businesses that MUST have the SSO or audit functionality, because they know enterprise will pay it without blinking an eye.
It is a common misconception that SSO is useful only at enterprise scale, and that the companies where SSO and provisioning are crucial for security have huge IT budgets. Any scale-up still on the way to profitability needs it at a few hundred employees, and it's really hard to justify a 100k budget for it. A couple of junior admins for provisioning, plus the accepted and misunderstood risk of a credentials explosion, look more attractive than tripling the bill for every subscription. Who suffers? The customer, who is exposed to cybersecurity risks.
I’m not at all discounting the value of SSO to all users, totally agreed. Just that in the business of software this just plainly makes the most sense for most companies. It’s useful for everyone, but it’s required for enterprise (via security policy or other mandate), hence why the screws are put to them.
I’m a bit curious why we don’t see more price segmentation happening with the SSO feature set included, presumably most of these SaaS are seat-limited by plan anyway. If I had to guess, they just don’t want to deal with the headache of tons of small SSO implementations clogging up their support resources.
Unrelated to SSO, I’m involved in audits that regularly seek changes which don’t improve safety or security but which often help the bottom line of big providers.
If you want a product to succeed without natural growth, get an auditor to require it.
It’s selling your soul and those being audited will hate you, but it’s very lucrative.
I find it funny that people say things like this, because not only is it demonstrably untrue when you look at different product segments, but even if it were true, you're basically admitting to self-selecting as a customer who would never have paid in the first place, so companies are overjoyed that you're not using them.
"I would have paid you if you gave me X for free" is the biggest lie.
It is more nuanced than that. If you do not have any other value proposition for paid tiers, you might keep telling yourself that; it is your sales model, after all.
Okay, look. There are two universes here. Universe A is where we split up the features of our product into tiers based on "value" -- some arbitrary groups based on how useful we think each feature is, how expensive they are, how long they took to develop, estimated person-hours saved, whatever. Sweet, it feels right. Now the free/low-cost tiers are genuinely less useful than the higher tiers. Pay more for more. SSO probably still lives at the mid or enterprise tier for no other reason than that it's a PITA, is the cause of like 20% of support requests, and our SSO vendor charges us per month per SSO connection.
Universe B is where the free/low cost tiers have every feature except for specifically the features and increased usage limits that get SMBs and Enterprise to pay us.
Both on the sales side and the user side I want to live in Universe B.
There is no magic universe where you can "just increase your value proposition to Enterprise customers" -- it's the same product, just carved up differently, and non-enterprise customers lose in Universe A.
Imagine you go to a supermarket and see the same brand carry two tiers of eggs: "Eggs" and "Salmonella-free Eggs".
Even if you could easily afford the salmonella-free eggs, the mere fact that they are willing to sell salmonella eggs at all says a lot about how many shits they give about food safety.
SSO isn't a premium or differentiator feature, it's table stakes.
> SSO isn't a premium or differentiator feature, it's table stakes.
Not for B2C, hobby projects, very small businesses. That's why it's great as a differentiator: because it separates the wheat from the chaff. And is often non-trivial as the number of integrations grows. Hence the SSO middleware market.
The feature is implemented. I'd prefer to use it. It would cost them nothing to let me do so. Yet, I can't, because then big corporates wouldn't be milked for as much cash. I accept this as just another one of those inefficiencies of market capitalism, but it's still a little irksome.
> The feature is implemented. I'd prefer to use it. It would cost them nothing to let me do so.
The other way to view it is, by withholding a nonessential feature, Docker gets big customers to subsidize all the little guys, and their product is more accessible overall.
I've discovered that, by default, Cloud Run only allocates one core per HTTP request, and exclusively during request execution [0].
So your app runs at 0 CPU when there are no requests ongoing, and can't run more than one process in the same container. This means you can't have a container with nginx+rails: only nginx gets a core to execute on, rails gets no CPU, and that leads to a timeout.
Maybe you should ensure your app is not trying to use a second process?
Only allocating CPU while a request is in progress, and thus only paying for that time, is Cloud Run's defining feature, so that part makes sense.
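For what it's worth, if an app genuinely needs CPU outside of request handling (background threads, a helper process), Cloud Run has a "CPU always allocated" setting that opts the container out of request-only throttling, at the cost of being billed for idle time. A sketch, assuming the gcloud CLI and a hypothetical service named my-service:

```shell
# Opt the service out of request-only CPU throttling, so the container
# keeps its CPU even when no request is in flight (billed accordingly):
gcloud run services update my-service --no-cpu-throttling

# Revert to the default, request-based CPU allocation:
gcloud run services update my-service --cpu-throttling
```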
But the problems with having multiple processes are weird. I'd have expected patterns where the HTTP server blocks while waiting for a helper process to do work. That sounds like a rather annoying limitation, and I saw no mention of it in the documentation.
Can you double-check that there really is such a limitation, and that it's not a misattribution?
Maybe our ORM or Postgres connector runs a process. We ourselves are not directly running multiple processes.
This feels like a leaky abstraction that defeats the purpose of container-as-a-service, especially when the intended audience is hobbyists / small teams who don't have a networking SRE, much less the capacity to keep track of fragile infrastructure assumptions.
If we have to stick with GCP, then it seems GCE VMs are a safer bet.
He actually called, and held regular "defense councils" (with generals, etc.) to deal with the Covid situation.
Was it because they are less compromised by industry than public health officials? Or just because it's popular with voters? Your guess is as good as mine.
If you can afford a one-off second of latency for your SQL queries, then using logical replication with pgbouncer seems way easier:
- Set up logical replication between the old and the new server (limitations exist on what is replicated; read the docs)
- PAUSE the pgbouncer (virtual) database. Your app will hang, but not disconnect from pgbouncer
- Copy the sequences from the old to new server. Sequences are not replicated with logical replication
- RESUME the pgbouncer virtual database.
You're done. If everything is automated, your apps will see a temporary increase in SQL latency, but they will keep their TCP connections, so there is virtually no outage.
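The steps above can be sketched as a small runbook. This is a hedged sketch only: it assumes pgbouncer's admin console is reachable on port 6432, the virtual database is named mydb, the hosts are old-server/new-server, and the servers run Postgres 10+ (for the pg_sequences view). PAUSE, RELOAD, and RESUME are pgbouncer admin-console commands; setval() is standard Postgres.

```shell
# 1. Pause the pgbouncer virtual database: clients hang but stay connected.
psql -p 6432 -U pgbouncer -d pgbouncer -c 'PAUSE mydb;'

# 2. Copy sequence values old -> new; logical replication does not carry them.
#    (last_value is NULL for never-used sequences, so those are skipped.)
psql -h old-server -d mydb -Atc \
  "SELECT format('SELECT setval(%L, %s);',
                 schemaname || '.' || sequencename, last_value)
     FROM pg_sequences WHERE last_value IS NOT NULL" \
  | psql -h new-server -d mydb

# 3. Repoint mydb at the new server in pgbouncer.ini, reload, then resume.
psql -p 6432 -U pgbouncer -d pgbouncer -c 'RELOAD;'
psql -p 6432 -U pgbouncer -d pgbouncer -c 'RESUME mydb;'
```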
You can temporarily reduce the query timeout to a smaller value as part of the automated failover. Long-running transactions will fail, but you minimize the window during which you can't talk to Postgres.
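For example, with Postgres's standard statement_timeout setting (hypothetical host/database names; note the caveat in the comments):

```shell
# Cap query duration on the old server shortly before the failover.
# Caveat: ALTER DATABASE ... SET only affects *new* sessions; sessions
# already running keep their current timeout until they reconnect.
psql -h old-server -d mydb -c "ALTER DATABASE mydb SET statement_timeout = '2s';"

# ... perform the PAUSE / switch / RESUME ...

# Remove the override on the new server afterwards:
psql -h new-server -d mydb -c "ALTER DATABASE mydb RESET statement_timeout;"
```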
Not really, new connections will block as it's pausing. But you won't be able to shut down Postgres until those long queries complete. Perhaps I was not super clear, but what I'm trying to say is that PAUSE is not instantaneous.
Yeah, what I'm saying is that you can only pause as fast as your slowest currently running query. So if you have a diverse set of query patterns, you could be waiting on a really small percentage of slow queries to wrap up.
To be fair about this page, this was used to migrate versions of Postgres __prior__ to the introduction of logical replication. Logical replication makes this significantly easier (i.e. you no longer need the triggers).
Exactly this. The OP’s approach reminded me so much of the days of Slony, and I wondered why a simpler approach with logical replication would not just suffice.
Rather than pgbouncer, I did this in the actual application code once (write to both databases at the same time, once everything is in sync and you’re confident the new server works well, fail over to the new one only), but it depends upon how much control you can exercise over the application code.
Any approach that is based on triggers makes me shiver, however.
This is precisely the migration I'm planning on doing in the next few weeks with pglogical under the hood for replication. Seems like the atomic switch is much easier than any sort of problem that could stem from conflict or data duplication errors while in a bi-directional replication strategy.
Yep, you can also prepare the new database by using a snapshot of the primary's volume, and use pg_rewind to get them in sync. Altogether, the tooling can make migrations super easy with minimal downtime.
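For reference, a minimal pg_rewind invocation looks like the sketch below (hypothetical paths and hostnames; it assumes the target cluster was shut down cleanly and was initialized with data checksums or wal_log_hints=on):

```shell
# Resynchronize a diverged data directory (e.g. restored from a snapshot)
# with the current primary, copying only the changed blocks:
pg_rewind \
  --target-pgdata=/var/lib/postgresql/14/main \
  --source-server="host=primary.example.com port=5432 user=postgres dbname=postgres"
```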
I use pgbouncer and had no idea it supported logical replication. I can't find anything about it in the docs. Do you have something you can link me to, to read more?
Which is only possible if you are using a version of Postgres that is new enough and isn't restricted, as some versions of RDS are. Which explains the whole original post.