The percentage of comments written primarily for the purpose of irrational political ranting is frustrating, considering how genuinely interesting the story is.
It's not irrational to assume current statements from this particular government are not true. If anything, it's irrational to just believe what you're told. We're well past the point where lies outnumber truths, so if you're a betting man, you should assume what you've been told is not true.
The speculation, however, is just that. But I think we all understand that this was not done for the stated reasons, and that something else is going on that we are not privy to.
I hope we get regular updates. Email deliverability is a frustration outside of the M365/Gmail ecosystems, but it’s not as bad as it’s sometimes made out to be, and I’m optimistic that increased rigour in implementing SPF/DMARC/DKIM will lead to better deliverability across the board (a quick way to check those records is sketched below). I’m curious if they see increases/decreases in spam, missed messages, successful phishing attempts, etc. Lastly, I’d love to know if they have had to change any security policies, and how they are handling identity management across the organization.
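For anyone who wants to poke at this themselves, here’s a minimal sketch of checking a domain’s SPF/DMARC/DKIM records, assuming the dnspython library; the domain and the "default" DKIM selector are placeholders, since selectors vary per sender:

```python
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or [] if none exist."""
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.org"  # placeholder domain
print("SPF:  ", [r for r in txt_records(domain) if r.startswith("v=spf1")])
print("DMARC:", txt_records(f"_dmarc.{domain}"))
print("DKIM: ", txt_records(f"default._domainkey.{domain}"))  # selector is a guess
```

All three mechanisms live in DNS TXT records, which is why getting them right is mostly a matter of operational discipline rather than hard engineering.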
> Email deliverability is a frustration outside of the M365/Gmail ecosystems, but it’s not as bad as it’s sometimes made out to be […] I’m curious if they see increases/decreases in spam, missed messages, successful phishing attempts, etc.
It's probably not much of an issue in this specific case. If someone doesn't get your email, that's your (the sender's) problem; but if someone doesn't get the government's email, then that's their (the recipient's) problem.
To add to this, most emails are likely within the organization and/or between public institutions.
Email was (last time I checked) not an approved medium for delivering important documents, as it does not (by design) have a mandatory delivery receipt. So a citizen does not need to worry much about this for important documents/mail.
(Fax was so popular with public institutions in Germany because it satisfied this standard. That usually made it the lowest-barrier option, and you could rely on it for all (un)important documents.)
As a former email admin: it’s not bad if someone is dedicated to it and you have your own block of IPs. It’s frustrating for self-hosting because you lack your own IPs, and most people don’t want to spend their free time on this busywork.
We operate an MSP business for tens of thousands of customers and have our own ASN, but gmail outright refuses all our corporate email. Why? We do not know and gmail refuses to tell us. Their postmaster tools lie, are incomplete, display no data, display errors or contain no useful information.
There is no human postmaster to contact, all our attempts have been ignored successfully. It’s downright silly but we have to send our corporate mail via a paid third party relay to be delivered to gmail.
These gmail postmaster tools seem to exist to make antitrust cases difficult, not to enable other MSPs to deal with deliverability issues.
At the same time gmail is emerging as the number one source of spam for our customers. If our spam fighting is too tight we falsely flag important mail as spam, and that is absolutely unacceptable to customers. As a consequence we have to relax our spam classification for gmail senders (roughly the kind of carve-out sketched below), which manifests itself as false negatives from our customers' perspective.
But to the customers this reflects on us, not on gmail.
It’s simply in gmail’s best interest to make other MSPs miserable to operate. It drives our users to them.
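To make that tradeoff concrete, here’s a toy sketch of the kind of carve-out I mean; the function, domain check, and scoring offset are all illustrative, not our actual stack:

```python
def effective_spam_score(base_score: float, sender_domain: str, dkim_pass: bool) -> float:
    """Toy model: give authenticated gmail.com senders a scoring discount."""
    if sender_domain == "gmail.com" and dkim_pass:
        # The discount keeps us from false-flagging legitimate gmail mail,
        # but it is exactly what lets more gmail-originated spam through.
        return base_score - 2.0  # illustrative offset
    return base_score
```

You buy fewer false positives on legitimate gmail mail at the cost of the false negatives mentioned above.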
Even if you do ALL the technical work, you can still find yourself banned/ignored, as I learned the hard way years ago... even big providers outside the MS/Goog duopoly find themselves partially unable to deliver business email at times... fun times for a small shop (not).
I think I love you? This is great. Do you have them running in arrays of 3? What’s your favourite cut? What’s the best cost:deliciousness cut? What bags do you use to minimize plastic leaching?
It's just me, so I only need one running at a time. Every day I take one serving out and put another one in. I clean the tank about once per week, or if something breaks. My favorite is short ribs, my daily drivers are chuck roast or shank. The prices have skyrocketed in the last few years. I buy in bulk on sale and portion it into bags with a chamber style vacuum sealer. It goes straight from the freezer into the tank.
Do you take pride in knowing that you eat cooler than anyone else? Because you should.
Short rib is shocking where I am. Even chuck is pushing past $15 a pound.
What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems impractical for daily consumption.
Chuck on sale is now $8 a pound, more than double since Covid started. I am eating less of it and more ground beef, pork and eggs.
I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat.
I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.
I adore 11ty. It’s not inherently component driven, but it’s super straightforward and endlessly customizable. It allows you to really experiment with organization, too. I like it a lot.
I think you’re reading into the name a little, haha. I’m interested in your alternative method for session token replacement, though! I think you make a good point, but I’m not an expert by any means.
Usually on low-risk projects where I don't want to bother with handling token pairs (or where that's impossible), I use a similar simplified approach, but with token regeneration:
- The session token has two timestamps: validUntil and renewableUntil.
- If now > validUntil && now < renewableUntil, I regenerate the session token.
This way the user is not logged out periodically, but the session token also doesn't stay the same for 5 years.
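A minimal sketch of that scheme, assuming server-side session storage; the names and lifetimes are placeholders:

```python
import secrets
import time

VALID_FOR = 60 * 60 * 24           # accepted as-is for 1 day (placeholder)
RENEWABLE_FOR = 60 * 60 * 24 * 30  # renewal window: 30 days (placeholder)

sessions: dict[str, dict[str, float]] = {}  # token -> timestamps

def issue_token() -> str:
    token = secrets.token_urlsafe(32)
    now = time.time()
    sessions[token] = {
        "valid_until": now + VALID_FOR,
        "renewable_until": now + RENEWABLE_FOR,
    }
    return token

def authenticate(token: str) -> str | None:
    """Return a (possibly regenerated) token if the session is alive, else None."""
    record = sessions.get(token)
    if record is None:
        return None
    now = time.time()
    if now < record["valid_until"]:
        return token  # still fresh, keep using it
    if now < record["renewable_until"]:
        del sessions[token]  # expired but renewable: rotate the token
        return issue_token()
    del sessions[token]      # past renewableUntil: force re-login
    return None
```

The client just replaces its stored token whenever authenticate hands back a different one.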
I agree with this. I think all tokens should expire. If you accidentally zip up an auth token in an application's config directory, it is nice if it becomes inert after a while. If you merely extended the same token, it could live forever.
For my application the token is valid for a few months, but we will automatically issue you a new one when you make requests. So the old token will expire eventually, but the client will update the token automatically, making your "session" indefinite.
So when you throw away a drive that has been sitting in the junk drawer for a year, that token is inert, even if you are using a cloned machine that is still extending the same "session".
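A rough sketch of the client side of that, in the same spirit; the header name and token file location are hypothetical, not any particular app's API:

```python
import pathlib
import urllib.request

TOKEN_FILE = pathlib.Path("~/.myapp/token").expanduser()  # hypothetical location

def request_with_rotation(url: str) -> bytes:
    token = TOKEN_FILE.read_text().strip()
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        new_token = resp.headers.get("X-Renewed-Token")  # hypothetical header
        if new_token:
            TOKEN_FILE.write_text(new_token)  # the old token lapses on its own
        return resp.read()
```

Any stale copy of the token file (backups, cloned drives) dies quietly once its expiry passes, because only the live client keeps receiving replacements.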
I appreciate these definitions and distinctions. Thanks for sharing. You've helped me understand that I need a better, more precise vocabulary about this topic. I think on an abstract level I would think of AGI as "the brain that's capable of understanding", but I really then have no way to truly define "understanding" in the context of something artificial. Maybe ChatGPT "understands" well enough, if the output is the same.
It does understand to a certain degree, for sure. Sometimes it understands impressively well. Sometimes it seems like a special needs case. Ultimately its understanding is different from a human's.
The issue with the “once OpenAI achieves AGI [sic], everything changes” narrative is that it is based on models with infinite integrals in them. If you assume infinite compute capability, anything becomes easy. In reality, as we’ve seen, applying GPT-like intelligence to achieve superhuman capabilities, where it is possible at all, is actually quite difficult, field-specific, and time-intensive.
That's the thing, nothing points to a world with a single winner in AI models. I get what you are saying, but I'm not sure OpenAI can survive the burn unless they build an unmatchable AGI. And that's pure speculation at this point.
I mean, someone needs to rise to the top, unless society as a whole just says "There's no value here." and frankly there's too much real value right now for that. So someone's surviving, at least at the service level. Maybe they just end up building off of open source models, but I can't see how the best brains in the business don't find a way to get paid to make these models. Am I missing something?
There’s definitely a future for LLMs from an enterprise point of view. Even current-capability models will be widely used by companies. But it seems that will be a highly commoditized space, and OpenAI lacks the deep pockets and infrastructure capabilities of Meta and Google to distribute that commodity at the lowest cost.
OpenAI’s valuation is reliant, IMO, on 1) AGI being possible through NNs, 2) them developing AGI first, and 3) it being somewhat hard to replicate. Personally I’d probably put 10%, 40%, and 10% on those, but I’m sure others would have very different opinions or even disagree with my whole premise.
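For what it’s worth, if those three are treated as roughly independent, they multiply out to 0.1 × 0.4 × 0.1 = 0.004, i.e. about a 0.4% chance of all three holding.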
I am not saying that LLMs don't provide value, just that this value might not be captured exclusively by OpenAI in the future. If the idea is that OpenAI will have an unmatched competitive advantage over everyone else in this area, then that has already been proven wrong. The rest is speculation about AGI, the genius of Altman, etc.