I'm curious why you're being downvoted for this. Whoever disagrees, please share some context.
What you said about "wrapping" I interpret as: starting from a base image with OS-level dependencies, you build another image on top containing the application-level artifacts, e.g. a Python application. When your app changes, you don't rebuild the base image, only the app image. This makes sense to me.
> One other nitpick is that startups live and die with automation by their lean nature. They have fewer resources and must automate everything they can. It's built into the startup mindset.
I am at a medium sized company. We acquired a startup. They did not "live and die with automation", instead they contracted out the production operations. A dozen customers, all on snowflake systems which were constantly tweaked directly in prod with no documentation or audit process.
My experience on the acquirer side is that startup engineers like to just build stuff and deploy it. Design? Testing? Documentation? Automation? None of that existed in this company in any fashion. And they themselves are chafing against even a modicum of process. Please explain what you want to do, why, and provide a high-level design. You are going to replace the whole auth system; it's not something you just "figure out" as you go along... Sigh.
> A dozen customers, all on snowflake systems which were constantly tweaked directly in prod with no documentation or audit process.
I've directly seen this at all sizes of companies. I worked at a billion+ revenue company and watched my boss edit stored procedures directly on production (for way more than a dozen customers). I ended up encrypting them in prod to force him through the build process I made.
The point is that the wild west exists everywhere, and the size of the company is only one factor. There are startups with great engineering and big companies with terrible engineering. It's just the state of software right now.
> My experience on the acquirer side is that startup engineers like to just build stuff and deploy it
This has been my experience exactly. Automation has a high upfront cost whose payoff comes only in the long term. As a startup, the priority is income now, not cost savings that pay off over the next year.
Yes, but I'll be the first to admit it's a lot of fun to be able to work like that... from time to time.
Context is important too: a lot of young companies are still trying to figure out their optimal business model, so agility really is more important. Adapt or die (kind of).
Lastly, you have to consider why anyone would ever work for a startup: the pay is often lower, the stress higher, the job security worse, the environment unpredictable, etc... but you get to do ridiculous experiments in production and try to build something cool. It's part of the incentive structure.
Agreed. It's sad how many security engineers still push outdated practices like forced password rotation when there is significant research indicating _why_ it actually _reduces_ security.
There isn't a clear answer to that but here's what Gene Spafford wrote in 2006:
> So where did the “change passwords once a month” dictum come from? Back in the days when people were using mainframes without networking, the biggest uncontrolled authentication concern was cracking. Resources, however, were limited. As best as I can find, some DoD contractors did some back-of-the-envelope calculation about how long it would take to run through all the possible passwords using their mainframe, and the result was several months. So, they (somewhat reasonably) set a password change period of 1 month as a means to defeat systematic cracking attempts. This was then enshrined in policy, which got published, and largely accepted by others over the years. As time went on, auditors began to look for this and ended up building it into their “best practice” that they expected. It also got written into several lists of security recommendations.
> This is DESPITE the fact that any reasonable analysis shows that a monthly password change has little or no end impact on improving security! It is a “best practice” based on experience 30 years ago with non-networked mainframes in a DoD environment—hardly a match for today’s systems, especially in academia!
I've heard variations on this idea, all stemming from the same kind of scenario: a DoD facility where access was limited and, for example, a spy who cracked or shoulder-surfed a password might have to wait some period of time before using it. None of that makes much sense in our modern security landscape. I haven't seen anything definitive about the origins, but it's very hard to find an actual security expert who thinks forced rotation is a good idea (as opposed to a compliance-process-enforcement person who may have had it trained into them). These days I'd really be focused on how you could make WebAuthn mandatory.
>Back in the days when people were using mainframes without networking, the biggest uncontrolled authentication concern was cracking.
When we first got our terminals (shared terminals, at first), there was absolutely no guidance on passwords. Password security wasn't part of the consciousness, at least in my corporate experience in the '80s, especially with all the data being on tapes. A security utopia, briefly.
Password rotation still makes technical sense today. The benefit is that it limits the utility of stolen credentials.
That’s basically all an MFA token is: a rapidly rotating second password. In fact the widespread availability of MFA options is one reason memorized passwords don’t need to rotate anymore. Just implement MFA instead.
Another reason is that forced rotation of memorized passwords gives users an incentive to create passwords that are simpler, and therefore easier to steal in the first place. So the technical advantage was nullified by a human factors disadvantage.
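To make the "rapidly rotating second password" point concrete, here is a minimal RFC 6238 TOTP sketch in Python (stdlib only). The base32 secret below is just the RFC's published test key, not anything real:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, interval=30, digits=6):
    """RFC 6238 TOTP: an HMAC-SHA1 over the current time-step counter,
    i.e. a second password that rotates every `interval` seconds."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // interval)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Secret "12345678901234567890" (the RFC 6238 test key), base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59))  # RFC 6238 test vector at T=59 → "287082"
```

A stolen code is worthless 30 seconds later, which is exactly the property monthly rotation was originally chasing, just on a usable timescale.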
These are security models from the dawn of computing, operating on assumptions that no longer hold true. Password hashes were originally stored (crypted) in the world-readable /etc/passwd, and only later moved to /etc/shadow. If the /etc/passwd file were stolen, an attacker could crack everyone's password offline. By forcing the password to be changed every N days, even if an attacker got a copy of /etc/passwd, the cracked passwords would stop working after N days.
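A sketch of why a world-readable hash file was dangerous: once the hashes leak, the attack runs offline as a dictionary attack, with no login attempts to rate-limit or log. SHA-256 stands in here for the historical crypt(3), and the "leaked" hash and wordlist are made up for illustration:

```python
import hashlib

def crack(target_hex, wordlist):
    """Offline dictionary attack: hash each candidate and compare.
    No interaction with the target system is needed after the leak."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hex:
            return word
    return None

# A hypothetical leaked hash of the password "letmein":
leaked = hashlib.sha256(b"letmein").hexdigest()
print(crack(leaked, ["password", "123456", "letmein"]))  # → letmein
```

Rotation shortened the window in which a hash cracked this way was still useful; salted, slow hashes and shadow files attacked the problem more directly.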
> I think there's another word missing: "data sovereignty". Depending on your business, your customers might need to keep user PII within their country.
That's assuming a single multi-tenant SaaS footprint.
There is absolutely nothing preventing you from standing up a footprint in each compliance region and assigning customers to whichever region satisfies their data-location requirements.
You get the benefit of fewer footprints to manage while still meeting the requirements necessary to serve your customers.
And if a customer absolutely must have isolated infrastructure, stand one up for them and pass along the associated increase in cost.
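The assignment logic itself can stay trivial; a sketch, with hypothetical footprint names and requirement labels:

```python
# Hypothetical mapping from compliance requirement to deployed footprint.
FOOTPRINTS = {
    "eu-data-residency": "eu-central",
    "us-default": "us-east",
    "us-government": "govcloud",
}

def assign_footprint(requirement):
    """Route a customer to the shared footprint that satisfies their
    data-location requirement; fail loudly if none exists yet."""
    try:
        return FOOTPRINTS[requirement]
    except KeyError:
        raise ValueError(
            f"no footprint satisfies {requirement!r}; "
            "consider a dedicated deployment"
        ) from None

print(assign_footprint("eu-data-residency"))  # → eu-central
```

The point is that the region decision happens once, at onboarding, rather than leaking into every data-access path.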
It is worth emphasizing an aspect of your point about "compliance regions": these do not necessarily mean geographic regions or different countries.
This setup is how every company I have worked for has handled government clients: a dedicated 'footprint' isolated from the commercial footprint, even if it remains on the same cloud provider. AWS even has GovCloud regions specifically for this scenario.
The study states:
"We studied sera from adults (ages 18 to 55 years) who received two doses of the Moderna mRNA-1273 vaccine in phase 1 clinical trials (12). The majority of our study focused on 14 individuals who received the 250-μg dose, although we validated key conclusions with a smaller subset of eight trial participants who received the 100-μg dose. The sera were collected at 36 and 119 days after the first vaccine dose, corresponding to 7 and 90 days after the second dose."
Strange that the study would focus on individuals who received a 250-μg dose, since the standard adult dose is 100 μg (https://reference.medscape.com/drug/mrna-1273-moderna-covid-...). Yes, the study states that key findings were validated with 8 participants who received the regular dosage, but it still seems odd to me.
The comparison dataset is explained as:
"The convalescent plasma samples were characterized in earlier studies (13–16) and grouped into an early time point of 15 to 60 days after symptom onset and a late time point of 100 to 150 days after symptom onset."
So the comparison was between vaccinated individuals, most of whom received a higher dose than is being given out, and infected individuals, and the time periods don't line up exactly. For the vaccinated dataset, the time points were 7 and 90 days after the second dose; for the convalescent dataset, 15-60 days and 100-150 days after symptom onset.
Deeper into the study, there is also this:
"The escape maps of the 100-μg cohort resembled those of the 250-μg cohort and fell into the 456/484-targeting, core-targeting, or flat categories (fig. S6). Although the sample sizes are small, and a higher fraction of the 100-μg dose escape maps were flat than for the 250-μg cohort (4 of 8 versus 5 of 14, respectively), this suggests that 100- and 250-μg doses elicit antibody responses similar in the breadth of their RBD-binding specificity."
Can't tell whether this is sarcastic or tongue-in-cheek ;-)
Java 11 is the newest LTS release.
The next LTS release will be Java 17.
Many companies won't touch anything that isn't LTS. There are still a large number of companies staying on Java 8 :shrug:
Though there's no confirmation I could find, there's an indication that they may be:
> Let’s take a quick look at the features that we are including in Open Distro for Elasticsearch. Some of these are currently available in Amazon Elasticsearch Service; others will become available in future updates.
Instead, build your artifact and publish it to an artifact repository, just like we used to.
_then_ wrap that artifact in a Docker image.
Vulnerability found in the Docker image? No problem. Build a new image with the same artifact.
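A minimal sketch of that flow as a Dockerfile, assuming a prebuilt Java jar pulled from a hypothetical artifact repository (the URL, version, and base image are placeholders). When a CVE lands in the base, you rerun only the image build; the artifact is untouched:

```dockerfile
# Base layer: OS-level dependencies only; swap/patch this on a CVE.
FROM eclipse-temurin:17-jre

# App layer: a prebuilt, published artifact fetched from the repository.
# Rebuilding the image does not rebuild the artifact itself.
ADD https://artifacts.example.com/myapp/1.4.2/myapp.jar /opt/app/myapp.jar

ENTRYPOINT ["java", "-jar", "/opt/app/myapp.jar"]
```

This keeps the artifact repository as the source of truth and treats the image as cheap, reproducible packaging around it.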