* If you use 3DES or any other 8-byte block cipher you import several additional security concerns you have to code around
* If you use CBC you also have to get the IV right, which tons of carefully designed crypto has failed to do
* SHA-1 is also insecure
* Neither MD5 nor SHA-1 is a MAC
* Your choice of MACs brings with it new security pitfalls
* How you apply the MAC also has pitfalls; see, for instance, all the systems that have managed to leave CBC IVs out, because they were specified as separate arguments
* Padding is a pitfall if you use CBC... or a few other modes --- guess which!
* If you use something other than CBC you get other pitfalls
* "Be careful with padding" is a vague description of like 6 different padding vulnerabilities you have to know about
* You still haven't even generated a key yet
* In the unlikely event you get all of this right, all you've managed to do is write a very basic symmetric-key seal/unseal --- you're still only 40% of the way (in functional terms) to something as simple as NaCl
So, no, I would not say that was the TL;DR of that piece. I think the TL;DR of that piece is right there in the title.
What? This entire vein of conversation is nothing but miscommunication.
These are the two choices a developer should pick from:
* crypto_secretbox() and crypto_secretbox_open()
from NaCl or libsodium
* rm -rf code/ && shutdown
Yes, a privileged few are actually capable of cobbling together a secure cryptosystem out of AES, HMAC, SHA2-family hash functions, and maybe Ed25519 and X25519 if they have a sane implementation available. The general public should just use whatever AEAD mode they're provided and not build their own disaster.
> * I would never use 3DES.
Good.
> * I am aware of that, just TLDRing. SHA-256 is my cup of tea.
Except for passwords. You don't use simple hash functions for passwords; that's what dedicated password hashes (bcrypt, scrypt, and the like) are for.
> * Did I say they were a MAC? I would use CBC HMAC + SHA-256 for that.
I'm assuming you meant AES-CBC + HMAC-SHA-256 here, in an encrypt-then-authenticate mode.
By "assuming" I of course meant "hoping".
> * No idea, since I'm not an expert at AES.
AES expertise isn't the issue here. Composing a secure cryptography protocol out of standard implementations is a rare skill set among software engineers.
I'm not offended that you didn't like the article. I'm saying you failed to summarize it. I'm pretty sure I know what the point of this particular article was. :)
Were there any hints you noticed in retrospect that insufficient experience or a bad hiring-test result was likely? If I could somehow predict those outcomes up front, I think it would save both them and me time and anxiety by stopping short. The best I have on the experience bit is just being very honest up front about how familiar I am with that brand, rather than trying to skate by and stretch the truth, e.g. "I can do C with classes so I know C++!"
The Seattle area seems to have equivalent or better salaries, whether the job is local or remote for a company in SF. Might just be my limited view though. I know a guy in my company who relocated to France and took lower pay, but I suspect if he had stayed in the US it wouldn't have gone down much, if at all. There are a few others who transitioned to mostly working from home, then to fully working from home except for occasional sync-ups for planning or whatever, and then left the city entirely for a cheap cabin with fiber somewhere nice (still with the occasional meetup). Not sure if their pay got cut.
Anecdote: My current employer, which has employees in many locations, pays the same position differently by location. HR told me that Silicon Valley and Manhattan are both above Seattle, and that the DC area is at the same level as Seattle.
Yeah, I tend to agree that location is BS to use as a way to try and save a buck. Everyone's cost of living is different, and even in a local area can diverge dramatically.
If you've filtered for basic competency where you can be sure you're not being BSed about their basic capabilities, I think I as a hiring manager would predict a slight correlation between better negotiating skills and better programming (and other) skills that I actually need them for. Often better negotiating is just asking for a higher-than-average price up front because you know you're worth it, because you know you're probably more skilled than the other person, and you know that in all likelihood you'll be contributing beyond the official bullet-points of your role -- i.e. your "given position" on paper is the same as the other person but you're doing a lot more.
Of course many places just need average skilled programmers, and if you can fill your roles with good enough people at a cheap price that's good business. But the general assumption that everyone implements the same Cog interface so they should get paid the same, while fair, is incorrect -- just like everyone has a different cost of living, I've never come across two identical devs nor two identical (except maybe on a short piece of paper leaving out a lot of details) dev positions. I think cutting out the possibility for negotiating better than others will also cut out the skilled-and-know-it batch from joining you unless they're super passionate about the product for some reason.
A few years ago I contacted a few people, and got contacted myself a few times with a "who wants to be hired" post. Some of those led to interviews, the one that led to a job was from being contacted through my "who wants to be hired" post.
I've intentionally been keeping myself ignorant about async/await in JS until it's common enough I can consider using it natively without any great concern about cross browser compatibility. Why is everyone so excited for it? From the example it looks like a thin syntactic sugar layer over normal promises, what's so great about saving one level of indentation?
Well, I guess there are three main reasons I'm so excited about async/await:
1. It allows you to reason linearly about code with IO mixed in, without blocking the entire event loop. This is incredibly useful. Computer programmers are very good at reasoning linearly. It is just much, much more comfortable [1] to think about code that uses async/await than the same code with promises.
2. I think the way promises and async/await interoperate is beautifully designed. The way it ends up working in practice is that you can write a function that "blocks"[2], but if you later decide you want to run it in parallel with something else, that's fine, because async functions are just a thin wrapper around Promises. So you can just take that existing function's promise and combine it with others via `Promise.all()`. Similarly, if you start with a function that returns a promise and you decide you want to call it in a blocking way, you just do `await thingThatReturnsPromise()`. This is an almost perfect example of primitives composing to be more than the sum of their parts.
3. (and this is worth the price of admission alone) error handling works properly and automatically. If you `await` an async function and it throws, you'll get a plain ol' JavaScript error thrown which you catch in the normal way. If I never debug another hung JavaScript app that dropped a rejected promise on the floor it will be too soon.
[1] please don't tell me that I just don't "get" asynchronous programming. I was writing js event handlers when you were in diapers (something something lawn something).
[2] `await` doesn't really block the event loop, it just appears to.
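A toy example of all three points together (`fetchUser` is a made-up stand-in for any promise-returning IO call):

```javascript
// A fake async IO call: resolves after 10ms, rejects for bad input.
function fetchUser(id) {
  return new Promise((resolve, reject) =>
    setTimeout(() => (id > 0 ? resolve({ id }) : reject(new Error('bad id'))), 10));
}

async function main() {
  // (1) reads top-to-bottom, like synchronous code
  const alice = await fetchUser(1);
  // (2) the same function, run in parallel via Promise.all
  const [bob, carol] = await Promise.all([fetchUser(2), fetchUser(3)]);
  // (3) a rejection surfaces as a plain thrown error
  try {
    await fetchUser(-1);
  } catch (err) {
    return [alice.id, bob.id, carol.id, err.message];
  }
}

main().then((r) => console.log(r));
```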
One more thing that's particularly nice about `async/await` is just using the `async` part. If a function should always return a promise, marking it `async` guarantees that it does. Even if it throws. Even if it returns a value that sometimes isn't a promise.
I'm not up to speed on async/await -- do you happen to know: if an async function throws in synchronous code (before the first await keyword), does that result in an asynchronously rejected promise or a synchronous exception?
This added contract between the caller and callee is huge. No more searching for that one branch of a non-trivial function that failed to wrap its immediately available return value in a Promise.
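Re: the question above about a throw before the first `await`: per the spec, an `async` function never throws synchronously; even an error raised before the first `await` is delivered as a rejected promise. A quick sketch you can run to check:

```javascript
async function throwsEarly() {
  throw new Error('boom');              // thrown before any await
}

let syncThrow = false;
let p;
try {
  p = throwsEarly();                    // does NOT throw synchronously
} catch (e) {
  syncThrow = true;                     // never reached
}
// syncThrow stays false; the error arrives as a rejection instead:
p.catch((e) => console.log('rejected with:', e.message));
```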
> [1] please don't tell me that I just don't "get" asynchronous programming. I was writing js event handlers when you were in diapers (something something lawn something).
This sounds pretentious and detracts from an otherwise good comment, even if you didn’t mean it to.
The main problem with normal promises is you can no longer use native control structures, because the code before and after the operation has to go into separate functions. Loops no longer look like loops, the bodies of conditionals are less obvious, exception catch points are hidden, etc.
With async/await the language takes care of lowering those control structures that straddle async operations, similarly to how a compiler lowers them into conditional branch instructions.
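To make that concrete, here's the same retrying loop written both ways (`attempt` is a made-up async op that fails until its argument reaches 3):

```javascript
// A fake async operation that rejects for small inputs.
function attempt(n) {
  return n >= 3 ? Promise.resolve('ok:' + n) : Promise.reject(new Error('retry'));
}

// With raw promises, the loop has to become recursion and the catch
// point is hidden inside a callback.
function retryWithPromises(n) {
  return attempt(n).catch(() => retryWithPromises(n + 1));
}

// With await, a loop looks like a loop and try/catch looks like try/catch.
async function retryWithAwait() {
  for (let n = 0; ; n++) {
    try {
      return await attempt(n);
    } catch (e) {
      // keep going
    }
  }
}

retryWithAwait().then((r) => console.log(r)); // logs "ok:3"
```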
I've used both. In my opinion, promises are fine for simple scenarios, but once you start doing anything complex, the synchronous "feel" of async/await makes your code much easier to read and reason about.
Ability to use asynchronous functions without pulling everything depending on them into a closure, which comes with a lot of baggage in how you have to structure things.
Depends how you look at it. The straightforward answer would be "no", because await lets you assign the result of a promise to a variable without calling `.then(...)` with a callback and then accessing the value inside the callback (where the callback would be the closure). In that way of thinking, async/await directly lets us write code with fewer closures.
Note that that explanation was purposefully ignoring any implementation details about whether or not closures are actually being created/allocated and was purely focusing on whether the programmer had to type out/read closures as extra syntax.
Actually, functional programmers might point out that even without async/await, assignment in imperative languages is still just syntactic sugar for using closures. That is, in functional languages, it's almost never idiomatic to directly "assign" to a variable (that is, to overwrite the variable's previous value); instead values are "bound" to variables in let statements (or whatever the language's idiomatic equivalent is) -- and let statements are effectively syntactic sugar for function calls/closures.
If you're familiar with Haskell: you can think of imperative code as Haskell's do notation but with the Identity monad (aka no special mode of computation, just the results of each function flowing into the next, with "assignment" converted into functions). Marking a function as async (and thus allowing you to use await inside it) is just switching that function to be under the Promise monad. (But be careful! JavaScript promises aren't strictly monadic, in that they auto-resolve if nested... so you can't have "Promise (Promise a)".)
(... Sorry, this ended up being a bit of a mess of a braindump because you specified "implicit" closures :) )
async/await is well designed in that it "converts" into a Promise (and you can await Promises). Even though async code is still "contagious" if you need the result of the computation, sync and async code can now be mixed – the sync code can just pass the promises around. This makes it a lot more convenient to use, i.e. the code is more concise.
Just as a point of interest (maybe), I've been using Redux-Saga on a project and it essentially affords the same benefit; it abstracts over all of the JS async primitives (callbacks, promises, generators, etc.). It is awesome!
If you have a linear chain of async operations (do this, then this, then this, ...) it doesn't change a lot compared to raw promises.
However if you have control flow in it (execute this async function 100 times, add the results, depending on the result call another async function, ...) things will get a lot more ugly with raw promises.
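For example, the "run it 100 times and branch on the sum" case reads like ordinary code with async/await (`asyncSquare` and `sumOfSquares` are made-up names; any async call would do):

```javascript
// A stand-in for any async operation returning a number.
const asyncSquare = (x) => Promise.resolve(x * x);

async function sumOfSquares() {
  let total = 0;
  for (let i = 1; i <= 100; i++) {
    total += await asyncSquare(i);        // ordinary loop + accumulator
  }
  // branch on the result, calling another async op if needed
  return total > 100000 ? await Promise.resolve('big:' + total)
                        : 'small:' + total;
}

sumOfSquares().then((r) => console.log(r)); // logs "big:338350"
```

Doing the same with raw promise chains means threading the accumulator through callbacks and building the conditional out of nested `.then()` calls.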
I love jk, been using that one for at least 8 years. I even got in the habit of tapping it a few times while thinking, sort of like how you might sometimes shake your leg or whatever. I trained myself out of that, though, when I had to use Eclipse more and more -- edits or undos in a file can bring Eclipse to its knees....
I bet you learned the 'new math' of addition and multiplication algorithms before long division. (e.g. for multiplication you construct a new set of numbers to add to get the final result.) But we probably should skip much of the arithmetic curriculum. We can move on to more interesting problems more quickly, or study different algorithms if that is the goal.
This is accurate. The absolute cheapest option is around $35k plus an annual membership fee (or a one-time few-k fee I believe). The more premium (but head-only) option is $80k plus annual membership fee. (A bit more if you opt for a standby team.) A life insurance policy that covers these amounts does not cost very much.