I see 7 job titles. A couple are hardware designers ("architect" as in designing a building), a couple are "solution architects", which is a name for a pre-sales coach/tutor/guide role, and a couple are project-management buzzword soup where I can't figure out what the role actually is; maybe that's the fabled software architect being derided here.
Exactly!! There is no such thing as a cure-all. Anybody angry that a technique does not solve all problems is clearly suffering from Silver Bullet Syndrome.
A lot of their "training" seems designed to weed out those who lack specific innate traits.
I don't think it's entirely about innate traits. There's a lot of training that occurs before going to some of these schools. For example, in the Army you're not eligible to 'try out' for Special Forces until you reach the E4 grade (not sure about officers), which may take 1.5 to 3 years, assuming you enlisted as an E1. Even then, a lot of guys spend 2-4 months in special training programs beforehand, and some don't pass until their second or third attempt.
Sure, some of the monitoring systems I've written do some fairly detailed testing. For example, I wrote the Justin.TV chat system. There's a group of bots that periodically log in to the production chat server, send messages to each other, and email me if anything doesn't work.
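To make that concrete, a bot-based production check of this shape boils down to a tiny round-trip loop. A minimal sketch, where `send`, `receive`, and the probe message are hypothetical stand-ins rather than the actual Justin.TV code:

```python
def check_chat_roundtrip(send, receive, probe="healthcheck-ping"):
    """Send a probe message via one bot and confirm another bot sees it.

    Returns None on success, or a human-readable error string that a
    monitoring loop could email out.
    """
    try:
        send(probe)
        got = receive()
    except Exception as exc:  # login/network failures count as outages too
        return f"chat error: {exc}"
    if got != probe:
        return f"expected {probe!r}, got {got!r}"
    return None

# In-memory stand-ins for the two bots, just to show the flow:
inbox = []
status = check_chat_roundtrip(inbox.append, inbox.pop)
# status is None when the round trip works
```

In a real deployment the two callables would wrap actual chat clients pointed at the production server, and a scheduler would run the check every few minutes and email whenever it returns a string.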
I speak only for myself, but I would say the following: The real key is fully automated testing, no matter how it is done. The finer the grain, the better, generally. Matching an actual business case is near ideal.
Full automation is necessary so you can run the tests frequently, automatically, near-continuously.
But beyond that, I think you get into religious territory, and I think people getting dogmatic about the definition of "unit test" tends to mistake definition for virtue (a common failing). If you've got automated testing, great! You win. Doesn't matter how it works.
Or, rather, it does matter, but only within your context, which nobody else is really competent to judge you on.
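For what it's worth, the "matching an actual business case" end of the spectrum can be as small as this. A hypothetical example (the order-total function and its discount rule are made up for illustration):

```python
def order_total(prices, discount_over=100.0, discount=0.10):
    """Total an order, applying a bulk discount above a threshold."""
    subtotal = sum(prices)
    if subtotal > discount_over:
        return round(subtotal * (1 - discount), 2)
    return round(subtotal, 2)

def test_bulk_orders_get_discounted():
    # Business rule: orders over $100 get 10% off.
    assert order_total([60.0, 60.0]) == 108.0
    # Orders at or under the threshold pay full price.
    assert order_total([40.0, 40.0]) == 80.0

test_bulk_orders_get_discounted()
```

Whether you call that a unit test, an acceptance test, or something else doesn't matter much; what matters is that something runs it on every change without a human in the loop.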
Yeah, the only thing that matters to me is which position has become more religious: the pro-unit-testing one or the anti-unit-testing one. I'll take whichever seems serious. I liked what Joel Spolsky had to say yesterday, but I disagree with the linked blog post today.
I've never seen Kent Beck as a crazy rules person. I think his comment from Stack Overflow demonstrates this:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
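"Logic with complicated conditionals" is exactly the territory where that strategy says to spend the testing budget. Even a toy rule like leap years is easy to get subtly wrong without a few assertions (this example is mine, not Beck's):

```python
def is_leap_year(year):
    # Easy to botch: divisible by 4, except centuries,
    # except centuries divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_year(2000)       # century divisible by 400
assert not is_leap_year(1900)   # century not divisible by 400
assert is_leap_year(2024)       # ordinary leap year
assert not is_leap_year(2023)   # ordinary non-leap year
```

A straight-line getter doesn't need this treatment; a three-clause conditional does.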
I disagree. I think comments are often overused, but I'm not convinced that code and tests are expressive enough to always convey why functionality exists.
For example, limitations in third-party dependencies sometimes force you to add code that looks out of place or redundant. In cases like this it can be hard to capture why that code was introduced without a comment.
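A sketch of what I mean, with a hypothetical upstream quirk: without the comment, the `replace()` below looks like dead weight, and someone will eventually "clean it up".

```python
import json

def parse_config(raw):
    # WORKAROUND: the (hypothetical) upstream service emits bare NaN for
    # unset numeric fields, which is not valid JSON per the spec. Normalize
    # it to null before parsing so downstream code sees None rather than
    # float('nan'). Nothing in the code alone explains this replace().
    cleaned = raw.replace("NaN", "null")
    return json.loads(cleaned)

parse_config('{"retries": NaN, "timeout": 30}')
```

No test asserts *why* that line is there; the comment is the only place that knowledge can live.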