Why did you think you needed to hire a junior dev before even starting work on the application? I know estimation can be a difficult task, but the typical "I'm moving so fast..." experience usually means you didn't, or don't, understand your tooling or the scope.
Also, how were you going to take on a junior dev and a new framework at the same time? Were you expecting them to know the framework?
As the saying goes, though, the last 20% takes 80% of the time.
Because the project was big enough to warrant more than one person. I have a whole team surrounding me to handle non-technical/non-development incidentals. Most companies would have had a lot more budgeted and would have pre-hired five devs. Then everything would have moved glacially slow, fulfilling the prophecy that five devs were needed.
> Because the project was big enough to warrant more than one person.
But based on what, the scope? If you weren't familiar with the tech stack, how would you gauge that? I understand people can conceptualize frameworks at a high level.
> I have a whole team surrounding me to handle non-technical/non-development incidentals.
Are these the people finding the junior or (5) devs that would be needed? Do they have experience with the framework to know how to scope the project? Hiring 1-5 developers in-house, or even as contractors, is a labor-intensive process, so I'm not really sure companies would have just done it based on an idea of an application. I can see where they might have hired early based on winning a contract, but in that case they probably underestimated the work, or padded the cost to account for ramp-up time.
> Most companies would have had a lot more budgeted and would have pre-hired five devs.
Maybe you haven't worked places that do spikes or just allow people to develop prototypes without entire scoping documents or hiring people. Also, keep an eye on your worth here. If you are saving the company the cost involved in getting (5) more developers, then you should be getting a bonus or decent compensation. A lot of people fall into this trap of "saving" the company money as if it's their own; it's not, and unless you are getting some of that savings you are diluting your current pay and working twice as hard.
> Then everything would have moved glacially slow, fulfilling the prophecy that five devs were needed.
Yeah, this is understood as the "mythical man-month" in terms of things slowing down. Adding the wrong head count is a planning and leadership issue. There is nothing stopping teams from being dynamic at a point, but that depends on how long the application is going to be supported. Having (5) people now can spread out the working knowledge and workload enough that no single developer is holding up forward progress. If you are having to mentor people on the project or fix mistakes, then they are the wrong people or the wrong skillset for the team. A leader will be able to convey the issue to management and have people let go or replaced. People don't like to do this, but there is no reason to keep a failed process going; we are all professionals. Alternatively, the people above you have accepted this as part of the application development process, it justifies their jobs, and they are fine with it, so getting the work done any faster is just a bonus to them.
Honestly, it sounds like it wasn't a tool that is needed often; if it was, you or someone else would have already written it. Or you don't program day-to-day in JavaScript/Python enough to do this quickly. There isn't anything wrong with that; as you mentioned, you have entry-level security engineers that typically handle those tasks. Creating a tool goes fast when you know exactly what you want it to do and don't have to explain all the requirements, and the pitfalls to avoid, to another person based on your experience writing quick scripts. I don't know if this really changes anything.
80% of being a good security engineer is knowing the big picture: all the parts and how they work. The correlation that an LLM produces has no value if it's not actionable. You are the one that determines the weights, values, and features that are important. I'd be very curious how you currently account for scheduled outages, unscheduled outages, new deployments, upgrades of existing systems, spinning instances up and down for testing, laptop device swap-outs, and traffic in different silos. How are you baselining normal communications and session timing between services or across protocols? If you are in the cloud, is baselining done per service: HTTP, DNS, DB, etc.? I could see different weights being constructed to represent defense-in-depth, but this would seem to be a constant amount of work while also investigating or feeding true/false positives back into the system.
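To make that concrete, here is a minimal sketch of the kind of weighted scoring I'm describing. Every weight, feature, and maintenance window below is hypothetical; a real system would learn its baselines from historical traffic rather than hard-code them:

```python
# Hypothetical weighted anomaly scoring: nothing here is a real product's
# logic, just the shape of the bookkeeping described above.
from dataclasses import dataclass
from datetime import datetime, timezone

# Per-feature weights: chosen by the engineer, not learned. All hypothetical.
WEIGHTS = {
    "new_destination": 0.5,  # service talking to a host it never has before
    "off_hours": 0.2,        # activity outside the service's normal window
    "volume_spike": 0.3,     # bytes transferred far above baseline
}

# Scheduled outages/deployments should suppress scoring, not feed the baseline.
MAINTENANCE_WINDOWS = [
    (datetime(2024, 6, 1, 2, 0, tzinfo=timezone.utc),
     datetime(2024, 6, 1, 4, 0, tzinfo=timezone.utc)),
]

@dataclass
class FlowEvent:
    src_service: str
    dst_host: str
    bytes_sent: int
    timestamp: datetime

def in_maintenance(ts: datetime) -> bool:
    return any(start <= ts <= end for start, end in MAINTENANCE_WINDOWS)

def score(event: FlowEvent, baseline: dict) -> float:
    """Weighted anomaly score; forced to 0 during known maintenance."""
    if in_maintenance(event.timestamp):
        return 0.0
    svc = baseline.get(event.src_service, {})
    features = {
        "new_destination": event.dst_host not in svc.get("known_hosts", set()),
        "off_hours": not (svc.get("active_start", 0)
                          <= event.timestamp.hour
                          < svc.get("active_end", 24)),
        "volume_spike": event.bytes_sent > 3 * svc.get("avg_bytes", float("inf")),
    }
    return sum(WEIGHTS[name] for name, fired in features.items() if fired)

# A service reaching a never-seen host at 03:00 with a large transfer.
baseline = {"billing-api": {"known_hosts": {"db-1"}, "active_start": 8,
                            "active_end": 20, "avg_bytes": 10_000}}
evt = FlowEvent("billing-api", "198.51.100.7", 80_000,
                datetime(2024, 6, 2, 3, 0, tzinfo=timezone.utc))
print(score(evt, baseline))  # 1.0: all three hypothetical features fire
```

The constant-work problem shows up immediately: every deploy, test instance, or device swap-out is another entry someone has to keep current in that baseline.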
Entry-level cybersecurity isn't a thing, which is why it isn't working out: you typically need prior dev, ops, devops, SRE, sysadmin, etc. experience. The talent shortage exists because you can't do an undergrad in cybersecurity and somehow pick up the operational knowledge that develops your skills for understanding and troubleshooting how systems, networks, and applications all function together. Cybersecurity as it stands, and as you mention, is in my experience best as a specialization on top of computer science. I mean, even the CISSP requires working experience in the field.
The one item I think you are overlooking is that you have the experience in how everything works together, which makes a tool like ChatGPT, or some other analyzer where you can ask the "right" questions, useful: you have the mental mapping and models, built through experience, of which questions to ask. So while a security analyst job might go away, you are back at the original problem of developing security engineers that know the architecture, flows, daily expectations, etc., and having an LLM buddy is not going to turn a security analyst directly into a cybersecurity engineer overnight.
> The correlation that an LLM produces has no value if it's not actionable.
For security, there are two parts to this:
- correlation within detection engines, i.e. what CrowdStrike does: CS and its peers are already doing what you describe (baselining normal system and identity behaviors). It is still hit-or-miss, but noticeably better than a few years ago, and I think the current AI era will push it further. These have already taken away the need for several sec eng hires.
- correlation across logs, i.e. an incident is happening and, under time pressure and stress, an IR team is usually putting together ad hoc search queries. LLMs, since many of them seem to have indexed the query-language docs and much of the open-source documentation on AWS, O365, etc., are an almost invaluable tool here. It's hard to explain how much they speed up security work, from pre-incident prep to in-incident IR.
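For a concrete sense of it, the snippet below is the shape of the ad hoc query an LLM drafts in seconds mid-incident; it's a sketch against boto3's CloudTrail lookup_events API, where "suspect-user" and the 24-hour window are placeholder assumptions:

```python
# Sketch of an ad hoc in-incident query: pull 24h of CloudTrail activity for
# one IAM user. Assumes configured AWS credentials; "suspect-user" is a
# placeholder, not a real account.
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
paginator = cloudtrail.get_paginator("lookup_events")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username",
                       "AttributeValue": "suspect-user"}],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])  # raw event is a JSON string
        print(event["EventTime"], event["EventName"],
              detail.get("sourceIPAddress"))
```

None of this is hard, but writing it from cold memory at 3am, in whichever query language today's incident demands, is exactly the step LLMs collapse.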
> which makes a tool like ChatGPT, or some other analyzer where you can ask the "right" questions, useful...
Yes, this specifically is one of the great value-adds currently: gaining context much more quickly than the usual pace. For security incidents, and for the self-build use cases that security engineers often run into, this aspect alone is enough to be a huge value-add.
And I agree, it will exacerbate the existing version of this, which is my point on replacing analysts:
> you are back at the original problem of developing security engineers...
This is already a problem, and LLMs help fix the current generation's version of it. It's hard to find good sec engs to fill the developmental sec eng roles, so those roles become LLMs. The outcome of this is... idk? But it is certainly happening.
1. Maybe you aren't the intended audience for this.
2. They are lecture notes and homework, which are typically distilled from a textbook or from industry experience.
These are helpful items for someone teaching or speaking to these topics; you aren't supposed to learn directly from the notes. It's the equivalent of saying you could learn C++ from a PowerPoint presentation.
Can you elaborate on what you would add for the cloud, AD, or vulnerability content?
From my perspective, what you listed are simply tools and vendor offerings, for which reading the documentation or getting a vendor-specific certification is the expected process. This course teaches the foundations from which the items you listed were built. The reason it probably feels so dated is that security hasn't changed; we just like to keep calling it different things. Classes like this tend to focus on the more permanent area of network protocols, as most exploits just ride on top of existing standards; if you understand those, you can understand the "latest" vulnerabilities, cloud infrastructure, IAM, and so on.
Here is a simple example: DDoS is handled on almost every app platform a developer can deploy on, but misconfigured cloud resources (#5 in the newest OWASP Top 10) are not described here at all. In fact, the cloud primitives of compute, storage, and workloads are not described; instead, classic 2000s network security is covered.
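For instance, here is a minimal sketch of one such misconfiguration check, flagging S3 buckets without a full public-access block via boto3. It's one illustrative check under assumed credentials, not a CSPM tool:

```python
# One illustrative misconfiguration check: S3 buckets missing a full
# public-access block. Assumes configured AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        if not all(cfg.values()):  # any of the four flags left off
            print(f"{name}: public access block incomplete: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```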
The lectures aren't a how-to guide. The items that are explained are there to provide reference for the lecture material. For example, the apache2 setup could just as easily be nginx or lighttpd on Windows, FreeBSD, Red Hat, etc. It's explaining the concepts of DDoS, malware, viruses, spam, and cryptography. Cloud primitives? How would those relate to computer and network security rather than being covered in an operating systems course? They are just abstractions of physical hardware properties and would be specific to the implementation you were working on, i.e. AWS, GCP, Azure, etc. Any specific implementation, or its security, is completely dependent on what the vendor implements and is ephemeral.
The OWASP Top 10 is self-described as an awareness document[1]; it wouldn't be something you teach a college course on.
At what point, though, is this just consulting? Since everyone's risk tolerance is different, and they may or may not have good network architectures or software practices, how would this apply generally to other companies or networks?
I would add the additional problem of CVEs being devoid of any useful information, which leads to generic tests being created by vulnerability scanners, since they have the same lack of insight as everyone else that's trying to patch the issue. This creates more false positives, or wasted effort trying to confirm an exploit yourself. I get not wanting to provide a PoC because "script kiddies" might use it, but if we want vulnerabilities patched regularly, you have to provide better assurances that they are valid and that we can show they are patched, aka tests.
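To illustrate the gap, here is a hedged sketch contrasting the two kinds of tests. The version check is roughly what scanners ship because the CVE gives them nothing else; the behavioral check is what a useful advisory would let us write. The host, banner format, and traversal bug are all hypothetical:

```python
# Hypothetical target, banner format, and bug; the point is the contrast
# between the two checks, not any real CVE.
import re

import requests

TARGET = "http://target.example.internal"  # placeholder host

def version_check() -> bool:
    """What scanners ship: flag purely on a banner. Builds with the fix
    backported still match, which is where the false positives come from."""
    banner = requests.get(TARGET, timeout=5).headers.get("Server", "")
    match = re.search(r"ExampleServer/(\d+)\.(\d+)", banner)
    return bool(match) and (int(match.group(1)), int(match.group(2))) < (2, 4)

def behavior_check() -> bool:
    """What a CVE with real detail would enable: probe the actual flaw.
    Here the hypothetical bug is a path traversal on /download."""
    resp = requests.get(f"{TARGET}/download?file=../../etc/passwd", timeout=5)
    return resp.status_code == 200 and "root:" in resp.text

print("version says vulnerable:", version_check())
print("behavior says vulnerable:", behavior_check())
```

The behavioral check also doubles as the regression test that proves the patch actually landed.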
Shouldn't your analysis/understanding show that upgrading the library is enough? If a CVE or a vulnerability scanner's test isn't telling you the problem that needs to be solved, upgrading a library or anything else won't make a difference, and you wouldn't know the problem either way.
Approaching vulnerability management from a developer's view is a very narrow scope.
What sources do you have to cite on development costs and regular maintenance? Autonomous trucking is an add-on to existing trucks, which are already being built regularly. We already perform road maintenance and account for it in local, state, and federal budgets, as it serves more vehicles and destinations than just "autonomous trucks."
This is my thinking as well. We need to build for the task at hand not try and integrate into what we already have. The first sentence in this article is where the failed assumptions are.
"Trucking was supposed to be the ideal first application of autonomous driving. Freeways contain predictable, highly structured
driving scenarios.."
When sharing the road with human drivers, this statement makes no sense at all. Vehicles are their own entities with no connection to each other outside of the road, signage, and defined lanes. Instead of trying to build sensors for an existing 8-lane highway, just do it for (2) isolated lanes. You don't have to plan for 100% of human scenarios if they are mostly removed. We don't fly airplanes adjacent to, behind, or ahead of each other, so why is the assumption that autonomous trucks need to be on the same road as everyone else?