>>> Meaty feet can be held to a fire. To quote IBM, "A computer can never be held accountable."
>> This is the question I keep asking leaders (I literally asked a VP this question once in an all-hands). How do we approach the risk associated with mistakes made by AI?
> What was the answer? Asking for a vp friend
This is a difficult issue to tackle, no doubt. What follows drifts into the philosophical realm by necessity.
Software exists to provide value to people. Malicious software qualifies as such due to the desires of the actors who produce it, but it will not be considered further here, as it is not germane.
AI is an umbrella term for numerous algorithms with wide-ranging applicability across problem domains, which can often approximate near-optimal solutions using significantly fewer resources than other approaches. But they are still algorithms, capable of only one thing: executing their defined logic.
Sometimes this logic produces results similar to what a person would produce in a similar situation. Sometimes it produces wildly different results. Often there is significant value when the logic is used appropriately.
In all cases AI algorithms do not possess the concept of understanding. This includes derivatives of understanding such as:
- empathy
- integrity
- morals
- right
- wrong
Which brings us back to part of the first quoted post:
To quote IBM, "A computer can never be held accountable."
Accountability requires justification of actions taken or lack thereof, which demands the ability to explain why said actions were undertaken relative to other options, and implies a potential consequence be imposed by an authority.
Algorithms can partially "justify their output" via strategic logging, but that's about it.
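A minimal sketch of what such "strategic logging" might look like (the function, rule names, and thresholds here are all hypothetical, purely for illustration): each branch records which inputs it saw and which rule fired, so the output can be traced after the fact. Note the log explains what the algorithm did, not why the rule itself is justified.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("loan_model")

# Hypothetical thresholds for illustration only.
MIN_SCORE = 650
MAX_DEBT_RATIO = 0.4

def approve_loan(credit_score: int, debt_ratio: float) -> bool:
    """Toy decision rule that logs its inputs and the rule that fired."""
    if credit_score < MIN_SCORE:
        log.info("denied: credit_score=%d below threshold %d",
                 credit_score, MIN_SCORE)
        return False
    if debt_ratio > MAX_DEBT_RATIO:
        log.info("denied: debt_ratio=%.2f above limit %.2f",
                 debt_ratio, MAX_DEBT_RATIO)
        return False
    log.info("approved: credit_score=%d, debt_ratio=%.2f",
             credit_score, debt_ratio)
    return True
```

The log line is the entire extent of the "justification": it can reconstruct the execution path, but it cannot defend the choice of thresholds or weigh the decision against alternatives, which is exactly the gap accountability demands.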
Which is why "a computer can never be held accountable." Because it is a machine, executing instructions ultimately initiated by one or more persons who can be held accountable.
This is fascinating, but at the same time I have to ask: wouldn't it be easier for a person as experienced as Tavis Ormandy to simply write a console version of spreadsheet software from scratch using a modern stack?
There are medical bodies staffed by professionals that study these matters, and the states (should) base their decisions on their findings. To me, the problems start when decision makers, and then the general population, start trusting fringe conclusions and trying to interpret medical data when they have no training to do so.
When a new cure for cancer is found, we don't expect the minister of health to recommend or advise its use to the general public on TV, do we? We assume professional MDs will learn about it via their regular channels and recommend the treatment to patients they judge relevant.
I think the current context of the world is full of unjustified and unjust generalization.
And as unfortunate as it sounds, it looks like, as with all victims of such generalization, the alumni will have to fight the prejudice associated with their choice of university.
I don't really think it pays off to make such a distinction between a virus and a trojan.
`Trojan` is often used to refer to malware that provides a backdoor into your system, and if someone gets to run code on your machine it isn't your machine anymore.
The real value is in evaluating your risk, which includes an analysis of the infection vector. A virus (or worm) can be more risky because it typically exploits a weakness in the system. And some trojans are more risky to some demographics than others, depending on which social engineering techniques they use to trick a user into installing them.
- One who will drive you crazy demanding to see things that don’t matter, argue with you over non-issues, think they are way smarter than they are, and just produce a lot of irrelevant paperwork
- One who is barely technically literate but got their CISSP certificate, definitely can’t code, and has never written an exploit in their life. They just want to see a familiar tool name and some checklists.
There are zero kinds of compliance auditors who will ever find an actual vulnerability, and I’d be surprised if they can even explain common basic attacks like SSRF.
The reason is simple: that work is so boring that if you had any skills at all, you'd be doing more interesting security work.
I know you're being facetious, but any SOC2 audit would overlook this because they're just going down a checklist making sure specific controls are in place and not actually probing for the various possibilities to bypass these controls.
> any SOC2 audit would overlook this because they're just going down a checklist
From the GP, "They also stored (1) all the production database and server passwords in a plain text file accessible to half the company and (2) no audit trail of any kind on logins."
Any competent SOC-2 auditor would not overlook this. Large swaths of criteria are specifically geared to uncover both of these (unencrypted credentials and audit trails).
SOC-2 audit firms have a strongly vested interest in hiring competent auditors, because if a SOC-2 auditor did not ask questions covering these areas, then failure to complete the audit would be actionable malpractice and the auditor would be liable, unless the company lied (committed demonstrable fraud) in its written responses.
I see your point: if they were competent, they wouldn't overlook it. My point was more that because SOC-2 is more pedantic (than, say, HIPAA), it's harder for a SOC-2 auditor to mess this up.
The "boxes" to be checked are actually asking open-ended questions about that, and asking you to provide copious written documentation backing up what you are claiming. This process takes weeks or months and is quite expensive.
The relevant boxes here are "Are all accesses to production systems logged in an indelible manner?" and "Is the principle of least privilege followed when accessing production systems?"
These questions aren't perfect, since they don't actually prevent security issues and merely document them extensively, but answering no to them will fail the audit.
That would be defamatory, and potentially extremely unfair, especially if someone in an anonymous internet forum lied or was simply mistaken.
If someone experienced a loss because they relied on untrue statements from an auditor, they would have grounds to bring a case, and the auditor would have a chance at due process to respond to that case.
It probably seems slow and unwieldy when we all want justice now, but this is how we can be assured that justice is, in fact, done, and not injustice.
A HIPAA audit would certainly miss this, but I would be very surprised if a SOC2 auditor missed this (or that they would remain in business long with the damage that would do to their reputation).
Let's say that you've been given a key for a locker at Grand Central Station and told there's a gun inside. We agree that I'll use the gun to shoot someone. You give me the key and I go to collect the gun. In fact, the locker is empty.
The gun doesn't even exist and yet there's still a conspiracy to commit murder.
Yes: but don't you think that going to collect the weapon furthers the conspiracy to commit murder?
There'd inevitably be argument over whether it met the required standard, which differs by jurisdiction. e.g. did it advance the conspiracy substantially? did it take the conspiracy past the point of no return?
> Yes: but don't you think that going to collect the weapon furthers the conspiracy to commit murder?
Well arguably that didn't happen.
> There'd inevitably be argument over whether it met the required standard, which differs by jurisdiction. e.g. did it advance the conspiracy substantially? did it take the conspiracy past the point of no return?
I'd say that going to a place where you would get equipment is not something that substantially advances the conspiracy, even if there was a gun! It's actually getting the equipment that might do so, depending on what the equipment is. And if you're still collecting necessary equipment, or especially doing a prerequisite to collecting equipment, you're nowhere near the point of no return.