Hacker News | jpcosta's comments

What was the answer? Asking for a vp friend


>>> Meaty feet can be held to a fire. To quote IBM, "A computer can never be held accountable."

>> This is the question I keep asking leaders (I literally asked a VP this question once in an all hands). How do we approach the risk associated with mistakes made by AI?

> What was the answer? Asking for a vp friend

This is a difficult issue to tackle, no doubt. What follows drifts into the philosophical realm by necessity.

Software exists to provide value to people. Malicious software qualifies as well, given the desires of the actors who produce it, but it is not germane here and will not be considered further.

AI is an umbrella term for numerous algorithms having wide-ranging problem-domain applicability, often able to approximate near-optimal solutions using significantly fewer resources than other approaches. But they are still algorithms, capable of only one thing: executing their defined logic.

Sometimes this logic can produce results similar to what a person would produce in a similar situation. Sometimes the logic will produce wildly different results. Often there is significant value when the logic is used appropriately.

In all cases AI algorithms do not possess the concept of understanding. This includes derivatives of understanding such as:

  - empathy
  - integrity
  - morals
  - right
  - wrong

Which brings us back to part of the first quoted post:

  To quote IBM, "A computer can never be held accountable."

Accountability requires justification of actions taken or lack thereof, which demands the ability to explain why said actions were undertaken relative to other options, and implies a potential consequence be imposed by an authority.

Algorithms can partially "justify their output" via strategic logging, but that's about it.
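To illustrate what "strategic logging" can and cannot provide, here is a minimal sketch (the loan-approval domain, function name, and thresholds are all invented for this example, not taken from any real system): the algorithm can record which rule fired, which is a trace of its execution, not a justification it can defend.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("decision_engine")  # hypothetical example

def approve_loan(score: int, income: int) -> bool:
    """Toy decision rule; each branch logs the rule that produced it."""
    if score < 600:
        log.info("DENY: score %d below threshold 600", score)
        return False
    if income < 30_000:
        log.info("DENY: income %d below threshold 30000", income)
        return False
    log.info("APPROVE: score %d and income %d passed all checks", score, income)
    return True

# The log says *which* check failed, but the algorithm cannot explain
# *why* 600 or 30000 are the right thresholds -- that choice, and the
# accountability for it, rests with the people who set them.
approve_loan(650, 25_000)
```

The log line is the extent of the "justification": it points back to a rule, and the rule points back to a person.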

Which is why "a computer can never be held accountable." Because it is a machine, executing the instructions ultimately initiated by one or more persons who can be held accountable.


In the all hands I got an answer about techniques that would be used to reduce the likelihood of mistakes. I.e., not an answer.


Unfortunately, all too common in the security field.


It's almost the entire field these days - just trying to get CVEs for the CV.

The days of Metasploit 0-days are over, so now it's just loads of sensationalist reporting and box-ticking regulation in companies.


This is fascinating, but at the same time I've got to ask: wouldn't it be easier for a person as experienced as Tavis Ormandy to simply write a console version of spreadsheet software from scratch using a modern stack?


Yes and no: 1-2-3 is an extremely advanced piece of software that took thousands of man-hours to get where it was.

You could write 20% of it in a few days, maybe 50% in a month, but that last bit would take … thousands of man-hours.


Yes, but the result won't be as polished as 1-2-3 or QP, both of which have tons of developer time behind them.


There are medical bodies staffed by professionals who study these matters, and the states (should) base their decisions on their findings. To me the problem starts when decision makers and then the general population start trusting fringe conclusions and trying to interpret medical data when they have no training to do so.


Why go through the state, then?

When a new cure for cancer is found, we don't expect the minister of health to recommend or advise its use to the general public on TV, do we? We assume professional MDs will learn about it via their regular channels and recommend the treatment to the patients they judge relevant.


neutered in what sense? besides doing the keynote speech of course


Maybe try running a business?


That seems to me like an unjustified and unjust generalization.


I think the current context of the world is full of unjustified and unjust generalizations.

And as unfortunate as it sounds, it looks like, as with all victims of such generalization, the alumni will have to fight the prejudice associated with their choice of university.


I don't really think it pays off to make such a distinction between a virus and a trojan.

`Trojan` is often used to refer to malware that provides a backdoor into your system, and if someone gets to run code on your machine it isn't your machine anymore.


The real value is in evaluating your risk, which includes an analysis of the infection vector. A virus (or worm) can be more risky because it typically exploits a weakness in the system. And some trojans are more risky to some demographics than others, depending on which social engineering techniques they use to trick a user into installing them.


If you are making a risk evaluation based on the generic term someone else uses to describe a threat, you've already lost.

The genie is out of the bottle and there is no putting it back - virus, malware, worm, trojan, etc. are all interchangeable marketing terms now.


where would one hire an auditor like this? asking for a friend


There are two kinds of auditors:

- One who will drive you crazy demanding to see things that don’t matter, argue with you over non-issues, think they are way smarter than they are, and just produce a lot of irrelevant paperwork

- One who is barely technically literate but got their CISSP certificate, definitely can’t code, and has never written an exploit in their life. They just want to see a familiar tool name and some checklists.

There are zero kinds of compliance auditors who will ever find an actual vulnerability, and I’d be surprised if they can even explain common basic attacks like SSRF.

The reason is simple. That work is so boring that if you had any skills at all, you'd be doing more interesting security work.


I know you're being facetious, but any SOC2 audit would overlook this because they're just going down a checklist making sure specific controls are in place and not actually probing for the various possibilities to bypass these controls.


> any SOC2 audit would overlook this because they're just going down a checklist

From the GP, "They also stored (1) all the production database and server passwords in a plain text file accessible to half the company and (2) no audit trail of any kind on logins."

Any competent SOC-2 auditor would not overlook this. Large swaths of criteria are specifically geared to uncover both of these (unencrypted credentials and audit trails).

SOC-2 audit firms have a strongly vested interest in hiring competent auditors, because if a SOC-2 auditor did not ask questions covering these areas, then failure to complete the audit would be actionable malpractice and the auditor would be liable, unless the company lied (committed demonstrable fraud) in its written responses.


> Any competent SOC-2 auditor would not overlook this

I think this statement is tautological, in a way


I see your point: if they were competent, they wouldn't overlook it. My point was more that because SOC-2 is more pedantic (than, say, HIPAA), it's harder for a SOC-2 auditor to mess this up.


No, I'm saying their audit process focuses more on box-checking than on actually seeing "you left the key under the mat."


The "boxes" to be checked are actually asking open-ended questions about that, and asking you to provide copious written documentation backing up what you are claiming. This process takes weeks or months and is quite expensive.


The relevant boxes here are "Are all accesses to production systems logged in an indelible manner?" and "Is the principle of least privilege followed when accessing production systems?"

These questions aren't perfect, since they don't actually prevent security issues and merely document them extensively, but answering no to them will fail the audit.


Are you talking about HIPAA or SOC2?


As a counter example, a competent fire inspector might overlook it ;)


Given the current political climate, such a SOC-2 auditor should be outed by name.


> Given the current political climate,

Not sure what politics have to do with it.

> such a SOC-2 auditor should be outed by name.

That would be defamatory, and potentially extremely unfair, especially if someone lied or was simply mistaken in an anonymous internet forum.

If someone experienced a loss because they relied on untrue statements from an auditor, they would have grounds to bring a case, and the auditor would have a chance at due process to respond to that case.

It probably seems slow and unwieldy when we all want justice now, but this is how we can be assured that justice is, in fact, done, and not injustice.


A HIPAA audit would certainly miss this, but I would be very surprised if a SOC2 auditor missed this (or that they would remain in business long with the damage that would do to their reputation).


You still have to prove who had the gun in the first place


No, actually.

Let's say that you've been given a key for a locker at Grand Central Station and told there's a gun inside. We agree that I'll use the gun to shoot someone. You give me the key and I go to collect the gun. In fact, the locker is empty.

The gun doesn't even exist and yet there's still a conspiracy to commit murder.


No, actually, you need additional steps to achieve conspiracy to commit murder in that hypothetical.


What's the legal basis for that? The "overt steps" are you giving me the key, and me going to the locker.


You could get the gun to prevent the conspiracy from taking place.


Does the overt act not have to further the conspiracy?


Yes: but don't you think that going to collect the weapon furthers the conspiracy to commit murder?

There'd inevitably be argument over whether it met the required standard, which differs by jurisdiction. e.g. did it advance the conspiracy substantially? did it take the conspiracy past the point of no return?


> Yes: but don't you think that going to collect the weapon furthers the conspiracy to commit murder?

Well arguably that didn't happen.

> There'd inevitably be argument over whether it met the required standard, which differs by jurisdiction. e.g. did it advance the conspiracy substantially? did it take the conspiracy past the point of no return?

I'd say that going to a place where you would get equipment is not something that substantially advances the conspiracy, even if there was a gun! It's actually getting the equipment that might do so, depending on what the equipment is. And if you're still collecting necessary equipment, or especially doing a prerequisite to collecting equipment, you're nowhere near the point of no return.

