Hacker News | oskarsv's comments

Yeah, although technically it's "out of scope", I think there are times when you should stop debating the technicalities and consider the business impact.

I mean, do you look at that demo and think "yeah, that's technically just 'important' let's fix it in 2 months"?


There are different levels of security for ElectronJS apps; some, like in this case, are not enough.

I think it will take a long time before we can call ElectronJS secure. There are regular sandbox escapes, and that's just from what we know publicly.


The OP is asking for more detail than “not enough”, though:

“Can it escape the Chromium renderer sandbox? Or is that sandbox disabled?”


To simplify: no, it's not enabled.

The real answer is more complicated, as it's not necessarily a global setting and depends on what you call a "sandbox".
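For readers wondering what "enabled" means concretely: in the Electron versions current at the time, the Chromium sandbox was opt-in per window via `webPreferences`. A minimal sketch (illustrative settings only, not Teams' actual configuration):

```javascript
// Sketch: webPreferences that opt a renderer into the Chromium sandbox.
// In an Electron app these are passed to `new BrowserWindow(...)` in the
// main process; shown here as a plain object for clarity.
const hardenedWebPreferences = {
  sandbox: true,          // run the renderer inside the Chromium OS sandbox
  nodeIntegration: false, // no direct Node.js APIs in the renderer
  contextIsolation: true, // keep preload scripts isolated from page JS
};

// In a real app (assuming the standard Electron API):
// const { BrowserWindow } = require('electron');
// const win = new BrowserWindow({ webPreferences: hardenedWebPreferences });
```

Each of these can be set per BrowserWindow, which is why "is the sandbox enabled" has no single global answer.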


Thanks. I'd pay (moderately) for the more complicated answer. An ebook on Electron security might be a good idea.


I'm not an expert on Electron security!

But even if that wasn't addressed to me, there's no need to pay; you can start here:

- https://www.electronjs.org/docs/tutorial/security
- https://github.com/electron/electron/security/advisories

As you can see, there are plenty of considerations and pitfalls to take into account. The best option is to enable contextIsolation for everything.

Further, Electron security is closely tied to Chrome security, so that is one deep rabbit hole.
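To make the contextIsolation advice concrete, here is a hedged sketch of the kind of channel whitelist a preload script can enforce. The `guardedChannel` helper and channel name are made up for illustration; the contextBridge/ipcRenderer usage in the comment assumes Electron's standard APIs:

```javascript
// Sketch: a whitelist guard for IPC channels. With contextIsolation enabled,
// page JavaScript can only reach whatever the preload script explicitly
// exposes, so exposing a guarded invoke() instead of raw ipcRenderer
// drastically shrinks the attack surface.
const ALLOWED_CHANNELS = new Set(['app:get-version']); // hypothetical channel

function guardedChannel(channel) {
  if (!ALLOWED_CHANNELS.has(channel)) {
    throw new Error(`blocked IPC channel: ${channel}`);
  }
  return channel;
}

// In an Electron preload script this would be wired up roughly as:
// const { contextBridge, ipcRenderer } = require('electron');
// contextBridge.exposeInMainWorld('safeApi', {
//   invoke: (ch, ...args) => ipcRenderer.invoke(guardedChannel(ch), ...args),
// });
```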


The best Electron security is not using it in the first place.


Yeah, let's stick with raw C/C++, that would be much safer...

Or maybe let's use some research language made by Wirth, and get access to all 10 of its packages and the 5 devs worldwide using it :-)


For starters, leave it in the browser.

I didn't mention any programming language.


Telegram Desktop is a cross-platform C++ app. What similar remote code execution exploit has existed in the wild for it?



One of them requires the user to click run on a file, much like running an EXE. The other simply saves potentially malicious data to external storage, which would then have to be run by a separate malicious third-party app. These are far from RCE exploits that execute immediately without poor user decision-making, and Rust is not impervious to security exploits similar to these.


C'mon. Just because there is one C++ app without remote exploits doesn't mean all C++ apps are immune.


FYI it's not just PL that factors into security. The engineers, for example.


Rather just keep it in the browser? ;-P


This is safer to a significant degree.


I wrote this. This is one of five similar reports for MS Teams.

Even outside RCE, just consider the impact of access to SSO tokens and wormability :)


Could you clarify the "one of five" statement please? Are the other 4 vulnerabilities still unfixed, or are they fixed with a write-up still pending? If there are still 4 unfixed RCE bugs in Teams, I'd rather people uninstall Teams than wait for the fix...


It would be safest to assume that you have at least one unfixed RCE bug in Teams, even if oskar did not discover it yet.


Could you provide a disclosure timeline and the version or indication of the version which has fixed this issue?


You can find both disclosure dates and versions in the report.

As for when it was fixed, I have no idea; they never told me. One day it just was.


Thank you for reporting it and not selling it on the black market!

I agree the categorisation is very bad.

I hope raising this here will help you get rewarded properly.


> Thank you for reporting it and not selling it on the black market!

I disagree. If MS is going to treat major issues like this then researchers should be selling them to the highest bidder. Maybe that way they'll actually treat disclosures properly.


Pretty bold to advocate for blackhat behavior on one of the most schoolboy-vanilla places on the internet, but I can't say I necessarily disagree with your sentiment. Big tech needs a lesson, but is this really the vulnerability we want? 115 million DAU on Teams...

The amount of damage the NSA or some other state sponsored actor could do with this... It would be very bad to say the least. How bad depends on which state acquires it.

If a script kiddie got it, they would likely do a mass ransomware infection; hospitals would get hit, people would die. Millions in crypto would be lost to unencrypted wallets found on the vulnerable machines (yes, people do that...), which could cause some to lose their life savings... People have committed suicide for less.

My point is it's important to look past FAANG being cheap and look at the 2nd- and 3rd-order effects of something this powerful and widespread.


Governments around the world already regularly trade in exploits that are as or more severe than this one.

That isn’t to advocate for brokering to a government, just to say that the market already exists and contains comparable exploits. It’s only a matter of time until we see the next EternalBlue to WannaCry lifecycle.


> look at 2nd and 3rd order effects

...which FOSS engineers have spent their lives on, while FAANG accumulates patent and SSL money across international borders? Forcing Teams kool-aid, with surveillance built-in, down your desktop with the help of the C-suite and their attorneys?


The ethical thing to do is immediate full disclosure, not selling it and not this (ir)responsible disclosure crap.


> researchers should be selling them to the highest bidder

But what about all of the innocent people who would be harmed by such a callous approach? I'm glad some researchers have a conscience.


> But what about all of the innocent people who would be harmed by such a callous approach?

They should then think again about their choice of using teams. Why should Microsoft rake in money from a shabby product while volunteers have to fix their shit?

Assigning a ridiculously low score to significantly lower the bounty, as a billion-dollar company, is disgusting.


> They should then think again about their choice of using teams.

Try saying that to a student who is using Teams on a school-issued laptop, by no choice of their own.

I'm not in any way defending how Microsoft handled this. Frankly, I'm ashamed of my former employer (though I worked in a completely different division). But your outrage toward the company should not extend to its unwitting users.


There wouldn't be very many unwitting users if their software had a serious reputation for being a serious security risk.


Bullshit. Currently there are millions of children who are obligated to use Teams for their publicly funded education.

And you think these huge numbers get changed by selling black-hat exploits to... what? Teach Microsoft a lesson? While harming an already vulnerable population (not just children are obligated to use Teams)? As if the long-term goal of educating "unwitting" users is advanced at all by blackhat behaviour.


Let's dump public education!

Deschooling is done on discord


Microsoft has had a serious reputation for being a serious security risk for the 30 or so years I've been in IT. It's one of the oldest jokes in the industry. People and the world in general clearly do not work the way you apparently think they do.


Zoom still has a ton of users, and every single thing they make or do is a serious security risk (or has been in the past, evidencing a distinct lack of secure development culture).


Windows XP is still seen in the wild.


Problems need to hit the users, otherwise the market is uninformed and cannot work.


There are lots of vulnerabilities in most door locks, does that mean we should go around stealing things because Chubb have made money selling insecure locks?


A wormable, widely deployed, Chubb lock would be interesting.

Let's see how Ring goes over the next few years... ;)


> They should then think again about their choice of using teams.

What percentage of Teams users do you think have a choice in their use of Teams?


If it's on their work machines then it primarily endangers their employer's data, much less their own.


Funny thing to say when we're in the middle of a global pandemic, and more people are working from home than ever.

I work at a university and I've been forced to install that crap on my home computer because I need to teach from home. And so do all the professors in around half the universities I know in my country.


Interesting, I'm surprised that they don't have to provide you with the tools needed to do your job!

In Australia, the employer is generally responsible for providing any necessary tools or equipment needed to do the job (contractors are another matter, though).


In normal circumstances they do provide the tools needed for the job, as they should. But this was a sudden state of emergency triggered by a pandemic, there were no funds, reactions weren't fast enough... so basically, they didn't.

Anyway, those of us who have research projects (as is my case) typically do have computers provided by the university at home, because research has strange schedules and working from home has always been a need (meeting with colleagues in different timezones, waiting for experiments to complete at night, rushing for deadlines, etc.).

But... it's not really practical to make room for two different desktop computers for my own use in an already space-starved flat, or to work on a laptop for many hours when I could do so on a desktop. So in practice, my home computer and my work computer are one and the same. And it's like that here for most, if not all, people I know.

We are a Latin country and also tend to live in small flats; maybe in other places it's different. I can imagine that if I had one of those American McMansions, it would make sense to have a home office with a sober, black work computer, a good camera setup and a green screen, and then a gaming room with a flashy gaming computer and huge speakers (near the billiards and darts room, probably :)). But that's not really how things work around here. Here, separation of home and work computers is almost exclusive to jobs with high security restrictions. Most people in normal jobs just don't do it because it's not practical.


And then when the company loses business from the disruption, do you think employees walk away scot-free?


I consider that inherent risk. Not getting a raise because the company made business decisions that turned out suboptimal (such as gaining short-term profits by not investing in IT security) is a risk that any employee faces. If you want a more stable environment, you go for a more risk-averse employer, perhaps even public-sector jobs.


That's a silly proposition. If my field of expertise is inherently private, I don't have that choice. Also I can't solve for every variable when searching for jobs. I choose among the ones I get an offer for, and obviously their IT decisions aren't top of my list (nor do I know what those are prior to hitting the desk)


Ruining companies that can't (or won't) get their act together (whether it's security, finance or any other critical and undervalued area) is a short-term pain that fixes the issue. Refusing to fix it simply prolongs the problem; at some point you have to say "enough is enough" and tear the bandaid off. If you don't, and you don't do so with severe enough consequences, then businesses will simply conveniently ignore what they're being asked to do.

Necessity is the mother of invention; I have no doubt that the opportunities created by blowing away poorly-behaved incumbents will produce a healthy collection of startups operating within the required framework.


You may not see yourself as having a choice, but that wasn't really my point. What I was getting at is that being an employee in general comes with a diffuse risk from many factors that can result in not getting a raise or the company even going bankrupt. Many of them are outside your direct responsibility or influence, and yet you take up the whole risk package when joining that company. The company getting ransomwared is just one more factor. It's not special. Well, one issue with it is that it requires criminal activity, so it's dragging us down to a worse equilibrium where more resources have to be spent on countermeasures. But arguably that cat is out of the bag, so the next best thing we can do is make security best practices easy. And Microsoft wasn't doing its part here.


Punishing innocent people is not the answer.


They are not innocent. They made very poor life choices, picking Microsoft software. Why should the world reward their poor choices?


To what extent should the blame for any harm fall on Microsoft? They are the ones relying on effectively free labor to protect the innocent. In such a case blaming the free labor instead of blaming the ones relying on free labor seems to create some very bad incentives.

Personally I would prefer just having all new vulnerabilities immediately disclosed once found. No selling, but letting people decide for themselves if they want to continue to use a product after someone has found a vulnerability. I also think the incentives this creates would mean that Microsoft and similar shops would put more effort into testing their own software because they would no longer have the safety net of a grace period when someone finds a problem.


Thing is, we don't know if this was found before by malicious actors and sold and/or abused.

This thing sounds like it is mostly pretty straightforward to find once you start looking ("you" being somebody experienced in this field of research, that is). At least you don't have to construct fancy weird machines (with type confusion, heap spraying and all those shenanigans). It comes down to finding something that can perform code execution in their internal API (here: "electronSafeIpc") and then finding a way to get there (here: the Angular escape bypass / not-properly-sanitized user-provided data), and you can do both in JavaScript without having to read tons of machine code.

Given that Teams is a great target because of its large and often corporate user base, I'd be surprised if none of the usual industrial-espionage suspects (China, the NSA, etc.) had a look at Teams before. And I'd think the chance of them having found the same bug, or a related bug, once they looked is pretty good too.

From what I am hearing, even the (US) military uses Teams sometimes... If that isn't incentive for "interested parties" to look at this thing, then I don't know what is.
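For readers unfamiliar with the "not-properly-sanitized user data" bug class mentioned above, here is a toy illustration. This is not the actual Teams or Angular code (the real bypass details are in the linked report); it just shows why a blocklist-style sanitizer can be defeated by input, such as an embedded null byte, that its pattern doesn't anticipate:

```javascript
// Toy example: a naive blocklist sanitizer that strips the "{{" token to
// block template expressions. Real frameworks are more sophisticated; this
// only demonstrates the general failure mode of filtering by pattern.
function naiveSanitize(input) {
  return input.replace(/\{\{/g, '');
}

// A null byte splits the "{{" token, so the regex never matches...
const payload = '{\u0000{ 7*7 }}';
const cleaned = naiveSanitize(payload); // unchanged: still contains both braces

// ...but a downstream parser that strips or ignores null bytes would see
// the reassembled template expression anyway.
const reassembled = cleaned.replace(/\u0000/g, ''); // '{{ 7*7 }}'
```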


> This thing sounds like it is mostly pretty straight forward to find once you start looking

Most security bugs are "obvious" with 20/20 hindsight when explained well. Personally, I think that is an insulting and immature thing to say.


Please check how much code MS Teams actually has before making statements like this :)

(it’s more than 30MB of compressed JS)


I didn't want to belittle your work, if you think that was the case. It's still outstanding to find things like that on your own, and a lot of work goes into it. Sorry if I gave the wrong impression.

I have analyzed foreign code bases of similar dimensions in the past myself and found critical bugs. The size doesn't say much; it comes down to identifying the "interesting" bits (like electronSafeIpc in this case), which can be hard and tedious, but greatly reduces the code you have to look at in detail. My assertion is that if your name is, e.g., China, then you will not be turned off by that.


That electronSafeIpc API is actually not that interesting; it's a completely standard way to do things for ElectronJS apps.

No, I do agree - from my perspective C/C++ class bugs are more difficult. Maybe they see this as magic as well.

Still, it was painstaking work and in either case CountryX will easily surpass those difficulties.


30MB of hand-written JS? For what's basically a glorified chat client?

With that much code I'd expect an AI to talk to people so I don't have to.


Yeah, that will show 'em...

Then people will move to some understaffed FOSS alternative with 5 people working on it part-time, with bugs just as severe that nobody notices (remember Heartbleed and countless others?)...


Imagine thinking people move to FOSS alternatives.

Imagine thinking PHBs at most companies even care about security.


If the bounty money borders on insignificant, there's always public shaming. Demo the exploit in a controlled environment, and let the media cycle go.


Why controlled? Last time some dude got frustrated and started dropping zero-days pretty much weekly, Microsoft finally hired him to make it stop.


So the people / companies who would be hacked and have their data / systems destroyed are what? Acceptable collateral damage?


While I get your sentiment, I must disagree.

Profiting from the very likely unethical use of the exploit would be unethical.

Instead this mishandling by M$ should rather cause researchers to publicly announce the vulnerabilities which would hopefully cause M$ to change their ways in future dealings.

It is of course easy for me to say this, not being a researcher who lives off of the discoveries made.


Participating in a system that exploits researchers for free labour using societal guilt-tripping is the unethical move here. That means you.


I see you completely missed my point.

My point is that in the case of M$, the defects could be publicly announced to all parties at once as a way of making M$ realize how bad their handling is/was. In all likelihood this wouldn't have to happen for too long before they realized their mistake.

Many other corporations do indeed value the discoveries of researchers and do pay accordingly for being notified. Never did I suggest that this should become the industry norm (i.e not paying for private disclosures).

Now, whatever your personal feelings on that idea are, it does not change the fact that selling exploits to other parties would be unethical.

Furthermore, participating in a system that promotes assumptions and flawed reading comprehension is not conducive to good discourse. That means you.


You could be a grey hat if you averaged one exploit turned in to the proper group for every one sold to the highest bidder. Flip a coin each time to be a real grey hat.


"Locks can be picked, so everyone should break into homes to prove a point"

Lol, no.


That most locks are pickable is common knowledge and that is why high-risk targets invest in additional security beyond locks.

That crufty electron apps are a security risk is not. So yes, you do need someone to run out into the streets and yell that the emperor has no clothes. Otherwise common knowledge will not be established.


Not selling this is the real crime here. Microsoft's conduct in this case deserves much worse than just that.

Hoping for a reward now is pointless; the best you can hope for as a response to an act like this is legal action. In a vindictive way, you can definitely hope they will get significantly damaged by this and learn their lesson that way, but I doubt it.


Sorry if I am just obtuse but I don’t see a timeline in the linked report on GitHub. All I can see is that you tested against a version of Teams from 2020-08-31. Being able to see the complete timeline of communication with MS from discovery to public disclosure is not necessary but would give a more complete picture of how this went down, and I’d like to see it too if it’s not such a hassle.


There is no timeline besides when I reported it and two weeks ago. They never told me when the fix was deployed.

There is little value in going through the email chains to note each date :(. The final decision was made 2020-11-19.


Could you put that in the README, is what we're asking, as vague as it may be.

At the moment the 'has been fixed' is the only clue to this in terms of resolution, and it's tucked away; without it, it looks like most of the README is attempting to capitalize on the shock/outrage factor.

Edit: Thanks, author has added some dates.

https://github.com/oskarsve/ms-teams-rce/commit/35eac619fdef...


Have you been tempted to build a worm and click send? Not to break anything, just a text popup with an optimistic quote.


Only as a thought exercise. The ability to 'switch off the internet' (115 million daily active big-corp users) is tempting, but no, not really :)


That's one way to force them to stop marking bugs like that "important, spoofing" and "out of scope".


Google Robert Morris to find out how that goes.


Wikipedia:

In 1989, Morris was indicted for violating United States Code Title 18 (18 U.S.C. § 1030), the Computer Fraud and Abuse Act (CFAA).[2] He was the first person to be indicted under this act. In December 1990, he was sentenced to three years of probation, 400 hours of community service, and a fine of $10,050 plus the costs of his supervision. He appealed, but the motion was rejected the following March.[4] Morris' stated motive during the trial was "to demonstrate the inadequacies of current security measures on computer networks by exploiting the security defects [he] had discovered."[2] He completed his sentence as of 1994.


In case people don't know already, he's one of the YC founders: https://www.ycombinator.com/people/


From his Wikipedia page:

He is a longtime friend and collaborator of Paul Graham. Graham dedicated his book ANSI Common Lisp to Morris. Graham lists Morris as one of his personal heroes, saying "he's never wrong."

To be friends with Paul Graham, I should make a worm. Got it.


Ehh in 1988 that worm was like an alien artifact from the cyberpunk future.

First "real" worm code: multi-platform, multiple payloads, "staging", the first practical buffer-overflow exploit, and it does credential brute-forcing.

Heck, it was not until nearly a decade later that people were really doing buffer overflows, and there were a LOT of easy overflows to be found.

I'd make the case that rtm didn't just "make a worm"; he foreshadowed the next few decades of computer exploitation.

Took a whole bunch of research and ideas, synthesised them, built an actual working "product" a decade or two ahead of its time and released it in a transgressive way.

If you are the kind of person who can do that I'm sure lots of people would like to be friends with you.


or Samy Kamkar.


Samy is my hero


It's one thing to find a security issue; it's another thing to exploit it, which easily leads to jail time even if it's harmless.


Maybe I missed it, but I do not understand why injecting a null byte allowed you to bypass Angular's protections. Is that a bug in Angular, and if so, is it fixed?


Is there any tell-tale sign this happened to you? I had a really weird experience on Mac last week: I opened up my machine and when I focused on Teams, I got a security alert saying something called Endgame from Elastic was demanding permissions. I never downloaded it, but there it was in Applications.


It is technically never possible to guarantee tell-tale signs of an RCE. At the point where you're running compromised code, that code could in most cases be constructed as to erase its own tracks. There might be some visible sign at the moment of exploitation, but after that it's kinda over.

(Yes this assumes the RCE escalates to a reasonably high privilege, but that's just a matter of chaining. You can try to go for things like sealed logs, but ultimately arbitrary code can put your machine in an arbitrary state.)

Particularly insidious for this would be the case of data theft. The RCE might load some code to upload your company secrets and keep itself strictly in RAM, and then erase itself when done. With enough blackhat craftiness you'd never be able to pinpoint the exact location of the leak.


If you're using an employer-provided computer then they've likely installed Endgame[0], which is an endpoint security tool (it runs on each device). Endgame was acquired by Elastic[1] last year.

[0] https://en.wikipedia.org/wiki/Endgame,_Inc.

[1] https://en.wikipedia.org/wiki/Elastic_NV



Is this a work Mac? If so then it is likely managed through some kind of MDM system (JAMF etc), and it wouldn't be unreasonable for the owner of the hardware to be pushing down an endpoint agent like Elastic Endgame. Check in with your security team and ask them.


No, as you can see in the first demo, it could be completely silent.

Not saying you are safe; I don't know :)


Thank you for making the internet slightly better.


There is, however, some consolation in the fact that only an individual who is already connected to you in Teams can run this.

That's not to say, of course, that it's not abusable; it just gives some context to the fact that MS calls this "Spoofing", since presumably your Teams contact is someone you trust. So the bad actor is "spoofing" as someone trustable within your org (or outside it). But it probably does need some social engineering for a bad actor to truly exploit this.

But the threat is still severe, since the above logic only holds up to the point of entry; once the worm has infected someone, the people forwarding it around are truly trusted.


One of my health care providers uses Microsoft Teams as their telehealth solution. My city government uses Microsoft Teams for some public meetings. The idea that folks are only using Teams to connect with other trusted parties is comforting, but false.


> Microsoft Teams as their telehealth solution

That sounds... interesting.

I suspect with the ongoing pandemic lots of tools are getting used in interesting ways they were never really designed for, just to keep things going.


Microsoft advertises Teams for telehealth:

https://www.microsoft.com/en-us/microsoft-365/microsoft-team...


It’s bad, but it’s mostly bad because Teams is bad. It’s still better than Amwell, which somehow manages to have multi-second latencies and requires me to manually mute my video preview to stop it looping back my own audio.

The old P2P Skype had better video quality and latency, even when talking to people 4000 miles away, than every video product I’ve used in the last year. Probably not coincidentally, every video product I’ve used in the last year has been web-based. WebRTC is an enormous disappointment.


Teams as their telehealth solution? What is wrong with Doxy.me? It is HIPAA-compliant and more privacy-oriented for telehealth than Teams.


I believe Teams is also used for the NBA virtual fan thing, so there are... a lot of people connecting there...


That’s pretty scary tbh. All you need is a single employee to fall for a phishing attack or other social hacking attempt and that’s game over. Everyone from the CEO down is compromised. Zero click wormability with remote code execution on a platform the entire company uses gives the exploit unlimited reach within a company. This makes this one of the most effective hacking/corporate espionage tools I’ve heard of.


Imagine a bad actor starting work at large corp having all confidential information up for grabs from colleagues on Teams. It is especially scary during these times where a lot of companies moved completely to working from home. Some health organisations also use Teams for group support meetings. Imagine someone being able to rummage through your documents during an appointment.


Sure, add guest accounts to that and we are almost on the same page.

I can't call this "spoofing", as there are many, many things you can do with it.


I wrote that exploit & report. Just some thoughts on comments here.

Sure the bounty is low, but ultimately it's their money and their decision. They will deal with the 'consequences' of others skipping their program and some public shaming.

I find everyone talking about black markets etc. kind of ridiculous. Really? You would sell something like this, so someone can be spied upon or maybe literally chopped to pieces? Jesus, not everything is about money - it was a fun challenge to chain it all together and I learned a lot from it.

The most outrageous part for me was the blog post I discovered by accident - it included no references or mentions (check archive.org). Both of the code snippets there are from my RCE reports. At the same time they were denying my requests for disclosure.

Of course, I understand that coordination mistakes like this happen, so I accept their apology and move on!

Evidence - original RCE video with huge CSS injection overlay: https://www.dropbox.com/s/11pv2ghdkw5g84b/css-rce-overlay.mo...


> You would sell something like this, so someone can be spied upon or maybe literally chopped to pieces? Jesus, not everything is about money

If you haven't had food for a few days everything is indeed about money. Either you reward someone properly for the work that they can do or they'll find someone else who does. I doubt most people get fuzzy warm feelings helping a big US corporation that's too greedy to actually pay independent researchers properly.

Edit: That's not to say your work wasn't cool btw. It's very admirable for you to view it the way you do.


Technically true, but kind of ridiculous. How many people can't get food, but have a computer, electricity, internet connection, a reasonably quiet place to work, deep knowledge of web technology, and enough free time and mental energy to try to build exploits of computer software against an uncertain and distant bug bounty payout? If you're really desperate for food, you should be looking for a salaried position or something more immediate and certain.

More importantly, human history shows that ethics really are important. If you ignore ethics in the name of people starving, you build a society where even more people suffer and starve. If you want to build a society where everybody is safe and healthy, you need to pay attention to ethics now, not "someday".


Lots. Many more than you’d expect. To believe otherwise is privilege.

It took many years to understand this.


Dude, if somebody out there somewhere is seriously doing that, they really need some education in effective careers to pursue. That's a lot more likely to improve their lives than complaints about the social effects of the size of bug bounty payouts.

Speaking of privilege, how much privilege is there in believing that ethics aren't important, because you don't know what it's like to live in a place that never even pretended to care about it, and get robbed on a routine basis, because a bunch of other people around you don't care about ethics either, and would rather form a gang and smash anybody who has something they want than work to build a marketable skill?

That is the world you build when you advocate for people not paying attention to the harms of releasing exploits into the wild, because it might pay better than doing the right thing.


I'm sure you didn't mean to but telling people who are doing the best they can with the tools that they have that they "really need some education" comes across as incredibly condescending. It's been my experience that you will have a hard time convincing other people if you tell them things that way.


> If you haven't had food for a few days everything is indeed about money

I doubt anybody capable of finding an exploit like this is in that situation


I've met plenty of self-taught hackers in developing countries who were barely employed due to general economic dysfunction. Spend a month or two in Venezuela and you'll find plenty of qualified folks who have no steady job and are scraping by, how do you think people get into crime to begin with?


>> how do you think people get into crime to begin with?

Lack of opportunity, lack of skills and lack of work ethic. As in, it's easy to do, there's no barrier to entry, and it's always available.

Most crimes don't actually pay very well and have a poor return if you've got any sort of marketable skills. Armed robbery of a bank will get you, on average, $1,200 and 15-20 years.


I would add poor impulse control


I suggest you try and peek outside your bubble then. Software Engineering isn't free money everywhere.


You seem to be arguing against a straw man. Nobody said software engineering is free money, I said that a software engineer with the knowledge, skills and tools necessary to find an exploit like this is definitely not starving. In pretty much every country in the world, someone with those skills will be better off than 90% of the population


This is simply wrong. The fact that it is impossible for you to believe otherwise should inform you that you do indeed live inside a bubble.


So many comments to this saying it's possible to be broke as a software developer. No one is arguing that. There are tons of people in every career path that don't make much due to a variety of reasons.

But pretending software development isn't a well paying career path, in general, is a statistically incorrect statement


I'm very capable of finding exploits in what can only be described as terrible living conditions and I've done so while being categorically incapable of finding food anywhere. That's not the environment I live in today (and I'm happy about it), but it really doesn't require a nice warm home with a stable internet connection to find some glaring holes in an application.


Most software is made entirely free with no source of income. The job market for software is terrible, and those people work entirely separate jobs from it. Many program while living on very little.


"Most software is made entirely free with no source of income"

No. Most software that is actually used, is not made 'for free'.


https://levels.fyi disagrees. I can confirm the offers on there are real


That's very simplistic. Not everybody wants to work for US corporations or live in the US.


Does that mean they automatically work for almost nothing? This is so different from what I’ve observed. I would love to see where people are getting this opinion from.


You replied to a claim about “most software” with a site that compares big tech companies, and only their US offices. The world is much bigger than your bubble.


Please omit swipes like "your bubble" from HN comments. They're against the site guidelines because they degrade the container.

https://news.ycombinator.com/newsguidelines.html


Fair, but what do you mean by “degrade the container”?


I mean that they poison the conditions for community. Does that make sense?


Do you have any data that counters what I’m saying? I know people in other countries don’t make the same salaries but they are “mostly” doing pretty well for their region


> I know people in other countries don’t make the same salaries but they are “mostly” doing pretty well for their region

here are some job postings for software engineers in Bordeaux, France: https://www.indeed.fr/Bordeaux-(33)-Emplois-Ingenieur-Inform...

Salaries are around three times lower.


How does it compare to the local economy?


>Do you have any data that counters what I’m saying?

"Prove me wrong" is bad argumentation.

>I know people in other countries don’t make the same salaries but they are “mostly” doing pretty well for their region.

The burden of proof is on the person making the claim. Do you have any data to back up your claim?


I gave some proof and I’m speaking from experience. I grant that my perspective may be biased, so if there is any data to the contrary then I would love to be enlightened. My goal isn’t to point out that someone is wrong for the sake of it; I hope to teach, learn or both. This was such a shocking revelation to me that I was hoping for some data.


> I doubt anybody capable of finding an exploit like this is in that situation

Yet the vast majority of hacks or hacking attempts typically originate from China or North Korea...


And? If they’re hacking for the DPRK they’re probably in the 1% most privileged of the country, they’re definitely not going to be the ones starving.


They can be when they try to live off of bug bounties alone.

There are a lot of young folks that try to make this their full time job after some success, then get into a dry spell. The panic robs them of the lateral thinking that brought them to the dance to begin with, and they get into spirals of ravenously hunting simple bugs that end up as dupes and out of scope.


> They can be when they try to live off of bug bounties alone.

I think that's the problem. You shouldn't be entirely dependent on bounty money, because sooner or later you will find a bug that is worth 10x or 1000x on the black market.

I have seen white hat bounty hunters go rogue in such situations and entirely blame it on the cheap-ass companies that won't offer the "right" amount.

Nobody owes you anything, you are doing this mostly for fun. The bounty is just a bonus.


> Nobody owes you anything, you are doing this mostly for fun. The bounty is just a bonus.

That's missing a key point of the bounty system. Slack and its users are better off that this bug was 1: discovered and 2: responsibly reported. The bounty increases the number of eyes looking, but also incentivizes folks to look into weird crashes or fight through the drudgery of triaging odd behavior.

The bug value also shows how much Slack values their security here, and it makes me wary of them if I were in a position to be a customer of theirs.


> The bug value also shows how much Slack values their security here, and it makes me wary of them if I were in a position to be a customer of theirs.

Most directly it shows how they value a bug bounty program. There are companies that spend hundreds of millions of dollars per year and have thousands of people in their infosec program that don’t have bug bounty programs.

You can extrapolate that to how they value security but that’s not necessarily directly correlated.


>There are companies that spend hundreds of millions of dollars per year and have thousands of people in their infosec program that don’t have bug bounty programs.

Such as?


Large banks in the US.


Totally agree with you. I’m waiting for this to start going the way of Uber.


If you haven't had food in a few days, there are many better ways to get food on the table than trying to find exploitable vulnerabilities and sell them for tens of thousands of dollars, including

- Work on a bounty program that rewards mitigations instead of exploits (e.g., https://www.google.com/about/appsecurity/patch-rewards/). Those are much more deterministic. (But there's no black market for them.)

- Get a conventional job (possibly in software, possibly not), which pays you on a schedule.

I get the argument you're making about money, but I'm having trouble believing that going after bug bounties ever makes sense to someone in that situation, given how non-deterministic it is to find a bug.

Also (as this bug shows), it typically takes a long time between reporting a bug and having the responding team decide that it merits a bounty. In this case it took a month. (And then there's logistics about actually getting you the money at that point.) Are people who haven't eaten for a few days really going to be happy not eating for another month, even if they get a hundred thousand dollars then?


Are you seriously telling people who are starving to "get a [conventional or not] job"? I'm struggling to understand your point of view, this is almost a caricature.


I'm fairly certain that everyone in the vicinity of a bug bounty program is aware that interest in a program can be dialed up by simply adjusting award amounts. If you look here, Slack just recently increased theirs:

https://hackerone.com/slack/bounty_table_versions?type=team&...


> I find everyone talking about black markets etc. kind of ridiculous. Really? You would sell something like this, so someone can be spied upon or maybe literally chopped to pieces?

I work with some security engineers who in previous jobs used to write exploits for the highest bidder. Their stuff ended up being used for exactly this. One of them even told me quite proudly: you know that exploit that was in the news? That was mine.

The lack of any ethical framework other than "I want to make as much money as possible" viscerally disgusts me. And there is far too much of this in our industry, it's rife with this sort of ingrained dollar-chasing selfishness with not a care of the consequences.

Good on you for taking a positive ethical stand against this. It's very refreshing to hear.


> being used for exactly this

Does that refer to "spied upon" or something like "chopped to pieces"?

In which continent?


I really hope they amend the bounty paid to actually compensate you for the find.

As a Slack user, seeing them pay < $2K for an RCE report does not make me feel safe. The next person finding something similar might look at this and say: "$3K? No thank you, I'll take the risk of getting caught but be paid fairly."

To be clear, I am not advocating for this, but it makes me concerned as a user that "some people" will be more likely to do it.


The point is: you don't really need a black market or anything illegal to be paid fairly for such research. There are plenty of absolutely legal security companies that will pay you 10x for an exploit like that and then just sell it to the highest bidder (read: all kinds of government entities).

And yeah, those companies in turn work for 3-letter agencies and foreign governments. Of course many would consider selling to them unethical, but it would be absolutely legal.


Another likely outcome is that folks aren't going to look at all, or only at a surface level. This leaves low hanging bugs for those with malicious intent to find easily.


I haven't said anything about black markets but:

>You would sell something like this, so someone can be spied upon or maybe literally chopped to pieces? Jesus, not everything is about money

Not me, not you, but many people make it all about money. I don't think it's ridiculous to think that people can have absolutely zero ethics.


Sure, absolutely they exist. But in my opinion they are the absolute minority. I've been in security for long enough to know that most people are good, otherwise we'd have major problems every day.

99% of people saying something about black markets or govt agencies have never really faced this decision or thought about it for more than 5 minutes. So it was a question - have you REALLY thought about it?


I'd hypothesize that people are more willing to entertain the profiteering fantasy when they aren't realistically facing the consequences, and that people are more willing to be jerks under cloak of anonymity. As you note, perhaps only 1% of people with the drive to find these sploits are going to do something bad with them. That means the extra volume comes from folks who merely wish they had such a product to sell on the black market: jealous wannabes. You can ignore them.


I haven't done any security research for a decade, but it was my hobby long ago. While it's not true in every case, sometimes finding a worthy bug and then successfully exploiting it can literally take weeks of work. Like 14-hour days with breaks only for sleep, trying to solve some puzzle. Usually without any payoff.

This is a profession where your actual skills mean very little until you do something exceptional enough to have a portfolio, or become famous some other way. It's very easy for people who live in western countries and have easy access to well-paid jobs to talk about ethics, but a lot of people don't have such options.

I'm not trying to justify actual criminals here, but don't be surprised when people sell 0-days to some Israeli companies or NSA contractors.


I don't live in a 'western country' nor do I make anything near a Silicon Valley salary


Then I can only state huge respect for your moral standards and hope you get paid well enough to continue doing what you do.

There are still a lot of people who are not going to be okay with that situation for long. Anyone can get more cynical and cruel or indifferent with age due to bad experiences: not getting paid well for reported issues, being cheated, or getting into legal trouble for "doing the right thing". Some of us really love security research and want to make it our profession, but it's really easy to end up without a stable income or in some kind of trouble.

So I think it's important to raise awareness about this in the developer community, since many people don't understand how much effort goes into being a white hat. It's just like the story with OpenSSL before Heartbleed: half of the world used the software, but there wasn't even enough funding to properly pay a single developer.


I read your report, and the way you handled things, from both a technical and a human perspective, was perfect. Sorry that they made it so difficult to disclose. We are hiring if you ever need a job! https://serpapi.com/team


thank you, appreciate some positivity :)


It’s well deserved! :) Feel free to email me directly if you have any questions. julien _at_ serpapi.com


Out of curiosity, what do you feel a competitive bug bounty would be for this type of report?

It would be interesting if security reporters had a habit of ending their reports with what they feel is the fair market rate.


high 4, low 5 figures

depends on exploit, program, company etc


In my opinion about 10k feels right for this one.


I'm so sorry this happened. The CSO reached out and acknowledged the issue, which was... the minimum. But I'd be doing an internal RCA at Slack for how that post made it public without any acknowledgement.

Just sucks - marketing, legal, the engineer and peers who reviewed it, security..


> Sure the bounty is low, but ultimately it's their money and their decision.

Uh lol.

Bug bounties gravitate to their market value by showing companies how valuable they actually are and forcing them to learn.


Do you have more info on the JavaScript piece? I can't find docs for those object properties, like delegate, anywhere.


The app has been updated multiple times since, but you can debug Slack and other Electron apps to see the context they are running with. Electron apps merge desktop functionality with the web, and sometimes it's possible to find abusable functions - e.g. filesystem access, leaked dangerous Electron objects, etc.

In this case it was possible to abuse the lack of context isolation to overwrite functionality (the first part of the JS exploit). This changed function behaviour so that calling window.open() returned (leaked) a BrowserWindow class (https://www.electronjs.org/docs/api/browser-window). A BrowserWindow class allows you to instantiate a new window with your own security settings :)

Some of the current non-standard functions in Slack: https://imgur.com/a/OSjS0kJ

More info: https://www.electronjs.org/docs/tutorial/security


your response wrt black markets strikes me as incredibly naive knowing all the crime, murder, gross negligence causing death and corruption there is and has been literally everywhere on the planet, since forever, for money


Unfortunately, we live in a world governed by money as a motivator. While you might not be in it for the money, many people are, to a certain degree (you know, to make a living and to be able to afford a decent life). If companies are unwilling to pay anything remotely close to what researchers' time is worth, then they shouldn't wonder when people prefer to sell the exploits that they find to those who do value their work appropriately.

And frankly, we shouldn't be giving companies a pass for being cheap because "reporting it responsibly" is the right thing to do. These companies are benefiting to a great degree by offloading vital security research onto unaffiliated and unknown third-parties. Your time, as well as the time of any other hacker or researcher, is valuable and needs to be compensated. I don't see why it's fair to any of us that we should have to work for free or for low pay-outs just because we might be doing the right thing. Same goes for any other career that is badly paid just because "they're helping people".


I agree with you. It's super low, but I and others will just ignore it in the future and ultimately they lose.

However, bug bounties are not a job. Nobody is forced or obligated to do anything. I'm giving them 'a pass' in the future :) It's great people are discussing this and surely it will improve things for future researchers.

I consider bug bounties like competitions. The 'prize money' is defined beforehand. You don't have to compete if you don't want it. You can also compete for the 'notoriety'. Knowing the stakes, do you complain after getting 'first place'?

Everything you own or do is only worth as much as someone is willing to pay for it, everything else is just speculation.


In my country there is a sort of obligation to pay a 10% reward when someone finds something valuable, though it's mostly applied to found money. Many times people just return what they have found without taking any reward. This could be extrapolated to bug bounties as well: how much would Slack or its clients potentially lose if this bug were exploited? I think everybody could agree on some sum, let's say 200k USD. In that case 20k should be paid.

Another approach is to take the invoice for the last security audit and simply pay the whole amount of that invoice to the researcher. If none was ever done (good God!), a usual quote for pen testing the targeted application could be applied.

HackerOne could also enforce minimum payouts per exploit category.


What you do, though, is objectively more valuable to Slack than you were paid. They have reframed security as the competition you mention, but the stakes are much higher and they're sidestepping with this issue of "responsible reporting".


> What you do, though, is objectively more valuable to Slack than you were paid.

This is a meaningless statement.

Obviously all work is more valuable to the company than what they pay you to do the work... otherwise they wouldn't pay you would they? Because they'd get nothing out of it.

If your work generates £5 for a company, then why would they pay you £5 or £6 for it? What's in it for them?


Obviously the point is that the gap between how much the person deserves and how much they're paid is particularly significant in this case


Payments from a company are subjective not objective. There is a single purchaser, in this case Slack, and the researcher already said that he wouldn't engage in unethical behaviour to make more money. Just sell the vulnerability to Slack, and be done with it.

Business owners of failing businesses, when they go to sell, many times think, "I've put in a million hours for this, so I need a million dollars." But, that will never happen.


> However, bug bounties are not a job. Nobody is forced or obligated to do anything. I'm giving them 'a pass' in the future :) It's great people are discussing this and surely it will improve things for future researchers.

Shouldn't people like you be able to do this for a living if you want to? It's valuable work. It has real market value. It seems like you're doing this for fun and genuine interest and I do admire that. Maybe you don't want to taint your motivation with the idea of "how much money can I get for this?" I get that too. But as an outsider, I see this low pay-out and I see exploitation under the guise of "doing the right thing". I genuinely want you to be paid more. You deserve it.

I feel like the only way this kind of thing will change is if people are more vocal about how inappropriate the low compensation is for a company like Slack. Public criticism is necessary and, unfortunately, the only tool we have nowadays to effect change. I understand if this isn't a hill you want to die on, but I hope that other people (particularly people who aren't in bug hunting) are willing to pressure Slack to reconsider its policies.

The problem with "others will ignore it in the future and ultimately they lose" is that it's a passive signal that is too easily overlooked and ignored. It never reaches anybody with any kind of influence who can make changes. If a big exploit happens and somebody does a root cause analysis, it's never going to lead to the conclusion that "well, it's because we haven't been paying enough in our bug bounty program, we need to change that", if only because there's no data about how many people passed on helping them out because of the low payouts.


Yes they should and I think I could. This exploit was more of a fun challenge.

I support and agree with everything you are saying. I love the community response. I too loathe the bug bounty asymmetry in power between corporations and reporters, but it exists... by design. How do you imagine a researcher can 'demand' more money in this situation? They can choose the amounts arbitrarily and there is nothing legal or ethical you can do about it.

I haven't seen any proposals for real solutions - ones that do not bypass ethics or laws. How would you ask for this? How do you decide the amount for each company? I hope 'the market' will solve this eventually, and I think I at least raised awareness.


How much time did you spend on this?

Would you have done it without expecting any reward, i.e. just for fun?


Context matters. In this case it was a challenge because of previous research and I would've done it just for fun and the experience. I'm lucky I can afford to do that. Doesn't mean I don't value compensation.

In other cases maybe yes, maybe no - is it some nonprofit? Maybe someone needs help? Are they a business, and can they afford to compensate this kind of work? Maybe it is some prominent product? There is no simple answer.


Vulnerability researchers with track records make more than software developers do. This whole thread is pretty weird.


So, what is the right thing to do if you find a vulnerability in Slack?


There are western vulnerability brokers that sell advance warning of exploits to clients like large corporations and governments so they can protect themselves, then presumably handle notifying the company in question so the bug can get fixed. Of course, one problem is that their clients are free to abuse the exploits, and another problem is there's no guarantee they'll make sure the exploits get fixed... but that's certainly an option for you if you aren't comfortable using HackerOne.

Another option is to just disclose it to the public a set number of days after notifying them, like Project Zero.


I think the key thing is that there's a wide range in the amount of effort someone will put into looking for bugs/exploits, guided by a number of factors, like how fun the bug is to work on, the monetary reward, and any prestige from being the one to find it.

If an obvious vuln appears, obviously report it. But, these reports require a lot of work. It'd also be perfectly ok if the researcher reported whatever obscure behaviour they found initially, and went to go look at other targets with better bounties, played with their dog, etc.


Open disclosure on day 0 it would seem.


This might be unpopular, but if you don't feel like the compensation adequately reflects your effort, then you're free to do whatever you think is fair. It's your work. Slack isn't entitled to that work. Ideally, you'd check beforehand what a bug bounty program usually pays out and then decide whether to work on some other company's product that pays better. But you're always going to have people who are interested in doing this stuff and you're always going to have people who will look for the best pay-out for the work they've done.

The problem with starting with the baseline of "the right thing to do is always to disclose the vulnerability to Slack regardless of how little they pay" is that it perpetuates the exploitation of legitimate and important work by skilled workers. The onus should be on Slack to provide fair compensation, not on people doing this important work to "do it out of the good of their hearts".

Slack as a company had a revenue of $401 million last year and the average payout in their bug bounty program is $1376 (https://github.blog/2018-03-14-four-years-of-bug-bounty/). That's just disgusting.


> Slack isn't entitled to that work.

Sure, but that isn’t the user’s fault, and they’re the ones who are going to get attacked. I don’t disagree with your other points but I don’t think selling an exploit on the black market is the right solution.

Perhaps the best compromise, as I think about it, is to just make the exploit public with no prior warning to the vendor. That’s not great for users either, but at least they’re informed, and the vendor will be left scrambling. But in that case, the researcher gets paid nothing at all.


> Sure, but that isn’t the user’s fault, and they’re the ones who are going to get attacked.

This is true, but the responsibility to protect these users is ultimately on Slack, not the researcher. If Slack's bounties are nowhere near competitive with black market prices, they are failing to protect their users and should be called out on it.


> whatever you think is fair

Please give us some examples of what you would consider fair in this situation.


Hours worked on the exploit × $50 should be enough.


That's silly.

If someone spends 100 hours coming up with, say, a clickjacking vuln, that does not magically make it worth $5000. If someone spends 6 minutes coming up with a zero-click sandbox bypass in Chrome, it's not just worth $5.

Severity matters, not time, especially in a bug bounty. If you want the stability (and assurance) of actually getting paid reasonably and consistently for this, you should get a job as a pentester.


That's kind of bad: first of all, $50 can be really low depending on the region, but more importantly this disregards the time spent looking for exploits that don't pan out.

So I would multiply that $50 by at least 4.

But still, like the others said, bugs should pay by severity, not by time spent.


The researcher would probably get paid even less, if that is the case.

The value of an exploit has nothing to do with the development time.


> I find everyone talking about black markets etc. kind of ridiculous. Really? You would sell something like this, so someone can be spied upon or maybe literally chopped to pieces? Jesus, not everything is about money - it was a fun challenge to chain it all together and I learned a lot from it.

Slack is directly taking advantage of that being the only alternative. You can do whatever you want with the money. However, having a robust bug bounty program ensures a wide range of people are both willing and able to look for and report vulnerabilities. This needs to be a requirement for any large successful company handling a large amount of user data. Slack can definitely afford it, and this can be used against them the next time they report a breach.

