First, the thing that none of the decentralization advocates recognize is that outside of HN and the tech community, no one cares! When you mention the word "decentralized" to Joe Schmo of the world, you've already lost him. He has no idea what you're talking about and doesn't care.
Second, there is no way that a decentralized social network is going to have the quality and feature development that a centralized company will have. There is a reason people build great things in this world, from buildings to widgets to software: it's called incentive. Capitalism drives innovation, and it's a requirement for the quality of product needed to draw eyeballs off existing platforms.
Lastly, I don't buy the argument that centralized companies can't be responsible with data. Do we trust banks to hold our money? The future is not decentralized social networks.
Is anyone talking about the harmful effects on startup companies that may want to create new social platforms to compete against the incumbent players? All the talk about regulating Facebook, Twitter, etc. is actually great for those companies because they can afford compliance. But it raises the bar of entry so high that new companies wouldn't be able to compete: with limited resources they wouldn't be able to focus on the critical period of acquiring users, and would instead be forced into building compliance features.
I firmly believe that the majority of people still don't care about their privacy in the first place or they wouldn't use such platforms. IMO this is government overreach and anti-competitive.
The GDPR makes some things easier for startups. Users now have a right to their personal data in a "commonly used" digital file. A startup can now offer an "Import your Facebook data" feature.
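Such an import feature could be sketched roughly like this. The JSON layout (field names like `name` and `posts`) is invented for illustration; a real export's schema would have to be inspected first.

```python
import json

def import_profile(export_path):
    """Load a user's exported data file and map it onto a minimal
    local profile. The JSON layout used here is hypothetical; a real
    "commonly used" export format would need to be inspected first."""
    with open(export_path) as f:
        data = json.load(f)
    return {
        # Pull out only the fields our own service actually needs.
        "display_name": data.get("name"),
        "posts": [p.get("text") for p in data.get("posts", [])],
    }
```

The point is just that a machine-readable export turns "switching cost" into a parsing problem, which is exactly the kind of thing a small team can automate.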
Currently a privacy-conscious startup is competing with those that aren't, which makes things harder. But with this law, you won't have as many shady companies like Facebook.
Storing less private data also makes you less likely to get hacked and suffer the bad PR.
Would this data file include the user's friend connections? In other words, if I exported my data, and my "Facebook friend" also exported their data, would it be possible to determine from the two files that the two users are friends?
I don't think GDPR compliance is as onerous as you seem to think it is, but even if it were, would it matter? We don't give special provisions to startups writing safety-critical code or developing new health care technology, so why would this be any different?
There's nothing inherently wrong with a high bar to entry if that bar exists for a very good reason. If it were hard to break into this space due to regulation (I don't believe it is or will be) then yes, there will be less competition, but the alternative is worse.
> We don't give special provisions to start ups writing safety critical code or developing new health care technology, why would this be any different?
Safety critical code and health care technology are life and death situations.
It's also important to understand that the regulations in those sectors have destroyed (or deterred) an incredibly large number of startups, and the net effect on lives saved is quite likely negative, because the value of life-saving technological advances generally exceeds the cost of mistakes made in developing them.
People have severe emotional reactions to this. A doctor's experiment may kill fifty already-terminal patients but uncover a cure that goes on to save five million. But the families of the fifty dead patients can blame a specific person for their deaths while the five million aren't even aware what they lost, so the regulations are biased against progress.
This is obviously not a good template for making decisions in other industries where emotions don't run so high.
> People's personal info can be a matter of life or death too.
That's the point. If we pass regulations that result in continued and increased centralization because only large organizations can afford compliance, that is no advantage to the people whose lives are at risk.
If you're a homosexual in Russia or a democracy activist in China or an advocate for women's education in parts of the Middle East or a Jew in WWII Germany, "privacy laws" can't save you. A company's fear of the state can't protect anyone from a corrupt state. But structural and technological privacy protections might. Which are the things hamfisted regulations inhibit.
I don't think GDPR compliance is as onerous as you seem to think it is, but even if it were, would it matter?
The answer is yes, it is onerous. And yes, it does matter.
Regulations always start as an idea that sounds good. The companies most impacted are then motivated to gain control of the regulations. Once they do, they happily pile on more regulation, because it becomes a barrier to entry for new competitors, while shaping it so that it ceases to be a problem for themselves. In the end the regulatory framework stops working and we get the very disaster that we were trying to block.
This is called regulatory capture. It is very, very common.
In the case of Facebook, here is the problem. The regulators are controlled by politicians who wish to remain in power. If Facebook breaks the rules in favor of those politicians, it becomes easier for the politicians to remain in power. The incentive is therefore for the politicians to become complicit in letting Facebook break the rules. However no new startup can provide the politicians with an incentive that matters - only Facebook, Google, and other similarly large players can bribe politicians in back room deals.
The payback for Facebook is that they get to solve their biggest existential crisis. The barriers to entry for a new social network just aren't as big as they seem. They can keep milking more from their users and buying up the Instagrams for only so long until something like Snapchat or Discord or someone not yet thought of succeeds. If Facebook is to avoid being replaced in the way that they replaced MySpace, and MySpace replaced Friendster, they need a new barrier to entry.
Regulation provides that for them. In public they will get chastised. You'll get speeches that you love. In private, they will happily become part of an effective surveillance state for those already in power in return for a blind eye being turned to their ongoing transgressions.
The result? The regulation that you are cheering won't accomplish the causes that you want. And if history is a guide, the very politicians whose speeches are the most to your taste will tend to be the ones who behind closed doors are selling you out. With their public speeches being nothing more than bargaining chips for private deals.
And for the record, I grew up in Canada. I am not opposed to the idea of regulation in principle. However every approach has failure modes. And regulation works a lot better in practice when you exercise skepticism about the actual aim as opposed to the stated one.
If you wish to build your skills at skepticism, I highly recommend watching the series Yes, Minister. It is from the UK in the 1980s. However the lessons about how bureaucrats manage to get their way while pretending to listen to politicians are timeless. It also came out much later that it is less fiction than it first appears - most episodes were based on actual incidents. And some were downright prophetic - compare https://www.youtube.com/watch?v=37iHSwA1SwE with actual British policy towards the EU since.
I have no reason to believe that the picture painted then of the bureaucracy in Whitehall is significantly better than the bureaucracy that has sprung up in the EU.
Based on the first link, that letter scares me a lot. I have a feeling that this level of regulation will destroy any social startup. You'd need a compliance department larger than engineering just to remain legal. This is clearly a win for Facebook.
Or you just build your permissions and opt-in platform as a base for the social app.
We wouldn't let a self driving startup ignore traffic laws because it's "too hard". Likewise we shouldn't let a social startup ignore privacy laws and auditing.
At least on the surface it doesn't seem that bad. You just have an opt-in data collection with (type-of-data, purpose-of-data) tuples and let users actually delete data on request.
Allow Socially to collect the following information for the purposes of providing you service:
- Minimal Account Information: email address and password
To prevent spam if you don't provide additional profile information you will be required to verify your account with a valid government ID. Only the expiration date will be stored.
- Information posted to your timeline.
Without this you will be unable to post updates.
- Messages sent to others.
Without this you will be unable to send messages.
- Profile Information: Name, Address ...
Allow Socially to collect the following information for the purposes of protecting your account:
- Network Addresses used to access the service.
- Login location
- Login times
After a short time using the service if we see a login that doesn't match the information on record we will notify the primary email for approval.
- Links to other sites you click.
We will check links you click against our list of known phishing sites and scams and warn you before redirecting you.
Allow Socially to collect the following information for running internal studies and improving our service:
- Features you use.
- Posts you read.
- Links to other sites you click.
Allow Socially to collect the following information to help make ads more relevant to you:
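The opt-in (type-of-data, purpose-of-data) model sketched above could be represented as something like this. This is a minimal sketch under my own assumed names (`Consent`, `grant`, `allows`), not any real GDPR tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    """Tracks which (data_type, purpose) tuples a user has opted into,
    mirroring the consent screen sketched above."""
    granted: set = field(default_factory=set)

    def grant(self, data_type, purpose):
        self.granted.add((data_type, purpose))

    def revoke(self, data_type, purpose):
        # Supports withdrawal of consent at any time.
        self.granted.discard((data_type, purpose))

    def allows(self, data_type, purpose):
        # Data may only be used for a purpose the user opted into;
        # the same data under a different purpose needs its own opt-in.
        return (data_type, purpose) in self.granted
```

The key property is that purpose is part of the key: consenting to "email for account security" says nothing about "email for ads", which is the distinction the GDPR's purpose limitation is driving at.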
I would strongly recommend reading what Pagefair has been putting out. They have been one of the few sources I've found that takes the GDPR literally. It isn't even clear what level Google's compliance will be - https://pagefair.com/blog/2018/googles-nonpersonal-ads/
There are a lot of extremely serious questions that arise regarding network security, anti-fraud, and anti-abuse measures. Just looking at basic bot detection measures, all of the sophisticated methods are now illegal. It certainly requires a major re-think of how websites serve content as well as the sustainability of advertising as a revenue channel. I can't even wrap my head around how someone would run a GDPR-compliant dating website/app.
If you think Pagefair's interpretations of the GDPR are correct then Google and others are calling the EU's bluff. They are implementing part of the GDPR strictly but the parts which invalidate their business models are being interpreted more liberally or ignored altogether.
I'm not saying that the GDPR is a good idea, bad idea, morally right or wrong. Rather, a lot of things we have come to view as a given -- such as how we detect bots, fraud, and abuse -- are no longer valid. Infrastructure, both technical and business, will need to be re-designed either to comply with the GDPR or evade it.
I kind of feel like every question in the first link is entirely reasonable and people _should_ be able to get those answers, though. Nothing in there is onerous if you're following good practices anyway.
I really feel like the answers to all of those questions are going to be basically identical between people, and all you really need to do is be able to export whatever data you have on somebody quickly in order to be able to respond to that email in under a quarter of an hour.
I guess it could make a decent DoS tactic against a small company, but lots of other things would too.
> respond to that email in under a quarter of an hour.
Let's take an app like Instagram as an example. Instagram had over 1 million users within two months and 10 million within a year, and no profits. You're running on a shoestring trying to keep servers online without any serious budget to speak of. It's probably you and a few friends/associates working closely together.
All of a sudden with GDPR, you have to pay a lawyer to help you understand what you need to do to comply with the regulations. You also have to spend engineering time developing solutions to enable the queries in that letter, enable purging records from long-term backups, etc. And people have to spend the 15 minutes responding to each request.
Now, let's say each request does only take 15 minutes like you suggest (which I find highly unlikely). If a small fraction like 0.5% of your customer base sends such a letter, then that's 50,000 letters. At 15 minutes each, that's 12,500 hours, which is over 6 full-time employees. Many small businesses don't even have 6 employees to conduct the entirety of their business right now!
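A quick back-of-the-envelope check of those numbers (assuming the 10 million users from the Instagram example, and ~2000 working hours per employee per year):

```python
# Sanity-check of the figures above; all inputs are the commenter's
# assumptions, not data from any real service.
users = 10_000_000
request_rate = 0.005              # 0.5% of users send a letter per year
minutes_per_request = 15

letters = int(users * request_rate)        # 50,000 letters
hours = letters * minutes_per_request / 60 # 12,500 staff hours
fte = hours / 2000                         # ~2000 working hours/year

assert letters == 50_000
assert hours == 12_500
assert fte > 6                             # over 6 full-time employees
```

So the arithmetic does hold, conditional on the 0.5% request rate, which is the number actually doing the work in this argument.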
If the concern is that business owners can no longer cut costs by being lax with people's data... isn't that the whole point of the GDPR? That we've collectively decided that letting people cut those costs is having too many negative consequences too often and that we need to stop?
Thanks. Wow, responding to a letter like your first link could significantly bog down resources for a young company. You can imagine: if you launched and received even moderate user growth early on, but then started receiving such letters, your productivity could go down the tubes.
I disagree. Here's an outline of what a response to the letter in that first link should look like for a small, well-meaning* startup:
The letter is nicely formatted into 9 bullets. All are optional for small companies, and all can be automated - the answer should be the same for all users.
1. This is a "yes" or "no" question. If the answer is "no", you can ignore the rest of the letter. If yes, the answer is the same for all users.
2. Simple, short, same for all users.
3. You can avoid doing this if you want. If you are doing this, you're signing up to take on this additional burden of informing your users. Consider this when making this decision. This is the only bullet in the list that is in any way burdensome, as you will need to update this text in your automated response whenever you take on 3rd-parties (if at all).
4. Simple, short, same for all users.
5. and 6. are "if" conditionals that you shouldn't be doing. The answer should be "No".
7. Amounts to "has my data been hacked". If yes, that's unfortunate, but obviously you have a moral obligation to respond here regardless. Presuming you're hacked once, you provide full details once and send automatically to any users who ask.
8. and 9. are out of place. GDPR doesn't require you to respond to these questions within this quoted 1 month time limit (you do have to have what's detailed within them in place to comply with GDPR but that's tangential to info requests). These seem to have been put into this blog post as extra scaremongering.
* by "well-meaning" I basically mean "not selling all of your users' personal data to myriad nefarious 3rd-parties"
> 3. You can avoid doing this if you want. If you are doing this, you're signing up to take on this additional burden of informing your users. Consider this when making this decision. This is the only bullet in the list that is in any way burdensome, as you will need to update this text in your automated response whenever you take on 3rd-parties (if at all).
Pretty much everyone is going to. Google Analytics, Zendesk, Salesforce, and more all qualify. Hell, even AWS qualifies...
> 5. and 6. are "if" conditionals that you shouldn't be doing. The answer should be "No".
Why do you say that? Given that we're discussing technical companies, I fully expect that automated decisions will be made.
> 7. Amounts to "has my data been hacked". If yes, that's unfortunate, but obviously you have a moral obligation to respond here regardless. Presuming you're hacked once, you provide full details once and send automatically to any users who ask.
And "detail all your security measures". Which, for a small company that doesn't have an InfoSec group, probably means next to nothing. An admission that feels a lot like liability...
> 8. and 9. are out of place. GDPR doesn't require you to respond to these questions within this quoted 1 month time limit (you do have to have what's detailed within them in place to comply with GDPR but that's tangential to info requests). These seem to have been put into this blog post as extra scaremongering.
It's the sort of thing an angry consumer might do, and most startup founders subject to GDPR are not deeply knowledgeable about it.
> Pretty much everyone is going to [...] even AWS qualifies...
I worded this badly. This is optional on a case-by-case basis, i.e. there's a cost-benefit to using each 3rd-party, and this burden is worth considering for each. It's still not a massively onerous burden, tbh, even if you do use a lot of 3rd parties.
> And "detail all your security measures". Which, for a small company that doesn't have an InfoSec group, probably means next to nothing. An admission that feels a lot like liability...
I'm sorry but if you're really defending companies with no competent security measures in place, regardless of size, I think you're in the wrong forum here. If you are a commercial entity of any size there should be moral hazard in ignoring security of your users' personal data.
> It's the sort of thing an angry consumer might do, and most startup founders subject to GDPR are not deeply knowledgeable about it.
Exactly. And unlikely to be more knowledgeable if they're reading misleading scaremongering articles like this on LinkedIn!
> I worded this badly. This is optional on a case-by-case basis, i.e. there's a cost-benefit to using each 3rd-party, and this burden is worth considering for each. It's still not a massively onerous burden, tbh, even if you do use a lot of 3rd parties.
I'm up close and personal with a vendor assurance process right now. It's often a non-trivial amount of time for any given vendor.
> I'm sorry but if you're really defending companies with no competent security measures in place, regardless of size, I think you're in the wrong forum here. If you are a commercial entity of any size there should be moral hazard in ignoring security of your users' personal data.
I'm sorry, I worded this badly. I'm saying that small startups have a tendency to prioritize getting a product working and seeing if it's worth investing heavily in before standing up a strong information security unit. You're absolutely, completely, 100% correct that there should be incentives to be very careful with user data.
I think it's possible to see where some people might find the level of expense and expertise required to be appropriately careful somewhat scary. I can even see where some people might decide to not create a social media startup to challenge Facebook because of this fear.
Honestly, those questions should be pretty easy to answer especially if your company is small. If as a business you can’t answer these basic questions about the data you want to collect from me, I’m going to be hesitant to share it.
People keep sharing that “nightmare letter” link but won’t point out which question gives them nightmares and why.
A couple of things stand out to me as potentially scary. First, the hard one-month timeline. For a brand new baby startup, a month is not a lot of time and any distraction is potentially a killer.
Second, a list of everything across all types of storage in any and all systems stands out. Even large companies often lack the ability to search ZenDesk, Salesforce, email, AWS S3, and Slack logs all at once.
Third, there's a clause that asks quite specifically for a thorough list of any and all potential future plans. That's a lot, especially given how startups are subject to pivoting.
Fourth, the section about third parties is essentially asking for the outcome of a vendor assurance process. A lot of small companies can't pass a reasonable vendor assurance process. They often can't afford the time and assurance specialists to manage one for their vendors. Even large companies often have trouble maintaining the level of control required for thorough vendor assurance. The bit about legal reasoning implies the involvement of a lawyer as well.
Fifth, there's a strong implication that no matter what you might say in response, it's not going to be good enough. There's always something that can be pointed to as not enough.
With all of the above combined, I can see where some might view GDPR as intimidating and favoring big companies over small ones through sheer costs.
> People keep sharing that “nightmare letter” link but won’t point out which question gives them nightmares and why.
There is a standard way in which "reasonable" regulations kill small companies. It works like this. You impose some small burden, something like an hour of labor a week. That won't destroy a small company, but that is not the only rule in the world. That rule takes an hour, another rule an hour and a half, a third rule a half hour. By the 60th rule, a two-person company is sunk. Even if every individual rule is nominally reasonable, the combination is hopelessly destructive.
The problem with tech companies is the rules don't just add together, they get multiplied by the user base, and it's entirely common for a very small company to have ten million users.
So you take a letter like that. The first time you get one it will take you a week to figure it out, but over time you get the response time down to an hour. Only with 10 million users, if 0.1% of the users make such a request per year, you're looking at 27 of those every day. That's more than three full time employees doing nothing but that. For this one "reasonable" regulation.
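Checking those numbers too (all rates are the commenter's assumptions):

```python
# 10M users, 0.1% sending one request per year, one staff-hour each.
users = 10_000_000
yearly_requests = users * 0.001        # 10,000 requests/year
per_day = yearly_requests / 365        # ~27 requests/day
staff_hours_per_day = per_day * 1.0    # one hour each, post-learning-curve
fte = staff_hours_per_day / 8          # 8-hour workday

assert round(per_day) == 27
assert fte > 3                         # more than 3 full-time employees
```

Again the arithmetic works out; the load-bearing assumptions are the 0.1% annual request rate and the one-hour handling time.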
I'll point out which questions give me nightmares, as the founder of an EU startup:
- the requirement to have a DPO. Based on the requirements for the DPO, no one in the company can fill the role (conflict of interest), so we must hire an employee or consultant (expensive either way for a small startup)
- one month to respond. That's a lot of information to collect the first time, and I might have other fires to put out (or I have to be proactive and have a prepared response, which has to take the place of something else important to do)
- the sheer amount of information to collect. In the age of plug-and-play solutions, that's a LOT of things to audit (Mailchimp, AWS, GA, Heroku, various Wordpress plugins, a logging solution whose name I don't even remember, just to name a few)
- tracking every single piece of PI about a user. If your systems are not built for this, it's going to be lengthy, and if they were created before the GDPR, they probably aren't.
- tracking down the usage of that PI may be complicated depending on the expected scope and the usage involved (fortunately for me, there are no ads and no data reselling, so really only the scope is a problem)
- some of the processes asked for seriously imply that you have certain structures and procedures in place. This is not feasible for a small startup.
It boils down to: it takes time, and time is something I'd rather use for something else, and it also requires doing things with huge fixed costs that a company of this size can't absorb (at least not until there is a ready-made solution).
I define a small startup as one with fewer than 20 employees, which might have received Seed funding but not more. These points might not all be applicable to a new startup created with the GDPR in mind.
Simply build a secure and private platform, and don't be reckless with user data. Health startups already deal with this through HIPAA and it isn't really a big deal, just common-sense practices for security and privacy.
I'm going to be honest: I have no clue about social. I operate in the socio-medical domain; we don't share by default.
We are mostly fine with the spirit of the GDPR, it's the work we have to do to follow it to the letter which is a problem (and the lack of process internally).
I equate the events right now surrounding Facebook to Upton Sinclair's book "The Jungle", and GDPR being the privacy-analogue to the creation of the FDA.
The FDA makes the medical field hard to break into for startups, but for good reason. New medical devices need to go through rigorous verification and validation to show that they work as intended. If a company making pacemakers had the same "move fast and break things" attitude as most of SV seems to have, I might never trust medical companies again. As a consumer, I'm extremely content with the quality of pharmaceuticals and devices, and I wish I could trust Facebook or Google as much as I trust Medtronic or Philips Healthcare.
As someone who works in a startup in the healthcare space, I will point out that nobody lets health startups off the hook for HIPAA. You don’t get to be sloppy with people’s protected health information just because it makes your life easier.
I'm sorry, but I don't see a comparison between what people *willingly* post online to public forums and their personal health ledger... it's not apples to apples
The content of the data is not the issue. The point is that society has decided to pass a law stating that certain data needs to be treated a certain way or there are serious penalties because of past abuses. We in the US take for granted that this law exists, but there was much complaining in the medical establishment about how burdensome it is to them conducting their work because of all the extra protections it required. This was especially true in biomedical research where patient data was pretty carelessly treated in many cases. Not because the people involved were bad people, but because society as a whole had not thought through the consequences of walking around with an unencrypted list of cancer patients on a floppy disk.
I don't see a comparison between what people *willingly* post online to public forums and their personal health ledger
There is, or at least there was, a Facebook project for exactly that [1].
The thing is that none of those "anonymized" subjects would have ever been asked for consent if they really knew about the consequences.
Such behavior has really, really bad real-world implications: when I had a knee operated on, one of the questions on the questionnaire you fill out is whether you agree that your data can be shared in anonymous form for research. At that point (and given that this was a fairly benign condition) I didn't see a problem with consenting.
After that revelation about what Facebook was up to my answer in the future is a clear NO!
Facebook handling medical data. What could ever go wrong with that?
>> with limited resources they wouldn't be able to focus on the critical period of acquiring users and instead would be forced into building compliance features
If you cannot comply with privacy rules you should not do social media, whatever your growth phase.
It depends on what part of the GDPR you're against. I'm generally in favor of a lot of the GDPR's goals, but the execution is pretty clumsy and a few of the provisions are at best useless and impose unnecessary costs.
I wonder which ones specifically? I'm reading up on it because I'm about to implement it in our small company.
Everything hinges on phrasing like this, quoted from the GDPR:
"Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes
... implement appropriate technical and organisational measures ..."
1. Most things fall into this category: Lack of clarity in the law (and a remaining lack of clarity from WP29 and the Commission) about dozens of issues. The Privacy Professional community has been proactive about trying to get info on a lot of these items, but there's just not much coming, and in a few cases what has come out has either departed from what seemed like more obvious meanings or in some cases has muddied the waters further.
2. The essential ban on offering services, downloads, etc. in exchange for consent to use data reduces consumer autonomy and will decrease the availability of free resources.
3. It will be extremely easy to use SARs maliciously, and the law includes NO check whatsoever on this. All it would take to cripple many SMBs is for some jerk to spin up a website that provides a nasty SAR template (that the users don't even realize is such a burden) that random people on the Internet can auto-send to every business they've ever used under some innocuous-sounding reason like "See what information businesses have on you!" 99% aren't using data against subjects' interests, so the net effect of this alone (in the way it is designed) is potentially-immense costs for small benefits.
As a recommendation, the $250 my company spent on buying me a membership to the IAPP has been one of the highest ROI decisions in recent memory. It has saved me a ton of time and effort (and the company quite a bit of money) from the member resources available, and the members listserv is essentially free light consulting from people who have already dug into everything.
The more noise I hear those who work in "ad tech" and other fields that have been marching towards the destruction of privacy making about GDPR, the more confident I become that it might actually help.
You're right! All the stuff about right to be forgotten, right to view, right to make corrections, and so on should be very straightforward and easy for any company of any size interested in being honest. Especially for new players, who don't have ugly legacy systems to wrangle.
Yet... I've read through the GDPR. All ninety-nine articles are chock full of "reasonable measures" and similar verbiage. Unless you can afford a compliance specialist - which isn't automatic for a new player - it's intimidating as all hell. What are reasonable security measures, as seen by a careerist somewhere in Brussels? The text is silent on what exactly that means.
It's possible that respecting users and having good intentions may not be enough...
I think "reasonable measures" is pretty typical language when talking about compliance. I don't know GDPR regulation very well, but I know FDA regulation reasonably well, and I imagine compliance will be similar, and much easier, under the new GDPR.
Most important is to document everything. Have a design history file that you can show in case you get audited. When you design your software, save your designs in the DHF. When you update or make changes to the design, put that in your DHF too.
For each GDPR article where it makes sense, have it written somewhere how you are compliant with what they ask for (you probably don't need to demonstrate compliance with Article 4 [1] but you should have it written somewhere how you are compliant with all the points in Article 5 [2]. When it says "Personal data shall be: (b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes" You should be able to procure a document that lists the various kinds of personal data collected and how each is used; e.g. "Username: The username serves to associate a person's login id to their profile. [... other details] Profile Picture: The profile picture serves to display an image of the user. [... other details]."
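Even something as simple as a machine-readable inventory could back that document. A minimal sketch, with field names and purposes invented as examples:

```python
# Hypothetical data inventory for the Article 5 documentation described
# above: each item of personal data mapped to its stated purpose.
DATA_INVENTORY = {
    "username": "Associates a person's login id with their profile.",
    "profile_picture": "Displays an image of the user.",
    "email": "Account recovery and security notifications.",
}

def purposes_report():
    """Render the inventory as the kind of plain document an auditor
    (or a subject access request response) could include verbatim."""
    return "\n".join(
        f"{name}: {purpose}" for name, purpose in DATA_INVENTORY.items()
    )
```

Keeping it in one place means the same source of truth answers both the auditor's question ("why do you hold this?") and the user's ("what do you hold on me?").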
When it tells you to have reasonable security measures, then document what your security measures are. "This data is encrypted" or "This data is saved on an external server disconnected from the internet and only accessible by someone with a dongle". If you're still worried that your user data could be insecure, then it might be worth hiring a security specialist to check it out.
With all that said, my point was that it's not obvious what is and isn't reasonable. Hiring a security specialist won't necessarily help you understand what bureaucrats will or won't deem reasonable, especially when there's no history to provide context.
You're right about that. I guess for a rule of thumb, imagine what the reaction on HN will be if your system gets hacked, and assume bureaucrats will say the same thing. Will they be criticizing "Everyone's credit card information was saved in plain text" or will it be "Even though this disgruntled employee uploaded everyone's usernames to [untrustworthysite], most of that information is still encrypted and the company made a public announcement about it hours later".
It's your best guess what is and isn't reasonable. As long as you've documented what you did and why you did it, then you've satisfied that requirement. If an auditor finds what you've done to be insufficient, you'll probably get a warning but you'll still be considered compliant for having done something.
I know it's not a satisfying answer and I'm sorry that I don't have a better one, but complying with regulation is not as definite a "yes/no" as programming.
My knowledge is with the FDA so I'll give an example I'm familiar with. I worked with CT scanners and we needed to do verification/validation. The FDA requirement was to the effect of "must define reasonable requirements for the device" and "must set up testing procedures that reasonably demonstrate that a device can meet its requirements" and so the team I worked with set requirements like "radiation dose: <20rad when run on [x] setting" and then tested it at [x] setting 5-20 times, then documented "passes radiation dose test with 99% certainty, which exceeds our cutoff for passing which is 95%".
CT is an old industry, so there was a bit more to it than that, but we were still following requirements that we had written and testing them with procedures we had made. The point is that requirements, even in the health industry, can be vague, so you really just have to do your best to come up with something reasonable.
And because it's vague, that's also why it's so important to document everything.
What about deleting data in backups for an EU resident who submitted a request for data deletion? If a company is using mysqldump or equivalent it seems difficult to just drop certain records from those .sql files.
Have a reasonable retention policy on these backups. Backups are a "legitimate business interest" and you don't need to purge "right to be forgotten" requests from your backups if you stick to a reasonable and publicly-documented retention policy. This is advice that I've received from counsel. However, I am not a lawyer, and this in no way should be taken as legal advice.
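Besides a retention policy, another approach worth knowing about is "crypto-shredding": encrypt each user's personal data under a per-user key stored outside the backups, and destroy the key to honor an erasure request — the copies inside every old dump become unreadable without touching the dumps themselves. Here's a minimal sketch; the XOR cipher is a toy for illustration only (NOT real crypto — use a vetted AEAD in practice), and every name in it is hypothetical:

```python
# Toy "crypto-shredding" sketch: per-user keys live OUTSIDE the backups,
# so destroying a key "deletes" that user's data from every old dump.
# The chained-hash XOR cipher below is illustration only, NOT real crypto.
import hashlib
import secrets

keystore = {}  # user_id -> key; stored separately, never in the dumps


def _keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream by chained hashing (illustration only)."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]


def encrypt_for_user(user_id: str, plaintext: bytes) -> bytes:
    """Encrypt under a per-user key, creating the key on first use."""
    key = keystore.setdefault(user_id, secrets.token_bytes(32))
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))


def decrypt_for_user(user_id: str, ciphertext: bytes) -> bytes:
    """XOR is symmetric, so decryption reuses the same keystream."""
    key = keystore[user_id]  # raises KeyError once the user is forgotten
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))


def forget_user(user_id: str) -> None:
    """Erasure request: with the key gone, every backup copy is unreadable."""
    keystore.pop(user_id, None)


record = encrypt_for_user("alice", b"alice@example.com")
assert decrypt_for_user("alice", record) == b"alice@example.com"
forget_user("alice")  # old .sql dumps still contain `record`, but it's now garbage
```

Whether this counts as erasure for GDPR purposes is a question for counsel, but it sidesteps the "drop records from a mysqldump file" problem entirely.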
Of course. For us, it came down to cost. EU customers make up <1% of our revenue. Implementing this non-trivial feature didn't make sense for us financially. So, we decided to drop all EU customers entirely.
But how can you have enough user growth to get vast amounts of money from investors if you can't play fast and loose with the data you collect on your users?
Well, the world has clearly changed. When facebook/myspace started out, would they have been able to achieve success if they were bogged down with data-privacy compliance?
A lot of people don’t care about fire safety either (until their house is burning down) which is why we have regulations, building codes, mandatory sprinklers in offices, etc.
I am starting to think privacy should be treated as a public-safety concern, since it's invisible to people until it's not.
Can social media kill you, though? All this talk of regulating social networks assumes it's something you need to have. I would argue that safe shelter is a true human need, but posting cat gifs or pictures of drunken escapades or political musings doesn't seem comparable, so I don't see how regulation does anything other than hamper competition.
Yeah, it can and it did. The Ashley Madison leak led to a few deaths, and check out what happens in countries where homosexuality is punished by death when private information goes public...
It can be both. Responsibility is not a conserved quantity. Entities who take on private data should be considering the effects of that data becoming released, including what others may use it for.
“It could kill” is a pretty high bar for accepting that something needs to be regulated. We accept regulations on other data collection / storage activities, like financial data and health data. We also have licensure for occupations where public safety or wellbeing is a concern but lives are not necessarily on the line.
I don't think it's an equal comparison... the effects of social media on societies are a somewhat subjective matter. It's more likely that social media is just another tool that exposes underlying human nature.
That said, if a fire occurs in my shelter and I don't have sprinklers installed, I could die.
> A lot of people don’t care about fire safety either (until their house is burning down) which is why we have regulations, building codes, mandatory sprinklers in offices, etc.
We don't have mandatory sprinklers in home offices and undeveloped land and buildings that are still under construction.
The problem with the equivalent distinction in software is that there is no clear point that software is "finished" like a building is. The architect doesn't come back and make changes a year after the occupants move into a building.
If there is no exception for new code still under testing then there is no way to test new code. But if there is, everyone will live their lives inside of it.
A big part of the problem is we have a brand new set of very vaguely written rules with no case law. Given time, I expect we should see case law and software change to be more GDPR friendly.
I am very curious to see what happens to EU ad revenue after GDPR. If it doesn't drop (outside of Google & Facebook's internal platforms), I'm guessing there isn't much GDPR compliance going on.
It’s not really as bad as that. Practically, the EU lawyers are not going to prosecute some dumbass no-revenue “Tinder for cocktails” or similar. They are out for money and only going after the guys who are big enough to pay but haven’t complied yet.
I read this, as a recovering addict that went through my own hell mostly from alcoholism, and wondered where the "science" part was?
For instance, take this sentence: "More than 13 percent of its participants died after treatment, mainly of overdoses that could potentially have been prevented with evidence-based care." The argument is that they "could" have been prevented, but how do they really know that? How many people in general die of an overdose after X number of stays in rehab? Through my own experiences going to rehab, I have known of many people who died. The fact is you can't predict which people will "get" it any better than you can predict if it will rain next month.
I also disagree with this notion that being on suboxone or methadone indefinitely is a legitimate solution. Like what? You're advocating to stay on a drug the rest of your life? That isn't recovery at all, it's a band-aid that will likely lead to relapse and promotes a perpetual notion of being sick.
I also disagree that their supposed evidence that CRAFT gets twice as many people into rehab has much relevance. How many of those people relapsed? How many of the people that didn't go to rehab ended up overdosing OR recovering? We don't get the whole picture, so the "evidence" is moot.
They knock 12-step programs, which is fine, but it turns out they work for millions of people.
There is no silver bullet here.
While I have mixed feeling about the "tough love" approach, I can tell you from personal experience that the only reason I'm not buried right now is because at a certain point the floor dropped out too low, my family and friends abandoned me, I lost everything for a moment, and the pain and horror reached a level that finally I had a change of psyche on my OWN and realized I wanted to get better.
In my own humble opinion the only "science" that matters on this subject are the opinions of those who have lived it and recovered. Go survey the opiate addicts that didn't end up dead and find out what worked for them.
> In the U.K., researchers looked at data from more than 150,000 people treated for opioid addiction from 2005 to 2009 and found that those on buprenorphine or methadone had half the death rate compared with those who engaged in any type of abstinence-oriented treatment.
That's some pretty scary data. Half the death rate... that's a very significant number.
> You're advocating to stay on a drug the rest of your life?
Why not? It saves lives (hello insulin). And here's a telling bit from the article:
> When patients take a stable, regular and appropriate dose, maintenance medications don’t cause impairment, and the patient can work, love and drive. In essence, what maintenance does is replace addiction — which, remember, is defined as compulsive use despite consequences — with physiological dependence, which, as noted above, is not harmful in and of itself.
Certainly it would be nice to have an effective solution that didn't require daily use... but we don't. What we have is a safe drug that cuts the death rate by half.
If there were 0 people who recovered from opiate addiction without being hooked on a maintenance drug for the rest of their lives then it'd have more credence. If you give an alcoholic xanax for the rest of their life because it affects the same area of the brain but they aren't drunk anymore, are they recovered?
Not to mention, that seems like a pretty horrible and bleak outlook to make people believe they can't make a full recovery without being medicated for the rest of their life.
Again, I've known many people who have taken these drugs, and a large number of them relapse badly. In fact, from what I hear, the withdrawal from suboxone is 10x worse than from heroin.
If someone replaces a drug that's doing them very serious harm with a drug that doesn't do them harm, I don't see the problem.[1] I take doctor prescribed medication daily that I could live without but it significantly increases my quality of life, I don't see a difference between me and a person who is on methadone but otherwise well in life - has job, money, clothes, friends, community, etc.
[1] Xanax has a lot of problems with long term use though, so Xanax, specifically, would likely not be a good candidate.
My MIL replaced alcohol and cocaine addictions with a coffee addiction. It's been over a decade and she doesn't seem to have any significant negative outcomes with the coffee and coffee is cheap and readily available almost everywhere. From what I see, I believe the coffee is more of a psychological crutch for her - but it's not harming her.
First of all, thank you for sharing your experience. It was insightful, and I completely agree with your criticism of the article.
I would like to make a small, and perhaps somewhat pedantic comment regarding your last statement:
>In my own humble opinion the only "science" that matters on this subject are the opinions of those who have lived it and recovered. Go survey the opiate addicts that didn't end up dead and find out what worked for them.
There is a problem of silent evidence and survivor bias here. What is important is not what they did that led to their recovery, but what they did differently (or, more generally, what was different in their circumstances) from those that tried to recover, but didn't.
So, IMHO, what is needed is not _just_ the opinions of those that recovered, but a longitudinal study to identify which, out of the many factors that were involved in the recovery process, have been the most instrumental.
> There is a problem of silent evidence and survivor bias here. What is important is not what they did that led to their recovery, but what they did differently (or, more generally, what was different in their circumstances) from those that tried to recover, but didn't.
That's assuming they did anything different at all. It could just be that there is not a one size fits all treatment for this problem, and part of the solution is to match the right treatment for each particular addict.
While I agree with your points about collecting empirical and unbiased data, I want to point out that when it comes to opioids, "science" is moving the goalposts. They measure the social acceptability of a subject while under the influence of doctor-prescribed dope, while ignoring the numerous addicts who maintain similar levels of social acceptability while using street dope, then declare their method a "success".
Yes, fair point. This is a common problem for social studies. One must keep in mind and be explicit about the population the study sample is drawn from; and very cautious about extrapolating the findings to other populations.
Not only what they did differently. It might be something they didn't do as well. Since data is lacking, any personal effort might be completely irrelevant, and the environment alone might make the difference. I presume there is a large personal effort involved, but we don't know.
On survivorship bias: the B-17s in the Second World War that are generally used as the practical example of this principle had bullet holes exactly in those parts of the plane that were fine. The parts that had not taken a beating in the surviving planes were the ones that needed more armour plating.
So, where are the psychological bullet holes in those who've not beaten addiction?
I have a couple of friends on methadone; they are cordial, relatively together, and in a totally safer place than before. I would rather that than the alternative, so it's a legitimate approach in my eyes. They have the rest of their lives to figure out when they can stop.
"Detoxifications and drug free modalities, although appealing to an understandable desire for recovery without medications, produces only 5-10% success rate. Methadone maintenance is associated with success rates ranging from 60 - 90%. The longer the people are in this modality the greater their chances are of achieving stable long-term abstinence."
> I have a couple of friends on methadone, they are cordial, relatively together and in a totally safer place than prior.
From my work with injecting drug users about a decade ago: none prefers methadone; they only take it because, when in a treatment program, they get it for free and unadulterated. When it comes to actually ceasing consumption, at least on the mindvox drug users list, the consensus seemed to be that it is easier when first switching back to heroin.
Ask your friends whether it is the methadone as a substance that helped them or the decriminalization and steady supply they don't have to worry about. From experience, I bet on the latter.
We now have a methadone clinic in our local shopping center, right next to a large childcare facility. Since then, crime has gone way up. The grocery store now has armed guards.
I'm not necessarily a supporter of suboxone/methadone/whatever, but this NIMBY rhetoric isn't helpful, either.
It's rare for the down-turn of an area to have one extraordinary cause.
I guess that the suggestion is that methadone clinics create crime where there otherwise wasn't -- but I don't see it that way in my community.
What I see these clinics provide is a centralization of potentially bad actors, so authorities can keep tabs on them while they seek guidance or serve court-ordered time in the system.
What's the alternative here? Cease these communal style clinics? If one believes in these treatment options whatsoever then it must be realized that cessation of these clinics would take that care option away from many people who may find legitimate use.
I don't have alternatives to the clinics, but I do have insight into what one should pay attention to when an area begins to struggle:
Income, education, and general upward mobility within society.
> It's rare for the down-turn of an area to have one extraordinary cause.
no it's not. he's talking about a single store. in my town it's the homeless shelter and halfway house that causes a 5 block radius around it to be a terrible place to live.
we're not talking about building more housing, or zoning for high density commercial/residential mixes, or building public transit, or eliminating cars from downtown cores, or building more bus routes, or bikeshares, or uber, or any number of things that people actually want. we're talking about methadone clinics and halfway houses next to where affluent people live.
if you pretend like it's hard to understand why people don't want those things in their residential/shopping neighborhoods, you're just going to alienate everyone you communicate with. you can't just invoke the magic 'nimby' and get people to change their minds. __they don't want these things next to where they live__.
why should it be a problem? in the UK, methadone is normally dispensed from a regular pharmacy, just like any other prescription you might get from your doctor. I wonder if what you are seeing is that the clinic has been placed in a neighbourhood that already had high crime/homelessness/etc. since that is where the addicts who need its services are? it just seems odd that a clinic would be placed in an affluent area, where property costs are probably high, and the service users have to travel to get there, but maybe it's an american thing?
> they have the rest of their lives to figure out when they can stop.
The answer is: never.
Only if they get into a program that guarantees they will be detoxified in 6 months (or so). This is rare, unfortunately, because it needs the addict's consent, which is rarely given. They usually choose the "open-ended" program because it's easier.
They only use methadone because it's a legal (but controlled) drug. It's not that much better than heroin.
Never and alive is just fine. Methadone satiates the addiction but does not really deliver a 'high' anymore, so these friends can fill their lives with more interesting things. At some point methadone might get de-prioritised; in the meantime, I support them with love and encouragement. They recognised they were in a situation and took intelligent, pro-active steps to move their lives forward. I'm not them, I'm not inside their mind, I don't know how hard it is for them, and I'm not here to judge.
No one is judging anyone. And you should keep supporting them any way you can.
But I am just stating facts.
> at some point methadone might get de-prioritised
But that is wishful thinking, not reality. Open-ended programs do not work. Period.
By far, most people in such programs relapse multiple times. You can't expect the addict to kick the drug out of sheer will and good intentions.
> took intelligent pro-active steps to move their lives forward
You don't understand addiction and how it works. It has nothing to do with intelligence or logic. When someone chooses an open-ended program, it's because the people around them force them to take some action and they choose the easiest one.
Again, no one is judging and no one said those people do not need love, quite the opposite. But open-ended programs destroy their lives.
You're talking in absolutes, with no references. Do you have any supporting evidence? Did you read the linked article? I seem to have some good science on my side.
"For opioid addiction itself, however, the best treatment is indefinite, possibly lifelong maintenance with either methadone or buprenorphine (Suboxone). That is the conclusion of every expert panel and systematic review that has considered the question — including the World Health Organization, the Institute of Medicine, the National Institute on Drug Abuse and the Office of National Drug Control Policy."
I'm talking with more than 20 years of experience being close to people who follow these treatments.
One has to wonder why we would not choose a treatment (3-12 months, depending on the person/situation), since it can demonstrably help people recover completely from opioids, and instead choose life-long dependence on particular drugs.
Are these the same scientists that think prescribing Vicodin, as if it's aspirin, is a good thing?
> One has to wonder why we would not choose a treatment (3-12 months, depending on the person/situation), since it can demonstrably help people recover completely from opioids, and instead choose life-long dependence on particular drugs.
Who cares? Nobody talks this way about a person who has to take heart medicine every day to survive, and not that many talk this way about me when I take Lexapro every day to not get depressed. What makes methadone so different?
You can survive without it for starters. Not the case with heart medicine I guess?
If you saw people with the same condition as yours having a good quality of life without medicine, wouldn't you wonder if that could apply to you too?
In the case of drug addiction, I've seen many people successfully kick the habit in months and live a perfectly healthy life, as if almost nothing happened.
Life-long methadone users? Not so much. I can tell that people really think it's different from heroin but it's not really. It's just regulated. Think about it ;)
> I'm talking with more than 20 years of experience
Experience is valuable, but for driving systematic change it's unreliable. It's too easy to color personal experience with biases of many kinds. That's the entire point of science: to eliminate those biases and document the underlying evidence to support a claim like "Methadone does not work."
Scientific fact 1: methadone doesn't alleviate the addiction. It actually feeds it to the point that it increases the patient's tolerance at which point you have to increase the dosage.
Scientific fact 2: methadone doesn't provide the "high". Totally true. That's why the vast majority of patients seek it elsewhere (alcohol, cannabis, etc).
Scientific fact 3: life-long methadone users relapse more than once. Most clinics/doctors supporting methadone brush it off as "quite normal and logical".
Scientific fact 4: innumerable people have been able to kick opioid addiction by following long-term (but not life-long) therapies.
General fact: You'd be hard pressed to find a drug, other than methadone, that has been so controversial in its usage.
More facts: these panels of "experts" don't provide evidence that long-term recovery doesn't work, but they opine that "hey, methadone is the best, true story." Show me data, show me science ;)
Isn't it way too convenient for the drug industry? Has it ever happened before, I wonder?
Isn't it weird that heroin addiction in the US has increased because of gratuitous opioid prescriptions?
My read is that the science was principally this 'graph:
For opioid addiction itself, however, the best treatment is indefinite, possibly lifelong maintenance with either methadone or buprenorphine (Suboxone). That is the conclusion of every expert panel and systematic review that has considered the question — including the World Health Organization, the Institute of Medicine, the National Institute on Drug Abuse and the Office of National Drug Control Policy.
As I said in another comment, alcohol is an odd drug. Its mechanisms work very differently from opioids' and need very different behaviors and strategies to combat.
In my opinion you can't simply discard the debate as comparing apples to oranges... There are many similarities. It's a drug addiction at the end of the day.
> Just because viruses and bacteria are both harmful does not mean we should use antibiotics for both
But getting rest and good nutrition will help the body fight both. So they work differently but some treatment is the same. Not unlike what gp was pointing out.
But for me, I think that the issue is less the program than it is the default option, often ordered by the courts. 12 step therapy probably does work with certain personalities. For others, it will do nothing, or maybe even make things worse.
This is probably the same for "tough love" type rehab programs. Some people would be okay with this. Others would react better with other options (your CBTs, BCTs, etc.)
I will say this from a "non-expert" perspective: suboxone honestly is what you want to do from a chemical perspective for opioids. Suboxone is a combination of a mild opiate (buprenorphine) and naloxone, a μ-opioid antagonist that's there mainly to prevent abuse.
For an addiction, chemically, moving to something milder seems like a great intermediate step, akin to some methods of getting off nicotine (e.g., slowly decreasing the mg of patches). Nicotine patches don't work too well -- it seems like the effectiveness rate is only about 17% (http://www.mdedge.com/jfponline/article/60156/addiction-medi...). But this is a significant increase over placebo (another study -- https://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0010505/ -- says 50-70% more likely).
Agree that there is no silver bullet; probably the best route out is a combination of something to handle the chemical side with some form of therapy to handle the other end. The type of therapy would probably have to heavily depend on the person, given that the therapy must be something the person is willing to commit to in order to work.
Is it just me, or have we reached the end of the road for mobile phone innovation? Is this going to be Apple's downfall, given that something like 70% of their sales come from the iPhone? We've had many years of mobile phone enhancements and features, but it seems like there isn't a lot more we can do on a phone. The next breakthrough iPod/iPhone isn't going to be a phone at all. How many more times can we put a better chip, a better screen, or a new camera into a phone and have consumers seriously give a crap? Feels like things are going stale.
Congrats on 3 years! If I don't pick up a drink by Jan I'll have two years myself.
At the end of my twenties the floor literally dropped out from under me. I had a good job as a software dev at a fairly large media company, had a nice apartment, ambitions, etc. But I had deep dark secrets that I kept hidden, and throughout my twenties I drank and used drugs (coke/pills). I considered my upbringing pretty normal, in a middle-class neighborhood on the West Coast, and I started drinking and partying in high school like everyone else. However, I found myself drinking progressively more towards the end of my twenties, and it was no longer to party; it was more to cope with stress, anxiety, and life in general.
I ended up losing multiple high-paying jobs, flying around the country trying to restart, going to about 6 rehabs in ~2 years, losing my mind, losing all hope, wanting to die. I almost died a few separate times from acute alcohol withdrawal. I was fired from one job with an internationally recognized media company the morning after I had seizures from coming off alcohol in a rehab. It got to the point where every time I drank alcohol, I ended up detoxing in a hospital. Finally somewhere in that dark period I was able to get honest with a therapist for the first time in my life about some sexual abuse that had happened when I was younger, and about the other addictions like pornography that plagued my life in my twenties. I got sober for 1.5 years and relapsed one more time, this time it was the final wake up call I needed.
Fast forward and I'm the CEO of a startup company prepping to launch an amazing product; I've been a successful consultant helping build another product that is currently in operational use processing millions of dollars in financial transactions; I feel completely resurrected in mind, body, and soul. I have a men's meeting I go to weekly, I go to AA, I work out 4-5 days a week, I eat healthier than ever in my life, and I get regular sleep.
The urge to drink or use drugs has completely left my body and mind. I have traveled all over, spent some of the best time with my friends and family, started my life in a new city, made amends, and found tools to help deal with life on life's terms.
I hated the 12 steps and tried everything possible, including drinking, to work around it, but in the end I'm thankful it's there and I go to meetings regularly.
The most shocking thing to me now is both when I think about how far I've come, and how lucky I am to be alive.
No one ever tells you when you're young that you can live a perfectly normal, fulfilling, and happy life without using drugs or alcohol!
thanks for the hope because I can completely relate.
So glad I asked this question here, I have no reason to feel like a fuck up because there is still time to change. I am 27 and improving each day/week/month, I just get hung up on 'what if' sometimes.
> There isn't going to be another big Reddit or Digg like community.
What makes you so sure of this? I would argue that this is wildly inaccurate, as there will almost certainly be another platform (probably many as you look further into the future). Imzy was poorly designed, branded, and executed (internet safe space?). While their effort and aspirations were righteous, they didn't deliver something people wanted. Honestly, I watched that platform from the day it launched and knew it wouldn't last. Digg is old news. I've seen others, like topick.com, that tanked because they weren't innovative enough. Gab.ai is gaining some traction, but its product isn't innovative and it's too political, so it won't scale.
The barrier to entry isn't that high for a new social platform; it's only a matter of time before something newer, less corporate, less "reddit" comes along and gets people's attention. The same could be said for facebook, twitter, and others. No one has a monopoly on ideas.