It's important to note that this new ad slot "will allow advertisers to promote their apps across the whole network, rather than in response to specific searches." This is exactly Facebook's game, and the underlying technology they use to target these ads could easily be repurposed for oh, I don't know...a new search engine?
Your comment implies that Apple somehow stopped Facebook from allowing network-wide ads on the Facebook app with the privacy changes, which is not true.
Coincidentally, iOS 13 updated iBeacon in a way that also made it a lot less useful to developers (although there was a clear privacy benefit for users).
I was doing academic research on this exact topic, and Chris's book is the only one I found that breaks down the techniques one at a time this way. This book is nothing short of brilliant.
Agreed, this is really a lovely, helpful book that's clear and easy to read, and introduces a lot of useful ideas in enough detail to decide which one to use in a given situation.
To be fair, linear models are statistical models, and many statistical models are explainable.
A random forest is not explainable per se. A single decision tree is explainable, but an ensemble of trees is not, at least not in the sense that statistical models are.
In a linear model, the coefficients give you a direct numerical sense of each linear association, whereas a random forest gives you OOB error and feature importances, which aren't as clear cut. It's also really dependent on data quality, and most machine learning models that aren't statistical models depend heavily on large amounts of data to overcome the model's weaknesses (for random forests, selection bias).
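To make that contrast concrete, here's a minimal sketch of the difference in what each model actually reports back; the dataset and settings are purely illustrative:

    # Minimal sketch: what "explanation" looks like from each model,
    # using scikit-learn's bundled diabetes dataset as an illustration.
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    X, y = load_diabetes(return_X_y=True, as_frame=True)

    # Linear model: each coefficient is a signed, per-unit statement
    # of a linear association you can read off directly.
    lin = LinearRegression().fit(X, y)
    for name, coef in zip(X.columns, lin.coef_):
        print(f"{name}: {coef:+.1f}")

    # Random forest: feature_importances_ only rank how much each
    # feature reduced impurity across the ensemble; no sign, no effect
    # size. The OOB score is just an aggregate fit metric.
    rf = RandomForestRegressor(n_estimators=200, oob_score=True,
                               random_state=0).fit(X, y)
    print("OOB R^2:", rf.oob_score_)
    for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")

The coefficients answer "how much, and in which direction"; the importances only answer "which features mattered, roughly."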
Statistics, to toot my own horn a little, has vast breadth and depth in inference/explainability, with a huge literature spanning different fields (e.g. econometrics with time series, biostatistics with longitudinal data, survival analysis, etc.). There are also techniques for building parsimonious models, like logistic regression with purposeful selection by Dr. Lemeshow and Dr. Hosmer (sketched below), and different ways to build inference models versus predictive/forecasting models.
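For anyone who hasn't seen it, purposeful selection goes roughly like this. This is a hedged sketch of just the screen-and-prune loop, assuming a pandas DataFrame with a binary outcome; the full Hosmer-Lemeshow procedure also checks for confounding via changes in coefficients, which I've omitted:

    # Rough sketch of purposeful selection for logistic regression
    # (Hosmer & Lemeshow). `df`, `outcome`, and `candidates` are
    # assumed inputs; thresholds are the commonly cited defaults.
    import statsmodels.api as sm

    def purposeful_selection(df, outcome, candidates,
                             screen_p=0.25, keep_p=0.05):
        # Step 1: univariable screening; keep anything with p < 0.25.
        screened = []
        for var in candidates:
            m = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
            if m.pvalues[var] < screen_p:
                screened.append(var)
        # Step 2: fit the multivariable model, then iteratively drop the
        # least significant predictor until everything left has p < 0.05.
        current = list(screened)
        while current:
            m = sm.Logit(df[outcome], sm.add_constant(df[current])).fit(disp=0)
            pvals = m.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] < keep_p:
                break
            current.remove(worst)
        return current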
I think many people know the general idea but not the depth, because it's a field with many deep rabbit holes.
Right now I code two days a week and do academic/journalistic writing the other three days a week. The work styles are totally different.
With programming, I work at nearly full capacity until I'm almost done with the task, and then get sort of distracted during that last little bit of testing and documenting.
Writing, though, I find far less conducive to a "flow" state. I alternate pretty much the entire day between 30-60 minute chunks of working and 10-20 minute chunks of walking outside or checking HN.
In the rare instances when I have to switch between the two modes, my productivity basically collapses.
A lot of people are talking about third-party application development in this thread. It's also worth considering the opposite side of that: data portability. GDPR already requires companies to allow users to easily export all their data [1], and if the US did the same, it could really help mitigate the network effects that social media companies (i.e. Facebook's properties) use to hold onto their dominance.
Competitors to Facebook really need to be aware of the power of this law. In particular, it says:
"The data subject shall have the right to ... transmit those data to another controller without hindrance from the controller to which the personal data have been provided, where ... the processing is carried out by automated means... [, and] shall have the right to have the personal data transmitted directly from one controller to another, where technically feasible."
If the data transfer is automated and direct, then a Facebook user should be able to request that their posts on Facebook be continually synched over to their Mastodon account (for example).
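In practice, a one-way sync like that could be quite small. Here's a hedged sketch, assuming the user has already granted OAuth tokens on both sides; the endpoints are the public Facebook Graph API and the Mastodon REST API, and pagination, deduplication, rate limits, and media attachments are all omitted:

    # Hedged sketch of a one-way Facebook -> Mastodon sync. Tokens and
    # the instance URL are placeholders, not real values.
    import requests

    FB_TOKEN = "..."        # user-granted Facebook Graph API token
    MASTO_TOKEN = "..."     # user-granted Mastodon API token
    MASTO_INSTANCE = "https://mastodon.example"  # hypothetical instance

    # Pull the user's recent posts via the Graph API.
    resp = requests.get(
        "https://graph.facebook.com/v19.0/me/posts",
        params={"fields": "message,created_time", "access_token": FB_TOKEN},
    )
    resp.raise_for_status()

    # Re-publish each text post as a Mastodon status.
    for post in resp.json().get("data", []):
        if "message" in post:
            requests.post(
                f"{MASTO_INSTANCE}/api/v1/statuses",
                headers={"Authorization": f"Bearer {MASTO_TOKEN}"},
                data={"status": post["message"]},
            ).raise_for_status()

Run something like that on a schedule and you've got the "continually synched" part; the point is that nothing about it is technically infeasible, which is exactly the standard the regulation sets.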
As Facebook is already a member of the Data Transfer Project [1], it would look very bad for them if they only allowed data to be automatically synched one way, especially if most Fediverse implementations supported two-way synch.
Of course, once your data is fully synched, you'd never need to actually browse Facebook itself again, and your friends on both Facebook and Mastodon would be able to read your Mastodon posts. (I suppose Facebook would similarly cite "data protection reasons" for why they can't synch your friends' Facebook posts out to you on another site, though.)
Anyway, I'm sure an EU anti-trust investigator would enjoy the opportunity to examine claims against a big US social media company like Facebook, especially if it were a small EU social media provider bringing the complaint.
This view of "traditional monopolies" isn't actually that traditional. From the end of the Gilded Age through the LBJ administration, antitrust law was focused on reducing market concentration, barriers to entry, and anti-competitive behavior. Only in the '60s did Robert Bork and the Chicago School start to chip away at this way of thinking, and by the end of the Reagan administration, Bork's idea that a firm is only a monopoly if it reduces consumer welfare (i.e. raises prices) had become the prevailing standard. [0]
Google and Facebook have taken full advantage of this new definition. They provide their products for free, acquire competitors without a peep from the FTC (Waze, Instagram, WhatsApp), invest heavily in corporate lobbyists, and have the fastest connections thanks to colocated data centers and their own fiber optic cables. How can a new competitor be expected to lay undersea fiber optic cables?
[0] Tim Wu, "The Curse of Bigness: Antitrust in the New Gilded Age", 2018
"Microsoft spent more than a decade, investing tens of billions of dollars and absorbing an astonishing $12.4 billion in cumulative losses, to establish Bing as a credible competitor. It is cheaper and easier to build a manned space program than it is to build a modern search engine."
- Matthew Hindman, "The Internet Trap", 2018 (p. 174)
Right, the issue was to make it credible. It was never difficult for someone to say "screw this, I'll use a different search engine instead" and be using it in minutes. Google would even find it for them!
The last sentence of the quote is false; only a small fraction of that money was spent on building the search engine itself. Again, the relevant point is whether you can offer an alternative, which MS (and several others) have done with a fraction of that $12 billion. If it's difficult to offer something better because the top provider is legitimately the best at it, that's not the kind of monopoly we should worry about.
Imagine the comparable investment needed to build a different OS that runs MS Word and get a home user to use it instead of Windows/Word. Or to run an alternate rail line from San Jose to San Francisco.
The problem with this is twofold:
1. How do we define the "best" search engine?
2. How can we know whether the only reason Google stays the best among the incumbents is its accumulated network value? E.g., if a competitor had access to the same amount of historical search data as Google, could they use it in a better way?
The argument for this point of view is that regulations like GDPR (or other burdens like corporate taxes) are easier to accommodate for companies with gobs of capital and armies of lawyers/accountants than for companies lacking those resources.
A tall person may not appreciate flood waters, but a short person appreciates them even less.