It isn't a good point. This was explicitly controlled for in the study. People are just making wild guesses about methodological limitations that don't exist.
>The effect size of surgeons’ birthday observed in our analysis (1.3 percentage point increase or a 23% increase in mortality), though substantial, is comparable to the impact of other events, including holidays (eg, Christmas and New Year) and weekends, which have been argued to affect the quality of patient care.
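For scale, the two figures in the quote jointly pin down the implied baseline rate; a quick back-of-the-envelope check (my arithmetic, not from the paper):

```python
# If +1.3 percentage points corresponds to a +23% relative increase,
# the implied baseline mortality rate is 1.3 / 0.23:
baseline = 1.3 / 0.23      # ~5.65%
birthday = baseline + 1.3  # ~6.95% on the surgeon's birthday
print(round(baseline, 2), round(birthday, 2))
```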
There is no such thing as "true probability" in the Bayesian interpretation; it exists only in the frequentist world.
Notice that it is possible to build a robot which flips a coin in such a way that it always lands heads - sure, you might need to build a different robot if the coin is "biased" (you probably mean its weight distribution is uneven), but it's still possible.
I wouldn't say that's the core of the paradox. The thing is that people don't have a good understanding of what probability is. We think that when we choose a gate at the beginning, with a 1/3 chance of winning, it means the universe "rolls" a die and decides whether we win or not. It is then very confusing to realize that someone else, just by revealing the contents of one of the other gates - after our decision has been made - can somehow make the universe roll a different die with 2/3 odds.
The paradox disappears when you think about probability as a tool for reasoning from incomplete information, not as a "physical" property of the system under investigation. It then makes perfect sense that, after receiving new information from the host, we should reassign our probabilities.
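A short simulation of the standard game makes the reassignment concrete (a minimal sketch, using "doors" for the gates):

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of the stay/switch strategies by simulation."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither our pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # ~1/3
print(monty_hall(switch=True))   # ~2/3
```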
How about an approach where the agent's reward is not predictability itself but its first derivative? That way the agent will be attracted to the parts of the environment where it can improve, and will avoid white-noise regions, since its model of the world doesn't generalize on those.
"This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems."
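A toy illustration of that "first derivative" reward (a hypothetical sketch, not Schmidhuber's actual formulation): reward the agent for the *drop* in its prediction error, so a white-noise stream, where error never improves, earns nothing.

```python
def learning_progress_reward(errors):
    """Reward at step t = error[t-1] - error[t], i.e. the improvement."""
    return [round(prev - curr, 3) for prev, curr in zip(errors, errors[1:])]

# A learnable signal: prediction error shrinks, so reward is positive.
learnable = [1.0, 0.6, 0.4, 0.3, 0.25]
print(learning_progress_reward(learnable))  # [0.4, 0.2, 0.1, 0.05]

# White noise: error stays flat, so reward is zero - nothing to learn.
noise = [1.0, 1.0, 1.0, 1.0]
print(learning_progress_reward(noise))      # [0.0, 0.0, 0.0]
```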
Can someone tell me if I understand this correctly? Bell famously proved that the correlation we see between two entangled particles A and B cannot be due to some common cause C. We therefore concluded that it must be that A => B or B => A, and we called it spooky action at a distance. But aren't we forgetting that there's one more way to get a correlation between A and B without resorting to spooky action at a distance - conditioning on a common effect, aka a collider:
http://www.the100.ci/2017/03/14/that-one-weird-third-variabl... Has anyone proven that this is not the case?
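For what it's worth, here is the statistical mechanism being asked about, in isolation (an illustration of collider bias only, not a claim about the physics): two independent variables become correlated once you condition on their common effect.

```python
import random

random.seed(0)

# A and B are independent standard normals; C = A + B is their common effect.
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

def corr(data):
    """Pearson correlation of a list of (a, b) pairs."""
    n = len(data)
    ma = sum(a for a, _ in data) / n
    mb = sum(b for _, b in data) / n
    cov = sum((a - ma) * (b - mb) for a, b in data) / n
    va = sum((a - ma) ** 2 for a, _ in data) / n
    vb = sum((b - mb) ** 2 for _, b in data) / n
    return cov / (va * vb) ** 0.5

print(corr(pairs))                                  # ~0: A, B independent
selected = [(a, b) for a, b in pairs if a + b > 1]  # condition on collider C
print(corr(selected))                               # clearly negative
```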
The Bell inequality shows that the shared/correlated state of "entangled" particles cannot be fixed before the act of measurement. It says nothing about "A => B" or the like. If there exists a third variable "C", it would still have to set the states of A and B at the moment of measurement, regardless of distance.
Yes, this is more or less what Bell's theorem disproves, although it's kind of a poor metaphor, because entanglement deals with indeterminate states, not just unknown ones.
It is a common misconception that chaos theory and quantum mechanics show that the universe is random. Chaos theory is a perfectly deterministic theory; it basically describes extreme sensitivity to initial conditions. Quantum mechanics, at least in the Copenhagen interpretation, does imply that the universe is random. But there are at least two more, perfectly deterministic interpretations that make exactly the same predictions as the non-deterministic one: the Many-Worlds Interpretation and Bohmian Mechanics.
Chaos theory doesn't imply randomness; it implies unpredictability. Hypersensitivity to initial conditions means you can't reliably predict the exact outcome, because it's impossible to observe all the relevant initial conditions accurately enough. I can't, for example, predict the trajectory of an actual double-rod pendulum except over fairly short time intervals, even though the governing equations are known, because in practice I cannot measure the initial conditions precisely enough. I would argue that even if you could write a master equation to predict human behavior perfectly, it would likely either be so sensitive to initial conditions that it'd be useless, or have so many initial conditions that you could never determine them all.
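The same deterministic-but-unpredictable behavior shows up in much simpler systems than a double pendulum. A minimal sketch with the logistic map at r = 4: two trajectories starting 1e-10 apart diverge within a few dozen steps, even though every step is perfectly deterministic.

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), returning the trajectory."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = logistic(0.2, 50)
b = logistic(0.2 + 1e-10, 50)  # same system, initial condition off by 1e-10
for t in (10, 30, 50):
    print(t, abs(a[t - 1] - b[t - 1]))  # the gap grows by orders of magnitude
```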
I guess I'm no expert on QM or its different interpretations, but from our point of view there's not a whole lot of difference between multiple universes and actual randomness, is there?
I get the feeling that some pieces of software constitute 100% billable/work time. SharePoint would probably be pretty high on that list. I do confess to studying BizTalk orchestrations on my own time, but that was research into the area, not the product specifically.
Hmm, why is it surprising that AI is good at poker? The way I see the game, a bad poker player just holds a model of his own hand in mind. A slightly better one also models his opponent's hand. An even better one also models his opponent's model of himself... and so on, recursively. And who's really good at recursion? Computers.
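That recursion is essentially what game theorists call "level-k" reasoning. A hypothetical sketch in rock-paper-scissors (my toy example, not how poker bots actually work): a level-0 model just plays a fixed move, and each higher level best-responds to a model of the level below it.

```python
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def level_k_move(k, level0_move="rock"):
    """Level 0 plays its favorite move; level k beats a level-(k-1) opponent."""
    move = level0_move
    for _ in range(k):
        move = BEATS[move]  # best-respond to the level below
    return move

print(level_k_move(0))  # rock
print(level_k_move(1))  # paper    (beats rock)
print(level_k_move(2))  # scissors (beats paper)
```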