The intended audience of the AMS's Graduate Student Section is mathematics graduate students. Still, it's hard to say whether symplectic manifolds have any easily explained analogy for lay audiences. The subject originally arose from more abstract formulations of classical mechanics.
I'm sorry that I can't find the reference for this right now, but I read a paper a while back showing that gradient descent (particularly stochastic gradient descent) usually finds good solutions to high-dimensional neural networks, and that these solutions are very close to each other, although they are not absolute extrema. What's more, you usually want to avoid absolute extrema, because they often suffer from overfitting.
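The flavor of that result can be seen even in one dimension. A toy, hand-rolled sketch (my own illustration, not the paper's experiment): gradient descent on a nonconvex loss with many local minima settles into different minima from different random starts, all with similarly low loss, none guaranteed to be the global minimum.

```python
import math
import random

def loss(x):
    return 0.01 * x * x + math.sin(x)  # gentle bowl plus ripples: many local minima

def grad(x):
    return 0.02 * x + math.cos(x)

def descend(x, lr=0.1, steps=500):
    # plain gradient descent from a given start
    for _ in range(steps):
        x -= lr * grad(x)
    return x

random.seed(0)
# five random restarts; each lands in some nearby local minimum
final_losses = [loss(descend(random.uniform(-6, 6))) for _ in range(5)]
```

All the final losses end up well below the typical starting loss, even though the restarts land in different valleys.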
Ah, if I misread, then quoting from the results section of the linked paper:
For the malignancy prediction objective, the algorithm obtained an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.89, 0.93), with specificity of 77.3% (95% CI: 69.2%, 85.4%) at a sensitivity of 87%.
I haven't read the paper's methods, but the data set size is small-ish for this sort of analysis.
The value of the false positive rate is that it tells you the probability of a false alarm. Depending on the classification exercise, you may be willing to tolerate false positives: here the consequence of a missed call (a false negative) is significantly greater than that of an unwarranted checkup from a human doctor.
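As a quick refresher on how these rates relate (the counts below are made up for illustration, not taken from the paper): sensitivity, specificity, and the false positive rate all fall directly out of a confusion matrix, and the false positive rate is just one minus the specificity.

```python
# Hypothetical confusion-matrix counts (illustrative only, not the paper's data).
tp, fn = 87, 13    # out of 100 actual positives
tn, fp = 773, 227  # out of 1000 actual negatives

sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
false_positive_rate = fp / (fp + tn)  # probability of a false alarm

# FPR is the complement of specificity
assert abs(false_positive_rate - (1 - specificity)) < 1e-9
```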
How the flying heck did your psychiatrist let you stop taking your medication cold turkey? There's usually a weaning off process for any sort of psychiatric drug.
Ketamine is usually prescribed to people with "treatment"-resistant severe depression, where the "treatments" include other types of medication, exercise, and meditation. That aside, severely depressed people often can't do even basic things like showering, going to work, or making dinner. Ketamine treatment offers a severely depressed person a way to bootstrap themselves into a state where they _can_ start doing the things that will allow them to support themselves.
Few seem to understand how debilitating treatment-resistant depression can be. Source: my wife's cousin, who, when s(he)'s OK, is super smart, capable, and lives a good life. When s(he)'s in the depression hole, s(he)'s nearly comatose. Ketamine has helped greatly in his/her case.
It's just that the ghost of my 5th grade English teacher would have emerged from the mists to point out that "they" is plural, whereas I was referring to an individual. Can't win.
they
pronoun; possessive their or theirs; objective them.
1. nominative plural of he, she, and it.
2. people in general: "They say he's rich."
3. (used with a singular indefinite pronoun or singular noun antecedent in place of the definite masculine he or the definite feminine she): "Whoever is of voting age, whether they are interested in politics or not, should vote." "A person may apply only if they are over 21." "They have been an actor since childhood."
"The non-recursive way is more efficient as the CLR does not have to keep pushing and popping its call stack (which is quite slow). The non-recursive way is unfortunately harder to code. So the challenge was on!"
In this dissertation I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn what there is to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also touches on modern-day speculation about a fully automated data-driven science.
I investigate, in particular, a specific mathematical definition of a universal prediction method that goes back to the early days of artificial intelligence and that has a direct line to modern developments in machine learning. This definition essentially aims to combine all possible prediction algorithms. An alternative interpretation is that this definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations.
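The "combine all possible prediction algorithms" idea can be illustrated in miniature by a Bayesian mixture over a finite pool of predictors, reweighted by how well each has predicted the data so far. This is only a toy stand-in for the Solomonoff-style mixture the dissertation discusses; the expert pool and the sequence below are my own illustration.

```python
# Toy mixture predictor over binary sequences: each "expert" assigns a fixed
# probability to the next bit being 1; the mixture reweights experts by their
# likelihood on the data seen so far (a finite caricature of a universal mixture).

experts = [0.1, 0.5, 0.9]                       # each expert's P(next bit = 1)
weights = [1 / len(experts)] * len(experts)     # uniform prior over experts

def predict(weights):
    # mixture probability that the next bit is 1
    return sum(w * p for w, p in zip(weights, experts))

def update(weights, bit):
    # Bayes' rule: reweight each expert by its probability of the observed bit
    new = [w * (p if bit == 1 else 1 - p) for w, p in zip(weights, experts)]
    total = sum(new)
    return [w / total for w in new]

data = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]   # a mostly-ones sequence
for bit in data:
    weights = update(weights, bit)
# After seeing mostly ones, most of the weight has shifted to the 0.9 expert,
# so the mixture's next-bit prediction moves toward 0.9.
```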
The conclusions of my investigation are negative. I show that the proposed definition cannot be interpreted as a universal prediction method: this is exposed by a mathematical argument that the definition was in fact intended to overcome. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is itself problematic.