I read that article earlier today, and gotta say I was pretty disappointed that Vox chose the title "Obama: The internet is “the single biggest threat to our democracy”" based on this exchange:
> Obama: Now you have a situation in which large swaths of the country genuinely believe that the Democratic Party is a front for a pedophile ring...I was talking to a volunteer who was going door-to-door in Philadelphia in low-income African American communities, and was getting questions about QAnon conspiracy theories.
> Goldberg: Is this new malevolent information architecture bending the moral arc away from justice?
> Obama: I think it is the single biggest threat to our democracy.
The second paragraph in the article is more in line with what he actually said:
> Now he worries that the internet and social media have helped create “the single biggest threat to our democracy.”
There's a big difference between saying that the internet is the single biggest threat to our democracy, and saying that it (and more specifically, social media) has helped some people spread disinformation and "fake news" - which is what Obama sees as the threat in this context.
> I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because supremely confident proclamations of universal truth that are soon proven wrong is pretty much the defining trait among that community (c. f. various proclamations that solar power is useless, CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad..", and everything Paul Graham has said, ever).
You do have a point here, however polemically (and somewhat ironically) phrased, but I just want to point out that the Paul Graham reference is probably not the best example of the "tech nerd community" trait you're describing. At least this particular community doesn't quite believe that everything Paul Graham says is true; a couple of examples:
I could share a lot more HN discussions, and some of his essays where he pretty much describes the trait you're taking issue with here -- but I'm already dangerously close to inadvertently becoming an example of a tech nerd who believes "everything Paul Graham has said, ever" is absolutely true ;) I don't, and I know for a fact that he doesn't think so either (there's an essay about that too).
Grandparent doesn't understand information theory. True superresolution is impossible. ML hallucination is a guess, not actual information recovery. Recovering information from nowhere would violate basic information theory (the data-processing inequality). If grandparent could do it, they would be immediately awarded the Shannon Award, the Turing Award, and the Nobel Prize in Physics.
True superresolution is impossible, but a heck of a lot of resolution is hidden in video, without resorting to guesses and hallucination.
Tiny camera shakes on a static scene give away more information about that scene - it's effectively multisampling of the static scene. (If I had to hazard a guess, any regular video could "easily" be upscaled 50% without resorting to interpolation or hallucination.)
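The multisampling idea can be sketched with classic shift-and-add super-resolution. This is a minimal NumPy illustration, assuming the sub-pixel shifts between frames are already known (real pipelines estimate them by registering the frames against each other); `shift_and_add` is a made-up name, not a library function.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Accumulate shifted low-res frames onto a grid `scale` times finer.

    frames: list of 2-D arrays, all the same shape
    shifts: list of (dy, dx) sub-pixel offsets in low-res pixels
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Each low-res sample lands at its shifted position on the fine grid.
        ys = np.rint(np.arange(h)[:, None] * scale + dy * scale).astype(int)
        xs = np.rint(np.arange(w)[None, :] * scale + dx * scale).astype(int)
        ys = np.clip(ys, 0, h * scale - 1)
        xs = np.clip(xs, 0, w * scale - 1)
        np.add.at(acc, (ys, xs), frame)
        np.add.at(count, (ys, xs), 1)
    # Average overlapping samples; cells no frame touched stay zero.
    return acc / np.maximum(count, 1)
```

With four frames shifted by half a pixel in each direction, every cell of the 2x-finer grid receives a real measurement - no pixel is interpolated or hallucinated.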
Our wetware image processing does the same - look at a movie shot at the regular 24fps where people walk around. Their faces look normal. But pause any given frame, and it's likely a blur. (But our wetware image processing likely does hallucination too, so it's maybe not a fair comparison.)
It's not temporal interpolation - it's not interpolation at all. It's using accurate data from other frames to fill in the current frame: one data source augmenting another.
Super resolution can and does work in some circumstances.
By introducing a new source of information (the memory of the NN) it can reconstruct things it has seen before, and generalise this to new data.
In some cases this means hallucinations, true. But in other times (eg text where the NN has seen the font) it is reconstructing what that font is from memory.
But the thing is, in that case the information contained in the images was actually much less than what we are led to believe.
So if we are reconstructing letters from a known font, we are essentially extracting 8 bits of information from the image. I'm pretty certain that if you degrade the image to an SNR equivalent of below 8 bits, you will not be able to extract the information.
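The arithmetic behind this is quick to check. The 8-bit figure fits a full ASCII byte; for a choice among just the 26 letters it's closer to log2(26) ≈ 4.7 bits:

```python
import math

# One of 26 equally likely letters carries log2(26) bits of information:
letter_bits = math.log2(26)   # ≈ 4.70 bits
# A full ASCII byte (256 possibilities) carries log2(256) bits:
byte_bits = math.log2(256)    # 8.0 bits

print(round(letter_bits, 2), byte_bits)  # → 4.7 8.0
```

Either way, the point stands: a glyph drawn from a small known alphabet carries only a handful of bits, far less than the raw pixel count suggests.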
Lossy image compression creates artifacts, which are in a way a form of falsely reconstructed information - information that wasn't in the original image. Lossless compression algorithms work by reducing redundancy, but they don't create information that wasn't there (which makes them very different from super-resolution algorithms).
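The lossless case is easy to demonstrate with the standard library's zlib: the round trip is bit-exact, and the size reduction tracks how much redundancy the input actually had (the inputs here are just illustrative):

```python
import os
import zlib

redundant = b"na" * 5000          # highly redundant input
random_ish = os.urandom(10000)    # almost no redundancy to remove

for data in (redundant, random_ish):
    packed = zlib.compress(data)
    # Lossless: decompression reproduces the input bit for bit.
    assert zlib.decompress(packed) == data
    print(len(data), "->", len(packed))
```

The redundant input shrinks dramatically while the random one barely shrinks at all - the compressor removes redundancy but never invents (or discards) information.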
Not if it’s written text and you are selecting between 26 different letters. It’s a probabilistic reconstruction, but that’s very different to a hallucination.
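A toy version of that probabilistic reconstruction, assuming a made-up 5x5 "font" (random glyphs standing in for real letter shapes): given a noisy glyph, pick the template it most resembles, rather than inventing pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5x5 binary "font": one random glyph template per letter.
templates = {chr(ord("a") + i): rng.integers(0, 2, (5, 5)).astype(float)
             for i in range(26)}

def read_letter(degraded):
    """Pick the letter whose template is closest to the degraded glyph."""
    return min(templates, key=lambda c: np.sum((templates[c] - degraded) ** 2))

noisy_q = templates["q"] + rng.normal(0.0, 0.3, (5, 5))  # heavy pixel noise
guess = read_letter(noisy_q)  # at this noise level, almost always "q"
```

The answer is a selection among 26 known possibilities, weighted by the evidence in the image - which is exactly why it's a reconstruction, not a hallucination.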
> I wasn't going to start with ML - just send the audio if it was over a certain threshold for a certain amount of time (say, 5 or 10 seconds of noise that's above the normal ambient background for that room).
In my experience this might very well be sufficient. I have a baby monitor (eufy spaceview) that does just that, and as long as you can set a volume threshold to account for expected noise (e.g. a white noise device) I doubt an ML-powered version will add much value for this use case.
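The threshold-plus-duration idea is simple enough to sketch (hypothetical function and parameters, not what any actual monitor firmware does): flag an alert only when the RMS level stays above a threshold for a sustained run, so brief spikes are ignored.

```python
import numpy as np

def sustained_loudness(samples, rate, threshold, min_seconds=5.0, window=0.5):
    """True if the RMS level stays above `threshold` for `min_seconds`."""
    win = int(rate * window)             # samples per analysis window
    needed = int(min_seconds / window)   # consecutive loud windows required
    run = 0
    for start in range(0, len(samples) - win + 1, win):
        rms = np.sqrt(np.mean(samples[start:start + win] ** 2))
        run = run + 1 if rms > threshold else 0
        if run >= needed:
            return True
    return False
```

Tuning `threshold` above the room's ambient level (white noise machine included) is the analogue of the volume dial on the monitor.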
I also have a eufy Indoor Security Cam, which does some more intelligent audio processing, such as detecting when someone is crying, and is able to send notifications based on those alerts. It has worked flawlessly in my experience, but also hasn't provided additional value over the baby monitor that only detects noise over a certain volume.
In the end it doesn't really matter what kind of excessive noise you're hearing from your baby's room. If it's loud enough to trigger the baby monitor, I'd like an alert even if it doesn't sound like crying.
The security camera we have also doesn't trigger an alarm on Apple devices (aside from the notification sound), so that's another thing to take into consideration.
I'm sure GP would agree, as that seems to be the reason for mentioning that the title was editorialized in the first place (and probably also why OP chose the most inflammatory quote in the source rather than the more neutral document title).
Inflammatory, click-baity titles can be "catchy" for sure, but that doesn't mean we have to use them here. The fact that it uses "wording from the legal document" doesn't make it a good title, regardless of how catchy it is.
> Dumping/delegating mundane tasks onto a high income earner doesn't jive with common sense
From the article: "Caring for your child, therefore, produces not only a strong bond but a neurochemical reward, inducing feelings of happiness, contentment and warmth"
I suspect that some of the "mundane tasks" you have in mind are exactly the ones that produce this bond - like changing a diaper when you hear your baby crying at night. That's probably difficult if you're using earplugs, and I guess such tasks might not make sense for some high income earners, though.