NotSwift's comments

This is worrying. The only reasonable explanation for such a high infection rate is that some animals are spreading it to other animals. So even once we are able to vaccinate all humans, we will not be able to eradicate COVID, because there will still be a reservoir in other animals.


I am a bit disappointed that no one has mentioned a big problem with all these machines: security. Most of these devices were developed by companies with only limited experience in security, and they offer a tempting attack surface. Once compromised, they can be used to attack other devices on your network. Most of them run proprietary software, so it is not possible to audit the code. They also require a correctly configured network, which is not at all easy to achieve. There have already been plenty of incidents where surveillance cameras were exposed on the internet.


Even if they do have security experience (e.g. Google), I would expect that for many vendors security is WAY, WAY down the list. And security doesn't end when they ship the device; it must be maintained through every update, and even after the vendor stops supporting the device.


As we have seen with the recent heat wave in Canada, some global circulation patterns are surprisingly sensitive to disruption.


It can be very difficult to determine from which country an attack originates. For example, we know that North Korea does a lot of hacking, yet it has almost no internet connectivity. Clever hackers will always try to route their attacks through some other country.


It is pretty strong evidence that the step from other molecules to RNA/DNA happened only once. Our molecules definitely changed their environment, so it is probable that they out-competed their mirror-image molecules.


This is sound advice. But I would suggest choosing words for which you can find some mnemonic to remember them, e.g. by combining them into a nonsensical sentence.


I have usually also included one made-up word, derived from an existing word, alongside the other words, just in case, you know... If someone were to brute-force three-word combinations, that would make the attack much weaker.
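
To put rough numbers on that (a back-of-the-envelope sketch; the 7,776-word list size is the standard Diceware assumption, not anything from the comments above):

    import math

    words = 7776          # Diceware-style list size (assumption)
    for n in (3, 4):
        combos = words ** n
        bits = n * math.log2(words)
        print(f"{n} words: {combos:.1e} combinations (~{bits:.0f} bits)")

    # 3 words: 4.7e+11 combinations (~39 bits)
    # 4 words: 3.7e+15 combinations (~52 bits)
    # A made-up word that appears on no list forces the attacker off the
    # dictionary entirely, which is exactly the point above.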


Adding feature upon feature to a language that does not have a good foundation generally leads to disappointing results. My favorite examples are JavaScript, Perl and C++.


It's especially a problem with dynamic languages: the lack of safe refactoring makes changing the language very difficult.

Everybody learned the wrong lesson from the Python 3 debacle. The lesson everybody learned was "Big language changes are bad, don't do them". Whereas the lesson everybody should have learned was "Writing code in a dynamically typed language is like carving it in stone, it's really expensive to upgrade it later".
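
As a concrete illustration (my own toy example, not anything from the migration itself): a Python 2 idiom that Python 3 rejects only at runtime, one call site at a time, whereas a static checker could have flagged every affected caller up front:

    def checksum(data):
        # Python 2 idiom: iterating a str yields 1-char strings, so ord() works.
        # In Python 3, a caller that now passes bytes gets ints from iteration,
        # and the bug only surfaces when this exact call runs:
        #   checksum(b"payload")
        #   TypeError: ord() expected string of length 1, but int found
        return sum(ord(ch) for ch in data) % 256

    # With an annotation, a static checker (e.g. mypy) flags every
    # bytes-passing call site before the code ever runs:
    def checksum_typed(data: str) -> int:
        return sum(ord(ch) for ch in data) % 256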


You should probably not be very honest. Any criticism will likely put you in a negative light, and you might want references from that company at some later point in time.


These are some shocking differences. How did you come up with this expression?


After looking for a text generator for grammars, I found https://github.com/dmbaturin/bnfgen. Then, to try to understand the underlying algorithm, I came up with a simple C program for a simple expression grammar; see it here: https://github.com/dmbaturin/bnfgen/issues/2#issuecomment-89... There is also a thread on the SQLite forum: https://sqlite.org/forum/forumpost/6885cf9e21
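
The core algorithm is a random derivation: start from the grammar's start symbol and repeatedly replace each nonterminal with a randomly chosen alternative, with a depth limit so recursion terminates. A minimal Python sketch of that idea (the grammar and the shortest-alternative cutoff are my own illustration, not bnfgen's actual implementation):

    import random

    # Toy expression grammar: nonterminal -> list of alternatives,
    # each alternative a sequence of symbols. Symbols that appear as
    # keys are nonterminals; everything else is a terminal.
    GRAMMAR = {
        "expr":   [["term"], ["expr", "+", "term"], ["expr", "-", "term"]],
        "term":   [["factor"], ["term", "*", "factor"]],
        "factor": [["number"], ["(", "expr", ")"]],
        "number": [["0"], ["1"], ["2"], ["7"]],
    }

    def generate(symbol, depth=0, max_depth=8):
        if symbol not in GRAMMAR:            # terminal: emit as-is
            return symbol
        alts = GRAMMAR[symbol]
        if depth >= max_depth:               # force termination: take the
            alts = [min(alts, key=len)]      # shortest alternative
        chosen = random.choice(alts)
        return " ".join(generate(s, depth + 1, max_depth) for s in chosen)

    print(generate("expr"))   # e.g. "( 1 + 7 ) * 2"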


The input space for these neural networks is huge: roughly the number of colors raised to the power of the number of pixels. What neural networks do is subdivide the input space and assign a label to each region. Because of the high dimension of the input space, it is very likely that it is possible to find images that lie on the boundary between two labels. Using more advanced techniques might make it more difficult for an adversary to find such examples, but it does not eliminate their existence.
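
Finding a point just across a label boundary can be as simple as following the loss gradient. Here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; model, image and label are placeholder assumptions, not anything from the article:

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, eps=0.01):
        # Ask autograd for the direction in input space that most
        # increases the classification loss.
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # A small step in that direction is often enough to cross a
        # label boundary while staying imperceptible to a human.
        return (image + eps * image.grad.sign()).detach()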

One of the big problems with neural networks (and other AI techniques as well) is that they cannot explain their classifications, which makes it difficult to determine whether a classification is correct. Most people seriously underestimate how difficult classification is; humans do it easily only because our hardware has been optimized by eons of evolution. Neural networks are only in their infancy.


The problem goes much deeper than these adversarial examples. The main issue is the uncomputability of Solomonoff induction (or the No Free Lunch theorems for search and optimization, or any of the other hard limiting theorems).

In short, it's not only that you can devise adversarial examples that find the blind spots of the function approximator and fool it into misprediction; it's that for any learning or optimization algorithm you can abuse its priors and biases and create an environment in which it will perform terribly. This is a fundamental and inherent feature of how we go about machine learning (equating it with optimizing functions), and we will need a paradigm shift to get around it.
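
A toy demonstration of abusing a prior (my own construction, not from any of the cited results): a 1-nearest-neighbour learner encodes the prior that nearby points share labels, and the parity function violates that maximally, since flipping any single bit flips the label:

    import itertools
    import random

    n = 10
    points = list(itertools.product([0, 1], repeat=n))

    def parity(x):
        return sum(x) % 2

    random.shuffle(points)
    train, test = points[:512], points[512:]

    def predict(x):
        # Locality prior: copy the label of the nearest training point
        # (Hamming distance). For parity, every distance-1 neighbour has
        # the OPPOSITE label, so the prior is maximally misleading.
        nearest = min(train, key=lambda t: sum(a != b for a, b in zip(x, t)))
        return parity(nearest)

    accuracy = sum(predict(x) == parity(x) for x in test) / len(test)
    print(f"1-NN accuracy on parity: {accuracy:.2f}")   # near 0.0, far below chance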

It's curious to me how most of these results have been known for decades, yet most researchers seem dead set on ignoring them.


I think machine learning researchers are well aware that successful optimisation is only possible with the right priors. This is explicit in Bayesian machine learning, but also implicit in neural networks in the choice of architecture, optimisation algorithm and hyperparameters. It's a well-discussed problem, and a lot of researchers have a serious background in optimisation, theoretical machine learning and other related areas.


What exactly are the right priors for general intelligence? And keep in mind: whichever prior you choose, I can design a learning problem where it will lead you astray.

This paper provides some interesting results on the weakness inherent in universal priors: https://arxiv.org/abs/1510.04931


Related question: What are the adversarial examples for human intelligence? We know some for the visual and auditory systems, but what about the arguably general intelligence of humans?

Maybe we can work our way backwards from the adversarial examples to the inductive biases?


'Thinking Fast and Slow' is basically all about the rough edges of human thinking.

The interesting tradeoff with ML systems is that you trade lots of individual human crap for one big pile of machine crap. The advantage of the machine crap is that you can actually go in and find systemic problems and work on fixing them at a 'global' level. On the human side, you're always going to be stuck with an unknown array of individual human biases which are incredibly difficult to correct.


I think fractional reserve banking has done a pretty good job of fooling everyone.


That's for reinforcement learning, right? What is the adversarial learning problem in, say, classification based on Solomonoff?

If hypercomputation is possible, then anything based on Kolmogorov complexity would be SOL, but if not... is Solomonoff induction just too expensive in practice?


Regarding: "What neural networks do is subdivide the input space and assign a label to it."

I've made such plots when the input is 2D: break the input space into discrete chunks/pixels, have the net classify each one, and color that pixel according to the classification. What usually happens is something like what an SVM would produce: large contiguous regions of the same class.
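
For reference, a minimal sketch of that kind of plot (the dataset and architecture here are my assumptions, not the originals):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    # Train a small net on a 2-D toy problem.
    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

    # Classify every "pixel" of a grid covering the input space and
    # colour it by the predicted class.
    xx, yy = np.meshgrid(np.linspace(-2, 3, 300), np.linspace(-1.5, 2, 300))
    zz = net.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

    plt.contourf(xx, yy, zz, alpha=0.3)       # large contiguous regions per class
    plt.scatter(X[:, 0], X[:, 1], c=y, s=10)
    plt.show()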

But when the input space is high dimension, and the net is super deep, who is to say what this classification looks like... My guess is it looks less like oil and water carefully poured in a bottle, and more like oil and water shaken vigorously in a bottle.

Do you have any citations about how NNs subdivide the input space, or how regular it is?

The way I have thought of it so far is that we humans subdivide the input space, then stick those blocks into a NN that could have a huge Lipschitz bound, and observe the output of a highly irregular function.

When you say "what neural networks do is subdivide the input space and assign a label to it", it sounds as if subdividing the input space helps solve the NN's problem (minimizing the loss). But it seems to me that this is not really related to minimizing the loss, partly because the NN never sees most of the input space during training, and partly because it is not relevant to what humans actually want: generalization.

