I used LastPass for a while. Then it filled in my username and password (correctly) on a website without my having authenticated... It looks like there's an unencrypted local cache which is not flushed when your authentication expires or you log out. I wasn't able to reproduce it but I was sufficiently spooked to stop using it after that.
Perhaps I missed it, but are all the ants in this raft alive? It seems tricky to manage without some of them being submerged for an extended period of time.
Some of the ants will drown. But the arrangement of their bodies traps air bubbles, keeping the colony as a whole afloat and alive. It's an amazing trick to have figured out.
I've recently started using this option in gnuplot using the epslatex terminal [1]. Makes for very attractive plots and is relatively simple to use. For those looking for a Matplotlib alternative, I highly recommend it.
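As a rough sketch of what this looks like in practice (the file name and plot contents here are placeholders, not from the comment), a minimal gnuplot script using the epslatex terminal might be:

```gnuplot
set terminal epslatex color size 9cm,6cm
set output 'plot.tex'     # writes a TeX fragment; the graphics go to an EPS file
set xlabel '$x$'          # labels are typeset by LaTeX, so math mode works
set ylabel '$\sin(x)$'
plot sin(x) notitle
set output
```

The resulting fragment is then pulled into the LaTeX document with `\input{plot.tex}`, so the plot's fonts and math match the surrounding text.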
For context, it should be noted that this song was adopted following the participation of the 1st REP (1st Foreign Parachute Regiment) in the 1961 putsch against the French government. President Charles de Gaulle had embarked on a policy of self-determination for Algeria (which would culminate in independence). This was met with disbelief within the army, which had been fighting the Algerian rebels for the past seven years. As a result, there was widespread dissent, culminating in the failed 1961 putsch. Several of the generals were caught and court-martialed, while others went underground and joined the OAS terrorist group.
This stuff is incredibly useful when dealing with large matrices. The idea is that an n-by-n matrix often doesn't contain n^2 pieces of independent information, but can be written as a product of matrices of size at most n-by-r (for r << n). A famous example of this is the Netflix recommendation matrix. In this case, you can often avoid O(n^2) complexity by only dealing with such low-rank approximations.
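A minimal sketch of the idea using a truncated SVD (the sizes and variable names are just for illustration): a rank-r matrix of size n-by-n is stored exactly by its top r singular triplets, roughly 2nr + r numbers instead of n^2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5

# Construct a matrix that is exactly rank r: a product of n-by-r and r-by-n factors.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Truncated SVD: keep only the top-r singular values/vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```

Here `A_r` reproduces `A` to machine precision, because `A` really was rank r; for a full-rank matrix, the same truncation gives the best rank-r approximation in the Frobenius norm (Eckart–Young).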
It should be noted that this overview dates from 2013 and that a lot of new results have appeared since then. The author gives some good references in the abstract.
That's more closely related to 'stochastic gradient descent' for 'matrix completion'. The key difference is that Simon Funk's algorithm doesn't treat missing entries as zeros, whereas applying linear-algebra techniques to the observed data matrix (formed by putting zeros in the unobserved entries) would try to predict the missing entries as exactly zero.
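A minimal sketch of that distinction (the sizes, learning rate, and regularization are illustrative, not Funk's actual settings): SGD updates the two factor matrices only at the observed entries, so the zeros that would stand in for missing data never enter the loss.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, rank = 30, 40, 3

# Ground-truth low-rank ratings, with only ~50% of entries observed.
R = rng.standard_normal((n_users, rank)) @ rng.standard_normal((rank, n_items))
mask = rng.random((n_users, n_items)) < 0.5
obs_idx = np.array(np.nonzero(mask)).T  # (user, item) pairs of observed entries

# Small random factor matrices to be learned.
P = 0.1 * rng.standard_normal((n_users, rank))
Q = 0.1 * rng.standard_normal((n_items, rank))

lr, reg = 0.02, 0.001
for epoch in range(200):
    for u, i in obs_idx[rng.permutation(len(obs_idx))]:
        pu = P[u].copy()  # keep the old value so both updates use it
        err = R[u, i] - pu @ Q[i]
        P[u] += lr * (err * Q[i] - reg * pu)
        Q[i] += lr * (err * pu - reg * Q[i])

pred = P @ Q.T
rmse_observed = np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))
```

The loop touches only observed (u, i) pairs; `pred` then fills in the missing entries from the learned factors rather than pulling them toward zero.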
Also related is the 'alternating least squares' algorithm.
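A compact sketch of alternating least squares on the same kind of problem (sizes and the regularization constant are illustrative): with one factor matrix held fixed, each row of the other factor is the solution of a small ridge-regression problem over that row's observed entries, so the two factors are solved for in alternation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, rank = 30, 40, 3

# Ground-truth low-rank matrix with ~50% of entries observed.
R = rng.standard_normal((n_users, rank)) @ rng.standard_normal((rank, n_items))
mask = rng.random(R.shape) < 0.5

P = rng.standard_normal((n_users, rank))
Q = rng.standard_normal((n_items, rank))
reg = 0.01

for sweep in range(20):
    # Fix Q, solve a regularized least-squares problem for each user's factors...
    for u in range(n_users):
        obs = mask[u]
        G = Q[obs].T @ Q[obs] + reg * np.eye(rank)
        P[u] = np.linalg.solve(G, Q[obs].T @ R[u, obs])
    # ...then fix P and do the same for each item's factors.
    for i in range(n_items):
        obs = mask[:, i]
        G = P[obs].T @ P[obs] + reg * np.eye(rank)
        Q[i] = np.linalg.solve(G, P[obs].T @ R[obs, i])

pred = P @ Q.T
rmse_observed = np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))
```

Each half-sweep is an exact minimization, which is why ALS typically converges in far fewer passes than SGD and parallelizes well across rows.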