Did anyone else notice the claim that “The computers had never been connected to the Internet,” followed in the next paragraph by “On another screen, Obermayer opened iHub, the encrypted Facebook-like forum that the ICIJ created to make collaboration easier across borders”? Something doesn’t add up here...
Those mining memory-bandwidth-hard cryptocurrencies, like Zcash, may want to evaluate these. According to the article, they’ll offer 1200 GB/s, vs. 900 GB/s for the top Nvidia Volta card. (Of course, it’s quite likely that this increase in memory bandwidth isn’t worth it for reasons of cost, ISA suitability for the particularities of Equihash, etc., but it’s hard to say without a lot of thinking-through.)
I’m embodying an archetype here, but I can’t resist.
The use of QTermWidget makes this almost entirely an exercise in building a hello-world Qt app and very little about building a terminal. That is all very well if you are looking to learn Qt—which is by no means an unworthy aim—but I wouldn’t use the adjective “minimal” to describe any Qt app, no matter how short its main().
If you are actually looking for a minimal terminal, whose source code is intelligible and readily customizable, look no further than https://st.suckless.org/
Good suggestion! I found st a joy to read. Only a couple of C files, few frills and a straightforward coding style. I can see what the suckless folks are getting at.
Slightly OT here but I was thinking about what a suckless UI system would be like - one without excessive layering and as few moving parts as possible. I'm fairly sure it should be possible by avoiding truetype fonts and using the framebuffer directly. That would preclude ubiquitous animation but smooth scrolling and dragging is perfectly doable even with a single buffer.
What would be the win in avoiding TrueType? Seems like we could easily convert any TTF files to PostScript Type 1 if there were an advantage to be gained.
This might be a misconception, but I was under the impression that TrueType fonts are a complexity and security minefield. In particular, I noticed an st developer complaining (on the mailing list, probably) about the lack of a suckless font library, which is when I concluded that it might be better to just do without them.
There is libdrw in suckless now, which still uses Xft and fontconfig. Fontconfig and Xft are ugly and require too much internal knowledge to be useful. The next logical layer in Linux evolved as Pango and Cairo, both of which of course added HTML-style formatting and vector drawing. None of that is needed to simply draw some text somewhere. And that is all a suckless font rendering library should do: give it a font string and a position, and it renders the text there, without the caller having to care about font specifics.
Yeah, I too was hoping for something substantial, e.g. a description of how to allocate and manage a pseudo-terminal (especially since I'll be needing that for a project of mine quite soon). I'll probably be looking at the st(1) source code, then.
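For anyone else curious about the pty part: here's a minimal sketch in Python of what allocating a pseudo-terminal looks like (a terminal emulator in C would do the equivalent with posix_openpt/grantpt/unlockpt or openpty(3), which is what st does):

```python
import os

# Allocate a master/slave pty pair; the slave end behaves like a
# terminal device, the master end is what the emulator reads/writes.
master_fd, slave_fd = os.openpty()

# In a real terminal emulator you would fork, make the slave the
# child's controlling terminal and stdin/stdout/stderr, then exec the
# shell (pty.fork() bundles those steps). Here, just the data path:
os.write(slave_fd, b"hello")   # what the child process writes ...
os.read(master_fd, 5)          # ... arrives at the master: b"hello"
```

The master fd is then what you feed into your escape-sequence parser and screen model.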
I'm not at all an experienced Unix programmer, though, so take it with a grain of salt. Would be happy to hear about any defects or possible improvements.
It's Qt. The author writes it both ways, but lowercase 't' is correct. It's a tad weird to see it all uppercase. It's kinda like "Build your own WebApp using NODE".
“Everything being subsumed by TensorFlow” doesn’t seem to me like a particularly worrisome outcome. The status quo is “everything being a Python library,” and people generally seem to view that as positive. The PyData movement brought another level of interoperability to the ecosystem, so you can combine elements from Pandas, SciPy, sklearn, etc with good runtime performance and low cognitive load. Bringing everything into TensorFlow abstractions makes more optimizations possible (especially on heterogeneous architectures), and perhaps even more importantly, makes it easy to run end-to-end gradient descent on compositions of parts from across the ecosystem.
From what I can tell, it's worse than just Intel being a gatekeeper - every execution of "remote attestation" essentially relies upon the Intel Attestation Service to actually perform verification (or at least to act as the certificate authority). In a (hypothetical) world where all of Intel's security features are owned by the US intelligence community, this type of pattern seems like an awesome vector for deception ("false sense of security"), where surveillance groups have a large supply of Intel-certified EPID keys, which they can use to arbitrarily fool remote-attestation clients. It's concerning to me that the OP article doesn't even mention Intel's highly trusted role in this process.
What's the situation with control of TrustZone on this chip? In particular, are there any manufacturer-fused keys, and are there any user-fusable signing keys?
Unfortunately, the AM335x in the OSD335x-SM is general purpose only (i.e. only the non-secure side of the processor is exposed). In our development board (https://octavosystems.com/octavo_products/osd3358-sm-red/) for the OSD335x-SM, we have added a TPM and Secure NOR to allow customers to perform secure boot and have secure key storage. Please contact us (https://octavosystems.com/contact/) if you have any questions about this.
I have many problems with this analysis, perhaps enough to write my own blog post, but I am lazy so I will just outline my disagreements here (at least for now).
First: I claim that what we seek in a movie rating is information about whether we will like the movie, and that this can be formalized as the expected KL-divergence (information gain) between the Bayesian posterior distribution (probability of enjoying the movie conditional on its rating) and the prior distribution (probability you would enjoy a randomly selected movie). Of course, this will depend on your taste in movies, especially how much it correlates with others'. But we can _bound_ it by the Shannon entropy of the rating distribution: there is no way we can get more information from a rating than this! It is this bound that allows us to penalize distributions that are heavily biased towards one side of a discrete scale, like Fandango's. However, the "ideal" shape in this context is far from a Gaussian - it is uniform! The uniform distribution can also be justified as being calibrated such that the quantile function is linear - a score of 90/100 from a uniform distribution means a 90th-percentile movie. Determining a quantile is often a transform we try to perform intuitively on ratings, so such a transform being trivial seems useful.
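To illustrate the entropy bound with made-up numbers (the distributions below are hypothetical, just shaped like the article's Fandango plot):

```python
import math

def entropy_bits(p):
    """Shannon entropy of a discrete rating distribution, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Illustrative distributions over a 5-star scale:
fandango_like = [0.00, 0.01, 0.09, 0.40, 0.50]   # piled up at the top
uniform       = [0.20, 0.20, 0.20, 0.20, 0.20]

entropy_bits(uniform)        # log2(5) ≈ 2.32 bits, the maximum possible
entropy_bits(fandango_like)  # ≈ 1.41 bits; much less room for information
```

No matter how well your taste correlates with the raters', a rating drawn from the skewed distribution can never carry more than ~1.4 bits about the movie.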
Second: The Gaussian distribution does not have bounded support! That is, a rating scheme with what you claim as the "ideal" distribution would have _some_ ratings with values that are negative or otherwise "off the scale". Not so ideal! If you wanted to model movie-goodness on an unbounded scale such that a Gaussian would make sense, then you should transform that scale into a bounded one, e.g. with a logistic function, yielding an "ideal" shape of a logit-normal distribution, which incidentally can fit the strange bimodal Tomatometer distribution quite well. Even if you specifically wanted a unimodal, bell-shaped distribution, at least pick a bounded one like the beta distribution.
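To put a number on the bounded-support objection (the mean/sd here are my own illustrative choices, not from the article):

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

# A Gaussian "ideal" centered at 50 with sd 20 puts real probability
# mass outside the 0-100 scale:
mass_off_scale = phi((0 - 50) / 20) + (1.0 - phi((100 - 50) / 20))
# ≈ 0.0124, i.e. over 1% of ratings would have to be < 0 or > 100.

# Squashing an unbounded latent score through a logistic keeps every
# rating strictly inside (0, 100) -- the logit-normal construction:
def squash(z, scale=100):
    return scale / (1.0 + math.exp(-z))
```

With a wider sd, or a mean nearer an endpoint, the off-scale mass gets much worse; the logistic squash sidesteps the problem entirely.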
Third: setting aside which distribution you want to penalize distance from, or why, dividing the space into three arbitrary intervals to facilitate the comparison seems ridiculous. There is already a perfectly good measure for this: the mutual information between rating and enjoyment, i.e. the expected information gain from my first point.
Along the lines you suggest, a while ago I took IMDb's ratings and used their empirical cumulative distribution function to "flatten" them into something more useful, percentile scores:
This was about a decade ago, so I'd expect the resulting decoder ring to be somewhat miscalibrated for today's movie ratings. But the same process would be straightforward to apply to a more up-to-date data set of ratings.
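The flattening step is simple to reproduce; here's a sketch (the ratings list is made-up sample data, not the IMDb dump):

```python
import bisect

def percentile_score(rating, all_ratings):
    """Map a raw rating to a percentile via the empirical CDF:
    the percentage of all ratings at or below this one."""
    xs = sorted(all_ratings)
    return 100.0 * bisect.bisect_right(xs, rating) / len(xs)

# Hypothetical site-wide ratings on a 10-point scale:
ratings = [6.1, 7.4, 5.8, 8.0, 6.9, 7.2, 6.5, 7.8, 5.5, 6.7]
percentile_score(7.4, ratings)  # → 80.0, i.e. an 80th-percentile movie
```

Run against the full ratings dump instead of a toy list, this gives exactly the "decoder ring" described: a table from raw score to percentile.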
I do wish energy were priced per megajoule. But I suppose this is probably a symptom, like miles-per-hour vs meters-per-second, of the non-metrication of time units at the relevant scales. (It takes on the order of an hour to drive somewhere, and on the order of an hour to run a laundry machine.) If we had a metric hour of a thousand seconds, a kWh would just be an MJ, no problem. And if we had a "metric year" of a million seconds, a "TWh/yr" would just be a GW. Unfortunately, it's pretty important to humans to track time in a way that lines up with earth's rotation and revolution, and there's no way to make that metric. (If French Revolutionary Time had caught on, a second would be 1e-5 days instead of 1/86400, and an hour would be 1/10 of a day, and that would be a start, but the year/day ratio is pretty intractable.)