Exactly this. A linker threads the given blocks together with fixups for position-independent code, which could be called rule application. An assembler is pattern matching.
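To make "fixups" concrete: a toy linker pass might lay code blocks end to end and then patch the PC-relative call sites. A minimal Python sketch with a hypothetical block/fixup format (not any real object-file format):

```python
# Toy linker: concatenate code blocks, then apply PC-relative call fixups.
import struct

blocks = {
    "main":   bytearray(b"\x90\x90\xe8\x00\x00\x00\x00"),  # nop; nop; call <helper>
    "helper": bytearray(b"\xc3"),                          # ret
}
fixups = {"main": [(3, "helper")]}  # patch the 4 bytes at offset 3 of "main"

# Layout: lay the blocks end to end and record where each one landed.
image, base = bytearray(), {}
for name, code in blocks.items():
    base[name] = len(image)
    image += code

# Fixup application: an x86 rel32 call stores (target - end of instruction).
for name, patches in fixups.items():
    for offset, target in patches:
        site = base[name] + offset
        rel = base[target] - (site + 4)
        image[site:site + 4] = struct.pack("<i", rel)

print(image.hex())  # the call displacement now points at "helper"
```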
This explanation confused me too:
> Each individual iteration: around 4x slower (register spilling)
> Cache pressure: around 2-3x additional penalty (instructions do not fit in L1/L2 cache)
> Combined over a billion iterations: 158,000x total slowdown
If each iteration is X percent slower, then a billion iterations will also be X percent slower. I wonder what is actually going on.
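To spell out my confusion with a bit of arithmetic: if every iteration slows down by a constant factor k, the total over N iterations slows down by exactly that same factor:

```latex
T' \;=\; N \cdot (k \cdot t_{\mathrm{iter}})
   \;=\; k \cdot (N \cdot t_{\mathrm{iter}})
   \;=\; k \cdot T,
\qquad k \approx 4 \times 3 = 12 \;\ll\; 158{,}000 .
```

So the quoted 4x and 2-3x per-iteration penalties compound to roughly 12x, not 158,000x; whatever accounts for the rest has to be something that doesn't act uniformly per iteration.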
Aside from the keyboard not being quite at HP level (but then which one is? And it's still good), the DM42 is an entirely better version, tbh. It even works with the printer!
Weird obscure fact: the print head for the HP calculator printer was still available as a new part as late as 2019, because it was used in a variety of devices. I bought a cheap non-working printer and just replaced the head.
I feel like it won't matter. FF translation usage statistics will be a small fraction of Google's, and I can't imagine them even being interested in that kind of data. Since it's an add-on (vs. Chrome's auto-translation by default), it probably won't be adopted by much of the browser userbase.
Still impressive. It works pretty well and without that cloud that Google likes to tell us we really need.
Google is a bit better, of course, with many common expressions, but I'm sure that could run locally too if they wanted it to. Mozilla just has some catching up to do because they don't monetize our data, so they have less budget to work with.
> Still impressive. It works pretty well and without that cloud that Google likes to tell us we really need.
This is still using Google's cloud to host the models and your browser has to repeatedly download them on demand. We shouldn't need to depend on Google at all, but with Firefox Translations we still do and they're still collecting data about us.
I think this comment is a prime example of Firefox being unable to do an objective and unqualified Good Thing without a million people showering hate into the comments.
It's not just that I have high expectations of Firefox; they claim to have high expectations of themselves. They heavily market themselves as privacy-friendly, and often they have been, but they aren't always.
In this case, I agree that this is, largely, a "Good Thing" although not unqualified since some number of users who wouldn't have otherwise will end up repeatedly sending data to Google, probably without even being aware of it. The data they'd give up is (to me at least) small compared to the data they would have been surrendering to online translation services, but that's not really the point.
I just don't understand how they started from "Protect your privacy from sites like translate.google.com by using this add-on to translate webpages locally!" and ended up at "Let's make Firefox users connect to Google's servers every time they use this feature!" If you're creating a product designed for people concerned about their privacy, it should be beyond obvious that making your users send data to Google is a problem.
It's not like they couldn't host those files themselves at mozilla.org or (as others have pointed out) just keep them locally and avoid making a bunch of unnecessary connections to a remote host entirely. If they'd done that it would also allow Firefox Translations to work when you aren't connected to the internet.
It's really not hate though. It's love and concern. I love Firefox, and I want it to do better!
>I just don't understand how they started from "Protect your privacy from sites like translate.google.com by using this add-on to translate webpages locally!" and ended up at "Let's make Firefox users connect to Google's servers every time they use this feature!" If you're creating a product designed for people concerned about their privacy, it should be beyond obvious that making your users send data to Google is a problem.
Don't you think that, apart from PII data (which shouldn't be used for training at all), those (training) datasets can be stored anywhere, and that it makes no difference from a privacy point of view? Or am I misinterpreting their purpose...
Models are downloaded only once and then cached, and not repeatedly like the OP mentioned. Source: Me. I've developed it. If you disagree, are seeing a different behavior or have further questions, please reach out in the repo: github.com/mozilla/firefox-translations/
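(For the curious: conceptually it's just download-once-then-cache. A minimal illustrative sketch, not the actual extension code; the cache path is hypothetical:)

```python
import os
import urllib.request

CACHE_DIR = os.path.expanduser("~/.cache/translation-models")  # hypothetical path

def get_model(name: str, url: str) -> str:
    """Fetch a model file on first use; every later call is served from disk."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):  # only the very first call hits the network
        urllib.request.urlretrieve(url, path)
    return path
```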
Thanks once for the response, and eleventy times for actually developing a non-cloud translation thingy. As for the caching thing, I was really hoping this was the case, so I guess that makes it three.
Good to know! I still hope you can find a better place to host the files, but it's nice knowing the problem only happens once per file (so long as the cache remains anyway)
Yeah, they should just use another cloud to serve the files. Using your main competitor is really disingenuous, because they can glean all kinds of usage data from it (if not more).
I'm not sure why it's done this way, because this kind of file hosting is easily replaced by something more privacy-friendly.
We retrain models as we get new datasets and only if they improve, which is not common. So far we haven't updated any model. When it's time, then yes, they will be updated, but it's definitely not a frequent process.
> But does Google upload what you translate later?
If you use Google Translate, of course it does because everything is done on their servers
> Would be cool to have Firefox Translations integrated into TOR.
Tor Browser is just a fork of Firefox, so this shouldn't be too difficult. I believe they disable add-ons by default because they can leak data and they can't check every add-on for this. Not sure if you can switch it back on, though. I suppose they could validate this one, as it's so important. I would recommend submitting a feature request to the Tor Project.
>> But does Google upload what you translate later?
> If you use Google Translate, of course it does because everything is done on their servers
As mentioned by GGP, the Google Translate app for Android (at least) allows you to download the model for a given language (pair?), after which you no longer need any kind of Internet connection to translate. That implies everything is done locally, not on Google’s servers. GP’s question was whether the app will still save your queries and submit them once a connection becomes available just to scratch that data collection itch.
> GP’s question was whether the app will still save your queries and submit them once a connection becomes available just to scratch that data collection itch.
disclaimer: googler
This can be tested. Translate shows up in your Google 'My Activity' page, so you can do some offline translations, then switch the network back on, and see if the translations show up in My Activity. Assuming you can trust the My Activity page to be complete and accurate (my opinion is you can, but I would say that)
And FTR: I've actually just tried it, and offline translations do not show up in My Activity, so I highly doubt they're being surreptitiously uploaded.
>As mentioned by GGP, the Google Translate app for Android (at least) allows you to download the model for a given language (pair?), after which you no longer need any kind of Internet connection to translate.
This isn't true. Google claims this, but it just doesn't work that way: I've had many, many cases of trying to translate stuff with a bad cellular data connection and it doesn't work, even though I have the language pack downloaded.
I don't think offline translation kicks in automatically when you have a bad (as opposed to no) connection. You can easily verify that it can translate without any connection (both on iOS and Android) by downloading the language and putting the phone in airplane mode. (At least, the basic text translation works fine. The more advanced features, such as speech and image translation, don't.)
Also, Microsoft's Translator app can do the same (offline translation for text), and IME it's about on par with Google.
>Also, Microsoft's Translator app can do the same (offline translation for text), and IME it's about on par with Google
Interesting, I'll have to try this.
Well, I tried installing the app and using image translate mode on some Japanese and the results were not very good, not nearly as good as Google Translate. I'll try it out later with regular text.
I also looked at the phrasebook feature. That's a pretty neat idea actually. However, for some really strange reason it defaulted to showing me phrases in Spanish. I have no idea why it thinks I would want to speak Spanish (My system language is English, and I live in Japan, so obviously I want to convert to Japanese. No one speaks Spanish here.)
> using image translate mode on some Japanese and the results were not very good,
I think the honest truth is that Japanese is the ultimate challenge for any translation tool.
My Japanese friends tell me that DeepL is about as close as you will ever get to a passable translation quality.
But DeepL does not do image translation.
On a recent trip to Japan I installed six image translation apps on my phone.
None were perfect; I found Naver Papago to be the most consistently usable (although it was far from perfect).
Interesting observations I made during the extensive testing:
1) The majority of image translation apps don't like Japanese when written vertically; I found they perform best with horizontally written Japanese.
2) All image translation apps *REALLY* don't like hand-written Japanese. Some of them *MIGHT* translate *SOME* of the text. But really all of them only really work consistently with machine-printed text.
The other issue with DeepL is that it supports limited language pairs. I wonder what the limiting factor is; the language I'd like should have a large enough corpus of text.
That’s just bad programming. Turn on Airplane Mode and it will work. A bunch of apps won’t even try to use offline data when they’re “online”, even if the connection is 1 byte/second.
It’s not bad programming if the server has a bigger, better model and thus gives better results, while the local model is just a lower-quality but smaller, portable model.
That said, let me give my HN 2c and say that Google Translate is pretty bad these days. Its community/user adjustments, for example, are guaranteed to be bad. In Spanish, you instantly know you’re looking at a user “correction” because the translation has no accents: “como estas”. It’s bad in 100% of cases, every time I see that “user verified” symbol.
I think the offline model doesn’t have the user adjustments, but the offline model also seems to be lower quality. Back when I translated a lot, I used to know when my internet went offline mid-session because of the difference in translation quality.
So I ask for a translation and it fails because it times out, giving me an error. And you call that good programming?
I get it that the server translations are better, but currently I’m not seeing any translation at all. You, Google Translate developer, should catch the error and show the offline translation instead.
Oh, I see. By “doesn’t work” I thought they (and you) just meant it still hits the server even though you have a model downloaded.
Yeah, on a spotty mobile connection, most services tend to be optimistic that it’s better to wait than to assume your internet is down. iOS online/offline callback is very optimistic, probably because for most services, trying something in a degraded 20b/s conn is better than giving up and going “sorry, no internet.” (Funnily enough, the iOS App Store gives up way too soon)
So I agree. I think the right thing to do is to do an instant translation with the local model, when available. Maybe a cherry on top is to see if the server has a better translation in the background.
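Something like this, i.e. local-first with a background server upgrade. A minimal sketch; `local_model` and `server` are hypothetical objects, each exposing a blocking `.translate(text)`, not Google's actual API:

```python
import concurrent.futures

# A shared pool, so a hung server request can't block us on executor shutdown.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def translate(text, local_model=None, server=None, timeout_s=2.0):
    """Prefer the server's (usually better) translation, but never wait more
    than timeout_s; fall back to the instant local result."""
    if local_model is None:
        return server.translate(text)   # no offline pack downloaded: must go online
    best = local_model.translate(text)  # instant, works in airplane mode
    if server is not None:
        future = _pool.submit(server.translate, text)
        try:
            best = future.result(timeout=timeout_s)  # upgrade to the bigger model
        except Exception:
            pass  # timeout or network error: the local result stands
    return best
```

The key design point is that the user always gets *some* translation immediately, and a spotty connection only costs the quality delta, never a spinner-then-error.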
According to Gödel, it's provable that there are true mathematical formulas that cannot be proven.
So from our point of view, we cannot know, given a statement, whether it can be proven or not.
So all true statements in pure mathematics that we know are a posteriori true, since before we had the experience of proving them, we could not know whether they could be proven or disproven.
The Incompleteness Theorem is a technical result about formal systems. It says nothing about the provability of mathematical formulae, since math isn't a formal system.
If you accept the a priori/posteriori distinction, you're abusing words to claim that pure mathematics - the classic example of a priori knowledge! - is a posteriori. A priori knowledge isn't knowledge that we come to know is true through an experience (like proving something), it's knowledge that is true regardless of any particular experience.
Of course math is a formal system. If you take Zermelo-Fraenkel Set Theory as the foundation of math, it has a very clear, formal set of axioms.
And we do not know for certain that math is objective truth. If we did, there would be no philosophy of mathematics.
Reasoning within mathematics is objective, because math is a formal system. But to think that we know anything about anything is frankly pure arrogance. We don’t know why we are here or what our universe even is at a fundamental level. Math is a human-imposed construct that we use to try and make sense of it all.
> But to think that we know anything about anything is frankly pure arrogance.
It would be arrogant to conflate a model and the subject being modeled, but we have some pretty successful models for physical reality, and you won't bring me to say we don't know anything about it.
That's actually a really interesting philosophical question!
Formal proofs are, of course, objects in a formal system. But as anyone who's tried to express a mathematician's proof of any real complexity in Coq can attest, they're a very small subset of the sum total of human proofs.
The proofs that mathematicians have been doing for thousands of years, by contrast, aren't formal (in the strict sense). They're arguments that appeal to the human sense of deductive reason, in a phrase. And yes, formal logic is an attempt to codify and mechanize that as an object for study, but...well, like I said, it's a big philosophical or even neurological question as to how well it does that, or can do that; to what degree that's possible.
While I can't speak for thousands of years of it, I'm (and I think most of the mathematical community is) pretty confident all of the mathematics of the last 200 years can be formalized. There isn't really much of a question of whether we can formalize modern informal, rigorous proofs. Indeed, a benchmark of whether a proof is rigorous or not is whether there's a clear path towards formalization (similar to how we might use, as an informal benchmark of whether an algorithm is well-specified, whether we have a clear path to implementing it in a real programming language).
As such, mechanizing mathematics into formal proofs has yet to meet any fundamental difficulties I'm aware of. The main thing is that it's just a slog, and few people are working on it because it's such a slog. It can take orders of magnitude more time to formalize a proof than to come up with its informal, but rigorous, version. But the process doesn't really require deep insights, at least not any more than translating an algorithm from a CS textbook or paper into real code does. It's mainly that there are reams and reams of tedium that can be encapsulated in a single "mutatis mutandis" or the like.
At this point Mizar has formalized essentially all of undergraduate mathematics and is gradually working its way into graduate mathematics. It's yet to meet any deep roadblocks and it's not anticipated to meet any.
Either all mathematics is formalizable and so we're not going to run into any Incompleteness Theorem-related issues and it's irrelevant...or we run into mathematics that can't be formalized and therefore mathematics isn't a fully formalizable system and so Gödel is still irrelevant :)
Although I understand that you mean to say Godel's Incompleteness Theorems aren't relevant, this is not because of anything related to formalization. Godel's Incompleteness Theorems don't say any particular theorem can't be formalized. All they say is that there will always be a statement whose proof requires the assumption of an additional axiom. Very different things.
A lack of a full formalization for mathematics is more akin to a failure of Godel's Completeness Theorem (which roughly states our mathematical abstractions precisely capture the commonality between different situations we choose to abstract over). The Completeness Theorem however is proven to hold in modern mathematics (e.g. first order logic which includes ZFC).
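In symbols, for reference (the standard statements, my phrasing, not the parent comment's):

```latex
\textbf{Completeness:}\quad
T \vdash \varphi \;\Longleftrightarrow\; T \models \varphi
\qquad \text{(for any first-order theory } T\text{)}

\textbf{First Incompleteness:}\quad
T \supseteq \mathrm{PA},\ T \text{ consistent and recursively axiomatizable}
\;\Longrightarrow\; \exists\, G_T:\ T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T
```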
Indeed the choice of whether to use a system fulfilling the Completeness Theorem is a conscious and true choice (unlike say Godel's incompleteness theorems whose avoidance requires giving up arithmetic), and a choice most logicians and the overwhelming majority of mathematicians decide to take. Even in higher order logics where it would appear the Completeness theorem fails, it can be recovered by appealing to a first-order understanding of its semantics. It's so much a choice that arguably you are leaving the bounds of mathematics proper and entering the wider field of philosophy if you give up the Completeness Theorem in the foundations of your mathematical system (as opposed to locally giving it up in a subsystem and analyzing it in a semantically complete metatheory).
Put another way, mathematics is precisely the study of formal objects and its standards of rigor entail that its own proofs are themselves formal objects that can be studied in their own right.
Arguing against the formalization of modern mathematics is akin to arguing that there are certain CS algorithms which cannot be turned into code. While there are certainly conceivable processes which cannot be implemented in code, algorithms in CS are essentially definitionally required to be implementable in code. Anything that is not is outside the purview of algorithms as understood by CS.
If we find a proof which essentially cannot be formalized the reaction of the mathematical community will not be to reject formalization, but rather to question the rigor of that proof. This is similar to the reaction that the computing community would have if it were to find a purported specification of an algorithm that turned out to be essentially unimplementable.
Godel's incompleteness theorems are really syntactic results rather than semantic ones. The main consequence they have for modern mathematics is that there is no "one axiom system to rule them all," since you can always extend many systems with a new axiom. This doesn't change much, because mathematics has, since time immemorial, been engaged in the practice of tweaking various axioms and seeing what consequences emerge.
Indeed Godel's completeness (not incompleteness) theorem (which applies to most mathematical settings such as anything that uses ZFC) is a strong indication that everything mathematicians would care to prove is in fact provable.
It's been a long time since I studied this stuff properly, but is it true to say that you can tell that a statement is provable (or otherwise) if it can be couched in the language of first order logic?
Not quite. One way of thinking about Godel's Completeness Theorem is to couch it in the language of abstractions. Every formal theory is an abstraction that seeks to abstract over many different concrete use cases. In the process we know that we give up some specificity for this generality; that is, we know that there are aspects of these concrete use cases that are "forgotten" by the abstract formal theory. In return we get greater reusability of our formal theory and of the consequences we can prove using it.
However, the question arises whether the abstract formal theory "forgets too much." That is, we know we must trade some specificity for increased generality, but is it possible we've traded too much? Are there attributes shared in common by all the concrete use cases to which our formal theory is applicable to but to which our abstract formal theory is blind to?
That is, perhaps our formal theory is applicable to humans, cows, mathematical sets, etc., and so it necessarily forgets about e.g. human-specific features; but might there nonetheless be some statement A that is true about all these things which our formal theory is unable to prove?
Godel's Completeness Theorem states that the answer is negative. Our formal theory and that which we can prove using it are exactly the attributes that are held in common across all its use cases, no more, no less. That is, our abstraction "leaks" no more than is strictly necessary.
So if we cannot prove or disprove a given statement in our formal theory, that is because there is at least one concrete use case to which our formal theory applies where the statement is false, and at least one where it is true. That is, unprovability is exactly synonymous with independence.
Many first-order theories, such as the theory that consists entirely of the sentence "there exists an element which is Special" can have sentences which cannot be proved or disproved in the theory (e.g. "all elements are Special."). But by the Completeness Theorem that just means there are use cases where sometimes the sentence is true and sometimes it isn't (the theory could be equally applied to a set of elements only one of which is marked as "Special" or where all the elements are marked "Special").
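Spelled out in symbols (my notation for that same example):

```latex
T = \{\exists x\, S(x)\}, \qquad \varphi = \forall x\, S(x)

\mathcal{M}_1 = (\{a,b\},\ S = \{a\}) \models T,\ \ \mathcal{M}_1 \not\models \varphi;
\qquad
\mathcal{M}_2 = (\{a,b\},\ S = \{a,b\}) \models T,\ \ \mathcal{M}_2 \models \varphi

\text{By soundness and completeness:}\quad
T \nvdash \varphi \ \text{and}\ T \nvdash \lnot\varphi .
```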
Godel's Completeness Theorem and Incompleteness Theorems are independent phenomena. Not only is it possible for both to hold, only one to hold, or for none to hold, they also apply to different domains of discourse. The Completeness Theorem applies to entire logical systems (such as first-order logic), while the Incompleteness Theorems refer to specific theories (such as a single given theory in first-order logic).
The two taken together mean that, e.g., for any first-order theory capable of expressing arithmetic, there is always more than one concrete mathematical object that satisfies that theory. That is, the abstract theory is incapable of uniquely specifying a single mathematical object. Stated equivalently, any abstract theory containing arithmetic can always be indefinitely extended with new axioms.
This is a common theme of first-order logic and there are related results that show in other ways how first-order logic sometimes has problems pinning down unique objects (e.g. the Downward and Upward Lowenheim-Skolem Theorems).
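As a concrete rendering of that point (standard fact, my phrasing): if a consistent, recursively axiomatizable T contains arithmetic, the incompleteness theorems give a sentence G_T independent of it, and the Completeness Theorem then yields models on both sides, so T cannot pin down a unique object:

```latex
\exists\, \mathcal{M}_1, \mathcal{M}_2 \models T:\quad
\mathcal{M}_1 \models G_T, \qquad \mathcal{M}_2 \models \lnot G_T,
\qquad \text{hence } \mathcal{M}_1 \not\cong \mathcal{M}_2 .
```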
Afrikaans Albanian Amharic Arabic Armenian Asturian Azerbaijani Bashkir Basque Belarusian Bengali Bosnian Breton Bulgarian Burmese Catalan Cebuano Chinese Croatian Czech Danish Dutch Eastern Punjabi English Estonian Finnish French Fulah Galician Ganda Georgian German Greek Gujarati Haitian Hausa Hebrew Hindi Hungarian Icelandic Igbo Ilokano Indonesian Irish Italian Japanese Javanese Kannada Kazakh Khmer Korean Lao Latvian Lingala Lithuanian Luxembourgish Macedonian Malagasy Malay Malayalam Marathi Mongolian Nepali Northern Sotho Norwegian (Bokmål) Occitan Oriya Oromo Pashto Persian Polish Portuguese Romanian Russian Scottish Gaelic Serbian Sindhi Sinhalese Slovak Slovenian Somali Spanish Sundanese Swahili Swati Swedish Tagalog Tamil Thai Tswana Turkish Ukrainian Urdu Uzbek Vietnamese Welsh West Frisian Wolof Xhosa Yiddish Yoruba Zulu
natmaka, are you aware that those websites preach racism?
linked from 'about' page:
https://www.counter-currents.com/2012/05/new-right-vs-old-ri...
quote: "Second, because of the leading role of the organized Jewish community in engineering the destruction of European peoples, and because the United States is the citadel of Jewish power in the world today, the North American New Right must deal straightforwardly with the Jewish Question."
No, I wasn't aware, nor do I share the opinions expressed in the documents you point to; but if I find a pertinent and interesting document I point towards it, whatever else the website publishes.
Even if I had known, I would still have proposed this link to a document that is, in my opinion, pertinent and in which I don't find any racism-preaching (this particular website, the Web, and the universe all have their share of shit).
I'm all for exposing and debunking dumb and dangerous doctrines, instead of feigning not to know about them and letting some people think that they must be true because no one criticizes them; therefore I don't refrain from directly linking to such material, while expressing at least that (and, better, why) I don't agree (as you just did).
The author discussed in the documents I linked to is Dominique Venner. He wrote a book titled "Histoire et Tradition des Européens: 30 000 ans d'identité", which seems pertinent to me in this thread because it exposes a thesis about the relative importance of Homer's literary work in European cultures. Venner was considered far-right; I don't know whether he was, nor whether it extended to sheer racism and such fuckery, but it may explain why one finds a lot of material from/about him on such websites and not much elsewhere, as, IMHO sadly, censorship and ad hominem are far too common. Racism is as dumb and dangerous as censorship.