housecarpenter's comments

The writing system of English is far more phonetic than logographic writing systems such as that of Chinese. That's the relevant difference here. It is true that English's writing system is one of the least phonetic alphabets, but alphabets are always, by definition, quite phonetic. Most writing systems are alphabetic rather than logographic, so if you rank them all ordinally, English ends up near the bottom, with a rank close to Chinese's; but the absolute difference in phoneticity between English and Chinese is still quite large. Here is a very impressionistic assignment of phoneticity percentages to various systems, which will hopefully make clear what I'm getting at:

Phoneticity 20% (Chinese)

--

--

--

Phoneticity 75% (English)

Phoneticity 80% (French)

Phoneticity 90% (Spanish, probably most other writing systems)


You have to distinguish different types of phoneticity. Can you pronounce what you read vs. spell what you hear? Can you pronounce an unfamiliar character?

With very few exceptions, a Chinese character is pronounced the same way regardless of context within any given Chinese language (this is not true in Japanese), but if you're unfamiliar with a character, you'll have at best only a vague idea how to pronounce it. And if you hear a Chinese syllable, you'll have several characters to choose from, and can only disambiguate them from context.


I expect Google is stripping the ⌊ ⌋ brackets out as punctuation in the search, so that you're effectively only searching for "x2", hence the "x squared" results.


Some accents don't distinguish those two vowels in unstressed positions.


Like the Kiwi accent?! Didn't think of that!

Thanks.


Perhaps originally, but the reason it persists as a term even when used by people who don't think medieval people were relatively unenlightened is that it can also be construed as referring to the lack of documents. (And those who do regard medieval people as relatively unenlightened might see the lack of documents as something intrinsically associated with that.)


Writing my blog (https://thehousecarpenter.wordpress.com/) has helped me learn things because it provides me with a concrete motivation: instead of having the rather vague and amorphous goal of "learn about X", I can think of my goal as "learn enough about X to write a blog post about it". That's always been the idea behind having the blog, and it's worked decently for that purpose. It hasn't helped me at all with my career or with connecting with people, but that's unsurprising as I've never cared to optimize for those goals.


The reason sets are important is that they correspond to (Boolean-valued) properties. To each set, there corresponds the property of belonging to that set. To each property, there corresponds the set of all things with that property. I think this is the key reason why the relational algebra is a good foundation for a query language. When I'm writing a query, I'm thinking of some property P, such that I want the results of the query to be all records with property P. By utilizing the correspondence between properties and sets I can translate that property fairly directly into an expression in the relational algebra, and then the magic of a relational query language is that that expression is all I need to write to carry out the query. That's the sense in which relational query languages are declarative. I just write down the property of the results I want, and I get those results automatically without having to specify how to collect those results.
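As a minimal illustration (the table and column names here are hypothetical, not taken from anything above): the WHERE clause below is just the property P written down, and how the matching records are actually collected is left entirely to the query engine.

    -- Property P: "is an employee who works in London and earns over 50000".
    -- The query is a direct transcription of P: no loops, no instructions
    -- about how to scan or index the table.
    SELECT name
    FROM employees
    WHERE office = 'London'
      AND salary > 50000;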

Having queries return multisets rather than sets "breaks" the relational algebra in the sense that it breaks this correspondence. Results of queries no longer correspond one-to-one to properties, since properties have no multiplicity. To be fair, you can identify the property being true or the element belonging to the set with having multiplicity > 0, and the property being false or the element not belonging to the set with having multiplicity 0, and by doing this you can think of SQL queries as corresponding to sets/properties most of the time. But if you're going to think of them that way, you might as well just have them be sets in the first place. The multisets are just a needless complication. Thinking about SQL queries in terms of multisets seems to only be compatible with a more imperative, non-relational approach to the language, where you still have to think algorithmically about how to assemble the collection of results that you want, rather than just directly characterizing the results in terms of a property.
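A minimal sketch of the complication, using the same hypothetical table as above: without DISTINCT, SQL returns a multiset whose multiplicities don't correspond to any property of offices, and DISTINCT collapses it back to the set that the property describes.

    -- Multiset semantics: an office with three employees appears three times.
    SELECT office FROM employees;

    -- Set semantics: each office appears once, matching the property
    -- "is the office of at least one employee".
    SELECT DISTINCT office FROM employees;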


What's sad about it?


I agree that infinity doesn't exist in the universe, but this isn't a problem for mathematics. Mathematics isn't about what exists in the universe, it's about what exists in the realm of concepts. Some concepts are more interesting than others, sure. Cantor's work made it clear that infinite sets are pretty interesting.


The Flying Spaghetti Monster is also an interesting concept.

It can even be useful as a concept, in certain discussions/contexts.

Should we take the Flying Spaghetti Monster as a core axiom of our theories of physics?

Wouldn't this lead to us logically concluding for example, that there has to exist something else beyond the universe that we know about, even if this is not really true?

I mean, I know that I'm exaggerating and that the analogy is not necessarily very good.

But I'm also not entirely sure how different the Flying Spaghetti Monster is from the infinite objects that mathematicians talk about, which lead us to logically arrive at certain conclusions (conclusions that I would argue might not really be true, in terms of things we can understand).

I'm not saying that I'm right and that most mathematicians are wrong, necessarily. Perhaps it's just a linguistic issue, I don't know.


I don't think Jech is using "naturally" as in "natural choice"; he's just using it as a synonym for "obviously". If X is a non-empty set, then there exists an x in X, because that's what it means for X to be non-empty. The choice is arbitrary: if X is the set of all functions from R to R, then you could take the exponential function, the x^2 function, etc., whatever you like; the point is that it is certainly possible to make some choice.

Doubting the axiom of choice means thinking that when you have infinitely many sets to choose from, it is possible that not only is there no natural choice function, but there is no choice function at all.


But why do we need the axiom of choice at all? By this logic, if I have any family of non-empty sets then I'm good? But surely there's a distinction here


I'm late replying to this, but---you need the axiom of choice once you have a family of non-empty sets rather than a single non-empty set.

At some point you have to go into the formalism to really understand it. It's really about an interchange of quantifiers. If you have a non-empty set X, then it is true that

[a] (exists x)(x in X)

If you have a family F of non-empty sets X then it is true that

[b] (forall X in F)(exists x)(x in X)

But to say that you have a choice function f for F is to say that

[c] (exists f)(forall X in F)(f(X) in X)

It turns out that the laws of first-order predicate logic and the axioms of ZF set theory do not allow you to come up with a formal proof that [b] implies [c]. You can try to find one, and you will fail. It's like the parallel postulate in Euclidean geometry. There is actually a proof that the axiom of choice is not provable from the other axioms, but it's a pretty deep result. Historically, quite a long time passed between the development of set-theoretical foundations at the start of the 20th century, which is when mathematicians realized that the axiom of choice was a principle needing justification, and Paul Cohen's proof in the 1960s that the axiom of choice isn't provable from the other axioms.

So if you believe that the implication from [b] to [c] is self-evident, you have to assume

[d] (forall X in F)(exists x)(x in X) => (exists f)(forall X in F)(f(X) in X)

as an axiom, which is the axiom of choice.
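For what it's worth, this interchange of quantifiers can be written down quite directly in a proof assistant. Here is a sketch in Lean 4 (the names F, S and r just mirror the statements above; Classical.axiomOfChoice is the core-library principle that takes you from [b] to [c]):

    -- h is statement [b]: for every X there exists an x with r X x.
    -- The conclusion is statement [c]: a single f that, for every X,
    -- picks an element f X with r X (f X).
    example {F : Type} {S : F → Type} {r : ∀ X, S X → Prop}
        (h : ∀ X, ∃ x, r X x) :
        ∃ f : ∀ X, S X, ∀ X, r X (f X) :=
      Classical.axiomOfChoice h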

I think most people are inclined to agree that [d] is self-evident---that's why it's generally accepted as an axiom, with only a minority of people objecting. However, in the larger context of set theory it makes a lot of sense to minimize the number of existential assumptions we are making. Historically, set theorists assumed that any set given by a definable formula would exist, but this led to contradictions (Russell's paradox). So there's some reasonable paranoia about using additional axioms when we don't need to, in case one somehow leads to another inconsistency.


It's true that any real number can be written with a finite number of symbols. In fact, I can write any real number x with exactly one symbol, namely x, if I define x to mean that particular real number.

Now if you fix a particular formal language for defining real numbers, with a finite alphabet, then the language only has countably many words, so only countably many reals are definable in it; since there are uncountably many reals, there are reals not definable in the language. But the notion of "definability" here is not independent of the choice of formal language.

So "choosing an undefinable real number" amounts to "choosing a real number not definable in L", where L is some fixed formal language---and this isn't particularly hard to imagine; given a specific L, you can probably quite concretely construct a real number not definable in L.

