> It's not a "property" it's an attribute/field/member/key/column/variable/getter/function/procedure.
For what it's worth, to a researcher in the field of programming languages (like the author of the post), these all have distinct unambiguous meanings. At least as far as PL goes, almost every term has a well-defined meaning, but as those terms were adopted into less-than-academic contexts, the meanings have diluted.
"Property" is such a term in the context of programming languages research, and, in particular, it is a very specifically defined term in the realm of property-based testing (no surprise).
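To make the PBT sense concrete, here is a rough sketch of a property as a universally quantified check, approximated by random sampling (real tools like Hypothesis or QuickCheck add generation strategies and shrinking; this is just plain `random`):

```python
import random

def prop_reverse_involutive(xs):
    """The property: for all lists xs, reversing twice is the identity."""
    return list(reversed(list(reversed(xs)))) == xs

# Crude approximation of "for all inputs": check many random samples.
for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    assert prop_reverse_involutive(xs)
```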
> Even the constants are variables from the viewpoint of the CPU that has to load it in its registers.
No; this is not what "variable" means. Registers are properties of the processor, i.e., they are implementation details; variables are an abstract concept from the domain of the formal language specification.
> Sometime along the way we decided that "syntax sugar" means "it means the same thing as" but except for (<cast OtherType>obj).foo(), which means that the semantics of "syntax sugar" don't mean it's simpler than the phrase it was supposed to replace.
No; this is not what "syntax sugar" means. If a language defines some syntax f and it "expands to" some other syntax g, then f is syntax sugar for g. This is well defined in Felleisen's "On the Expressive Power of Programming Languages" [0]. For example, Python's addition operator `+` is implemented in terms of a method `__add__`; therefore, `a + b` is syntax sugar for `a.__add__(b)`, because the former syntax is built on top of the latter.
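The `+`/`__add__` relationship can be demonstrated directly (a minimal sketch with a hypothetical `Vec` class; Python's full dispatch also involves `__radd__` and type slots, which is elided here):

```python
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)

a, b = Vec(1, 2), Vec(3, 4)
sugared = a + b            # surface syntax
desugared = a.__add__(b)   # what it expands to
assert (sugared.x, sugared.y) == (desugared.x, desugared.y) == (4, 6)
```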
Notably, syntax sugar has nothing to do with casts; casts are semantic, not syntactic. There are also no promises about whether syntax sugar makes something "easier"; it's simply the ability to syntactically express something in multiple ways.
I'd also like to add that, since immediate-operand instructions exist, constants are absolutely not the same as variables at the machine level: immediates are typically never stored in a register at all (with exceptions, e.g. "move immediate" obviously stores one, and I'm sure there are architectures that use an internal/hidden register populated during instruction decode).
Also, in Harvard-architecture systems, the constants, being part of the instruction itself, might not even be in the same memory or even address space as variables ([EEP]ROM/Flash vs RAM).
The problem is that the same word is used for different things.
The comment you are responding to was correct in what "property" means in some settings.
The article itself says:
> A property is a universally quantified computation that must hold for all possible inputs.
But, as you say,
> but as those terms were adopted into less-than-academic contexts, the meanings have diluted.
And, in fact, this meaning has been diluted; it is simply wrong from the perspective of what the term originally meant in math.
You are right that a CPU register is a property of the CPU. But the mathematical term for what the article is discussing is invariant, not property.
Feel free to call invariants properties; idgaf. But don't shit all over somebody by claiming to have the intellectual high ground, because there's always a higher ground. And... you're not standing on it.
My point was not that there exists some supreme truth about what words mean and that either you use words "correctly" or you're an idiot.
Yes, words have different meanings in different settings, but that's not the dilution I was referring to. It's absolutely fine that a word can be used differently in different places.
The "problem", such as it is, is that there are people who use terms from programming languages research to discuss programming languages and they use these terms inaccurately for their context, leading to a dilution in common understanding. For example, there is a definitive difference between a "function" and a "method", and so it is inaccurate to refer to functions generally as "methods". However, I see people gripe about interactions where these things are treated separately, and that is what I am addressing.
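To illustrate the function/method distinction with a hypothetical Python example (not from the thread): a method is a function associated with a class and dispatched through an instance, while a function stands alone.

```python
def area(w, h):
    """A function: stands alone in its module's namespace."""
    return w * h

class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        """A method: belongs to a class, dispatched through an instance."""
        return self.w * self.h

assert area(3, 4) == Rect(3, 4).area() == 12
```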
The parent comment to mine tried to offer some examples of such terms within the context of programming languages, so my corrections were constrained to that context. But your correction of my point is, I think, incorrect, because the meaning you are trying to use against me is one from a different context than the one we're all talking about.
There's no intellectual high ground here; my point was not to elevate myself above the parent comment. My point was to explain to them that they were, from the point of view of people like the author of the post (I assume), simply incorrect. There's nothing wrong with being wrong from time to time.
He is a bit combative towards traditional academia's preference for BNF and parser generators. It's been a while since I read it, but I remember, e.g., a rhetorical question (paraphrased from memory): "Has anyone ever learned a programming language by reading the BNF?"
The style is very good and fun to read for someone who also reads other more boring papers.
I cannot say what this person means, and I have never read this paper before, but just the fourth paragraph of the paper has piqued my interest and I will read it all.
I haven't seen the associated talk, but (a) I would imagine the author chuckled while reading this, because it's sort of a joke among scholars, and (b) the point is likely focused much more on the context of presenting research (e.g., at conferences) rather than a blanket ironclad rule for all presentations you ever make ever.
While I think there's some validity to your point that the author's presentation suffers excess verbosity, I'm not too worried about it because the linked slides seem more meant to act as a reference document than an example of a good presentation, and the level of text is just fine for that purpose.
Yeah I’m the author, this was a joke. I also wanted to convey to the students in the room that this was not a high-quality presentation, but rather text just converted into presentation form.
FWIW, I clicked the link, scanned the SO thread, then scanned the HN thread. The "bunch of important words taken out" is exactly the service I paid AI for.
"I didn't have time to write you a short letter, so I wrote you a long one." is real.
Indeed. Are you verifying that they are correct, or are you glancing at the output and seeing something that seems plausible enough and then not really scrutinizing? Because the latter is how LLMs often propagate errors: through humans choosing to trust the fancy predictive text engine, abdicating their own responsibility in the process.
As a consumer of an API, I would much rather have static types and nothing else than incorrect LLM-generated prosaic documentation.
> Can you provide examples in the wild of LLMs creating bad descriptions of code? Has it ever happened to you?
Yes. Docs it produces are generally very generic, like it could be the docs for anything, with project-specifics sprinkled in, and pieces that are definitely incorrect about how the code works.
> for some stuff we have to trust LLMs to be correct 99% of the time
The above post is an example of the LLM providing a bad description of the code. "Local first" with its default support being for OpenAI and Anthropic models... that makes it local... third?
Can you provide examples in the wild of LLMs creating good descriptions of code?
>Somehow I doubt at this point in time they can even fail at something so simple.
I think it depends on your expectations. Writing good documentation is not simple.
First, good API documentation should explain how to combine the functions of the API to achieve specific goals. It should warn of incorrect assumptions and potential mistakes that might easily happen. It should explain how potentially problematic edge cases are handled.
And second, good API documentation should avoid committing to implementation details. Simply verbalising the code is the opposite of that. Where the function signatures do not formally and exhaustively define everything the API promises, documentation should fill in the gaps.
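As a sketch of both points, here is a hypothetical Python function (`parse_ts` is invented for illustration) whose docstring warns about an easy incorrect assumption and states edge-case behavior, without describing how the parsing is implemented:

```python
from datetime import datetime, timezone

def parse_ts(s: str) -> datetime:
    """Parse an ISO-8601 timestamp string into an aware datetime.

    Warning: naive inputs (no UTC offset) are assumed to be UTC,
    not local time -- an easy incorrect assumption for callers.

    Edge cases: surrounding whitespace is stripped; an empty
    string raises ValueError rather than returning None.
    """
    dt = datetime.fromisoformat(s.strip())
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt
```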
This happens to me all the time. I always ask Claude to re-check the generated docs and test each example/snippet, sometimes more than once; more often than not, there are issues.
[0] direct PDF: https://www2.ccs.neu.edu/racket/pubs/scp91-felleisen.pdf