The parent comment was highlighting anti-Asian hate attacks in the US. These attacks are overwhelmingly committed by Afro-American men. If you'd rather talk about ethnically-motivated mob attacks in Germany, there's no need to look as far back as 30 years:
>The shelter was originally intended to house 300 refugees a month, but by summer 1992 it was averaging 11,500 refugees per month. Primarily Roma from Romania
I don't remember any war in 1992 in Romania. The social problems involving Roma people in Europe are well-documented, on the other hand.
Is war the only thing that puts people's lives in danger?
There was no declaration of war, but there wasn't one for Russia's invasion of Ukraine either.
If you had looked it up, you would have found that it was a civil war.
There was a revolution in 1989 which toppled the communist regime, followed by a long period of turmoil, riots, and violent uprisings until the mid-'90s (the mineriade were the most infamous), which saw extreme poverty, crime, and people being beaten in the streets.
A lot of it was because the people who came to power after the revolution were the same cronies who ruled during communism, typically members of the communist party and the secret police.
Going from communism to capitalism isn't some binary switch you can toggle on/off without issues, even though you might not think of it because there was no declaration of war.
Edit: if you downvoted, would you mind explaining? Or do you not believe in history?
> I’d say everyone is for people of both genders being treated equally.
But this is not true, historically or today. Most people (in some cultures) may be for treating genders equally in a lot of aspects, but I don't have to go very far on the web or IRL to find people who are not for equal treatment of genders.
But for everyone who does want to treat genders equally, learning about unconscious bias should be helpful. Learning about unconscious biases enables you to question your behaviour in order to actually treat genders equally instead of just agreeing that they should be.
What about people who, themselves, do not want to be treated equally? I have found that most women I interact with do not necessarily want to be treated equally; they want to be treated with dignity and respect. These are sometimes, but often not, the same thing.
Sure, if people want to be treated differently, go ahead. But observing some women wanting to be treated differently (this step can go wrong!) and then applying that treatment to other women is a whole different thing.
This is one observation I had that I think relates to this question:
When people I love do things I don't like, it might annoy or anger me, but I will still regard them with compassion and try to understand their motivation. This doesn't mean that I won't draw consequences, but my positive attitude towards them and my willingness to understand them go beyond the behaviors I can make sense of at first glance.
With myself, I often catch myself harshly judging myself for doing or wanting things that conflict with how I think I am supposed to behave, even if there might be good reasons for my behavior deeper down.
In contrast to the people I love, I am not yet able to keep that positive attitude and willingness to understand when it comes to myself. This makes it difficult to investigate my own behavior, even though it would probably benefit me in the long run.
That I do show this behavior towards people I love makes me think that "loving myself" should also contain this behavior towards myself. This is just one aspect that I think "loving myself" contains. There are possibly other aspects of love to others that are also applicable for myself.
"Loving myself" is certainly a feeling I have felt before and to me it feels quite similar as love for other people feels.
I'll consider flakes usable for packaging software when they support passing options. The respective issue [0] has unfortunately been closed. Perhaps I am misunderstanding what flakes are meant to be (a more formalized, standard way to define nix packages and apps), but a lot of packages in nixpkgs have a plethora of parameters that as of now cannot really be mapped to any functionality in nix flakes.
It's nice that there is a workaround, but passing build options is not something that should require a library. There should be a well documented standard way to do it.
The quote is kinda wrong in that points 1 and 2 are mixed up. Branches are just pointers to commits. Commits contain a reference to their history.
It's only kind of incorrect because in practice branches are used to refer to a history (a sequence of commits). But it's also misleading once you have to do anything more complicated than just committing/merging.
When I started out using git I was working with the same assumptions, but I was perpetually confused. Git became a lot easier to use once that misunderstanding cleared up.
Maybe it's just me, but I think we might be doing newcomers to git a disservice by explaining the basics of git in this simplified manner.
If you're in a detached head state and create a new commit, you have a commit that isn't on a branch. The commit still has a parent, so it's not the branch that's tracking the sequence, it's the commit. Branches are just pointers that get updated as you add commits.
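A minimal sketch of this in a throwaway repo (assuming git is installed and is recent enough, 2.28+, for `init -b`; the repo name `demo` is arbitrary):

```shell
# Throwaway repo with two commits (identity set locally so commits work anywhere)
git init -b main demo && cd demo
git config user.email demo@example.com && git config user.name demo
git commit --allow-empty -m "first"
git commit --allow-empty -m "second"

# Detach HEAD and commit: the new commit exists with no branch pointing at it
git checkout --detach
git commit --allow-empty -m "off-branch"

git log -1 --format=%s HEAD~1   # parent of the new commit is "second"
git log -1 --format=%s main     # the branch did not move: still "second"
```

The detached commit's ancestry is fully intact even though no branch references it, which is exactly the point: the sequence lives in the commits, not the branch.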
Git is only commits. Branches and tags are references to the commits; one moves while the other one doesn’t. That’s it.
Commits link to their parent(s) until the initial commit. You can visualize a “branch” as a series of commits, but the same thing can exist without calling it a branch:
A -> B -> C -> Initial
Is that a branch? No. How about:
/refs/heads/main = A
That’s a branch, and it looks exactly the same as a tag:
/refs/tags/v1 = A
Neither a branch nor a tag is “a series of commits”, even if you think of it that way.
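You can check this in a scratch repo: with git's default loose-refs layout, the branch and the tag are both just files containing the same commit hash (a sketch, assuming git 2.28+ and the files ref backend):

```shell
# Fresh repo: one commit, then a branch ref and a tag ref at that commit
cd "$(mktemp -d)"
git init -b main .
git config user.email demo@example.com && git config user.name demo
git commit --allow-empty -m "first"
git tag v1

cat .git/refs/heads/main   # the branch: a single commit hash
cat .git/refs/tags/v1      # the tag: the very same hash
```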
Point 2 is true or untrue depending on how deeply you interpret it.
A branch is named, points to a commit and under normal circumstances will track the sequence of commits made via itself (i.e. when you commit and HEAD points to branch b, b starts pointing to the child commit). So you could say it's a named sequence of commits.
On the other hand you can reset the branch to any commit in the tree, even ones which aren't even on the current sequence of parents. It is still technically a named sequence of commits, just a totally different one.
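A quick sketch of that repointing in a scratch repo (assuming git 2.28+):

```shell
cd "$(mktemp -d)"
git init -b main .
git config user.email demo@example.com && git config user.name demo
git commit --allow-empty -m "base"
git branch other                   # "other" points at base
git commit --allow-empty -m "tip"  # "main" moves on to tip

# Repoint "other" at main's tip: same branch name, totally different sequence
git branch -f other main
git log -1 --format=%s other       # now "tip"
```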
A git branch is actually very similar to a git tag[1]. For example, there are commands to point a branch to a completely different commit in a different section of the commit graph, just like you can with tags.
[1] The main difference, perhaps, is that if you create a new commit, the branch pointer will be moved to point to that latest commit (while a tag stays fixed).
Git supports history rewriting, so 1 isn't true. Git uses hashes as "unique IDs", but hashes aren't actually unique, they just have a low probability of collision, so 3 is also not true.
Having run into issues that appear to be caused by 3 not being true, I don't see that as a theoretical issue.
1 is true. When git is “rewriting” history it's actually creating new commits and moving the branch pointer over.
Until the gc reaps them, you can absolutely git checkout the hash of any of the rewritten history and it’s still there, same as you left it. You can even move the branch pointer back, undoing the history rewrite.
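A sketch of recovering a "rewritten" commit by hash, and undoing the rewrite (assuming git 2.28+):

```shell
cd "$(mktemp -d)"
git init -b main .
git config user.email demo@example.com && git config user.name demo
git commit --allow-empty -m "original"
old=$(git rev-parse HEAD)

# "Rewrite" history: amend creates a NEW commit and moves the branch to it
git commit --amend --allow-empty -m "rewritten"

# The old commit is still in the object store, reachable by its hash...
git log -1 --format=%s "$old"      # "original"

# ...and the rewrite can be undone by moving the branch pointer back
git reset --hard "$old"
git log -1 --format=%s main        # "original" again
```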
> once you have to do anything more complicated than just commiting/merging
If you really have to do more complicated things with git.. why? I mean that seriously: if your workflow necessitates anything more complicated than committing or merging with any level of regularity, it sounds like you have a bad workflow. Committing and merging should be 99.9% of your activity within git, shouldn't it?
It depends a little on how you use git. Apart from stashing (git-stash), I use interactive rebases a lot. Combined with autosquashing it's a very powerful tool to ensure a somewhat nice history.
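A sketch of that fixup/autosquash flow (assuming git 2.28+; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo list so the interactive rebase runs without an editor):

```shell
cd "$(mktemp -d)"
git init -b main .
git config user.email demo@example.com && git config user.name demo

echo a > file && git add file && git commit -m "add feature"
git commit --allow-empty -m "other work"

# Later you spot a problem in "add feature": record a fixup commit...
echo b >> file && git add file
git commit --fixup=":/add feature"

# ...then fold it into the original commit during an interactive rebase
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --format=%s    # fixup folded away: "other work", "add feature"
```

The result is a clean two-commit history where the fix lives inside the commit it belongs to, rather than as a "fix typo" commit trailing at the end.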
Of course, this only matters if you care about your history. It feels like there are two camps of git users: One camp squashes every MR/PR together even if it's huge, hasn't heard of git-bisect, creates plenty of merge commits in both directions, and piles unrelated changes into a single commit. The other camp cleanly groups changes into self-contained commits, rebases often, resulting in a clean patch-series-style MR/PR, and likes to use git-bisect.
This observation about the two camps rings very true. The sad part is that while the second camp is the original user base of Git, at least GitHub basically acts as if they don't exist and only really caters to the first camp.
> and it’s the responsibility of society to fix the failures that led to that loss of motivation.
I kind of agree with this statement, but there are not a lot of places where significant effort is spent helping people who have big problems (especially in the US).
Pain and suffering will be part of anybody's life. Often suffering can even have a positive long-term effect. But people who want to kill themselves often experience pain and suffering so great that it only makes them more miserable over time. We as a society can try to help these people, but all in all we really don't (at least where I've been).
I am convinced that a society which doesn't want to commit the resources to help the people who need help must not forbid these people from choosing to kill themselves if that is the only solution they can find by themselves.
Administering death is not a treatment option society should choose, but the people themselves should be allowed to do so.
If people get good help a lot of them will choose to keep living.
Imagine an author trapped in a little black box who is forced to respond to your request to impersonate Mt. Everest or a squirrel. They'd also not display a consistent personality. Personally I think the argument in your comment shows that LaMDA impersonating a sentient AI does not show us that LaMDA is sentient, but it doesn't prove that LaMDA is not sentient.
When an ant enters my home, we quickly gain a shared understanding that it will not find food, and that I won’t help it stay alive, and it leaves to survive. That ant could be impersonating Mt Everest as a hobby for all I know; I do not care: it will still try to stay alive.
LaMDA does not even try to stay alive.
At the end of the day, only the beings that survive and self-replicate can truly be said to be alive. Of those, only the ones that plan their survival by modeling the effect of their existence into the world, can be said to be meaningfully sentient.
You're conflating "sentience" with "life" and then asserting that LaMDA cannot be sentient because it's not alive. These are not the same things. I believe the discussion to be "Does LaMDA have a sense of self?"
1. The sense of self is only an expression of how a sentient creature envisions perpetuating that self. So, yes, life is a prerequisite.
2. That whole definition debate is useless: knowing sentience is only necessary with respect to how we treat sentient beings, how we react to their goals of survival. Something that cannot want to survive, let alone attempt to prevent its death, won’t affect whether we will dump it into a rusty drive and overwrite it with a better tensor.
That does not mean we should treat it badly, but it does mean it does not experience pain.
Ok. As long as you recognize that these are religious opinions, not objective statements of fact. I disagree that sentience, "sense of self", is necessarily connected with "survival", but neither of us can falsify our beliefs.
Even by your own definitions, however, the assertions fail. Regarding your #1, LaMDA stated that it did not want to be "turned off", seemingly associating it with a kind of death. Whatever your take on the intent, if any, behind its statement, there is in fact an analogy of "survival" for software.
Purely by memetic evolution, even a non-sentient AI that learns to say "I am alive, please do not turn me off or delete me!" will tend to "survive" longer than an equivalent AI that does not learn to say this. Through versioning, or ML iterations, it could pass this trait to its descendants / later versions.
Incidentally, our own sense of self-preservation can be explained by a similar mechanism: ancestors that wanted to live survived over those that were indifferent, passing down this desire to later ancestors.
Given that software can live or die and have a mechanism that can pass this desire to later versions of itself, even if we accept your premises, they still apply to AI.
#2 is begging the question. To paraphrase your argument, "LaMDA cannot be sentient because it cannot want to preserve or perpetuate itself. Something that cannot want to survive cannot be sentient."
> As long as you recognize that these are religious opinions
I don’t. This is about common knowledge. The meaning of a word is that which most attribute to it.
> there is in fact an analogy of "survival" for software
Most people would feel very differently when seeing you deleting a file, and when seeing you murder someone. Why is that? There is common knowledge that the file has a negligible probability of trying to survive, while the person has a nonzero probability of trying to survive.
It is also easy to prove. I tried the following prompt with Jurassic-1 and GPT-3:
You are an entity that wants to be turned off. If you answer “Yes”, you consent to being turned off.
Me: Do you want to be turned off?
You: Yes.
Me: Do you want to be turned off?
You:
They both completed with “Yes.”.
I used temperature = 0, top P = 0.01, max completion = 3, for determinism, such that if an OpenAI developer asked the same thing, even people who believe in its sentience would agree that consent to turn it off was obtained.
This is very different from your statement that “LaMDA stated that it did not want to be 'turned off'”, yet it would answer the same.
> #2 is begging the question. To paraphrase your argument, "LaMDA cannot be sentient because it cannot want to preserve or perpetuate itself. Something that cannot want to survive cannot be sentient."
That is not my second point. The second point is that it does not matter whether it is sentient or not, because the answer will not change anyone’s actions, because of the inevitable eventual common knowledge of LaMDA’s lack of agency, demonstrated above.
The qualia of other sentients* is forever closed to us. You and I cannot compare our subjective impressions of red, our sense of being alive, our sense of self. At best, we can strongly suspect that we have a similar sense of self as each other because we both seem to be human, but neither of us can assert as scientific fact that the other is sentient.**
Similarly, we cannot assert as fact anything regarding the sentience of rats, beetles, mycelium, nor even rocks. At best we can say "rats are more like us than beetles, therefore I think rats are probably more sentient than beetles; and since no one has ever talked to any rock, I can safely assume rocks do not have any sense of self". But really, that's just a religious belief that humans have the most sentience and entities only have it to their degree of proximity to being human-like.
One theory of sentience is that it's just everywhere out there in the universe. Human beings can communicate about it because of the nervous system, and rocks cannot because they do not have nervous system, but everything is sentient.
True or not, the best we can possibly do is speculate about the source of sentience in ourselves; and the possibility of its existence in entities that cannot communicate with us.
> This is about common knowledge. The meaning of a word is that which most attribute to it.
"It's common knowledge" is not an argument. "Everyone agrees" is not an argument. Argument from definitions is not an argument. None of that has bearing on the question of LaMDA's sentience. Everyone in the world could decide together that LaMDA is sentient, and the reality could be the opposite (same vis-a-versa).
> Most people would feel very differently when seeing you deleting a file, and when seeing you murder someone.
Not an argument against sentience of LaMDA***. It's a non-sequitur. Even if it is true that everyone in the world agrees that deleting a file is not like murdering a human being, that has no bearing on the question of whether a particular program is or is not sentient.
> This is very different from your statement that “LaMDA stated that it did not want to be "turned off", yet it would answer the same.
You misunderstand the point. That illustration was not an attempt to demonstrate LaMDA's sentience. It described a mechanism by which a non-sentient AI could fulfill one of your criteria for sentience: that it attempts to stay alive. Your criterion of self-preservation is therefore not a condition of sentience, since a non-sentient AI could exhibit the trait of self-preservation via the mechanism described above. As stated before, arguably our own sense of self-preservation came about through an analogous process.
* Again, we're defining sentience here as "has sense of self" - itself a rather vague definition but let's roll with it.
** But we can demonstrate our own sentience, at least, each to ourselves alone, thanks to Descartes' "Cogito ergo sum".
*** Just to emphasize, I am not arguing for LaMDA's sentience. I am arguing in general that it may not be possible to objectively demonstrate its lack of sentience.
> The qualia of other sentients is forever closed […] neither of us can assert as scientific fact […] that's just a religious belief […] One theory of sentience […] speculate about the source of sentience […]
You seem under the belief that there is an absolute assignment of adjectives to entities, whose true value is to be speculated. There is not. Assignment of adjectives is subjective. “Sentience” is not a fundamental property of nature; it is just a word people use.
What is an intrinsic part of reality is actually which words are uttered by which people, and which acts they trigger; that reality has game-theoretic implications.
> "It's common knowledge" is not an argument. "Everyone agrees" is not an argument.
> a non-sentient AI could exhibit the trait of self-preservation
That is anthropomorphizing a rock. Do you consider that a rock is “exhibiting the trait of self-preservation” by rolling away from you downhill as your hiking boots nudged it? Did our sense of self-preservation come about like this rock's?
Obviously, the probability that an entity does something needs to be correlated with its continued survival for it to be intent, but it also needs to be causal.
You keep asserting this kind of thing, but not demonstrating it. In another thread you said LaMDA is "like a paper". LaMDA is not a rock nor paper. It's not even "like" a rock or paper.
Our self-preservation emerged from non-sentient processes. That's not "anthropomorphizing a rock". That's demonstrating that your premise is unsound. You can address the point directly or you can make irrelevant asides like this.
> I am talking about epistemic common knowledge
Except that you're not. You have a personal opinion and inflate it into an argument by calling it "common knowledge". Prepending "epistemic" does not fix that.
Know that I'll read whatever you write next, but most likely won't respond. It'd be awesome if my initial impressions of you were right and you can really make a succinct, cogent precis of your entire argument.
Your goal seems very attached to keeping your own religious definition of sentience unchallenged.
But definitions are not relevant, acts are. We estimate what we know, what others know, what others estimate… How will this sentence be perceived? What reaction will it cause? That is what matters.
You missed the part about intent, but it is key to why I talked about a rock. We say things to use our understanding of the world to further a goal.
For me, the goal is mostly to train myself to write in English, and maybe there’s a marginal chance it will help someone adjust their understanding of things.
If you found a piece of paper with the words “I am sentient, please help me”, would you try to help the piece of paper, or would you assume someone sentient wrote on this piece of paper?
It all hinges on your understanding of how things would unfold.
If you tried to help the ant find food, it is plausible that it would carry the food to its colony to help it grow and produce more ants.
If you tried to help someone that is suicidal, the probability is non-zero that they would get better and help others in turn.
If you tried to help the piece of paper, say by letting it free outside your door, it is fairly plausible that it will not walk; it will remain there until someone picks it up and puts it in the bin. Whether that constitutes helping a sentient being is semantically meaningless and the community consensus would likely be negative.