I will never buy another Tesla again for political reasons, but regarding reliability: their new models have always had reliability problems, but then reliability has always gotten much better within a year or so.
I don't know if the Cybertruck will follow the same pattern, or if the whole company has jumped the shark, but if we're looking for non-political opinions I would not necessarily write them off on quality issues alone.
I agree the whole thing was shady, but did they really never care about solar? I thought the solar roof was announced well after the SolarCity deal, and it seemed like they were serious about ramping that up for a while (though obviously that never really went anywhere or fulfilled any of the original promises either).
The article is flat out wrong. The reason purple and violet look similar is not a trick of the brain, and has nothing to do with the color wheel "wrapping around"; it's a natural result of the frequency response curves of the three types of cones in our eyes. The two colors stimulate our cones in the same way, so of course they naturally look similar.
Most diagrams of our cone frequency responses are subtly wrong. Diagrams typically show three separate smooth, overlapping peaks, centered around red, green, and blue. What they leave out is that our L-cones (the "red" cones) also have a separate little sensitivity bump way off in the violet end of the spectrum. So when you see violet light, it's actually stimulating both the cones that are most sensitive to red light and the ones that are sensitive to blue light. This is pretty much the same stimulation pattern you get if you send both pure red and blue light into your eyes together, which is why purple and violet look so similar.
If you Google "cone sensitivity diagram" you'll mostly find the misleading versions of the diagrams, but you can see one that includes the extra bit of high-frequency L-cone sensitivity in this paper, for example: https://hal.science/hal-01565649/file/Vienot_ConeFundamental...
IMO the only thing that can get through is actual personal consequences for the voter themself
Well, yes. And his approval rating has been steadily declining in tandem with the stock market declines he's caused. If/when prices suddenly skyrocket because of tariffs, you can bet his approval ratings will decline further.
What I find hard to imagine is how the app should respond when synchronisation fails after locally committing a bunch of transactions... Manual merging may be the only safe option in many cases.
Yeah, exactly right. This is why CRDTs are popular: they give you well-defined semantics for automatic conflict resolution, and save you from having to implement all that stuff from scratch yourself.
The author writes that CRDTs "don’t generalize to arbitrary data." This is true, and sometimes it may be easier to write your own custom app-specific conflict resolution logic than to massage your data to fit preexisting CRDTs, but doing that is extremely tricky to get right.
It seems like the implied tradeoff being made by Graft is "you can just keep using the same data formats you're already using, and everything just works!" But the real tradeoff is that you're going to have to write a lot of tricky, error-prone conflict resolution logic. There's no such thing as a free lunch, unfortunately.
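To make that tradeoff concrete, here's a minimal sketch (Python; the class and names are my own, not from any particular CRDT library) of a last-writer-wins register, one of the simplest CRDTs. The conflict semantics live entirely in the merge rule, which is commutative, associative, and idempotent, so replicas converge without any app-specific resolution code:

```python
import time

class LWWRegister:
    """Last-writer-wins register: a minimal CRDT sketch.

    Each write is stamped with (timestamp, node_id). Merging two
    replicas keeps whichever value carries the larger stamp, so the
    merge is commutative, associative, and idempotent, and replicas
    converge regardless of the order updates arrive in.
    """

    def __init__(self, node_id):
        self.node_id = node_id
        self.value = None
        self.stamp = (0.0, node_id)

    def set(self, value):
        # Local write: record the value with a fresh stamp.
        self.value = value
        self.stamp = (time.time(), self.node_id)

    def merge(self, other):
        # Ties on timestamp break deterministically by node id,
        # because tuples compare element-wise.
        if other.stamp > self.stamp:
            self.value = other.value
            self.stamp = other.stamp
```

The flip side is exactly what the author notes: your data has to fit the shape the CRDT gives you. Here, concurrent writes don't merge at all; one of them silently wins.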
The problem I have with CRDTs is that while being conflict-free in a technical sense they don't allow me to express application level constraints.
E.g, how do you make sure that a hotel room cannot be booked by more than one person at a time or at least flag this situation as a constraint violation that needs manual intervention?
It's really hard to get anywhere close to the universal usefulness and simplicity of centralised transactions.
Yeah, this is a limitation, but generally if you have hard constraints like that to maintain, then yeah you probably should be using some sort of centralized transactional system to avoid e.g. booking the same hotel room to multiple people in the first place. Even with perfect conflict resolution, you don't want to tell someone their booking is confirmed and then later have to say "oh, sorry, never mind, somebody else booked that room and we just didn't check to verify that at the time."
But this isn't a problem specific to CRDTs, it's a limitation with any database that favors availability over consistency. And there are use cases that don't require these kinds of constraints where these limitations are more manageable.
"How do you make sure that a hotel room cannot be booked by more than one person at a time?" Excellent question! You don't. Instead, assuming a globally consistent transaction ordering (e.g. Spanner's TrueTime, though any UUID scheme suffices), it becomes a tradeoff between reconciliation latency and perceived unreliability. A room may be booked by several people at a time, but eventually only one of them will win the reconciliation process.
A: T.uuid3712[X] = reserve X
...
B: T.uuid6214[X] = reserve X // eventually loses to A because of uuid ordering
...
A<-T.uuid6214[X]: discard T.uuid6214[X]
...
B<-T.uuid3712[X]: discard T.uuid6214[X], B.notify(cancel T.uuid6214[X])
-----
A wins, B discards
The engineering challenge becomes to reduce the reconciliation latency window to something tolerable to users. If the reconciliation latency is small enough, then a blocking API can completely hide the unreliability from users.
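A rough sketch of that flow (Python; the function names and log shapes are my own invention, with plain lexicographic ordering of transaction ids standing in for TrueTime-style ordering): every replica accepts the reservation optimistically, and reconciliation later keeps the lowest id per room and cancels the rest.

```python
import uuid

def reserve(log, room, txn_id=None):
    """Optimistically record a reservation; conflicts are resolved later."""
    txn_id = txn_id or uuid.uuid4().hex
    log.append((txn_id, room))
    return txn_id

def reconcile(*logs):
    """Merge replica logs: for each room, the lowest txn id wins.

    Returns (winners, cancelled), where winners maps room -> winning
    txn id and cancelled lists the losing txn ids whose users must be
    notified that their booking was rolled back.
    """
    merged = sorted(t for log in logs for t in set(log))
    winners, cancelled = {}, []
    for txn_id, room in merged:
        if room in winners:
            cancelled.append(txn_id)  # loses to an earlier (smaller) id
        else:
            winners[room] = txn_id
    return winners, cancelled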
Keep in mind that studies find a strong effect for placebos: the numbers are not saying "these pills do nothing" they're saying "these pills seem to do a lot, but placebos do almost as much".
Obviously the effect feels extremely real to you, but we wouldn't see a strong placebo effect in the numbers if people on placebos didn't genuinely feel much better.
I get that it feels like the second drug worked much better, but expectancy effects and internal narratives are extremely strong, and they're impossible to untangle at the level of an individual.
The whole conversation is a version of "does alcohol help with depression better than a placebo?"
Because, just like alcohol, SSRIs have very clear, pronounced psychoactive pharmaceutical effects. It's just that in both cases those effects have little to do with "depression".
For example, sertraline is one of the most effective drugs for PE (premature ejaculation), with an easily measurable effect (it can be timed!).
Do depressed people feel better when they are drunk and inebriated? Maybe, probably some do?
There certainly are quite a lot of people that self medicate depression with alcohol!
Do depressed people feel better when they are zombified out of their brain with SSRIs? Probably some do?
From a certain point of view, prescribing SSRIs for depression isn't all that different from prescribing alcohol for depression. Both are hepatotoxic - pretty bad for your liver.
And it's just a symptom of the stone age we live in when it comes to medicine and our understanding of the human body.
Maybe I just don't understand placebo particularly well, but why would it work on the second drug and not the first?
Separately, I think part of what is missing from this discussion is that we currently have no mechanism for prescribing placebos to a large portion of the population.
Placebo is an expectancy effect. I don't know all the details of OP's story, but there are all kinds of plausible reasons I can imagine that someone might have different expectations for one drug over another.
It might not even have anything to do with the drug itself: mental health issues tend to wax and wane on their own over time, so if someone happens to feel better right after starting a new medication, it's easy to think "oh hey this one must be working" and then that can trigger the placebo effect and turn into a positive feedback cycle.
For depression specifically, I could see placebo having another important effect: agency. Taking action to resolve your problems feels good. Compare to rain dances, which, while ineffective at bringing about rain, surely help reduce the performer's anxiety.
Many double blind studies are completely broken due to side effects triggering a stronger placebo response, and this is an especially huge problem for drugs like SSRIs where a placebo gets you about 80% of the benefit of the actual drug.
Similar to the study you linked, there was a more recent study where they found that for the SSRI escitalopram (aka Lexapro), the benefits disappear when you lie and tell people that they're receiving an active placebo that mimics the side effects of an SSRI. That is, if people don't actually think they're taking an SSRI, they don't get any benefit.
I used to buy into this kind of stuff, but I've become more and more skeptical of the idea that you would still be yourself if your brain could be preserved/emulated/transplanted/whatever.
Our nervous system extends into our bodies. We feel emotions in our bodies. People with certain kinds of brain damage that prevents them from feeling emotions normally also experience trouble making rational decisions.
More recent research has been hinting that we may even hold certain types of memories outside our brains.
Humans have always been drawn to neat, tidy ideas, especially ones that draw clean boundaries: it's an appealing idea that our consciousness lives solely in our brains, and that our brains could function independently of our bodies, but it seems unlikely that it's really that simple.
As a neuroscientist working on brain computer interfaces, it's painfully clear to me that we are absolutely nowhere close to understanding the full complexity of the human brain in the manner required to simulate or reboot someone's consciousness. It's not even clear yet what level of abstraction is required. Do we need to map all of the synapses to get a connection graph, or do we need to map all synapses plus the synaptic proteins to assign connection weights too? This is ignoring other types of connections like gap junctions between cells, ephaptic coupling (the influence of local electric fields on neurons firing), mapping neuromodulator release, etc. On one hand, it feels like irreducible complexity. On the other hand, you can lose about half of your neurons to neurodegenerative diseases before you start noticing a behavioral effect, so clearly not every single detail is required to simulate your consciousness. It would be a MAJOR leap forward in neuroscience to even understand what level of abstraction is necessary and which biological details are essential vs. which can be summarized succinctly.
Anyone claiming to take your brain and slice it up and have a working model right now is currently selling snake oil. It's not impossible, but neuroscience has to progress a ways before this is a reasonable proposition. The alternative is to take the brain and preserve it, but even a frozen or perfused brain may have degraded in ways that would make it hard to recover important aspects that we don't yet understand.
It is, however, fascinating to do the research required to answer these questions, and that should be funded and continue, even if just to understand the underlying biology.
In addition to all that we don't know about synapses etc., I've often wondered whether even mapping all the "hardware connections", so to speak, would be enough. You'd have everything in the right place, but what about the "signals" running on it? Does a certain amount of constant activity on these circuits constitute the difference between a "living" brain and a dead one? How much of our consciousness is really in the topology of the circuits, and how much of it is simply defined by the constant activity running around in them? I assume neural circuits form loops of synapses that reinforce or suppress activity. If the signals going around and around ever "stop", can they ever be started again with the same "patterns"? What if these patterns, the living "software", are at least partially what define you?
Well anyway, that's my armchair crackpot neuroscience theory for the world to consume ;). I'm sure there must already be a name for the idea though.
Six of the sheep were given a single higher dose of ketamine, 24mg/kg. This is at the high end of the anesthetic range. Initially, the same response was seen as with a lower dose. But within two minutes of administering the drug, the brain activity of five of these six sheep stopped completely, one of them for several minutes – a phenomenon that has never been seen before.
“This wasn’t just reduced brain activity. After the high dose of ketamine the brains of these sheep completely stopped. We’ve never seen that before,” said Morton. Although the anesthetized sheep looked as though they were asleep, their brains had switched off. “A few minutes later their brains were functioning normally again – it was as though they had just been switched off and on.”
On one hand, I wonder if a gradual transition would work. Spend enough time over the years mirroring your conscious patterns onto a computational substrate, and they might get used to the lay of the land, the loss of old senses and the appearance of new ones. There might not be an ultimate "stepping in", but something like you might be able to outlive you, on a substrate that it feels happy and comfortable on.
On the other hand, the idea of "simulating your consciousness" raises questions beyond just cognition or personality. A mechanistically perfect simulation of your brain might not be conscious at all. Spooky stuff.
There are gonna be a million artificial minds of various levels of capacity before the first human mind is accurately simulated.
By that time we'll be so accustomed to glitching artificial minds being created, modified, bugged, and debugged that current moral conundrums like "is the copy me or not" and "is it ok to create a hobbled copy of someone" are going to seem quaint, a bit akin to counting angels on the head of a pin. Mangled and molded consciousness will be as mundane as computation itself.
In my PhD work, I helped conduct the human portion of a study on this topic, contributing to some discussions at the FDA [1]. The idea was a bit controversial then, and I've had a few anesthesiologists get mad at me for it, but the general pattern has now been replicated quite a few times, such that the field has largely moved on from 'Is something bad happening?' to 'Why does it happen, and how do we prevent that bad thing from happening?' [2]. So it has been a gratifying excursion from my typical research before and since then.
Thanks. This is purely anecdotal, but we had a family member whose child was under anesthesia for a severe respiratory infection. He’s been severely developmentally delayed in his first year, and it’s unclear to us what damage was done.
Thanks for sharing. It is difficult to know for certain. If the respiratory infection led to hypoxic damage, then that could also contribute. I have not kept up with the field, but generally the most sensitive period for anesthesia was before 4 years or so. As I mentioned briefly, most of my work is in different areas of research so I haven't kept up to date.
Is there any reason to suspect that adults suffer the same effects as infants? (Not asking to be combative, just curious whether children are uniquely affected because their brains are still cooking.)
You and the other commenter bring up good points. Developmental neurotoxicity (with lesser or no effects in older children and young adults) is, I speculate, probably due to differential gene expression during early development, versus later in life when genes related to development are suppressed and genes related to maintenance are more abundantly expressed. The developmental neurotoxicity probably works through different mechanisms than what is termed "postoperative cognitive dysfunction" in the elderly after general anesthesia [1][2], about which all I know is that it is a thing. If I were to speculate, it would be that in the elderly there are fewer redundant cognitive resources, so detrimental effects on cognition are magnified. It used to be thought that post-operative dysfunction is temporary, but it seems likely to me (again, speculation) that there is both recovery and permanent dysfunction, and that the dysfunction just becomes a little more difficult to detect. Going back to my paper: we used a method to disentangle two types of memory processes, recollection (explicit recollection of experiential details) and familiarity (a general feeling of familiarity with things you've seen previously). Both contribute to memory performance but tend to be differentially affected by neurodegeneration (recollection is more affected, and generally more hippocampal), so that sometimes, when these processes aren't accounted for, a memory test will fail to find differences because patients rely on familiarity to answer memory questions.
Why would you want to go on in a world that has either left you behind or keeps making the same mistakes over and over in a cycle and won't listen to you because you're too old to understand?
And conversely, I think Kim Stanley Robinson puts it best in the Mars trilogy. Scientific progress often has to wait for the old guard to die so new ideas can be tried. Sometimes there are actually new things and they need to be allowed to cook.
German physicist Max Planck somewhat cynically declared that science advances one funeral at a time. Planck noted that “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it.”
A scientist like Einstein experienced scientific revolutions within his lifetime. That's hardly going to be the norm in the history of science, and also a horrible assumption to think revolutions would endlessly be occurring and reoccurring.
Also, we know when we're on the edge of knowledge, especially in cosmology and physics. We're waiting for revolution there. There's dark energy and dark matter. It doesn't matter if you're old or young; you know that your theories aren't good enough to explain whatever these are.
Scientific knowledge doesn't get swept away, especially when it's rock solid. Newtonian physics still has a lot of relevance, after all. It's just that relativity is even more accurate.
Just imagine someone who died 50 years ago coming back and hearing skibidi toilet, no cap, ohio, etc. Then not being allowed to board a plane without a body scan, and not having money for a plane anyway, since bread was a dime and a gallon of gas was a quarter the last time you checked. You can't even get a job; you're just a brain, and all the knowledge work you could do is 50 years out of date.
There’s a short story about uploaded consciousnesses being used as AI slaves. They go bad once enough years have gone by that they can’t speak the modern language anymore. Then they usually lapse into insanity or depression.
>Our nervous system extends into our bodies. We feel emotions in our bodies. People with certain kinds of brain damage that prevents them from feeling emotions normally also experience trouble making rational decisions.
I think that may be true enough, but it doesn't have the upshot you seem to think it does.
It just means that we would need to sustain not just the brain itself but the totality of the environmental conditions on which it depends. No easy task for sure, but not something that presents an in-principle impossibility of preserving brains.
I think there's a major philosophical error here in thinking that the added logistics present this kind of in-principle impossibility.
Also, talking like this starts to play with anti-science speculation a bit. Octopi actually have neurons extending through their limbs. We don't. So when we talk about consciousness being "embodied", I'm sorry, it's an attempt to romanticize the question in a way that loses sight of our scientific understanding. Consciousness happens in the brain.
Sure, the brain needs stimulus from its embodied nervous system, and may even depend on those data and interactions in significant ways, but everything we know about consciousness suggests it's in the brain. And so the data from "embodied" nervous systems may be important, but there's no in-principle reason why they can't be accounted for in the context of preservation.
I don't think I agree with you. There are multiple examples in society of damaged connections between the nervous system and the brain, spinal cord damage for example, where the personality of the patient changes little. In the same sense, losing limbs or entire sections of your body (aside from psychological trauma and other psychological consequences) doesn't affect personality that much.
Of course the nervous system is much more complex, but damage to the brain almost always results in some sort of cognitive dysfunction or personality change; see the Phineas Gage case, for example.
>In the same sense, losing limbs or entire sections of your body (aside from psychological trauma and other psychological consequences) don't affect personality that much
"There aren't any changes except for all of the changes, but those changes don't count because reasons."
I don't know how many amputees you know; you may know many. I was in the army for 10 years during the height of the global war on terror and know more than most. Not a single one is the same as they were pre-amputation. Could be the trauma that caused the amputation, could be the amputation. I'm not an amputationologist.
I do assert that a holo-techno-brain will need a shit-ton of e-drugs to deal with being amputated from its fucking body.
The bacteria in your butthole are a part of you just like your brain, maybe less, but they ARE a part of you.
> Could be the trauma that caused the amputation, could be the amputation.
Given the personality changes seen in people who go off to fight in the military and who end up coming back fully physically intact, I think it's more likely that the personality changes here were caused by the trauma, not by the amputation.
I'm not saying the latter isn't possible, but absent evidence to the contrary, it doesn't make much sense to assume the personality changes occurred because of the amputation alone.
Also consider that amputation -- even ignoring whatever trauma precipitated it -- is its own sort of trauma. I imagine if someone came up to me, perfectly physically healthy, knocked me out, and cut off my leg, I would wake up and develop emotional trauma that would cause personality changes.
I see what you mean- but consider that the gut does seem to play a significant role in mood and mental health. The enteric nervous system may not hold memories, but it seems to have something to do with personality and digestion issues can have negative cognitive effects.
Agreed that discomfort can cause temporary problems, and sometimes chronic problems in parts of the body can cause lifelong cognitive impairment. But that is not to say that these represent "you" or your personality. Your brain could still function perfectly without those body conditions.
And for the gut example, the brain actually does work normally: stomach and intestine removal (and other related surgeries) are fairly common procedures, and I don't hear of people complaining about personality changes. Of course, those types of procedures are extremely invasive in a systemic way, and not only your mental state but multiple other parts of the body need to re-adapt. But I truly believe "you" will still be "you" inside your brain.
PS: I put "you" in quotes because discussions about the identity of oneself are much more complex; just regard it as the most high-level definition of the concept.
The way various hormones influence the brain alone makes it pretty clear to me already that you'd be a completely different person when taken out of your body, and I'm pretty sure that's just the tip of the iceberg.
I consider that I have likely died more than twice in my lifetime already. And before this body gives up, I will have already died more times. Must simply enjoy the present and give gifts to my future self.
There is a bit of research and effort into head transplants. I wonder, if and when one is successful, how it would impact the individual: possibly retaining memories of the body, or changing personality.
First Human Head Transplantation: Surgically Challenging, Ethically Controversial and Historically Tempting – an Experimental Endeavor or a Scientific Landmark? (2019)
Transferring our consciousness into "the net", or some other fuzzy concepts are so far removed from reality as to be complete fiction. This includes freezing our brains and reanimating them later to resuscitate our lives.
They not only massively overestimate the functionality of today's tech to receive something like our consciousnesses, but even more so, by orders of magnitude, underestimate just how complex our living bodies are.
We only have the vaguest of ideas about how our physiology works (while we might be able to replicate flesh cells for "fake meat", we have 0 understanding or control over how those cells organize to form macroscopic organs). Applying this to the brain, our understanding is even more primitive. An example would be recent news that perhaps the brain is not sterile, but hosts a microbiome. Whether or not the brain hosts a microbiome is still "controversial".
We're still hundreds of years away from a comprehensive understanding of physiology.
But of course, we're never going to live that long, because we still believe (statistically as a species) in invisible guys in outer space that tell us we need to dismember people who believe in the WRONG invisible guy in outer space.
Our primitive, violent ape species will drive itself extinct long before we ever have a comprehensive grasp of how life works, especially to the level of understanding consciousness...
I'm also skeptical of the idea that one can "upload" consciousness and it would still be "you". I suppose this is true in a philosophical sense, but in a practical sense, subjective experience of consciousness rules the roost. It's inevitably going to be a mere copy of you. You don't get to experience any of it. Similar to a software project which is forked, I think it makes more sense to classify it as an entirely different entity at that point.
I suppose there are valid use cases for this, but I'm not that narcissistic to think the world needs eternal copies of me.
The continued subjective experience of the original consciousness is where I believe the real value lies. Digitisation of consciousness, assuming it has any sound scientific basis in the first place, would practically need to look more like the gradual replacement of brain (and bodily) matter with something more durable, enduring, and controllable. A slow process in which carbon is exchanged for silicon, or cellular damage is continuously reversed and aging kept at bay.
> It's inevitably going to be a mere copy of you. You don't get to experience any of it.
You can make the same argument for 'you before you went to sleep' and 'you after you woke up'. The only real link you have to that previous consciousness are memories of experiences, which are all produced by your current body/brain.
Think about this: For every consciousness (including you right now) it is _impossible_ to experience anything other than what the thing producing that consciousness produces (memories, sensations, etc.). It doesn't matter whether the different conscious entities or whatever produces them are separated by time or space. They _will_ be produced, and they _will_ experience exactly what the thing that produces them produces.
With an analogy: If you drop pebbles in either the same pond at different times or in different ponds at the same time, waves will be produced in all cases. From the perspectives of the waves themselves, what they interact with is always _exactly_ the stuff that interacts with the water they're made up of. To them, the question of identity or continuity is fully irrelevant. They're just them.
Similarly, it makes no difference whether you only have the memories of the previous conscious experiences, or if 'you' really experienced them. Those situations are indistinguishable to you. The link to future consciousnesses inhabiting your body is effectively the same.
>> It's inevitably going to be a mere copy of you. You don't get to experience any of it.
> You can make the same argument for 'you before you went to sleep' and 'you after you woke up'. The only real link you have to that previous consciousness are memories of experiences, which are all produced by your current body/brain.
Except I know, empirically, that people go to sleep all the time and wake up, and remain the same person. And I know (for practical purposes) I do the same. I -- my mind/body composite -- lie down, and get up the next morning. I remain the same person.
Simply 'copying' or 'uploading' my consciousness, like a computer file, is impossible even in theory, because I'm not just a conscious mind, but a conscious mind which is also a body. Consciousness cannot be split from the material body, even in theory. Somebody upthread said that he'd seen many amputees undergo personality changes as a result of their operations -- this is an informative (if very sad) example.
> that people go to sleep all the time and wake up, and remain the same person
You have absolutely no way of knowing that last part is true. You can only see their behavior, which is identical whether they are the same consciousness or a different one from the one it was yesterday. You don't even know whether they have any conscious experience at all.
> And I know (for practical purposes) I do the same.
You do not. The "for practical purposes" points at your _body_. There is no evidence that an organic body is in any way special. If you upload your consciousness and the resulting computer 'body' works as a normal body, it _will_ generate a consciousness and that consciousness _will_ feel that it is 'you' (itself). Note that we're talking about hypothetical, practically perfect computer bodies (which may be completely virtual, as long as their sensors and actuators live fully in that virtual world).
You can spin the illusion of a continuous conscious experience every way you want. It is still just that, an illusion.
> You can only see their behavior, which is identical whether they are the same consciousness or a different one from the one it was yesterday.
The first clause of the sentence is true, but the second is not.
You never directly see a thing itself, you only ever see its effects on the world. You rationally postulate the presence of water because of its clear colour, its hydrating effect on you, its tendency to become a gas at 100C, its tendency to dissolve salt, and so on. Similarly, you rationally postulate the presence of the same person, at 10pm on Monday and 7am on Tuesday, because he has the same personality, the same look, the same eye and hair colour, the same body shape, etc. We know about the presence of a thing from the presence of its effects and behaviour. If these remain the same, it is rational to believe that the person remains the same.
> You don't even know whether they have any conscious experience at all.
Again, knowledge of effects leads to knowledge of the thing causing said effects. I am aware of my own conscious experience, I see Bob affects the world in ways that are very similar to myself and other human beings, and so I rationally postulate that he is a conscious, rational being like I am. You never see a person's mind directly, but you see the effects of the person's mind the whole time. You see such effects through their body. If the effects remain the same, or change only in certain, limited ways, it's reasonable to believe that the cause remains the same (and frankly crazy to believe otherwise).
> The "for practical purposes" points at your _body_. There is no evidence that an organic body is in any way special.
The mind and the body form one substance. The idea that "my mind is one thing, my body another" has been tried and failed as a philosophical idea several times in history. It raises far more problems than it purports to solve. I know I continue through time in part because my body continues through time. You say the body isn't "special"; I don't know what this means, but I know my body, and other people's bodies, are different from other things, because they walk, talk, reason, sense, desire, and so on. (Once again, the effects and behaviour of a thing lead us to knowledge of that thing.) The idea that consciousness or self can be 'uploaded', even in theory, is pure fantasy. You and your body are one thing.
> You can spin the illusion of a continuous conscious experience every way you want. It is still just that, an illusion.
If this is so, explain how rational thought is possible. Given that rational thought involves you reasoning from A to B to C through time, how is this possible if there is no continuous 'you' that is going through time?
Very well, you think that preserving the brain, or even preserving the nervous system, is futile. But what of total biostasis, preserving the entire organism, just like the archaebacteria that live for thousands of years in ice or other extreme environments by slowing their metabolisms to a crawl?
To me, excessive negativity about the possibility of immortality smacks of weakness and defeatism. You either love life and want as much of it as possible, which makes you a friend of humanity, or prefer death, which makes you an enemy of humanity. I take a stronger line than the neuroscientist in the article. “Death positivity” like that of Viktor Frankl, anti-natalism, even faith in magical spiritual resurrections—all are anti-human viewpoints, only excusable in the past because they were copes with the inevitability of death. Now that we have reason to believe it can be averted, we owe our potential future selves every possible effort to save them from oblivion.
I’m not sure I actually believe in quantum immortality, but I do find it slightly suspicious: out of all the people you could have been born as, you just happen to be born in a timeframe where brain preservation might be possible before you die?
Most people who have ever existed are alive right now. The population has historically been much lower, so the odds are that you would be born around the time when high technology can support a high population.
> So what are the figures? There are currently seven billion people alive today and the Population Reference Bureau estimates that about 107 billion people have ever lived.
> This means that we are nowhere near close to having more alive than dead. In fact, there are 15 dead people for every person living.
So it is not wildly impossible that you’d be alive now, but it is fairly unlikely.
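The arithmetic behind "fairly unlikely" is straightforward; here is a quick sketch using the figures quoted above (7 billion alive out of roughly 107 billion ever born):

```python
# Rough odds of being among the living, using the figures quoted above.
alive = 7e9          # people alive today
ever_lived = 107e9   # Population Reference Bureau estimate of everyone ever born

p_alive = alive / ever_lived                     # chance of being alive now
dead_per_living = (ever_lived - alive) / alive   # dead people per living person

print(f"P(alive now) ~ {p_alive:.1%}")           # roughly 6.5%
print(f"dead per living ~ {dead_per_living:.1f}")  # roughly 14.3
```

So "15 dead for every living person" is a round-up of about 14.3, and a randomly chosen human from all of history has a bit over a 1-in-16 chance of being alive today.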
Also, hard to say what’s in the future of course, but even if population growth levels off, you’d expect to be born in the future, right? Which brings up another question, why not?
If we are going to go along with the fully ridiculous implications here and reinterpret all probabilities as conditioned on your immortality, why weren’t you born in the far future? I’d expect people born in the future to have easier access to immortality.
Maybe birth rates will go way down if we discover immortality (lowering your odds of being born later). Or maybe pre-immortality minds will be seen as more interesting and worth preserving (increasing your odds of being kept around).
> More recent research has been hinting that we may even hold certain types of memories outside our brains.
Not just hinting - the evidence is strong and accumulating rapidly. The gut, in particular, has so many neurons that it is considered the body’s “second brain”, to say nothing of the impact that gut bacteria have on your mind.
If you really wanted to create a copy of your “mind”, you’d have to image every neuron in your body for a thoroughly accurate copy. And even then, your entire behavioural profile would be missing the input of your gut bacteria, which appear to have a significant impact.
In computing terms (a system we don't fully understand), it would be like cloning a live machine by taking only the CPU die, or maybe just the hard drive. How many parts can you take away from a computer before it stops being the same machine? It's easy with a VM, or with a kernel that supports many hardware configurations. Kind of a digression, but I liked this idea.
I don't think this is a great analogy because computers don't have consciousness (yet).
But I usually move the hard drive (or at least its contents) between machines when I get a new computer, and that's enough for me to think of it as the "same", even if I reinstall the OS on the new machine and just copy my home directory onto the new one.
If it were preserving my original brain, it would definitely still be me at the core. Would everything be exactly the same? Probably not, but that paradigm is more than good enough.
There were customers happily using strong consistency in production, but somehow the idea that it wasn’t “finished” kept getting repeated over and over by management. I was well on my way to solving the biggest rough edge (tombstone reaping in SC buckets) but then I got pulled off to work on the infamous “data platform” and never got to finish that work :-(
I am a bit skeptical of Lucid's ability to grow into profitability, but they do have excellent engineering. Their EVs have some of the best efficiency and range on the market.