Well, if we all ate meat alternatives most of the time and treated ourselves to real meat much less often, we could reduce the density of animals raised for food so that they could be raised humanely. I love meat and especially bacon, but I'll have sea-crunch (or whatever) most of the time if it means we don't have to torture animals all their lives.
Thanks for the pointer -- looks like (OCaml/F#)++ with an effect system and some other niceties. I guess I'd rather go that way than enter the Scala effects mire.
There are some obvious questions I didn't see addressed up front (unless I missed them) -
- How do content-addressable fns and their codebase integrate w/git? (Seems like it really has to for the time being -- see the sketch after this list.)
- The usual: what are the execution model & perf characteristics?
- I see it's written in Haskell, but don't know if it's interpreted or further transpiled, etc.
- Debugger? Interop?
- Type and efficiency of GC
- Impl. of collection types - simple conses vs. HAMTs or r/b trees, etc.
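For anyone unfamiliar with the content-addressing idea, here's a toy Python sketch of the general concept (my own illustration -- Unison actually hashes a normalized syntax tree, and nothing here is its real API):

    import hashlib

    def content_address(canonical_form: str) -> str:
        # The hash of a definition's canonical form *is* its identity.
        return hashlib.sha256(canonical_form.encode()).hexdigest()

    # Pretend these two definitions have already been normalized so
    # that local names don't matter; they canonicalize identically:
    f = "lambda v0: v0 + 1"
    g = "lambda v0: v0 + 1"
    assert content_address(f) == content_address(g)

    # The "codebase" is then a mapping from human-readable names to
    # hashes; callers reference the hash, not the name, so renaming a
    # function never breaks its callers.
    codebase = {"increment": content_address(f)}

Which is exactly why the git question matters: the natural storage format is a hash-keyed database, not line-oriented text files.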
A strongly-typed pluggable effect system like this should replace, in a more structured and powerful way, what Python does today, which is really just to proxy out to efficient external libs (often written in native code, like Numpy or various AI engines). If Python is used for the actual compute, you're adding several 0's to your runtime.
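To put a number on the "several 0's" claim, a quick illustrative micro-benchmark (my own; timings vary by machine, the point is the ratio):

    import time
    import numpy as np

    n = 1_000_000
    xs = list(range(n))
    arr = np.arange(n, dtype=np.int64)

    t0 = time.perf_counter()
    total = sum(x * x for x in xs)      # actual compute done in Python
    t1 = time.perf_counter()
    total_np = int((arr * arr).sum())   # compute proxied to native code
    t2 = time.perf_counter()

    assert total == total_np
    print(f"pure Python: {t1 - t0:.3f}s  numpy: {t2 - t1:.4f}s")
    # Typically ~50-100x apart -- a couple of zeros before you even
    # touch GPUs or multiple cores.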
So I'm curious if Unison is of that ilk, or can be used dependency-free for CPU-intensive (and in my case, non-numeric) workloads.
Parinfer hits the sweet spot for me -- you still get the benefits of the parens, but the indentation is enforced to match, and changing either one causes the other to be updated accordingly.
Agreed - he somehow gets the most amazing guests, but it's all uninteresting, softball questions that don't challenge the guest at all, delivered at a glacial pace.
Yes, it's designed that way imo. It's more of an exploration of a subject, letting the guest shine and transmit their thoughts.
Would you have preferred someone like Charlie Rose, constantly interrupting the guests? A two-way dialogue might be more normal, but you lose a lot of signal. These aren't confrontational interviews; they're more educational ones hosted by a non-expert.
If people prefer a discussion between two experts, this is clearly not it.
Most things that increase normal cellular growth by stimulating the mTOR pathway, IGF production, etc., will also increase abnormal cellular growth - including things like, well, eating - esp. carbs/sugar and protein. It's certainly possible that a marked increase in NAD _could_ increase the growth of certain types of cancers, but I'd think this effect would be swamped by other factors.
As for "medical doctors" -- unless the MD in question is a (sports) nutrition specialist, perhaps oncologist, _and_ has graduated in the last few years or spends many hours a week keeping up with research that is accelerating every week, there's a good chance an intelligent consumer of podcasts/blogs/reddit/books is a better source of info on this topic. (Which is why I had to tell my MD why I wanted to take Metforin.) Not knocking MD's - but most simply don't have the time for this sort of thing the way practices are set up now.
Note that this study shows that supplements increase _blood serum_ levels of NAD, not concentrations inside the cell and mitochondria where it counts.
So I'd say that without further qualification, this study shows safety more than efficacy. The role (if any) of exogenous NAD precursor supplementation is still a topic of debate, and there's a lot of money in selling these formulations.
FWIW, I personally take NAD precursors because they likely do no damage (other than to the wallet), and they possibly do some small amount of good. There are plenty of other supplements I'd spend my money on before NAD precursors. (And of course diet and exercise outweigh them all for most...)
250 mg/day is 4.4 mg/kg. In the mouse study published in Cell they used 400 mg/kg/day.
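For scale, converting the mouse dose to a human-equivalent dose with the standard FDA body-surface-area factors (Km ≈ 3 for mice, ≈ 37 for humans; the 57 kg below matches the 250/4.4 figure above):

    \text{HED} = 400\ \tfrac{\text{mg}}{\text{kg}\cdot\text{day}} \times \frac{K_m^{\text{mouse}}}{K_m^{\text{human}}} = 400 \times \frac{3}{37} \approx 32.4\ \tfrac{\text{mg}}{\text{kg}\cdot\text{day}}

    32.4\ \tfrac{\text{mg}}{\text{kg}} \times 57\ \text{kg} \approx 1850\ \tfrac{\text{mg}}{\text{day}} \approx 7.4 \times 250\ \text{mg}

So even after allometric scaling, the human supplement dose sits several times below the mouse-study dose.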
I believe safety at these levels is irrelevant, so I prefer just working out every day and limiting my sugar and alcohol intake, all of which have proven life-extension effects.
Sure - you can find many NR and NMN supplements on AMZN for example.
NMN has gained popularity over NR, I believe mostly due to longer shelf life.
Elysium and TruNiagen are two famous brands that, IIRC, were suing each other at some point? I take another/cheaper NMN supplement.
Caveat emptor - there's really no way to tell what you're getting from many of these - and even if you're getting pure ingredients, the jury is still out.
For the past couple of years I’ve been taking Nicotinamide Riboside I buy on Amazon and it has basically eliminated inflammation and pain I used to get in my hands, wrists, and sometimes elbows. FWIW, I’m going to be 52 this year.
I'm tired of hearing about the lack of "explainability in AI" when the issue is specific to deep learning and other non-symbolic AI, whose very allure to date has been its black-box, just-train-it nature. That has led to all kinds of neat-o, genuinely scary high-dimensional text and image trickery without anyone bothering to try to understand anything. We've had some pretty incredible explainable AI -- planning, NLP, etc. -- in the past (1970s to early 2000s for the most part), and it should act as the "prime motivator" driving the DNN methods, which really excel at processing at the periphery.
And even in the case of billion-parameter models, explainability isn't as opaque or mysterious as it's made out to be. You can step through inferences and identify the semantics of individual neurons, clusters, and layers. Google's DeepDream neuron exploration took a while, but you can pick any feature you see in a generated image and identify exactly which neuron(s) hold that feature, and how it ended up in the image.
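As a concrete (if toy) version of that kind of inspection, here's a minimal sketch assuming PyTorch and a recent torchvision, using a forward hook to find the most responsive channel in one layer:

    import torch
    from torchvision import models

    # Pretrained model purely as a stand-in for "some big trained net".
    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    activations = {}

    def hook(name):
        def fn(module, inputs, output):
            activations[name] = output.detach()
        return fn

    # Record what one intermediate layer computes during inference.
    model.layer3.register_forward_hook(hook("layer3"))

    img = torch.randn(1, 3, 224, 224)   # stand-in for a real image
    with torch.no_grad():
        model(img)

    # Average activation per channel; the argmax is the "neuron"
    # (channel) most engaged by whatever features this input contains.
    per_channel = activations["layer3"].mean(dim=(0, 2, 3))
    print("most active channel:", per_channel.argmax().item())

Scaling this up to "which neuron holds this exact visual feature" takes real tooling (feature visualization, ablations), but it's the same mechanical access -- nothing is hidden.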
Deep learning and neural networks are giant compositions of simple arithmetic functions. Any variable can be investigated and its semantics understood in the context of the model.
GPT-3 neurons can be understood as encoding rules about language. It's tedious and requires a lot of compute and labor. Whether you ever get a full map of a model depends on the need for it, but there's no technical reason a "black box" model can't be explained or understood. Neural networks aren't magic; they're just big and convoluted.
Think of it like application disassembly and reverse engineering. On a 100 GB densely packed NN model, reverse engineering is going to be difficult, but explainability is something that can be done if the expense is justified.
Related: this Future Perfect podcast https://pca.st/ngofo0ll