As someone who used to believe a lot of the stuff in here, and definitely no longer does, I can say that this was not written by a practitioner.
Adam, if you’re reading this, I would definitely encourage you to take a step back and go deep on how you might prove that the problems are where you think they are. The stuff you’re describing has all been done many times at this point and just either hasn’t turned out to be valuable to solve, or now has big effective companies doing it, or is extremely difficult for non-obvious reasons. Your characterization of total doom in modern experimentation is inaccurate.
An API-driven vivarium-as-a-service with instrumented cages was tried, to the tune of tens of $M, by Vium. It didn't take off, though there are lots of vendors of sophisticated cage instrumentation now, including some based on computer vision. DeepLabCut is amazing and open source.
In terms of general progress in experimental tools, there has been a ton: modern super-resolution confocals; affordable femtosecond light sources like the Coherent Monaco, which enable all kinds of awesome stuff; and newer methods like MERFISH and PatchSeq (or hell, just the total commodification of sequencing generally).
Microfluidics are now widely used and super valuable as "ASICs", though I think the lack of a general-purpose "CPU" lab on a chip has misled people not in the field.
In terms of molecular tools, it’s just night and day from 10 years ago. iPSCs, CRISPR, expansion microscopy, tons of new labels and stains etc.
Ginkgo and Zymergen have enormous scale, invest heavily in software and robotics, and are working “in vivo.” Recursion also invests heavily in automation, and while I don’t think they run animals in house, it’s not clear what Transistor is proposing that would outperform them.
Lots of companies run lots of studies in tons of different species all the time. Less so in academia, but I don’t think saying “well everyone else is working in vitro and we will work in vivo” is the kind of arbitrage opportunity Transistor seems to think it is. Where there are bottlenecks that I think could be improved, they are either unsexy (an easy Stripe-product-quality 3rd party IACUC would be super useful) or hinge on showing up with an enormous bucket of money so you can do things like set up your own breeding colonies.
And of course scientists really do care about being right and finding lasting results that are big effects. It is so much harder than it looks to do that well, but the people working in it are super smart and, at least outside of academia, generally have good incentives.
Edit: I clicked through to their “Business” page, which reads in part:
> Transistor will instead build a system designed with speed and scale in mind from the beginning: an automated wet lab with an API interface. Current CROs require bureaucratic back and forth which can extend into the months and are extraordinarily expensive for results that one crosses their fingers and hopes are correct.
May I point them to a company I founded 9 years ago, which raised a $56M series B last week: https://strateos.com/
Does this mean an opportunity exists? Maybe. But I think Transistor has some education to do on where the true problems that would be valuable to solve lie.
As someone with software background and interested in bio, it's a real pleasure to see a commentary from an experienced practitioner.
While we're discussing avenues of progress, it's clear that software (and deep learning advances specifically) is poised to have a large impact on how bio research is conducted, and what categories of questions we'll be able to answer. The current consensus on how you leverage software in practice is to put both bio and software teams under one roof (Insitro and Recursion are canonical examples). I wonder if you think a software-only company makes sense in this space? The analogy I like to use: people used to roll their own accounting software within large enterprises until spreadsheets came along. Is there room for an equivalent in some segment of bio?
As someone who has worked in biology and in "building fault tolerant systems" (Erlang), I think this might be a real problem (I don't know if it is; the last lab work I did was just before automation), but it's simultaneously obvious that the writer has no clue about how to correctly merge the two concepts.
Yes definitely. Here the "space" doesn't refer to physical space, but to the abstract vector space that a neuron's tuning represents. For example, there is a famous paper[1] that showed neurons could be responsive to abstract concepts -- for example, one might fire for "Bill Clinton" regardless of whether the stimulus is a photo of him, his name written as letters, or even (with weaker activation) photos/text of other members of his family or other concepts adjacent to him. The neuron's activity gives a vector in this high-dimensional concept space, and that's the "space" GP is referring to.
Can I get an ELI5 on how physical neurons, stuck in a measly 3 dimensions, can possibly form higher-dimensional connections on a large scale?
I understand higher dimensional connections in theory (such as in an abstract representation of neurons within a computer), but I can’t imagine how more highly-connected neurons could all physically fit together in meat space.
If I’m recording from N neurons, I’m recording from an N-dimensional system. Each neuron’s firing rate is an axis in this space. If each neuron is maximally uncorrelated from all other neurons, the system will be maximally high dimensional. Its dimensionality will be N. Geometrically, you can think of the state vector of the system (where again, each element is the firing rate of one neuron) as eventually visiting every part of this N-dimensional space. Interestingly, however, neural activity actually tends to be fairly low dimensional (3, 4, 5 dimensional) across most experiments we’ve recorded from. This is because neurons tend to be highly correlated with each other. So the state vector of neural activity doesn’t actually visit every point in this high dimensional space. It tends to stay in a low dimensional space, or on a “manifold” within the N-dimensional space.
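A minimal sketch of the idea above, with all numbers made up: simulate N "neurons" whose firing rates are driven by only a few shared latent signals, then ask how many principal components are needed to explain most of the variance. The latent/mixing construction is purely illustrative, not any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 "neurons" over 1000 time steps, driven by only 3 shared latent
# signals plus a little independent noise (hypothetical numbers).
N, T, K = 50, 1000, 3
latents = rng.standard_normal((T, K))       # 3 underlying signals
mixing = rng.standard_normal((K, N))        # how each neuron reads them
rates = latents @ mixing + 0.05 * rng.standard_normal((T, N))

# PCA via the eigenvalues of the covariance matrix: how many components
# are needed to explain 95% of the variance?
cov = np.cov(rates, rowvar=False)           # 50 x 50
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()
dim = int(np.searchsorted(explained, 0.95) + 1)
print(dim)  # 3: the 50-neuron "recording" lives on a ~3-D manifold
```

Even though the state space is 50-dimensional, the correlations induced by the shared latents keep the trajectory on a low-dimensional manifold, which is exactly what the PCA spectrum reveals.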
Agreed, it's really cool :). A lot of this is very new -- it's only been in the past decade and a half or so that we've been able to record from large populations of neurons (on the order of hundreds and up, see [0]). But there are a lot of smart people working on figuring out how to make sense of this data, and why we see low-dimensional signals in these population recordings. Here are some good reviews on the subject: [1], [2], [3], [4], and [5].
I'm curious about how much of this apparent low dimensionality is explained by (1) the physical proximity of the neurons being recorded, (2) poverty of the stimuli (just 4 sequences in this paper, if I'm not mistaken)
Both good questions. It could very well be that low dimensionality is simply a byproduct of the fact that neuroscientists train animals on such simple (i.e., low-dimensional) tasks. This paper argues that [0]. As for your first point, it is known that auditory cortex exhibits tonotopy, such that nearby neurons in auditory cortex respond to similar frequencies. But much of cortex doesn't really exhibit this kind of simple organization. Regardless, technological advancements are making it easier for us to record from large populations of neurons (as well as track behavior in 3D) while animals freely move in more naturalistic environments. I think these kinds of experiments will make it clearer whether low-dimensional dynamics are a byproduct of simple task designs.
Look up state space, then neural population and neural coding.
This isn't really something about neurons per se, it's about systems.
Suppose I have a system that can be fully characterized (for my purposes) by two numbers: temperature and pressure. If I take every possible temperature and every possible pressure, these form a vector space. But notice that temperature and pressure are not positions in the real world. It's a "state space" or "configuration space". At any moment in time, I could measure my system's temperature and pressure, and plot a point at (temperature(t), pressure(t)). As the system changes through time according to whatever rules govern its behaviour, I could take snapshots and plot those points (temperature(t+1), pressure(t+1)), (temperature(t+2), pressure(t+2)). This would give a curve, a "trajectory", that represents the system's evolution over time.
Okay, that's a 2D state space. But imagine I had a simulation of 10 particles (maybe some planetary simulation for a game). For each particle I have maybe a 3D position (x,y,z) and a 3D velocity (vx, vy, vz). So I need 6 numbers to fully describe the state of each particle, and I have 10 particles. Therefore, to fully describe the state of the whole system, I need 60 numbers. I therefore have a 60-dimensional state space. But each of these dimensions does not represent a position measurement along some axis in the world. In fact, only 30 of them do (3 * 10); the other 30 represent velocities.
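The 60-dimensional state vector above can be written out directly; here's a sketch with arbitrary numbers, taking one force-free Euler step so the single point moves through the 60-D state space:

```python
import numpy as np

# 10 particles, each with a 3-D position and 3-D velocity: the full
# state of the system is one 60-dimensional vector.
n_particles = 10
pos = np.zeros((n_particles, 3))
vel = np.ones((n_particles, 3))   # every particle drifting at (1, 1, 1)

state = np.concatenate([pos.ravel(), vel.ravel()])
print(state.shape)  # (60,)

# One Euler step of free motion: positions advance, velocities don't.
dt = 0.1
pos = pos + dt * vel
next_state = np.concatenate([pos.ravel(), vel.ravel()])
# The trajectory of the whole simulation is the sequence of these
# 60-D points: state, next_state, ...
```

Each snapshot of the simulation is a single point in the 60-D space, and the simulation's run traces out a trajectory there, exactly as in the 2-D temperature/pressure picture.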
The vector here refers to the "feature vector" where the dimension is the number of elements in the vector. E.g. a feature vector of [size, length, width, height, color, shape, smell] has 7 dimensions. A feature vector for the space has 3 dimensions [x, y, z]. The term "higher dimension" just means the number of features encoded in the vector is higher than usual.
In the context of neurons, while the neurons are in the 3 spatial dimensions, the connections of each neuron can be encoded in a feature vector. Each connection can specialize on one feature, e.g. the hair color of the person. These connection features can be encoded in a vector. The number of connections becomes the dimension of the vector. Not to be confused with the physical 3D spatial dimensions of the neurons.
The nice thing about encoding things in vectors is that you can use generic math to manipulate them. E.g. rotation mentioned in this article, orthogonality of vectors implies they have no overlap, or dot product of vectors measures how "similar" they are. Apparently this article shows that different versions of the sensory data encoded in neurons can be rotated just like vector rotation so that they are orthogonal and won't interfere with each other.
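A small numpy illustration of those vector operations; the "firing patterns" here are made up, and the rotation is just a 90-degree rotation in one plane of a 4-D space:

```python
import numpy as np

# Two hypothetical 4-neuron firing patterns; b is a scaled copy of a.
a = np.array([1.0, 2.0, 0.0, 1.0])
b = np.array([2.0, 4.0, 0.0, 2.0])

# Cosine similarity via the dot product: 1.0 means the patterns
# point the same way in the 4-D space.
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(round(cos, 3))  # 1.0

# Rotating a vector 90 degrees in the plane of the first two axes
# makes it orthogonal to the original: their dot product becomes 0,
# i.e. the two representations no longer interfere.
theta = np.pi / 2
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
x = np.array([1.0, 0.0, 0.0, 0.0])
print(x @ (R @ x))  # ~0: orthogonal after rotation
```

This is the generic machinery the comment refers to: similarity as a dot product, and "won't interfere" as orthogonality after rotation.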
Linear algebra as usually taught deals with 2 or 3 dimensions, though the machinery works in any dimension; geometric algebra is sometimes argued to be a nicer framework for higher-dimensional vectors.
Don't conflate physical and logical, in this case we don't care about the physical dimensions, only how the logic is expressed. Even a 2D function can be expressed in N-dimensional parameters, such as
y = a1 * x + a2 * x^2 + a3 * x^3 + a4 * x^4
where you only have one input and one output, but 4 constants that can be adjusted. These 4 constants make up a 4D vector.
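As a sketch of that point: the four constants really are just a point in a 4-D parameter space, even though the function itself maps one number to one number (coefficient values are arbitrary):

```python
import numpy as np

# The 1-D curve y(x) is determined by a 4-D parameter vector (a1..a4).
coeffs = np.array([2.0, -1.0, 0.5, 0.25])   # one point in parameter space

def y(x, a):
    # y = a1*x + a2*x^2 + a3*x^3 + a4*x^4
    powers = np.array([x, x**2, x**3, x**4])
    return a @ powers

print(y(2.0, coeffs))  # 2*2 - 1*4 + 0.5*8 + 0.25*16 = 8.0
```

Picking a different curve means picking a different point in this 4-D space; the "dimensions" are logical parameters, not physical directions.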
Consider three neurons all connected together. Now consider that each of them may have some 'voltage' anywhere between 0 and 1. Using three neurons you could describe boxes of different shapes in three dimensions. Add more and you get whatever large dimension you want.
If you take a matrix of covariance or similarity between neurons based on firing pattern, and try to reduce it to the sum of a weighted set of vectors, the number of vectors you would need to accurately model the system gives you the dimensionality of the space.
This isn’t about the 3 dimensional structure the neurons occupy, but about their operational degrees of freedom.
Think about how a CNC machine works: you can have a CNC with more than 3 axes. For example, a 4-axis CNC machine can move left/right, up/down, and backwards/forwards, and also has another axis which can rotate in a given plane.
From a more mathematical perspective, just think about the number of parameters in a system (excluding reductions); each parameter would be a dimension.
Appreciate the attempt, but in this example the 4th axis is not independent since the motion along that axis can be achieved, with some complexity, by the motion along the other axes. Granted this is not very useful for a machinist because it will be very tedious to machine a part this way compared to the dedicated 4th rotating axis, but mathematically it is redundant.
I have found it easiest to think of a logical dimensions or configurations when thinking of higher dimensions. Physically it can be a row of bulbs (lighted or not) wherein N bulbs (dimensions) can represent 2^n states in total. The 2 here can be increased by having bulbs that can light up in many colours.
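The bulb picture can be enumerated directly; n = 4 here is arbitrary:

```python
from itertools import product

# A row of n on/off bulbs: each bulb is one "dimension", and the row
# as a whole has 2**n distinct configurations. With c colours per bulb
# it would be c**n instead.
n = 4
states = list(product([0, 1], repeat=n))
print(len(states))  # 16 == 2**4
```

Each configuration is a point in an n-dimensional space, even though the bulbs themselves sit in a 1-D physical row.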
I roughly understand what the article says about dimensional space (reading higher-mathematics books alongside my meagre college course way back when helps a little, even if it's all half-remembered and a bit wrong -- that understanding is sufficient to satisfy me). However, the poster above me doesn't, and clearly asked for a definition a five-year-old layman could understand.
The comment I am replying to, your comment in the tree, and the one next to you do not seem to match that request in any sense.
Now, simplified definitions are an art, but Feynman managed it with Quantum Electrodynamics -- so it is not impossible to do it for complex subjects. And it seems to me the less you understand a subject, the less simple and more confusing your explanation will be, such as the explanations given by the other posters here. (fyi: I do not understand enough to properly convey my understanding clearly -- which is why I have not attempted to do so)
This isn't a matter of having an incomplete understanding, thanks for the offhanded aspersion though. The fundamental problem is that the concept of manifolds in state space isn't really something that has a non-tortured real world analogy, which is a prerequisite for a five year old to understand. It's probably possible to express more simply with a video demonstrating the covariance structure of a data set visually, then showing how that results from a small set of vectors, but I've read enough textbooks to be confident that a simple, concise explanation eludes words.
This is fun, I'm enjoying reading the replies :) I'm certainly no expert, but attempting an explanation helps me exercise my personal understanding, so here goes. Corrections welcome.
The "connections" you mention aren't the issue, in my understanding of the biology. Neurons are already very strongly interconnected by numerous synapses, so they already do physically fit together in their available 3D space, and appear capable of representing high-dimensional concepts. (See caveat below.)
The "higher dimensions" here are not where the neurons exist, only what they're capable of representing. If we think about a representation of the concept of a "dog" for example, there are many dimensions. Size, colour, breed, temperament, barking, growling, panting, etc etc. Those attributes are dimensions.
Take two dog attributes: size and breed. You can plot a graph of dogs, each dog being a mark on the graph of size vs breed. Add a third dimension and turn the graph into a cube: temperament. You can probably imagine plotting dogs inside this three dimensional space.
It's very difficult to imagine that graph extending into 4th, 5th or further dimensions. And yet, you can easily imagine, say, a dog that's a large, black, friendly Labrador with a deep bark who growls only rarely. We could say that dog can be represented as a point in 6-dimensional space (or perhaps a 6-dimensional slice through a space with even more dimensions, just a slice through 3D space could produce a 2D graph).
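The dog example above can be written as a literal vector; the attribute names and numeric encodings here are entirely made up for illustration:

```python
# One dog as a point in a 6-D attribute space. Each attribute is one
# axis; the numeric codings are hypothetical.
dog = {
    "size": 0.9,           # large
    "coat_darkness": 1.0,  # black
    "friendliness": 0.95,  # friendly
    "breed_id": 12.0,      # "Labrador" under some made-up coding
    "bark_depth": 0.8,     # deep bark
    "growl_rate": 0.05,    # growls only rarely
}
vector = list(dog.values())
print(len(vector))  # 6: a point in 6-dimensional attribute space
```

None of these six axes is a physical direction; they're just independent attributes, which is all "dimension" means here.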
The number of connections between neurons may be related to the number of dimensions they can represent. In honesty, I don't know, and I guess that if there is a relationship it may not be linear. So neurons might be capable of representing 4 dimensions with fewer than 4 synapses, for example, I don't know. Seems possible to me, though.
Caveat: I think my reasoning here may be fallacious: "the fact that neurons are capable of representing high-dimension concepts demonstrates that they have adequate synapses to do so". It seems akin to anthropocentrism, I'm not sure. Perhaps it's just a circular argument. I think it provides an adequate basis for an ELI5 though.
Do you mean due to the thickness of each connection, they would occupy too much space if the number of dimensions was too high? Not necessarily 4 or more, just very high because there are on the order of n^2 connections for n neurons?
In the visual cortex, neurons are arranged in layers of 2D sheets, so that perhaps gives an extra dimension to fit connections between layers.
The ELI5 of higher dimensions, explained mathematically in text, is that a coordinate in R^3 is identified uniquely by a three-tuple u = (x, y, z). A four-tuple simply adds one dimension. That might be a time coordinate, colour, etc.
If I remember correctly, the integers Z form spaces too. Z^2 can be illustrated as a grid, where every node is again uniquely identified by coordinates, or by two of its neighbours; either way v = (a, b).
Adjacency lists or index matrices are common ways to encode graphs. My model of a network of neurons is then a graph.
I imagine that, since neurons have many more synapses, that's how you get a manifold with many more coordinates.
Each neuron stores an action potential much like a pixel stores a colour, and its state evolves over time, but that's where the model becomes limited.
How it actually represents complex information in this structure I don't know.
PS: Or very simply put, physics has more than three dimensions.
There was a fun article in early March showing that the same is true for image recognition deep neural networks. They were able to identify nodes that corresponded with "Spider-Man", whether shown as a sketch, a cosplayer, or text involving the word "spider".
deep neural nets are an extension of sparse autoencoders which perform nonlinear principal component analysis [0,1]
There is evidence for sparse coding and PCA-like mechanisms in the brain, e.g. in visual and olfactory cortex [2,3,4,5]
There is no evidence though for backprop or similar global error-correction as in DNN, instead biologically plausible mechanisms might operate via local updates as in [6,7] or similar to locality-sensitive hashing [8]
Wouldn't it be especially inelegant/inefficient to try and wire synapses for, say, a seven-dimensional cross-referencing system, when you have to actually physically locate the synapses for this system in three-dimensional space?
(and when the neocortex that does most of the processing with this data is actually closer to a very thin, almost two-dimensional manifold wrapped around the sulci)
There has to be an information-theory connection between the physical form and the dimensionality of the memory lookup, even if they aren't referring to precisely the same thing, right?
The issue with your question is that the dimensions of the configuration space and the physical form aren’t even approximately the same thing. Take, for example, a 100x100 grayscale image. It’s a flat image; the physical dimensions are 2. There are 10,000 different pixels though, and they are all allowed to vary independently of each other; the configuration-space dimensions are 10,000. Neurons are like the pixels in this analogy, not like the physical dimensions.
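A quick way to see the pixel analogy concretely (the specific pixel index chosen is arbitrary):

```python
import numpy as np

# A 100x100 grayscale image is physically 2-D, but as a configuration
# it is a single point in a 10,000-dimensional space: every pixel is
# one independent axis.
img = np.zeros((100, 100), dtype=np.uint8)
point = img.flatten()            # a copy: the image as a 10,000-D vector
print(point.shape)  # (10000,)

# Changing one pixel moves the point along exactly one of those axes.
img[37, 91] = 255
moved = img.flatten()
print(int((moved != point).sum()))  # 1
```

The image never leaves its 2-D physical form, yet its configuration space is 10,000-dimensional; the same distinction applies to neurons sitting in 3-D tissue.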
Neurons aren't allowed to vary independently of each other, and neither are pixels; a grayscale image with random pixels is just static, not even recognizable as an image. The mind cannot decode those pixels in a seven-dimensional indexing scheme; it can't even decode them in the given two dimensions if you have an array size error and store the same data in an array 87 columns wide. In your analogy, if you put a stop sign into the upper right side of the image, that is always going to be recalled associatively with the green caterpillar you put in the lower left side of the image. These properties don't work so well for memories and imperfect/error-prone but statistically correct biological systems.
The average neuron has 1000 synapses, and for geometric reasons (Synaptic connections take up space) most of those are to other neurons that aren't very far away in 3D space.
You’re not contradicting my point, you’re just using the word “image” to refer to a different concept than I did. That “100x100 pictures of the real world” won’t reflect all the 10,000 dimensions available in “a 100x100 bitmap” is certainly the case—and that is what is meant by saying they lie on a lower-dimensional manifold within that space.
Similarly: yes, physics limits neuronal connectivity. The actual space of neuronal connections lies on a manifold inside the full “n squared, divided by 2” dimensions of connectivity of any old set of n points. That still doesn’t mean neurons can’t represent high-dimensional concepts, because your treatment of physical dimensions as the same thing as concept space is still mistaken. Taking your 1000 synapses number for granted, the input to a given neuron would be 1000-dimensional, not three. If you’re not arguing the concept space is 3d, and merely arguing against those who’d say neuronal connectivity isn’t limited by physical constraints, then I’d advise a reread of the ancestor comments; none of them are saying that.
Neurons are not random access. The analogy is otherwise pretty apt, except that an image doesn't store information about what it displays and I don't mean EXIF.
Maybe, but my guess would be that there's a trade off made here. Either you can use higher dimensionality in the abstract, or you can have a much much bigger brain. A bigger brain processes slower merely because of volume and requires a lot more resources to support it.
Nature stumbled onto the path that it did because we don't have high enough nutrient food or fast enough neurons.
No basis to sue here: you are perfectly allowed to lend money without interest, you just have to pay gift taxes on it. (If it is really a loan, the gift taxes only apply to the forgone interest. But, keep in mind that if they can’t repay it, they will owe income taxes on the whole amount!)
On the plus side, this means we are much more likely now to get actually awesome prosumer/consumer-grade AR systems with many of the current issues worked out in the next N years. It's starting to get to a point where, though it's still kind of lame, you can see how it would be awesome, but still needs $B of R&D so... I'll take it.
Sure, but the process of getting there will either directly physically harm some humans or just otherwise be highly financially inefficient (in the case of developing wartime technologies which have no wartime purpose).
These aren't bombs. A heads-up-display has more chance keeping infantry safe and reducing the chances of misidentifying an innocent bystander as an enemy combatant.
HL2 is the best device I've ever used, the future is definitely AR. Once you try it you will never want to have a smartphone again. Smart glasses all the way. MSFT just needs to miniaturize at this point, the UX is already there
Yes, you now have a much bigger chance of getting your proprietary AR headset for $499 linked to your microsoft.com account with the development paid for by US tax dollars used to kill children in Yemen with drones.
As a pilot, the dynamics of flying for money are very different than flying for fun or personal reasons. The pressure to make the flight in marginal conditions, or push separation to get the shot, or otherwise just generally fly closer to the limits is real and significant. Adding money (and presumably third parties) totally changes the nature of the operation.
Yeah, when you put it that way, it makes more sense.
I still hope they’ll make the rules more permissive at some point, especially as drones are integrating more and more flight automation and collision avoidance technology. Maybe certify certain drones with specific features and restrictions as being suitable for commercial usage without a license? Accidents can still happen, but... accidents can also happen with any kind of commercial activity, even without drones involved at all.
My impression is that it's pretty easy to get a Part 107 Remote Pilot license; I think it's essentially just a knowledge test. Even if all you learn getting it is how the airspace system works and how to coordinate your use of it, that's a significant base of knowledge that is essential to flying responsibly in the US. There are serious civil and criminal penalties in the Federal Aviation Regulations for lots of complex rules that really do matter, so it makes sense that they at least just want commercial operators to understand what they are on the hook for.
I asked a flight school instructor about a part 107 license a while back and his response was "you want one? say the word and you'll have it by lunch." The only requirement is passing a written exam which he said was very easy compared to the private pilot exam.
The hardest thing about getting the part 107 license might actually be getting the identification form signed, since it needs to be done by an official. If you take the exam at a flight school they'll probably offer to have a CFI do it for you (for a fee, I'm sure), but if you take the exam at a non-flight-specific testing center you might have to make a trip to an FSDO or call a CFI and see if they'll do it. It would help with the accessibility of the 107 license if they got rid of the somewhat arcane proof-of-identity rules inherited from other pilots' licenses.
In such cases it’s normally intent that matters. A lot of laws are based around intent, which might seem like a huge issue but that’s why terms like ‘beyond a reasonable doubt’ are used.
If you’re regularly selling photos from your regular drone flights, then that’s hard to argue about. But a one-off because you happen to take a newsworthy photo is another story.
All these systems are just too complicated. We keep adding features on features to software without a second thought, because it's invisible and you can't immediately tell from looking at it how insane it is, in a way that you wouldn't be able to ignore if these were mechanical systems.
Also, not that it would have prevented this attack, but as a community we desperately need a fully open source FPGA-based ultra simple firewall appliance. The absolute minimum set of configuration options, the simplest possible hardware architecture; something you could actually trust with your life. Right now I have zero confidence that any of the commercially available security appliances are actually secure against nation-states.
This isn't going to help if the attack is essentially tunneled through the NAT using legit protocols and traffic to the vulnerable service, where most of the attacks actually happen on the software side. How do you architect a firewall to accurately know whether the application-layer traffic is legit or not? It might not even be possible to fully implement something like this. It sounds complicated already. There is a reason firewalls are complex: the application landscape is extremely complex.
I agree with the concerns over complexity. That said, I see nothing in the article that indicates that a firewall appliance is the correct countermeasure. Defense in depth is likely what you need (and may ultimately get) here.
> as a community we desperately need a fully open source FPGA-based ultra simple firewall appliance
Firewalls are sort of an anti-pattern. The idea of firewalls to most people is "let's make sure they can't get past this one network control and then we can stop thinking about security." But we've known for a long time that defense-in-depth is the only real black-box security strategy. Yes, it would be nice if we could get high-performance packet inspection and analytics for cheap, but by itself it's next to useless.
More than a "firewall" we need 1) federated authentication+authorization protocols to stick in between the network and application-layer protocols (these basically exist but nobody uses them, so it's time to make something trendy), 2) a standard for ephemeral credentials tied to identities (tied to #1), 3) a standard for supply chain verification in modern technology stacks along with a simple way to certify they've been followed. This ensures better integrity of network boundaries (authn+z rather than network security) and that the software along the way is secure against supply-chain attacks.
I don't know if #3 is even possible, but theoretically it is. Just maybe not with any network we have today.
While it is difficult to design a secure procurement chain all the way down to the SiO2, we could at least design simple enough hw/sw systems for which formal verification is an economical option. And then force government entities to use formally verified systems instead of the bug-ridden crap most shops, especially the software ones, have to ship under intense deadline pressure. The market has led us into a broken local optimum; there's no way to get out short of state-level action.
Maybe layers? For government agencies, the government could manage procurement.
For non-government organizations, realistically, maybe some sort of public-private/foundation partnership, if the NSA is willing to bend a little. Which at this point might be in their best interest.
MITM is getting harder due to TLS improvements. Most traffic is TLS encrypted over a handful of destination ports.
I don't want to oversimplify, because there is of course an uncountable number of software packages and a lot of noise on any network. However, "super simple FPGA network firewall" is not going to save us. Trustworthy logging, least-privilege account credentials and a competent SOC, among a number of other things, are critical.
Many orgs know how to be secure, they just can't or won't be due to some financial or usability constraint.
This stuff doesn't have to be complicated. Large organizations should have an internal CA to handle authentication, and it should be against policy to use any other method.
Per the figures, it looks like 16 hours had the largest effect, even larger than 24 hour fasts. But this is mouse data and should all be taken with a huge grain of salt.
Off topic: I understand “taken with a grain of salt” is meant to express diminishment of the idea it is being associated with. By saying “huge grain of salt” which is very common, doesn’t that defeat the purpose?
I keep being surprised that anyone takes Gary Marcus seriously. As far as I know he hasn’t contributed anything meaningful to deep learning (or any other modern AI methods) but is always somehow there to criticize, waving his arms and shouting “but it’s not actually reasoning!” Baffling.
Edit: removed reference to Gary being a psychologist. It was distracting from the actual point.
I'm surprised anyone in psychology would take machine learning researchers seriously when they continually spout that they are "solving intelligence" or any other such nonsense. I see Gary as a necessary antidote to hype and hyperbole.
And he is going against a lot of money, hype-driven startups, and arguably inflated paychecks... It is to be expected that people will misinterpret and criticize him.
While I don't necessarily agree with a lot of Gary Marcus' arguments re: AI, I don't think it's fair to gate-keep based on what you perceive his background is. AI is an incredibly multi-disciplinary field, and it's a little silly to imply that people coming from a psychology background cannot contribute meaningfully to the discussion.
It's not that he is a psychologist, it's that he doesn't have any idea about what he's talking about in ML. It's important to have results when you want to change other people's opinions, he has only opinions. Let him do better than GPT-3 and show us how it's done.
If his background were in selling used cars, would his opinion receive any traction at all? I think it's fair to question the relevance of somebody's background if their background is the root of their supposed relevance.
You mean like the rebuttal the article gives? There is little more to Gary Marcus's position to rebut because his position is so weak. If he had a more relevant background then perhaps his opinions would have more substance and, consequently, warrant more substantial responses.
Eliezer Yudkowsky has zero credentials and has contributed nothing of note on the practical side, and yet I frequently see him touted as an "AI expert" (here and elsewhere on the internet).
There's no barrier to entry to posting your opinions on the internet, and (to the degree possible) it's usually better to judge the post on its own merits, rather than the author's.