The LSD and sleeping pills were not in the original study, I believe. That might be an artist's representation of the image at the bottom of the original study, which I remember showed the results in a single row.
Don't ask me why some blogger posted the PDF in 2013, and also don't ask me how English Wikipedia editors determined that a Wordpress blog is a "Reliable Secondary Source". I did locate the original on NASA's own website. Public Domain (USGov).
What a find! It's on page 106 but I didn't immediately do a control-f to find it or look at the table of contents. My gosh, all the stuff I flipped through before that... some things haven't changed (e.g. Digikey and National Instruments ads).
I thought it was common practice to think things through first and only then start doing something, but it seems that these days a lot of people have taken inspiration from Zuckerberg’s motto, “move fast and break things”… I’ll never forget that.
I think it depends on your goals. There are many domains where you’re better off just trying lots of things and iterating towards a more ideal solution, vs. waiting to start until it’s been analyzed thoroughly to find the perfect solution.
For example, I suspect more startups die from over-analysis than from acting too quickly and breaking things beyond repair.
That said, I think LLMs can be a mixed bag here. I find that they can really help my analysis phase, by suggesting architectures, finding places where future abstractions will leak, reminding me of how a complex project works, etc. I’ve found it invaluable to go back and forth in a planning phase with an agent before even deciding what exactly I want to build, or how.
And on the implementation side, they make code attempts very cheap, so I can try multiple things and just throw them away if I don’t like the result.
But that said, I do find that it requires discipline, because it's very easy to get into a groove where I don't do any of that, and instead just toss half-baked ideas over the wall and let the agent figure out the details. And it will, and it'll usually be pretty decent, but not as good as if I pair program with it fully.
One place I've seen people get caught here is when they don't actually have the information they need to solve the problem - when they don't understand the problem space well enough, or they don't know the boundaries of the systems or technologies they're using well enough, or there are unanswered questions. At that point, I've seen people dig into research projects and 15-page design document discussions that would all be obviated by a day or two of just doing the thing and seeing what happens.
My understanding is that was the actual point of "move fast and break things" - gain knowledge by trying stuff to help you make better decisions, even if you make a mistake and need to roll back or fix it. The art to this is figuring out how to contain the negative consequences of whatever you're testing, but by all means, experiment early to gather information.
I've stated it to mentees as "don't be afraid to start a fire as long as you know where the fire extinguishers are" - it's OK to fail in the service of learning so long as you fail in a contained way.
P.S.: I didn't mean that in a negative way; I was just surprised that we have to learn this because our kids have forgotten it, or probably because we don't teach planning in elementary school.
TBH I think the bigger problems with how we teach kids are twofold:
1. There's a right answer to every problem in school
2. If you get it wrong, that's bad, and you did bad.
The pattern I've seen from younger people these days is a learned helplessness, where there's no room for them to be creative in school, and any attempt to do so runs the risk of failing an assignment, getting a B, missing out on Harvard, and spending the rest of their lives poor in a ditch, or so they're told.
i think we need to encode (or refine) what we mean by "vibe code." my original impression was that it described the process whereby someone with an idea but lacking development/engineering skills leveraged an llm via an agent to create the mechanics to bring their idea to fruition. nowadays it seems like if it has a hint of AI then it's "vibe coded."
ironically, i didn't read the article because i come to comments now to see if it's been identified as AI slop, so i don't know which area this falls into
Some people (me included) are trying to separate Vibe Coding (no idea about code, just give me the result) from Vibe Engineering (I know how to do this, but can't be arsed to write it. I also know what the result should look like)
Cool.
Thus I am an amateur AI/LLM-assisted coder. I don't code for a living; I know the principles (or I think so) but don't remember syntax (too old to learn new tricks :)
Valerian missed the mark; I'm sure it's got great designs (although I also believe it's mostly CGI), but the story of the movie is disjointed (which is a risk when trying to merge multiple storylines into one) and the actors are lifeless.
I've grown to like Valerian over rewatches, but unfortunately it suffers from Besson being a massive Valerian fanboy and trying to stuff everything he possibly could into it... I think he'd have done far better if he'd gotten a more limited budget, or had to produce three of them for the cost of the one he did...
I know, hence why I think he should have gotten a smaller budget so that he was forced to try to contain himself to one story. Then maybe it'd have done well enough for a sequel as well... It feels like he got into it thinking he had this one shot so he better see how many things he could put in it, and as a result ensured he got only one shot...
The Fifth Element and Valerian and the City of a Thousand Planets are widely considered to share a thematic and stylistic universe, with similar aesthetic influences. There are shared elements (ha!) and aesthetics, with Valerian even featuring a shop called "Korbens" as an easter egg to The Fifth Element.
Unfortunately the movie doesn't do it for me, the 90s were a better time.
Once CGI became good, storytelling and creativity took a backseat in Hollywood.
> Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month.
They raised $122B.
122 / (2 × 12) ≈ 5 years to get your money back (I simplify, I know revenue <> profit)
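That back-of-envelope payback math, sketched in Python (the $122B raised and $2B/month figures are the ones quoted in this thread, not independently verified, and costs are ignored):

```python
# Rough payback-period math from the figures quoted in this thread.
raised_bn = 122.0            # capital raised, $B (per the comment above)
revenue_per_month_bn = 2.0   # revenue run rate, $B/month (per the quote)

years_to_recoup = raised_bn / (revenue_per_month_bn * 12)
print(f"~{years_to_recoup:.1f} years")  # prints ~5.1 years
```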
They are so big that almost no one can afford to acquire them. It is similar to someone trying to acquire MSFT or AAPL.
Revenue is one metric. Another important metric is gross margin.
OpenAI's gross margin is estimated at 33%[0]. They also have to pay Microsoft 20% of revenue.
So, for each $1 of revenue:
Revenue            $1.00
COGS (inference)  (0.67)
Microsoft         (0.20)
Gross margin       $0.13
That $0.13 must cover "everything else": R&D, payroll, etc., and ideally leave some profit on the table.
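The per-dollar breakdown above as a quick sanity check in Python (the 33% gross margin estimate and the 20% Microsoft revenue share are the figures cited in this thread, not confirmed numbers):

```python
# Per-dollar unit economics using the estimates cited in this thread.
revenue = 1.00
cogs = 0.67             # inference cost implied by the ~33% gross margin estimate
microsoft_share = 0.20  # 20% of revenue paid to Microsoft, per the thread

left_over = revenue - cogs - microsoft_share
print(f"${left_over:.2f}")  # prints $0.13 - what's left for R&D, payroll, etc.
```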
The problem for OpenAI (and other pure AI companies) is that inference is not like software that sells at marginal cost (build once, sell everywhere), but each token costs money. Inference gets cheaper, but newer models require more computing power and consume more tokens. So the gross margin does not improve over time.
Break-even in the future won't come from just growing API usage and subscriber base.
https://rarehistoricalphotos.com/nasa-spiders-drugs-experime...