Heh, I agree. There is a vast ocean of dev work that is just "upgrade criticalLib to v2.0" or adding support for a new field from the FE through to the BE.
I can name a few times when I worked on something you could consider groundbreaking (for some values of groundbreaking), and even that was usually a combination of small pieces of work or existing ideas.
As maybe a more pointed example: I used to do a lot of on-campus recruiting when I worked in HFT, and I think I disappointed a lot of people when I told them my day-to-day was pretty mundane and consisted of banging out Jiras, usually to support new exchanges and/or securities we hadn't traded previously. 3% excitement, 97% unit tests and covering corner cases.
Sam is an investor in a fusion startup. In any case, how long it takes us to get to working fusion is proportional to the amount of funding it receives. I'm hopeful that increased energy needs will spur more investment into it.
I think we have a long way to go yet. Humanity is still in the early stages of its tech tree with so many unknown and unsolved problems. If ASI does happen and solves literally everything, we will be in a position that is completely alien to what we have right now.
> How to even find the value in living given all of that?
I feel like a lot of AI angst comes from people who place their self-worth and value on external validation. There is value in simply existing and doing what you want to do even if nobody else wants it.
> I feel like a lot of AI angst comes from people who place their self-worth and value on external validation. There is value in simply existing and doing what you want to do even if nobody else wants it.
I agree on this point, and have come to that conclusion myself regarding my own AI angst. However, that doesn't solve the economic issues that arise from this technology. As large swathes of the workforce become replaced (something that, in my opinion, is rapidly approaching), how do we organise society so that everyone can survive/thrive?
As far as I can see there is very little impetus behind tackling such issues, compared to the forces pushing this tech forward so rapidly.
Sure, but if you're trying to get there by training a model on video games then you're likely going to wind up inadvertently creating a video game simulator rather than a physics simulator.
I don't doubt they're trying to create a world simulator model, I just think they're inadvertently creating a video game simulator model.
Are they training only on video game data, though? I would be surprised, given how easy it is to generate proper training data for this.
It is interesting to think about. This kind of training and model will only capture macro effects. You cannot use it to simulate what happens in a biological cell, or tweak a gravity parameter and see how plants grow, etc. For a true world model, you'd need to train models that can simulate at microscopic scales as well, and then have it all integrated into a bigger model or something.
As an aside, I would love to see something like this for the human body. My belief is that we will only be able to truly solve human health if we have a way of simulating the human body.
My prediction is that personal generation is going to be niche forever, for purely social reasons. The demand for fandoms and fan communities seems to be essentially unlimited. Big artists have big fandoms, tiny ones have tiny fandoms, but none of that works with personalized generations.
Well, maybe. But there are overwhelmingly large numbers of people who want to be in a fandom, and that means being fans of some shared thing. Maybe that shared thing will be AI generated, but it won't be a world of solipsists.
I think what the person you’re responding to meant was that you can generate a fandom for the content that was generated for you. So, you can get the feeling of being in a fandom despite there being no actual other humans that know what you’re talking about.
Sure, and people might enjoy that, but I'm saying that as much as people want to have fans, people also want to be fans, and that's not compatible with everyone consuming algoslop generated for them personally. Nobody is going to walk around with a T-shirt for an algoband that has an audience of just themselves. Maybe a virtual band gets famous in the same way Hatsune Miku is famous. But that's not personalized generation, that's just an old fashioned band with different tech.
A world without fandom is one without sports. That seems deeply unlikely to me! Anyone can generate personal podcasts with NotebookLM, which people enjoyed for a bit but doesn't seem to have made any impact on actual podcasts at all.
Communities around fictional universes are already fracturing and shrinking in membership because of the sheer number of algorithmically targeted universes available.
Water cooler talk about what happened this week in M.A.S.H. or Friends is extinct.
Worse, in the long run even community may be synthesized. If a friend is meat or if they're silicon (or even carbon fiber!), does it matter if you can't tell the difference? It might to pre-modern boomers like me and you.
I think things will look a lot more like Vinge's Rainbows End than everyone burrowing into their own personal algoentertainment. I can't speak for GenZ but when D&D can sell out Madison Square Garden, there doesn't seem to be any softening in people's interest in fandom.
Virtual influencers might be a big thing, Hatsune Miku has lots of fans. But it's still a shared fandom.
Yea. I feel like it's a waste of time and energy to read/argue about it. Either it works or it doesn't. Everyone already has exposure to it and opinions on it. There's no need to convince anyone. Reality will bear out the results.
Crypto is a tech that solves problems that most people don't care about.
VR/AR is tech that is nowhere near ready, so it's premature to make judgements there. I am bullish on XR.
In terms of value-add, technology often needs to be invented first before we can decide whether it's worthwhile. Nobody asked for computers in every home, smartphones in every pocket, etc.
When computers and the internet first came about, there were (and still are!) people who struggled with basic tasks. Without knowing the specific task you're trying to do, it's hard to judge whether it's a problem with the model or with you.
I would also say that prompting isn't as simple as made out to be. It is a skill in itself and requires you to be a good communicator. In fact, I would say there is a reasonable chance that even if we end up with AGI level models, a good chunk of people will not be able to use it effectively because they can't communicate requirements clearly.
So it's a natural language interface, except it can only be useful if we stick to a subset of natural language. Then we're stuck trying to reverse-engineer an undocumented, non-deterministic API, one that will keep changing under whatever you build on top of it. That is a pretty horrid value proposition.
Short of it being able to read minds, you need to communicate with it in some way. No different from the real world, where you'll have a harder time getting things done if you don't know how to communicate effectively. I imagine that for a lot of popular use cases, we'll build a simpler UX for people to click and tap before anything gets sent to a model.
And this matters because? Most devs are not working on novel never before seen problems.