That would be both feature completeness of the implementation and also some kind of native USD authoring mode (editing the structure and layering multiple USD files properly), like Houdini Solaris or Nvidia Omniverse. For now Blender can't even read/write MaterialX properly. (I'm not an insider though.)
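To make "layering" concrete, here is a minimal sketch using the usd-core Python bindings: one layer defines a prim, a second layer sublayers the first and overrides an attribute non-destructively (file names are made up for illustration). A native authoring mode would expose this kind of composition - sublayers, references, variants - directly in the UI.

    # Minimal sketch of USD layering with the usd-core Python bindings
    # (pip install usd-core). File names are illustrative.
    from pxr import Usd, UsdGeom

    # Base layer, authored by one department...
    base = Usd.Stage.CreateNew("set.usda")
    ball = UsdGeom.Sphere.Define(base, "/World/Ball")
    ball.GetRadiusAttr().Set(1.0)
    base.GetRootLayer().Save()

    # ...and a shot layer that sublayers it and overrides the radius.
    # The base file stays untouched; the stronger layer's opinion wins.
    shot = Usd.Stage.CreateNew("shot.usda")
    shot.GetRootLayer().subLayerPaths.append("set.usda")
    over = UsdGeom.Sphere(shot.GetPrimAtPath("/World/Ball"))
    over.GetRadiusAttr().Set(2.0)
    shot.GetRootLayer().Save()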
I really hope not, but I wouldn't bet against it. Products that take high capex to build and then have nearly zero marginal cost to reproduce naturally tend toward monopoly.
The same way Star Wars was still running between the original trilogy and the prequels. It had an active fan base, and lots of side content was constantly being produced.
(Sorry for the shameless self-promotion.) I'm building a _conceptually similar_ app, but with an AI on top, so you get a chat/assistant with your personal context. https://github.com/superegodev/superego (Warning: still in alpha.)
Really nice. Thanks for the self-promo! Will definitely keep an eye on your project.
What is the ideal final state you want to achieve?
Do you agree that data capture is the main issue here?
My latest experiments:
Two days ago I started capturing screenshots of my Mac every 5 seconds, then using them to approximately reconstruct what I'm doing - a lot of issues with this approach.
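The capture loop itself is trivial - roughly something like this sketch, using macOS's built-in screencapture CLI (the interval and output directory are arbitrary choices; the hard part is the analysis, not the capture):

    # Rough sketch: periodic screen capture on macOS using the built-in
    # `screencapture` CLI. Interval and output dir are arbitrary choices.
    import subprocess
    import time
    from datetime import datetime
    from pathlib import Path

    out_dir = Path.home() / "screen-log"
    out_dir.mkdir(exist_ok=True)

    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        # -x suppresses the capture sound
        subprocess.run(["screencapture", "-x", str(out_dir / f"{stamp}.png")])
        time.sleep(5)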
Yesterday I set up ActivityWatch. It captures a lot of stuff out of the box. TBD whether it can capture a YouTube video playing in the background, in addition to the active tab.
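One way to check is to query its local REST API and see what the watchers actually record - a rough sketch, assuming the default aw-server at localhost:5600 (bucket names depend on which watchers you run):

    # Rough sketch: list ActivityWatch buckets and peek at recent events
    # via the default local REST API (http://localhost:5600).
    import requests

    BASE = "http://localhost:5600/api/0"

    buckets = requests.get(f"{BASE}/buckets/").json()
    for bucket_id in buckets:
        events = requests.get(
            f"{BASE}/buckets/{bucket_id}/events", params={"limit": 3}
        ).json()
        print(bucket_id)
        for event in events:
            print("   ", event["timestamp"], event["data"])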
The main value I want to extract is being able to see what the plan was vs what actually happened. And if I can make what actually happens closer to the plan - this is a WIN.
But capturing what you are thinking about, what you are working on, etc. is quite challenging, and it often happens offline, on the phone, on another computer, in a messenger, in email, etc.
My quick take: companies have been closing off their data for decades, and now it will come back to bite them - AI needs as full a context as possible across devices, services, and programs to be as powerful as it can be, and the current architecture works against that. Maybe one day AI will be powerful enough to ETL all this data for itself, but for now it is painful to try to build something like this.
A human child who has been taught literally nothing can see some interesting item, extend a hand to it, touch it, interact with it. All decided by the child. Heck, even my cat can see a new toy, go to it, and play with it, without any teaching.
LLMs can't initiate any task on their own, because they lack the thinking/intelligence part.
This, to me, overstretches the definition of teaching. No, a human baby is not "taught" language; it learns it independently by taking cues from its environment. A child absolutely comes with an innate ability to recognize human speech and the capability to reproduce it.
By the time you get to active "teaching", the child has already learned language -- otherwise we'd have a chicken-and-egg problem, since we use language to teach language.
Well, you can explain E=mc² to a plant in your room in a couple of sentences, but the plant can't explain to you how it experiences the world.
If cows were eating grass while conceptualising what infinity is, what their role in the universe is, how they were born, and what would happen after they die... we would see a lot of jumpy cows out there.
This is exactly what I mean by anthropocentric thinking. Plants talk about plant things and cows talk about cow issues. Maybe there are alien cows on some planet with larger brains who can do advanced physics in their moo language. Or some giant network of alien fungi discussing their existential crises. Maybe ants talk about ant politics by moving their antennae. Maybe they vote and make decisions. Or bees talk about elaborate honey economics by modulating their buzz. Or maybe plants tell bees the best time to pick pollen by changing their colors and smells.
Words, after all, are just arbitrary ink shapes on paper. Or vibrations in the air. Not fundamentally different from any other signal. Meaning is added only by the human brain.
Agreed. Everything that looks like intelligence to ME is intelligent.
My measurement of outside intelligence is limited by my own intelligence. So I can tell when something is more stupid than me. For example, an industrial machine vs a human worker: the human worker is infinitely more intelligent than the machine, because this human worker can do all kinds of interesting stuff. This metaphorical "human worker" did everything from laying a brick to launching a man to the Moon.
....
Imagine a super-future where humanity created nanobots, and they ate everything around them. Now, instead of Earth, there is just a cloud of them.
These nanobots were clever and could adapt, and they had all the knowledge that humans had and even more (as they were eating the Earth, the swarm was running global science experiments to understand as much as possible before the energy ran out).
Once they ate the last bite of our Earth (an important note here: they left an optimal amount of matter to keep running experiments; humans were kept in a controlled state and studied to increase the Swarm's intelligence), they launched the next stage - a project the grand architect named "Optimise Energy Capture from the Sun".
The nanobots re-created the most efficient way of capturing the Sun's energy - ancient plants, which the Swarm had studied for centuries. The Swarm added some upgrades on top of what nature came up with, but it was still built on what nature had figured out by itself. A perfect plant for capturing the Sun's energy. Each one a perfect copy of the others, plus adaptive movements based on its geolocation and time (which made each of them unique).
For the plants, the nanobots needed water, so they created efficient oceans to feed them. They added clouds and rain as a transport mechanism between the oceans and the plants... etc., etc.
One night the human you already know by the name "Ivan the Liberator" (back then everyone called him just Ivan) wasn't asleep at his usual hour. Suddenly all the lights went off and he saw a spark on the horizon - the horizon that was strictly prohibited to approach. He took his rifle, jumped into a truck, and raced to the shore, to the point closest to where the spark had appeared.
Once he approached, there was no horizon and no water. A wall of dark glass-like material, edges barely noticeable, just 30 cm wide. To the left and to the right of the 30-cm-wide wall - an image as real as his hands, of water and sky. At the top of the wall - a hole. He used his gun to hit the wall carrying the light - it wasn't very thick, but as soon as he hit it, it regenerated very quickly. But when he hit the black wall, it shattered, and he saw a different world - a world of plants.
He stepped into the forest, but these plants behaved differently. This part of the Swarm wasn't supposed to face a human, so these nanobots had never seen one and had no optimised instructions for what to do in that case. They started reporting the new values back to the main computer and performing default behaviour until updated software arrived from the Swarm's intelligence center.
The human observed a strange thing - the plants were smoothly flowing around him to keep a safe distance, the way water steps away from your hands in a pond.
"That's different" thought Ivan, extended his hand in a friendly gesture and said
- Nice to meet you. I'm Ivan.
....
In this story, a human sees a forest of plants and has no clue that it is a swarm with intelligence far greater than his own. To him it looks like repetitive, simple action that doesn't look random -> let's test how intelligent the outside entity is -> if the entity wants to show its intelligence, it answers the communication -> if the entity wants to hide its intelligence, it pretends not to be intelligent.
If the Swarm decides to show you that it is intelligent, it can show you intelligence up to your level. It won't be able to explain everything that it knows or understands, because you are limited by your hardware. The only limit for the Swarm is the computation power it can get.
May I ask your opinion, as an industry insider, on what makes OpenUSD support good or bad?