We're a tiny team, and so far we've only confirmed that the approach works. The focus right now is on making the first batch of customers happy and continuing to invest effort in the metaplanner.
Btw, I advise you to take a look at the metaplanner's code. It's the definition of the planning task for the planner, so that the planner can learn to solve its own task. It leverages the HyperC core, which sits in a separate repository in the same GitHub org.
I heard the approach had been circulating in AI planning academic circles, but no one had put it together, because many academically uninteresting details needed to be taken care of first. That's exactly what we did.
We are not a crypto company. Moreover, being as clean and open as possible is the only way to win the battle of the good guys vs. the bad guys in supply chain.
First, you can just google "logistics is a shady business".
If you want to be more specific, google "carrier haulage tax evasion" - and that's just the tip of the iceberg.
Supply chain is riddled with shady practices - tax evasion, bribery, laundering - at every level, and these "compete" with actual process optimization. Combined with the general opposition to automation, this incapacitates any effort to make the process efficient: it's hard, it "steals jobs", and it's not easy money.
Gotta say, I expected to find a lot more, compared to industries I know are shady (compare to googling "vpn shady"). I don't have a specific model of which shady practices are common in the logistics industry.
We're starting with small Amazon sellers. And it's just what I believe is right: technology must take over logistics and stop the supply chain gangs from trashing the planet with waste. We aren't getting anywhere with the status quo.
Andrew here, the founder of HyperC. Thank you very much for your feedback, and especially for taking the time to look at the note.
Our focus today is, of course, not sending satellites to the sun to build a Dyson sphere, but automating the shipping of average commodities from China to Amazon FBA. Everything else comes after.
And you're right: the supply chain mess isn't caused by the absence of tech (although that does contribute), but by the "dark" forces in the logistics world that would prefer to keep things messy and hard to trace. Logistics has historically always been a shady business. But big data is coming for the dark guys.
So the immediate plan is to build a model of the "black box" of logistics by analyzing the data we can actually collect, then use a combination of technology and HyperC's reputation-leveraged optimal financing offers to beat the shady schemes out of the market. That may itself be overly ambitious, but it's what I believe is right.
Hello HN, I'm the designer of HyperC Planning Database. Its purpose is to demonstrate that state-of-the-art autonomous planning techniques from self-driving and robotics are usable in real-world IT production environments.
We package all applicable knowledge in AI planning, automated proving, and machine learning to turn a range of data science problems into an IT database administration process.
The ultimate goal is to get as close as possible to a 'universal algorithm' that can solve things like optimal logistics, food production, a Dyson sphere construction plan, etc., fully autonomously.
I'm introducing two years of our team's hard work packaging classical AI planning into a native Python concept.
We will soon follow up with our ML and GP accelerators for universal heuristics, built on top of Fast Downward and our naive symbolic-execution engine.
Things are not quite that simple. You can't say "because my Go app serves a single request in 50 µs under no load, it will serve 20,000 per core at 100% load" - you'd be surprised to find that it won't.
Modern machines are like a networked cluster in themselves. You need to do a ton of work tuning both kernel and "hardware" parameters to identify bottlenecks, with nearly nonexistent debugging tools.
There is one truth here: we used to get more performance by scaling "horizontally". Maybe it's time to "scale within"?
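A toy queueing-theory sketch (my own illustration, not from the thread) of why the no-load number doesn't carry over: even an idealized M/M/1 server with a 50 µs service time sees mean response time blow up as utilization approaches 100%, so per-core throughput under load is far from the inverse of the unloaded latency.

```python
# M/M/1 toy model: service rate mu = 1/service_time, arrivals at rate lambda.
# Mean response time W = 1 / (mu - lambda) grows without bound as lambda -> mu.
service_time = 50e-6          # 50 us per request when unloaded
max_rate = 1 / service_time   # ~20,000 req/s if scaling were perfectly linear

for utilization in (0.5, 0.9, 0.99):
    arrival_rate = utilization * max_rate
    response = 1 / (max_rate - arrival_rate)  # mean latency at this load
    print(f"rho={utilization:.2f}  throughput={arrival_rate:,.0f}/s  "
          f"mean latency={response * 1e6:.0f} us")
```

Even in this optimistic model, latency at 99% utilization is 100x the unloaded figure, and real servers add contention, scheduling, and cache effects on top.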
That's a fair point. I've updated the post to read, "That translates to thousands of requests per second per core" instead of saying it's linear scaling. Thanks for the feedback!
That’s nice. But I didn’t really take it literally.
Do you have some benchmark results by any chance? Although it’s a bit of a can of worms, and I would understand if you didn’t want to get into it at this time.
I've only done light benchmarking so far. I had it running on a two-core DigitalOcean machine under sustained write load to test for race bugs, and it was replicating 1K+ writes per second. But honestly, I haven't even tried optimizing the code yet; I'm mainly focused on correctness right now. I'd bet it could get a lot faster.
Sharding is not really scaling for many people, and I think that despite your good intentions you may be misleading others. I'm glad you like the setup, but people flocking to SQLite scares me.
I have a question about how Kolmogorov complexity relates to "human complexity", i.e. the complexity of understanding.
E.g. a program may be very complex in Kolmogorov terms yet easy to understand: a program describing 1000 random numbers is just a database of numbers plus a simple procedure that scans through it. You can also imagine a real-world microservices-based program with a good architecture and a lot of code handling all the exception cases in incoming data - all easily understandable.
Now imagine an optimizing compiler for Prolog programs. It may have much less code, but the algorithm will be so complex that it might be impossible to fully understand its behaviour. Fast Downward is a great example of such a program.
So I'm wondering: what does Kolmogorov complexity actually tell us? Does it tell us anything useful in the "real" world?
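The random-numbers example can be sketched concretely (my own toy illustration, using compressed size as a rough stand-in for description length): the data part of the program is essentially incompressible, i.e. "large" in Kolmogorov terms, while the logic a human has to understand is one line.

```python
import random
import zlib

# A "program" that is large in Kolmogorov terms but trivial to understand:
# a literal table of random numbers plus a one-line lookup over it.
random.seed(0)
table = [random.randrange(2**32) for _ in range(1000)]

def lookup(i):
    """The entire 'logic': index into the table."""
    return table[i]

# The random payload resists compression (rough proxy for high KC)...
payload = b"".join(n.to_bytes(4, "big") for n in table)
print(len(payload), len(zlib.compress(payload, 9)))
# ...while lookup() is a single line a human reads instantly.
```

Compressed size is only an upper bound on Kolmogorov complexity (here the seed `0` actually regenerates the table, so the true KC is tiny), but the contrast between data size and logic size is the point of the comment above.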
Well, that's altering the 'language' that you're measuring the KC against. For instance, in the language that CS majors use, the phrase "Kolmogorov Complexity" is sufficient to encode the concept of KC itself. And in the context of that language plus the concept of KC plus this entire thread, the letters "KC" alone encode Kolmogorov Complexity in its entirety.
So the 'real world' version isn't so easy to state in absolute terms, because the languages can differ. But if you think of it as a compressor/decompressor pair, then moving things into the language is like moving things into the compressor/decompressor.
Naturally, you then conclude that "human complexity" depends greatly on the humans involved, since any pair of humans creates a new universal description language: some base language plus the jargon they are both familiar with.
> in the language that CS majors use the phrase "Kolmogorov Complexity" is sufficient to encode the concept of KC itself
Wait, isn't that just conflating "language" with "knowledge"/"information"? The underlying assumption is that the CS major already associates a concept with the letters "Kolmogorov Complexity".
This is not universal, though; i.e., there is no computation that could derive the meaning behind these letters from the encoding alone. It's like claiming "620" is sufficient to encode Mozart's "Die Zauberflöte" ("The Magic Flute") because, in the language of a musician, the Köchel catalogue number along with the context would let them decode the full meaning.
But in reality you would still have to look up the number and the score somewhere, so it's not really an encoding but more of a pointer or index. I'd see any technical term that way: the term itself is not an encoding of a concept but a key/index/identifier for it, not a full definition of the concept itself.