The pricing is disappointing. I'm surprised Cloudflare has not tried to compete on price against AWS lately after a good start with R2. Queues, database storage, database writes, and worker invocations are all more expensive than the AWS equivalents.
I read the code. It's a good case study of one-shot output from AI when you ask it to replicate a SaaS product. This is probably better than most because MotherDuck has been open about their techniques to build the product.
Pre-compaction, the recent data can be in small files, and the delete markers will also be in small files. That drags down fetch times, while DuckLake may already have many of the larger blocks in memory or disk cache.
Reading block headers for filtering means lots of small range requests; this could speed that up by 10x.
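To make the "lots of small ranges" point concrete, here is a generic sketch of range coalescing, merging nearby byte ranges so many small header reads become a few larger GETs. The threshold and layout numbers are made up for illustration.

```python
# Sketch: coalesce nearby byte ranges so many small header reads
# become a few larger GETs. Gap threshold is illustrative.
def coalesce_ranges(ranges, max_gap=64 * 1024):
    """Merge (start, end) byte ranges whose gap is at most max_gap."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start - merged[-1][1] <= max_gap:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# 100 4 KB block headers spaced 100 KB apart collapse into a single
# request when we tolerate reading the ~96 KB gaps between them.
headers = [(i * 100_000, i * 100_000 + 4096) for i in range(100)]
print(len(coalesce_ranges(headers, max_gap=128 * 1024)))  # prints 1
```

The trade-off is fetching some bytes you don't need in exchange for far fewer round trips, which is usually a win when per-request latency and per-request pricing dominate.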
For files up to 100kB in size, this should be really close to the same price as S3 for writes (I didn't check reads as closely, but writes/PUTs are always much more expensive than reads/GETs).
It would be really useful pre-compaction, and for dealing with the small-files issue without latency penalties.
It does not make private energy storage viable on its own. You need to get enough charge/discharge cycles out of it in a certain time period. This means you need almost daily high price fluctuations. We aren't seeing that in Europe. We see high winds push down prices for multiple days and we see multiple weeks with consistent high prices in the winter, with occasional drops on the weekend.
It’s a daily event now in Australia. Very low prices during the middle of the day, and higher in the morning and evening. Anyone with a battery or an EV they don’t need to drive far can play the market, usually with scripted sell/buy trigger points.
There’s enough profit to make the payback period for a decent battery quite short.
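For illustration, the "scripted sell/buy trigger points" could look something like this. All numbers (prices, thresholds, capacity, efficiency) are made up; real setups would also account for tariffs, degradation, and forecast errors.

```python
# Illustrative threshold-based battery arbitrage: charge when the spot
# price is cheap, discharge when it is expensive. All numbers are
# invented for the sketch, not real market data.
def arbitrage_profit(prices, buy_below=0.05, sell_above=0.30,
                     capacity_kwh=10.0, efficiency=0.9):
    charge = 0.0
    profit = 0.0
    for price in prices:
        if price <= buy_below and charge < capacity_kwh:
            bought = capacity_kwh - charge  # fill the battery
            charge = capacity_kwh
            profit -= bought * price
        elif price >= sell_above and charge > 0:
            # sell everything, losing some energy to round-trip efficiency
            profit += charge * efficiency * price
            charge = 0.0
    return profit

# A day with cheap midday solar and an expensive evening peak:
day = [0.20, 0.10, 0.03, 0.02, 0.04, 0.15, 0.35, 0.40]
print(round(arbitrage_profit(day), 2))  # prints 2.85
```

The point of the parent comments holds in the sketch too: profit only appears when the daily spread between `buy_below` and `sell_above` actually occurs, so multi-day stretches of uniformly high or low prices earn nothing.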
Yes, I see energy-intensive industry moving away from extreme latitudes in the long run. Most of Europe is at an unfortunate latitude and has surprising levels of cloud cover.
The only interesting part is that renewable production in Europe is so variable that there can be far too much for short periods.
The wholesale price system is not a reliable signal or incentive for electricity generation supply and demand in Europe. There are various subsidies, taxes, levies, and fixed costs not shown at the wholesale level that completely change calculations.
These numbers will make customers upset and complain about price gouging if they don't understand the disconnect. Or it makes customers think renewables are cheap, because they don't see the subsidies that, on net, result in higher payments to renewable producers than to carbon-based ones.
I have detailed knowledge of DynamoDB. This code does not properly mimic the service's exceptions, input validation, eventual consistency, or edge cases. I would not feel comfortable developing against MiniStack or running tests against it. A lot of these services also have free tiers, so the use cases are quite minimal.
Hello! The target use case is integration testing, verifying that your application calls the right operations with the right data, not production fidelity. For that, MiniStack works well and is free. For anything requiring exact DynamoDB behavior (capacity throttling, consistency windows, stream processing), you're right that it's not a substitute. Contributions to improve error fidelity are very welcome (ProvisionedThroughputExceededException, TransactionConflictException, and ItemCollectionSizeLimitExceededException are pending).
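For illustration, the error-fidelity work being discussed boils down to producing responses shaped like the real service's. This sketch follows my reading of DynamoDB's low-level JSON protocol (HTTP 400 plus a `__type` field); the exact strings should be verified against the real service before relying on them.

```python
import json

# Sketch of a DynamoDB-shaped error response for an emulator. The
# __type prefix follows DynamoDB's low-level JSON protocol; treat the
# exact strings as assumptions to double-check against the service.
DYNAMODB_TYPE_PREFIX = "com.amazonaws.dynamodb.v20120810#"

def error_response(exception_name, message):
    """Build an (http_status, body) pair for a DynamoDB-style error."""
    body = {
        "__type": DYNAMODB_TYPE_PREFIX + exception_name,
        "message": message,
    }
    return 400, json.dumps(body)

status, body = error_response(
    "ProvisionedThroughputExceededException",
    "The level of configured provisioned throughput for the table was exceeded.",
)
print(status)  # prints 400
```

Getting the `__type` string right matters because SDKs dispatch on it to raise the correct typed exception client-side, which is exactly what integration tests want to exercise.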
I ran into the same thing with Kinesis. The Kinesis API has strict limits on batch size, message size, and request rate. Any client writing to Kinesis must deal with all of these constraints and handle the resulting errors properly.
I briefly looked at LocalStack to see how they implemented the Kinesis API, and other than a `KINESIS_ERROR_PROBABILITY` option to simulate rate limiting, they did not implement any of these constraints.
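The constraints in question are documented for `PutRecords`: at most 500 records per call, a 1 MiB data blob per record, and 5 MiB per request including partition keys. A minimal client-side sketch of those checks (not an exhaustive validator, and the helper name is my own):

```python
# Client-side checks mirroring documented Kinesis PutRecords limits:
# max 500 records per call, 1 MiB per record data blob, 5 MiB per
# request including partition keys. A sketch, not exhaustive.
MAX_RECORDS_PER_CALL = 500
MAX_RECORD_BYTES = 1024 * 1024        # 1 MiB data blob
MAX_REQUEST_BYTES = 5 * 1024 * 1024   # 5 MiB total request

def validate_put_records(records):
    """records: list of (data: bytes, partition_key: str). Raises ValueError."""
    if len(records) > MAX_RECORDS_PER_CALL:
        raise ValueError("too many records in one PutRecords call")
    total = 0
    for data, key in records:
        if len(data) > MAX_RECORD_BYTES:
            raise ValueError("record data blob exceeds 1 MiB")
        total += len(data) + len(key.encode("utf-8"))
    if total > MAX_REQUEST_BYTES:
        raise ValueError("request exceeds 5 MiB")

validate_put_records([(b"hello", "pk-1")])  # passes silently
```

An emulator that enforces these would catch batching bugs in tests instead of letting them surface as 400s in production.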
Thanks for pointing this out. I am adding a few extra validations in v18 (to be released in a few hours); feel free to raise an issue. This is an open source project, but I do review each PR and keep the code as clean as I can... And yes, we do use AI, but we know what we are doing :)
I was also involved in writing a clean-slate port of JSONata after finding issues in the jsonata-go repo and not wanting to run the JavaScript version in a sandbox. It was relatively easy until we stressed it with 20 layers of nested context and 5,000-line expressions, and suddenly we had memory explosions not present in the JS version.
JSONata is too tied to JavaScript. Looking back, we should have slightly altered the spec and written some code mods. We didn't have customers bringing their existing JSONata over, so they wouldn't have noticed the differences.
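A common guard against the blow-up described above is an explicit depth cap in the evaluator, so pathological nesting fails fast with a clear error instead of exhausting memory or the stack. This is a generic sketch, not the actual port's code:

```python
# Generic sketch of a depth-capped recursive evaluator; not the actual
# port's code. Deeply nested input raises early instead of exhausting
# memory or blowing the call stack.
class DepthLimitExceeded(Exception):
    pass

def evaluate(node, depth=0, max_depth=1000):
    if depth > max_depth:
        raise DepthLimitExceeded(f"expression nesting exceeds {max_depth}")
    if isinstance(node, list):  # a nested sub-expression
        return [evaluate(child, depth + 1, max_depth) for child in node]
    return node                 # a leaf value

# 20 layers of nesting is fine under the default cap.
nested = 5
for _ in range(20):
    nested = [nested]
print(evaluate(nested))
```

The reference JS implementation effectively gets a similar backstop for free from the engine's stack limits, which is one reason a port can hit resource blow-ups the original never showed.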