As explained above, I WFH, am single (no kids) and have an EV - but I don't use the car for commuting etc., so I can be choosy about when I charge it and take advantage of the ultra-cheap periods.
The fixed-price advantage is that you can use power whenever you want. Your average unit rate is just the price on the tariff. Predictable and safe.
Agile's price changes every 30 minutes, so you need to do a little planning. But if you take advantage of the cheap periods you'll generally come out on top. My average unit rate last year was around 16.5p/kWh whereas the standard tariff was 23-24p, so some nice savings. There's also some risk involved: the price can go up to £1/kWh, and on a few days in winter 2024 it did exactly that for short stretches (around the peak periods), so you have to take on that risk. And obviously being exposed to the world energy markets means events like wars can feed through into your prices.
I mean, there's nothing stopping you from using lots of power between 4pm and 7pm; it's just that you'll drag your average unit rate up to the point where it's probably not worth it. And when I say "use lots of power" I don't mean I sit in the dark between 4pm and 7pm - I just avoid the big-ticket power users like ovens, showers and cookers.
Yeah, and the really important point is that you get to see the prices a day ahead, which is what makes it actually pretty easy to live with.
For instance, if I know it's going to be expensive when I'd be cooking tomorrow's evening meal, then I won't make something that would need a long time in the oven. And if it's going to be particularly cheap around lunchtime, then I'll plan to do a big load of laundry then.
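For anyone wondering what "planning around the prices" amounts to in practice, it's basically a cheapest-window search over the day-ahead half-hourly rates. A minimal sketch - the prices here are made up, and a real setup would pull tomorrow's rates from the supplier's API the evening before:

```python
def cheapest_window(prices, slots_needed):
    """Return (start_index, avg_price) of the cheapest run of
    `slots_needed` consecutive half-hour price slots."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(prices) - slots_needed + 1):
        avg = sum(prices[start:start + slots_needed]) / slots_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# e.g. a 2-hour wash cycle = 4 half-hour slots (illustrative prices, p/kWh)
prices = [22.1, 18.4, 9.3, 8.7, 8.9, 9.1, 15.0, 31.2]
start, avg = cheapest_window(prices, 4)  # starts at slot 2, averaging 9.0p
```

Same idea applies to the oven or the EV charger; only the window length changes.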
I have electric heating, which I thought might be a cause of anxiety but it's not really worked out that way. The temperature in my flat won't go down by more than a degree or two with the heating off over the course of the sort of 4 hour price spikes you tend to see in mid-January. If it looks like it's going to be unusually bad, I could always raise the temp by half a degree beforehand, but in reality I've only bothered to do that maybe three times in the past couple of years.
Basically, it's just another thing to factor in when planning my day. No more of a hassle than checking the weather forecast or glancing at my calendar.
> Basically, it's just another thing to factor in when planning my day. No more of a hassle than checking the weather forecast or glancing at my calendar.
Sounds like an incredible hassle at a level I would pay hundreds of dollars per month to avoid. That sort of mental overhead is crazy to me. But I'm also someone who finds having a single event on my calendar for the day disrupts my productivity and mental peace to an absurd level.
Time of day billing is definitely the future for renewables though, once they hit a saturation point for the grid it's the only thing that makes any sort of sense. Perhaps residential is the last place it needs to happen, but eventually it will be the norm. I see it working more in an automated fashion though. Smart load centers (panels), smart appliances, etc. that are connected to the local power company's API. Then you set some rules around it.
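The rules could be as simple as price thresholds per load. A hypothetical sketch - the threshold values and load names are invented, and a real version would read prices from the power company's API and drive actual smart devices:

```python
CHEAP_THRESHOLD = 10.0  # p/kWh: run deferrable loads below this
PEAK_THRESHOLD = 30.0   # p/kWh: shed non-essential loads above this

def plan_actions(current_price, deferrable_loads):
    """Decide which deferrable loads to switch for the current period."""
    actions = {}
    for load in deferrable_loads:
        if current_price <= CHEAP_THRESHOLD:
            actions[load] = "on"    # cheap: soak up power now
        elif current_price >= PEAK_THRESHOLD:
            actions[load] = "off"   # peak: defer until later
        else:
            actions[load] = "hold"  # leave it in whatever state it's in
    return actions

print(plan_actions(8.5, ["ev_charger", "water_heater"]))
```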
Stuff like cooking dinner though? I cannot imagine planning my day around saving a couple bucks. That's just insane to me. Energy use and all this mechanization/automation/technology exists to make life more convenient in the first place! Stuff like EV charging, raising/lowering temps in anticipation of power pricing, laundry (dryer) scheduling, etc. seems to be where 80% of the wins can be made, and are all much more automatable to avoid having to think about it. That last 20% can simply be taken up by whole-home battery storage, which by the time any of this happens at scale will be pretty much the norm.
The thing that concerns me most, though, is regional "seasonal" events where a once-a-decade lull in energy production happens and there is simply not enough dispatchable power on the grid to meet demand, because everyone has hyper-optimized their loads in this fashion.
I've been on the tariff for 2 years now. At first I was looking at the prices every day, but over time you get used to how it works and the price watching tails off. The rule of thumb is just to avoid high-load stuff during the peak window (load shifting) - stick to that principle and you generally come out on top. Playing the averages is the key.
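To make "playing the averages" concrete, here's a toy calculation (all numbers invented) showing how shifting one big load moves the consumption-weighted average unit rate:

```python
def average_unit_rate(usage_kwh, prices_p_per_kwh):
    """Consumption-weighted average price, in p/kWh."""
    total_cost = sum(u * p for u, p in zip(usage_kwh, prices_p_per_kwh))
    return total_cost / sum(usage_kwh)

# Same 10 kWh of daily usage; only the timing of a 7 kWh EV charge moves.
at_peak = average_unit_rate([3, 7], [20.0, 30.0])    # charge during peak
overnight = average_unit_rate([3, 7], [20.0, 5.0])   # charge overnight
# at_peak -> 27.0 p/kWh, overnight -> 9.5 p/kWh
```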
Nowadays I don't really look at the prices that much other than when it's windy as I might be tempted to charge the car.
That being said, if current world events continue and the energy situation degrades further - causing my average unit rate to start creeping up - I might consider getting a home battery, solar, etc. to compensate, or leave the tariff entirely.
Yeah, it's definitely a bit of a game for me, and my electricity bill was already low enough that the savings are trivial.
But I'm the sort of person who enjoys being flexible when planning my day. I'll fit chores such as laundry around work meetings. Decide whether to go for a lunchtime run (and thus have an extra shower) based on the weather and having a big enough gap in my day. Buy ingredients for dinner based on the weather and how I'm feeling. Expected energy cost is just another factor in the mix - and one that only rarely becomes decisive.
The closest the UK grid has ever come to not being able to cover demand was a few years ago when most of our nuclear fleet went offline at the same time in the middle of a January cold snap due to the discovery of a potential maintenance problem in the steam plant. If there were to be a repeat of that scenario, then the spread of domestic dynamic pricing would actually help matters by driving load shifting behaviour.
> For instance, if I know it's going to be expensive when I'd be cooking tomorrow's evening meal, then I won't make something that would need a long time in the oven. And if it's going to be particularly cheap around lunchtime, then I'll plan to do a big load of laundry then.
That sounds like something out of a dystopian novel lol.
Imagine setting up a complex system like this, where you have to plan your food choices around energy prices, while people in other countries pay a fraction of what you do without thinking about it at all.
A decade back, in the US, the local power company would give you a discount if you used less energy than average during peak hours. At the time I had a vacant rental, very little energy use. (FWIW I’ve since sold it, being a landlord isn’t a good time)
I watched one particular neighbor go to great pains to honor this system and reap the benefits of a much lower bill. Sweating their buns off during the hottest part of the day, windows open, no TV on, etc. Fully committed.
They saved 8 dollars that month. My vacant rental, not doing a goddamn thing, saved 6 dollars.
If your system is similar, you’re optimizing your life around the cost of a monthly Netflix subscription, at best.
There is an element of truth in that if you go to the extremes, where it's almost definitely not worth it.
I don't sit in the dark during the peak times - during the week I'm working through that window anyway and I still have my monitors etc. on. I just don't use high-draw appliances like cookers during that time, and I eat dinner after 7pm anyway.
Also, I have an EV but don't commute or travel long distances regularly, so I charge the car when opportunity strikes, especially when prices go negative - which means I don't really spend much on fuel. The savings really start to come in if you have "bursty" high-energy loads that can take advantage of the cheapest periods, like an EV or home battery. If you only have "baseload" stuff that runs all day, like A/C, then you won't really see any significant savings.
Not GP, but I assume the fixed prices have to be fairly high to cover people using lots of power during peak demand, when most everyone else is doing the same?
For me, I'm happy to avoid big power draws during the peak times, as I'm 'compensated' for it outside those periods with a little planning. The downside is when the wind isn't blowing AND global energy markets are disrupted - I'm exposed to that, warts and all. There's definitely been an increase in prices over the past 4 weeks, although there have been a few days (including today) where the wind has basically made the energy free and my average unit rate is dropping again.
It's not only that, you also need reserve for the intermittent sources like wind and solar.
I live on an island; we have big batteries that can supply up to 15 MW of power for a period. In the Netherlands we have natural gas plants that are called up when wind or solar output decreases, lest the grid frequency drop.
How is this on the front page? This reads like pure AI slop. It feels like an insult to the reader.
OP: if you thought you had something useful to say, why didn't you write it in your own words? There's no useful content I can discern while reading this post.
> FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree
I feel that we continue to miss the forest for the trees. Writing (or generating) a million lines of code in Rust should not count as an achievement in and of itself. What matters is whether those lines build, function as expected (especially in edge cases) and perform decently. As far as I can tell, AI has not been demonstrated to be useful yet at those three things.
SLOC was a bad indicator 20 years ago and it still is today. Don't tell them - once they realize it's a red flag for us, they'll just switch to some other metric, because they're fighting for our attention.
Most of us probably knew this already - the internet has had paid content for as long as I can remember - but I (naively, perhaps) thought that software developers, and Hacker News especially, were more resilient to it. I think all of us have to get better at not trusting what we read unless it's actually substantiated.
I don't understand, what does that screenshot show? That there exists at least one anonymous Chinese company that has offered someone $200 to post about them on HN? Why is that relevant to a conversation about Cursor?
Who are the "they" in "they straight up pay people"?
Read the parent comment first, then mine, if you haven't - it should make sense then. Otherwise: "them" here refers to "AI companies wanting to market their products". The screenshot shows one such attempt: a company offering to pay someone on HN to talk about and share their product in return for compensation. Proof that "they" aren't just "fighting for our attention" in the commonly understood way; they're also literally paying money to get talked about.
To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files [...]
Despite the codebase size, new agents can still understand it and make meaningful progress. Hundreds of workers run concurrently, pushing to the same branch with minimal conflicts.
The point is that the agents can comprehend the huge amount of generated code and continue to meaningfully contribute to the goal of the project. Nobody knew if that was possible. They wanted to find out. Now we have a data point.
Also, a popular opinion in any vibecoding discussion is that AI can help, but only on greenfield, toy, personal projects. This experiment shows that AI agents can work together on a very complex codebase with ambitious goals. It looks like there was one human plus 2,000 agents, over two months. How much progress do you think a project with 2,000 engineers would achieve in its first two months?
> What matters is whether those lines build, function as expected (especially in edge cases) and perform decently. As far as I can tell, AI has not been demonstrated to be useful yet at those three things.
They did build. You can give it a try. They did function as expected. How many edge cases would you like it to pass? Perform decently? How could you tell if you didn't try?
That’s not what I meant. What I’m asking is whether there’s any evidence that the latest “techniques” (such as Ralph) can actually lead to high quality results both in terms of code and end product, and if so, how.
I used Ralph recently, in Claude Code. We had a complex SQL script that crunched large amounts of data and was slow to run even on tables that are normalized, have indexes on the right columns, etc. We, the humans, spent a significant amount of time tweaking it. We were able to get some performance gains, but eventually hit a wall. That's when I let Ralph take a stab at it. I told it to create a baseline benchmark, and I gave it the expected output. I told it to keep iterating on the script until there was at least a 3x improvement in performance while the output remained identical. I set the iteration limit to 50. I let it loose and went to dinner. When I came back, it had found a way to get the 3x performance and had stopped on the 20th iteration.
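The loop itself is nothing exotic - roughly this shape. A toy sketch only: in reality the "attempt" is an agent rewriting the SQL, and the benchmark is a real timed run plus an output-equality check, neither of which is modelled here:

```python
def optimize(baseline, target_speedup, max_iters, attempt):
    """Iterate until runtime improves by target_speedup, or give up."""
    best = baseline
    for i in range(1, max_iters + 1):
        candidate = attempt(best)   # agent proposes a new version
        if candidate < best:        # keep it only if it's faster
            best = candidate
        if baseline / best >= target_speedup:
            return i, best          # hit the target: stop early
    return max_iters, best
```

The cap on iterations is what makes it safe to walk away from: worst case, you come back to a "didn't hit 3x" result rather than a runaway loop.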
Is there another human who could get me even better performance given the same parameters? Probably, yes. In the same amount of time? Maybe, but unlikely. In any case, we don't have anybody on our team who can think of 20 different ways to improve a large and complex SQL script and try them all in a short amount of time.
These tools do require two things before you can expect good results:
1. An open mind.
2. Experience. Lots of it.
BTW, I never trust the code an AI agent spits out. I get other AI agents, different LLMs, to review all work, create deterministic tests that must be run and must pass before the PR is ever generated. I used to do a lot of this manually. But now I create Claude skills that automate a lot of this away.
I don't understand what kind of evidence you expect to receive.
There are plenty of examples from talented individuals, like Antirez or Simonw, and an ocean of examples from random individuals online.
I can say that some tasks that would take me a day to complete are done in 2h of agentic coding and 1h of code review, with the added benefit that during the 2h of agentic coding I can do something else. Is this the kind of evidence you are looking for?
This is exactly the issue I have with what I'm seeing around: lots of "here's something impressive we did" but nearly nothing in terms of how it was actually achieved in clear, reproducible detail.
Your point is fair, but it rests on a major assumption I'd question: that the only limit lies with the user, and the tooling itself has none. What if it’s more like “you can’t squeeze blood from a stone”? That is, agentic coding may simply have no greater potential than what I've already tried. To be fair I haven't gone all the way in trying to make it work but, even if some minor workarounds exist, the full promise being hyped might not be realistically attainable.
So far I've found https://github.com/jae-jae/fetcher-mcp which mostly does what I want, but it only started working well when I asked Codex to run it with `disableMedia: false`.
No experience in the field, other than 2048, so take this with a grain of salt.
In my opinion it’s about your ethical stance and who your target audience is, and whether you’re trying to make a ton of money or just enough to survive. You’re obviously going to fight an uphill battle if you don’t employ any such (predatory?) marketing tactics. However, you could position yourself as explicitly standing against those and that might attract a smaller but loyal user base.
If you’re lucky, and build something good, and people talk about it, you might find that you’ll get users regardless. However, at the end of the day, what matters is whether you can keep the lights on, so you may have to relax some of your stances and rules or find ways to market your product that don’t fall into the categories you’ve described.