> If collapse happens, it notes the UK does not have the ability to absorb global shocks through higher domestic output. It lacks enough land to feed its population or rear livestock to maintain current consumption patterns and price levels.
Yet they're pushing to use farmland for solar farms and social housing.
There's a real hatred of farmers among the UK Leftist/Green crowd.
Farmland is ~69% of the total area of the UK, a share that has been quite stable over the last decade, and most of it is grassland. Solar accounts for ~0.1% of (former) agricultural land, and after 2030 it should be around ~0.6%. The Land Use Framework aims for 1% of land (mainly agricultural, so ~1.5% of ag) for renewables by 2050. Housing doesn't have comparable figures, but I'm envelope-mathing it to about 1.5% by 2050 as well. That makes 3% of ag land, of which crops are roughly 1/3, and Google says 60% of solar is on good crop land. So a reduction of up to ~6% at the top end.
There are a few flip-sides here - things that many don't like to hear or acknowledge:
There are alternate diets that are suitably nutritious and achievable on this land, largely by reducing meat consumption. Meat production uses much more land than other food sources, and a lot of that is input crops; it's inefficient. A change in diet could make the UK self-sufficient on much less land.
Solar is a developing technology; it will improve, requiring less land for the same production. If long-term trends push up the value of crops and down the value of solar, the panels can be removed and crops grown instead: it's either/or, but it is reversible. This is especially important for bridging to non-solar power sources which take longer to realise (nuclear, tidal). I will also note that solar farms are built on the good crop land because it is convenient and the price is right. We will see south-facing hills covered soon enough.
Housing and social housing are the same problem: where housing prices are so high compared to salaries, social housing demand increases. Building houses - any houses - solves the problem. The question of density, location and style is a question about what kind of problems we want in 10-20 years' time. High rises did not work well at all.
> > It lacks enough land to feed its population or rear livestock to maintain current consumption patterns and price levels.
> Yet they're pushing to use farmland for solar farms and social housing.
Cities, you may note, never ever make enough food to feed themselves. Always been true, everywhere and everywhen since the invention of the city.
Farmers choosing between cash crops and food crops was literally a game the teachers got the kids to play when I was in school in the 90s. Cash crops, and PV is kinda a cash crop, let you make enough money to buy food. That said, how much money depends on what industry you have to use the power, because nobody else in the world will care for the £ if the UK employment consists entirely of baristas, hairdressers, and Amazon warehouse staff/delivery drivers.
The biggest problem with using farmland for social housing is that a lot of the good farmland is a flood risk.
But the only case where the UK has to care that it doesn't make enough to feed itself is if the economy becomes an autarky, at which point it cannot help but suffer a massive population reduction because it's a small island quite close to the arctic circle which has spent or depleted most of its natural resources, first the wood (1600s-1700s), then the coal (1930s or so), then the fish (1980s or so), then the natural gas (early 2000s).
I don't think it's been possible to feed the UK domestically since before WW2.
I note you put the word "social" in there; very little social housing is being built, it's mostly private. Agrivoltaics are also possible, but of course everyone would rather do the politics of emotions ("hate farmers") than discuss the issues. Such as how we grow enough electricity, too.
This covers moving the UK to self-sustain by reducing animal products and repurposing animal feed cropland to direct consumption cropland; it also covers reforestation.
So while it isn't possible today, it's possible to become possible without relying on any technological advancements.
> I don't think it's been possible to feed the UK domestically since before WW2.
Of course you can; I know people who are almost self-sufficient right now on very little land. It is hard and frugal but highly rewarding, and we evolved doing it until very recently.
Energy and heating are a bit more complicated. We obviously cannot burn wood or coal like we used to, because that is actually very damaging to the planet.
So this is where technology has to play a bigger role.
Farming might feel rewarding while watching someone else do the hard work. I watched and had to help my grandparents do it and went through my own decade of "farming" and it never gets easier and you only get older.
> ... "it never gets easier and you only get older."
Hence why traditionally farmers either had large families, hired outside workers, or most often did both.
(Source: I grew up in rancher/farmer territory and earned some of my earliest "spending money" working for local farmers or ranchers during harvest season.)
The farmers are the ones selling the land off and living off the million-pound proceeds.
Farmland is worth £100/acre/year, at most that’s £3k an acre. But people pay £10k because it’s a way to avoid tax and if you get the right planning permission a way of making millions.
Dyson? Sure - he seems performative (from afar, I'm antipodean to this BTW) with his industrialised strawberry wheels etc.
> the musician
Lost me .. I'm sure the UK has a few gumbooted millionaire class rockers / composers - I'm guessing that's a throw at the impresario of musical theatre with a life peerage who is rarely seen cutting hay.
I'm not sure I'd class either of those as farmers (by our local understanding), and Clarkson smacks of content-farmer cosplayer more than generationally consistent production farmer .. but perhaps he might get there.
Andrew Lloyd Webber. Billionaire composer of musicals like Cats and Phantom. Also a farmer, apparently, owning thousands of acres and benefiting from the tax breaks and hope value.
They either employ people to farm the land (like Clarkson did up to 2019, and indeed still does), or rent the land out for a tiny return on investment in the 1-2% range, while avoiding the only tax that even attempts to fight against the aristocracy.
No. It's because farmers sometimes pollute rivers (despite household sewage being pumped into UK rivers daily), want to kill badgers to stop TB spreading, and because they work large areas of land they're obviously wealthy.
There’s no real hatred of farmers on the left, other than the fact that farmers generally vote small-c conservative.
There’s certainly a hatred of land owners, and vast amounts of UK farm land is privately owned, renting the land to farmers. It’s the right wing parties and press that takes that to mean that the left hate farmers.
The problem is the code unconditionally dereferences the pointer, which is UB if the pointer is null. That licenses the compiler to optimize out any code paths that rely on the pointer possibly being null (such as null checks), even ones that occur earlier in program order.
When NDEBUG is set, there is no test, no assertion, at all. So yes, this code has UB if you set NDEBUG and then pass it a null pointer — but that's obvious. The code does exactly what it looks like it does; there's no tricks or time travel hiding here.
Right, so strictly speaking C++ could do anything here when passed a null pointer: even though assert terminates the program, the C++ compiler cannot see that, and there is then undefined behaviour in that case.
> because even though assert terminates the program, the C++ compiler cannot see that
I think it should be able to. I'm pretty sure assert is defined to call abort when triggered and abort is tagged with [[noreturn]], so the compiler knows control flow isn't coming back.
It's the RAM. It needs to be "trained", which takes some time, but for some reason these boards seem to randomly forget their training, requiring it to happen again.
I've never had memory training be forgotten with my AM4 nor LPDDR5-based laptops and NUCs. Is this a new thing with AM5 or something? Or just a certain brand of BIOSes?
It's a common issue on consumer boards with DDR5 and more than two DIMMs installed.
Doesn’t affect soldered memory or lower speed memory (like DDR4). Many memory controllers fail to achieve good speeds and timings at all on 4 DDR5 DIMMs, and fall back to running DDR5 at 3600MHz instead.
Ok, so user selects too-high speed, controller tries for ages and fails, but doesn't save since it's overridden by user in BIOS?
I distinctly recall thinking my LPDDR5 NUCs were broken since they seemingly didn't boot the first time, until I recalled the training stuff. Took up to 15 minutes on one of them. But neither has had any issues since, hence my question.
DDR5 is much, much more fickle than DDR4 and earlier standards. I think it's primarily due to pushing clock speeds (6000 MT/s would be insanely fast for DDR4, but kinda slow for DDR5).
Memory training has always been a thing: during boot, your PC runs tests to work out what slight changes between signals and stuff it needs to adapt to the specific requirements of your particular hardware. With DDR4 and earlier, that was really fast because the timings were so relatively loose. With DDR5, it can be really slow because the timings are so tight.
> But as of now there is no such problem on any kind of significant scale.
This is not the same as saying there's no problem.
Only a tiny fraction of humans will ever compete in the Olympics. People train their whole lives for it. It's not about 'scale', it's about safety and fairness. It's not reasonable to expect them to 'shut up' about it.
I don't want to watch a man beat up a woman in a boxing ring.
I sincerely doubt more than half the population of the entire planet showed more than a passing interest in them, and I'm still curious how it'd be possible to measure that.
The reference implementation of the profiler [1] was originally built by the Optimyze team that Elastic then acquired (and donated to OTEL). That team is very good at what they do. For example, they invented the .eh_frame walking technique to get stack traces from binaries without frame pointers enabled.
Some of the OGs from that team later founded Zymtrace [2] and they're doing the same for profiling what happens inside GPUs now!
> For example, they invented the .eh_frame walking technique to get stack traces from binaries without frame pointers enabled.
This is not an accurate summary of what they developed.
Using .eh_frame to unwind stacks without frame pointers is not novel - it is exactly what it is for, and perf has had an implementation doing it since ~2010. The problem is that kernel support for this was repeatedly rejected, so the kernel samples kilobytes of stack and then userspace does the unwind.
What they developed is an implementation of unwinding from an eBPF program running in the kernel using data from eh_frame.
True, I should have been more specific about the context:
Their invention is about pushing down the .eh_frame walking to kernel space, so you don't need to ship large chunks of stack memory to userspace for post-processing. And eBPF code is the executor of that "pushed down" .eh_frame walking.
I believe this is a case of convergent invention – the idea of pushing DWARF/.eh_frame unwinding into eBPF seems to have occurred to several people around the same time. For example, there's a working implementation discussed as early as March 2021: https://github.com/iovisor/bcc/issues/1234#issuecomment-7875...
OTel Profiling SIG maintainer here: I understand your concern, but we've tried our best to make things efficient across the protocol and all involved components.
Please let us know if you find any issues with what we are shipping right now.
* we limit data shared to an atomic-writable size and have a sentinel - less mucking around with cached indexes - just spinning on (buffer_[rpos_]!=sentinel) (atomic style with proper semantics, etc..).
* buffer size is compile-time - then mod becomes compile-time (and if a power of 2 - just a bitmask) - and so we can just use a 64-bit uint to just count increments, not position. No branch to wrap the index to 0.
Also, I think there's a chunk of false sharing if the reader is 2 or 3 ahead of the writer - so performance will be best if reader and writer are a cacheline apart - but will slow down if they are sharing the same cacheline (and buffer_[12] and buffer_[13] very well may be if the payload is small). Several solutions to this - the disruptor pattern, or use a cycle from group theory - i.e. buffer[_wpos%9] for example (9 needs to be computed based on cache line size and size of payload).
I've seen these pushed to about clockspeed/3 for uint64 payload writes on modern AMD chips on the same CCD.
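For readers unfamiliar with the pattern, here's a minimal sketch of the counter-based SPSC queue ideas above (names, sizes, and the exact layout are my own, not the commenter's actual code): a compile-time power-of-two capacity so the modulo becomes a bitmask, 64-bit monotonically increasing counters instead of wrapped indexes (no branch to reset to 0), and the producer/consumer positions placed on separate cache lines to limit false sharing. The sentinel-spinning variant the commenter describes is a further refinement not shown here.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

template <std::size_t CapacityPow2>
class SpscQueue {
    static_assert((CapacityPow2 & (CapacityPow2 - 1)) == 0,
                  "capacity must be a power of two");
    static constexpr std::uint64_t kMask = CapacityPow2 - 1;

    // Counters only ever increase; the slot index is (counter & kMask).
    alignas(64) std::atomic<std::uint64_t> wpos_{0}; // written by producer only
    alignas(64) std::atomic<std::uint64_t> rpos_{0}; // written by consumer only
    alignas(64) std::uint64_t buffer_[CapacityPow2];

public:
    bool try_push(std::uint64_t v) {
        std::uint64_t w = wpos_.load(std::memory_order_relaxed);
        if (w - rpos_.load(std::memory_order_acquire) == CapacityPow2)
            return false;                    // full
        buffer_[w & kMask] = v;              // mod is just a bitmask
        wpos_.store(w + 1, std::memory_order_release); // publish the slot
        return true;
    }

    bool try_pop(std::uint64_t& out) {
        std::uint64_t r = rpos_.load(std::memory_order_relaxed);
        if (r == wpos_.load(std::memory_order_acquire))
            return false;                    // empty
        out = buffer_[r & kMask];
        rpos_.store(r + 1, std::memory_order_release); // free the slot
        return true;
    }
};
```

The release store on `wpos_` paired with the acquire load in `try_pop` is what makes the plain write to `buffer_` visible to the consumer without a data race; the `alignas(64)` on the two counters is the cache-line separation the comment is talking about.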