fleventynine's comments

It's still above where it was a few weeks ago. I don't see why this is news.


It's the momentum, inertia, that is worrisome.


Bubble running out of steam.


Many people trade more than once every few weeks.


They shouldn't. Day trading is a plague.


Everybody is doing some form of rebalancing now. Major institutions have lower trading fees and do this more frequently.


I'm curious about a system where the capital gains tax rate is 100% for the first... I don't know, let's say a month, then ramps down over the course of the next year until it matches the regular income tax rate. I'm less concerned about the specific time periods than I am about the idea that it would be beneficial to society to have our financial systems encourage long-term thinking.


In my tax jurisdiction, capital gains are taxed at a much lower rate than income.


Publicly traded companies are a plague. Have you seen the damage they inflict on the poor?


The damage done to the poor is the roadblocks that prevent them from investing in them: having to have a minimum income to open an IRA, etc.


It's just a correction. Unless it isn't, in which case it might be a massive bubble bursting, followed by recession or even depression.

Everyone wants to know, so it's always news. Even though it usually isn't.


Exactly, the headline sort of paradoxically reflects the desire for news, and not news itself.

But still, the actual stock market behavior right now is PROBABLY (!!) more reflective of random motion than it is of a fundamental shift in investor behavior.

Unless it isn't.


I don't think NASDAQ is technically in correction territory yet; I believe that would mean it's down 10% from its high.


"My attention is a limited resource. In order to prove that you're a serious human, please donate exactly $157.42 to the Rust maintainers fund and paste the receipt link here".


No mention of 120Hz; I'm waiting for a 6k or higher-density display that can do higher refresh rates.


I was going to joke about 8k@120Hz needing like 4 video cables, but it seems we are not too far from it.

[8k@120Hz Gaming on HDMI 2.1 with compression](https://wccftech.com/8k-120hz-gaming-world-first-powered-by-...)

> With the HDMI 2.2 spec announced at CES 2025 and its official release scheduled for later this year, 8K displays will likely become more common thanks to the doubled (96 Gbps) bandwidth.


Uncompressed, absolutely, we'd need another generation bump to over 128 Gbps for 8K@120Hz with HDR. With DSC, though, HDMI 2.1 and the more recent DisplayPort 2.0 standards make it possible, but support isn't quite there yet.

Nvidia quotes 8K@165Hz over DP for their latest generation. AMD has demoed 8K@120Hz over HDMI but not on a consumer display yet.

https://en.wikipedia.org/wiki/DisplayPort#Refresh_frequency_...

https://en.wikipedia.org/wiki/HDMI#Refresh_frequency_limits_...

https://www.nvidia.com/en-gb/geforce/graphics-cards/compare/
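For a rough sense of where the 128 Gbps figure comes from, here's a back-of-the-envelope sketch of the payload bandwidth, assuming RGB at 10 bits per channel and ignoring blanking and line-code overhead:

```java
// Rough uncompressed payload for 8K@120Hz with 10-bit HDR color.
public class Bandwidth8K {
    public static void main(String[] args) {
        long width = 7680, height = 4320, refresh = 120;
        long bitsPerPixel = 3 * 10; // RGB, 10 bits per channel
        long bitsPerSecond = width * height * refresh * bitsPerPixel;
        System.out.printf("~%.1f Gbps payload%n", bitsPerSecond / 1e9); // ~119.4 Gbps
    }
}
```

Blanking intervals and link encoding push the real requirement well past that payload figure, which is why current ~80-120 Gbps links need DSC for this mode.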


My primary monitor is the Samsung 57" 8Kx2K 240Hz ultrawide. That's the same amount of bandwidth, running over DisplayPort 2. It mostly works!


Is it actually good for productivity? The curve isn't too aggressive? Could you, e.g., stack 3 independent windows and use all 3? Or do you kind of give up on the leftmost/rightmost edges?


I think window managers these days do a better job on 3 monitors than on a single one that could have the same area.

With an ultrawide you lose the screen as a unit for managing area, and it gets awful: no grouping windows by screen, no per-monitor workspaces, no moving windows across screens.

Either monitors need to present themselves as multiple screens, or window managers need to come up with virtual screens to regain the much needed screen abstraction.


I have three 4K 27" which yield a bit more screen real estate. Otherwise I'd love to go to a single ultrawide.


I prefer 3 monitors because it eases window management while being cheaper. For gaming I only need one 240Hz+ monitor and for Lan parties I only take that one.

Although for sim racing I've been thinking about getting a single ultra wide and high refresh rate monitor, but I'd probably go for a dedicated setup with a seat, monitor and speakers. It gets pricey, but cheaper than crashing IRL.


Yeah, window management is certainly better with separate monitors, hopefully this will get better with time.

On the flip side I would love to get rid of physically managing three individual pieces of hardware that wasn't made to work together as one setup.


I use the same monitor and I love it. Couldn't recommend it more.


Fifty seven inches??


Just two 4k monitors slapped together, it’s 8k wide but 2k tall.


> 4 video cables

The IBM T220 4k monitor required 4 DVI cables.

https://en.wikipedia.org/wiki/IBM_T220/T221_LCD_monitors


Also, as far as 6K goes, that's roughly half the bandwidth of 8K.


Thunderbolt 5 supports up to 120Gbps one-way.


Just don't try putting something convenient in between; at least that's what my adventures in TB4 taught me. DisplayPort from a TB port works fine, even when DP goes to a multiscreen daisy chain and the TB does PD to the laptop on the side, but try multiscreen through a hub and all bets are off. I think it's the hubs overheating, and I've seen that even on just 2x FHD (OK, that one was on a cheap non-TB hub, but I also got two certified TB4 hubs to fail serving 2x "2.5K" (2560x1600)). And those hubs are expensive; I believe they all run the same Intel chipset.


How about putting the dock at the end of the chain, after the monitors, does that work?


That would require monitors supporting daisy chaining in the first place, and I never had any problems with them anyway. Likely related to not using a full-on hub but a minimalistic dongle with a DP outlet, a PD inlet and a USB outlet (which then goes to a USB hub switch managing access to simple hubs serving all those low-bandwidth peripherals like the mouse).

The failing hubs were either driving cheap office displays connected through HDMI or high resolution mobile displays connected through USB-C. Few of those support anything like daisy chaining or at least simple PD passthrough so that you can use the same port for driving the display and powering the laptop, and I absolutely do want dual mobile displays. Even if only so that I can carry them screen to screen for mutual protection of the glass.


> Thunderbolt 5 supports up to 120Gbps one-way.

To clarify, there are two options for allocating bandwidth:

* 80Gbps both up- and downstream

* 120Gbps in one direction (up or down), and 40 in the opposite

See:

* https://en.wikipedia.org/wiki/Thunderbolt_(interface)#Thunde...


I wouldn't hold my breath. Competing models seem to top out around 120 Hz but at lower resolutions. I don't imagine there's a universal push for higher refresh rates in this segment anyway. My calibrated displays run at 60 Hz, and I'm happy with that. Photos don't really move much, y'know.


> Photos don't really move much, y'know.

They do when you move them (scroll)


And?

Can you provide a ROI point for scrolling photos at 120Hz+ ?


It looks and feels much better to many (but not all) people.

I don't really know how you expect that to translate into a ROI point.


Sure, give me your ROI point for an extra pixel and I can fit refresh rate in there.


I imagine your mouse still moves plenty though.


If "the mouse looks nice" is the selling point, I'm not sold.


Still no unsigned integer types in the standard library after 26 years?


In one of James Gosling's talks he tells a funny story about the origin of this design decision. He went around the office at Sun and gave a bunch of seasoned C programmers a written assessment on signed/unsigned integer behaviors. They all got horrible scores, so he decided the feature would be too complicated for a non-systems programming language.


Non-systems languages still need to interact with systems languages, over the network or directly. The lack of unsigned types makes this way more painful and error-prone than necessary.


It’s rare I have to do bit math but it’s so INCREDIBLY frustrating because you have to do everything while the values are signed.

It is amazing they haven't made a special type for that. I get that they don't want to add unsigned primitives, though I disagree, but at least make something that makes this stuff possible without causing headaches.
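For anyone who hasn't hit this, here's a hypothetical example of the pain point: assembling a 32-bit word from a byte array, where every byte has to be masked because Java's byte is signed.

```java
public class MaskDemo {
    public static void main(String[] args) {
        byte[] buf = { (byte) 0xDE, (byte) 0xAD, (byte) 0xBE, (byte) 0xEF };
        // Each byte must be masked with & 0xFF, or sign extension
        // turns 0xAD into 0xFFFFFFAD before the shift.
        int value = ((buf[0] & 0xFF) << 24)
                  | ((buf[1] & 0xFF) << 16)
                  | ((buf[2] & 0xFF) << 8)
                  |  (buf[3] & 0xFF);
        System.out.printf("0x%08X%n", value); // prints 0xDEADBEEF
    }
}
```

Forget one mask and the sign extension silently corrupts the result, which is exactly the kind of headache an unsigned type would remove.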


Sometimes I'd like to have unsigned types too, but supporting it would actually make things more complicated overall. The main problem is the interaction between signed and unsigned types. If you call a method which returns an unsigned int, how do you safely pass it to a method which accepts a signed int? Or vice versa?

Having more type conversion headaches is a worse problem than having to use `& 0xff` masks when doing less-common, low-level operations.


> If you call a method which returns an unsigned int, how do you safely pass it to a method which accepts a signed int?

The same way you pass a 64-bit integer to a function that expects a 32-bit integer: a conversion function that raises an error if it's out of range.
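Java 8 did add static helpers along these lines, treating the signed primitives as carriers for unsigned bit patterns, plus `Math.toIntExact` for exactly that checked narrowing; a small sketch:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // A signed byte reinterpreted as unsigned:
        byte b = (byte) 0xFF;                      // -1 as a signed byte
        System.out.println(Byte.toUnsignedInt(b)); // prints 255

        // An unsigned 32-bit value carried in a signed int:
        int big = Integer.parseUnsignedInt("4000000000");
        System.out.println(Integer.toUnsignedString(big));  // prints 4000000000
        System.out.println(Integer.divideUnsigned(big, 2)); // prints 2000000000

        // Checked narrowing: the conversion that raises if out of range.
        try {
            Math.toIntExact(4_000_000_000L);
        } catch (ArithmeticException e) {
            System.out.println("out of range for signed int");
        }
    }
}
```

The values stay declared as `int`/`long`, so nothing in the type system stops you from accidentally mixing signed and unsigned interpretations; the helpers only make the intent explicit at each call site.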


This adds an extra level of friction that doesn't happen when the set of primitive types is small and simple. When everyone agrees what an int is, it can be freely passed around without having to perform special conversions and deal with errors.

When trying to adapt a long to an int, the usual pattern is to overload the necessary methods to work with longs. Following the same pattern for uint/int conversions, the safe option is to work with longs, since it eliminates the possibility of having any conversion errors.

Now if we're talking about signed and unsigned 64-bit values, there's no 128-bit value to upgrade to. Personally, I've never had this issue considering that 63 bits of integer precision is massive. Unsigned longs don't seem that critical.


Like I said, I understand why they don’t.

I think the only answer would be you can’t interact directly with signed stuff. “new uint(42)” or “ulong.valueOf(795364)” or “myUValue.tryToInt()” or something.

Of course if you’re gonna have that much friction it becomes questionable how useful the whole thing is.

It’s just my personal pain point. Like I said I haven’t had to do it much but when I have it’s about the most frustrating thing I’ve ever done in Java.


I don't know about gifted programs, but anything that separates kids who don't want to learn from those who do is a good thing; far too much time is wasted in America's schools catering to bad behavior.


At this age it's probably not even about "want to learn" or "bad behavior" but "is capable of reading fluently" vs "can't read at all".

Clearly there's a middle ground, but that doesn't mean the extrema don't need to be separated.


This. Frankly, it is aggravating if not depressing that it is somehow an issue that should be considered at a national level. It is an issue that it even is an issue.


I think you're missing the point. This isn't separating children on "willingness to learn" but based on race.


Yeah, I think to make a system using this really scale you'd have to add support for this protocol in your load balancer / DDOS defenses.


This isn't really that different to GWT, which Google has been scaling for a long time. My knowledge is a little outdated, however more complex applications had a "UI" server component which talked to multiple "API" backend components, doing internal load balancing between them.

Architecturally I don't think it makes sense to support this in a load balancer, you instead want to pass back a "cost" or outright decisions to your load balancing layer.

Also note the "batch-pipelining" example is just a node.js client; this already supports not just browsers as clients, so you could always add another layer of abstraction (the "fundamental theorem of software engineering").


Does anyone know what it looks like when you use a line scan camera to take a picture of the landscape from a moving car or train? I suspect the parallax produces some interesting distortions.


I've taken a couple of pics from a moving train...

Nankai 6000 series, Osaka:

https://i.dllu.net/nankai_19b8df3e827215a2.jpg

Scenery in France:

https://i.dllu.net/preview_l_b01915cc69f35644.png

Marseille, France:

https://i.dllu.net/preview_raw_7292be4e58de5cd0.png

California:

https://i.dllu.net/preview_raw_d5ec50534991d1a4.png

https://i.dllu.net/preview_raw_e06b551444359536.png

Sorry for the purple trees. The camera is sensitive to near infrared, in which trees are highly reflective, and I haven't taken any trains since buying an IR cut filter. Some of these also have dropped frames and other artifacts.


Exactly what I wanted to know. Is it technically feasible to 'scan' a whole landscape of, let's say, an hour-long train ride?


It’s just a blur. Like the background of the photos in this article.

You can get some cool distortions at very slow speeds, but at car or train speeds you won’t see anything


The background in the article is not a "blur".


Because I can produce 5 clean, properly sized commits in the time it takes to do one round of reviews, so they have to be stacked. It's important that CI runs independently on each commit, and each commit builds on the work of the previous one.


> Are there any improvements to be done to Git?

Github's workflow for stacked PRs is still terrible. There's plenty of room for improvement.


I've had a lot of success writing driver test cases against the hardware's RTL running in a simulation environment like verilator. Quick to setup and very accurate, the only downside is the time it takes to run.

And if you want to spend the time to write a faster "expensive mock" in software, you can run your tests in a "side-by-side" environment to fix any differences (including timing) between the implementations.


It's cool to learn about Verilator: I've been proposing that our HW teams give us simulations based on their HW design for us to target with SW, but I am so out of the loop on HW development that I can't push them in this direction (they'll just tell me "that's interesting, but it's hard", which frustrates me to no end).

Can you perhaps do a write-up of what you've done and how "slow" it was, and if you've got ideas to make it faster?


The hardest part is toolchains, for two reasons. First, Verilator doesn't have complete SV language support, although it's gotten better. Second, hardware has a tendency to accumulate some of the most contorted build systems I've ever seen, and most hardware engineers don't actually know how to untangle them.

Once it's actually successfully run through Verilator, it's a C++ interface. Very easy to integrate if your sim already has a notion of "clock tick."

