Piketty’s main point is that if the return on capital is greater than the return on labor then over time you will get inequality. But like... what would a world in which that is not true look like? What would it even mean for labor to scale? I’ve thought about this a bunch and cannot imagine what such a system would look like or how it would work, except for maybe utopian socialism, which basically isn’t physically realizable.
What kind of system is he proposing in which capital doesn’t create leverage?
The world and social structure we grew up in were ones where that wasn't true. This idea that you can start a business and pull yourself up by your own bootstraps - that's made possible by the fact that the return on labor is greater than the return on capital. YC is kinda built on that idea: work hard, get rich.
The other kind of society is one in which the best way to establish a secure place for yourself is to take secure control of some piece of income-generating property. Piketty uses Jane Austen's novels to show what these societies look like: marriage and other political arrangements become far more important than how hard-working you are. Hard work and toil are generally associated with the poor.
So Piketty's argument is that, while we may believe we are in a pull-yourself-up-by-your-bootstraps society of hard work, the numbers show that we are in fact in a work-your-political-connections-to-establish-control-over-productive-capital society.
So it should be no surprise that HN is a great place to find counterarguments for Piketty's theories. :) But that's his theory, and I believe it's worth keeping an open mind as to whether it's true. If it were, we would see that it's easier to get richer by already being rich or making friends with the rich than it is to get rich by working to build something yourself.
Maybe? If r has only exceeded g for the past 20 years or so, though, we would expect most of the wealthy to have acquired their wealth at a time when g > r.
Point being that wealth is a lagging indicator. If I'm 25, I'm making my way in a different world than the one my grandfather grew up in.
> What kind of system is he proposing in which capital doesn’t create leverage?
Creates less leverage (per unit). Make it less efficient for a single investor to invest a lot and reap all the profits, giving more room for multiple people to invest instead.
I don't think the proposal is to cease return on capital, but to reduce it to a level which maximizes some other aspects of the system, perhaps even overall growth.
From what I understand, not the "return" on labor but just economic growth via productivity increase of labor. [1]
And a world in which the return on capital is equal to, or less than, economic growth is simply one where capital gains are taxed more -- enough so that they're less than economic growth.
It's literally that simple. I'm curious what you're having trouble envisioning about such higher tax rates? It doesn't seem to have anything to do with utopia or even socialism.
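To make the arithmetic concrete, here's a toy sketch. The rates are purely illustrative (not Piketty's actual figures); the point is just that a flat capital gains tax mechanically pushes the after-tax return on capital below the growth rate once it's high enough:

```python
# Illustrative only: hypothetical rates, not Piketty's actual data.
# Pre-tax return on capital r, economic growth rate g, and a flat
# capital gains tax rate t reduce the after-tax return to r * (1 - t).

r = 0.05   # assumed pre-tax return on capital (5%)
g = 0.02   # assumed economic growth rate (2%)

def after_tax_return(r, tax_rate):
    """After-tax return on capital under a flat capital gains tax."""
    return r * (1 - tax_rate)

# Untaxed: r > g, so capital compounds faster than the economy grows.
print(after_tax_return(r, 0.0) > g)    # True

# Taxed at 65%: after-tax r falls to 1.75%, below g.
print(after_tax_return(r, 0.65) < g)   # True

# The minimum flat tax rate that pushes r down to exactly g:
t_min = 1 - g / r
print(round(t_min, 2))                 # 0.6
```

With these assumed rates, any flat capital gains tax above 60% flips the r > g inequality.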
Every path along that road goes back to the same thing, including Piketty's: extreme control by a very small group of people over everyone else.
Piketty is merely suggesting control by a different group. The non-wealthy often respond very positively to that, thinking their sandwich will be bigger and they'll have some enhanced influence - which never actually happens in practice. The types of people that pursue power in very concentrated systems of any sort almost never like to share their accumulated power.
You have to either nationalize wealth and production formally (Socialism), so that bureaucrats or authoritarian citizen boards/panels (or the military as often happens) govern the use & distribution of all wealth and production; or you have to aggressively regulate all possible uses of private wealth, which also requires similar authoritarian control groups for oversight & dictate, that will de facto rule as fascist entities (a veneer of private ownership scenario).
This is Socialism vs Fascism, in which one formally nationalizes and doesn't usually bother to pretend about who owns & controls what; and the other de facto nationalizes but leaves you with fake property ownership & control (which they can and will revoke at any time). If you want to pursue any Piketty style premise, all roads go to the same place. There will be no exceptions, there has never been an exception. You can vary the details, it doesn't matter, you get inherently violent, humanity crushing authoritarianism in every scenario if you actually pursue it very far. Piketty requires authoritarian-heavy structures, you can't redistribute and equalize enough in any other manner.
We've already figured out the furthest you can go before you tip over: various European nations have experimented out in the open with very hefty welfare state concepts for generations now. They've taken it as far as it can go, before tipping into authoritarianism. Several have had to roll back their burdensome welfare states at times, including eg Sweden, as they rolled into stagnation. It's a challenging balancing act that requires careful management to continue operating well.
Piketty has proposed nothing new, these are very old ideas and they've all been trialed in the real world repeatedly, all have failed repeatedly. Piketty is a rehash for modern times, Marx lite. The only response you will ever get when you point out that it has all been tried already, is: well, they didn't crush enough skulls (insert alternative fantasy story here: they didn't invent a technocratic utopia smart AI solution that was good enough to achieve harmony), or it would have worked.
Socialism is simply the workers owning the means of production. A hybrid model exists in Germany, where workers are represented by half of the board of directors.
Democracy is compatible with socialism, and so are free markets. Using ownership of money to control businesses is not. The people who do the work are in charge.
According to [1], a chest CT scan radiation dose ranges from 4 to 18 millisieverts (mSv). Annual background dose is ~3 mSv. [2] states that a 100 mSv dose would increase risk of cancer by 0.5% (5/1000).
You cannot compare an annual background dose to the dose received in a CT scan. Similarly, the 100 mSv dose of your reference number 2 is an inappropriate comparison because it is spread out over time compared to a CT scan.
Your premise is like saying: walking puts an impact of 0.8 J on my foot. I walk 10,000 steps a day. 10,000 * 0.8 is 8,000 J. A bullet produces an impact of 168 J when fired from a .22 rifle into a human body. Therefore walking is much more dangerous than a bullet from a rifle.
The Sievert unit is based upon the Gray unit of radiation dose, but it also includes a weighting factor for the type of biological tissue that is exposed to radiation. You can also read up on the linear no-threshold model of radiation exposure for more information. In terms of cancer risk, radiation doses on a range from 1 mSv to 100 mSv can definitely be compared, regardless of the duration of the exposure. There is a range of higher doses e.g. above 1 Gy that would cause acute radiation syndrome if received all at once, where different biological effects would predominate.
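For what it's worth, here's the scaling the LNT model implies, using only the figures quoted upthread (LNT itself is a simplifying assumption, and under it the duration of exposure doesn't enter the calculation at all):

```python
# A rough linear no-threshold (LNT) sketch: scale the quoted
# "0.5% extra cancer risk per 100 mSv" figure linearly down to
# CT-scan doses. The numbers come from the comments above; the
# LNT model itself is a simplifying assumption.

RISK_PER_MSV = 0.005 / 100  # 0.5% per 100 mSv, expressed per mSv

def lnt_excess_risk(dose_msv):
    """Excess lifetime cancer risk under the LNT model."""
    return dose_msv * RISK_PER_MSV

# Chest CT dose range quoted above: 4 to 18 mSv
low = lnt_excess_risk(4)    # 0.0002 -> 0.02%
high = lnt_excess_risk(18)  # 0.0009 -> 0.09%
print(f"{low:.4%} to {high:.4%}")
```

Note this linear extrapolation gives a much smaller figure than the 0.7%-per-scan number cited downthread, which is part of what's being argued about here.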
What you suggest seems plausible, but I can't immediately calculate, for example, DNA repair rate vs. X-ray photon flux and cross-sections. Can you point me to something that confirms incidence or severity of radiation-induced disease depends on flux rather than dose? Tnx.
Sunburns are known to significantly increase cancer risk versus similar exposure spread over longer periods. Sunburn is rather similar to acute radiation poisoning, suggesting similar mechanisms.
And yet, this Harvard Medical School link (not a report, not peer reviewed, but whatevs) clearly states that one CT scan of 4 to 18 mSv increases cancer risk by 0.7%, which is higher than the increase of 0.5% from 100 mSv of exposure over a greater period of time. So, no. The Harvard Medical School link does not refute my point in any way. In fact, it supports my point.
This is called “Special English” aka “Simple English” or “Voice of America English”. It is meant to be very clear and straightforward to understand to minimize the possibility of miscommunication and make it so that non-experts of all backgrounds can take away the important messages. Given how important the medicine supply chain is to lots of people’s daily lives, I’m sure they didn’t want this page to seem understandable only by the educated.
There is not enough of a drug, according to a drug maker. There aren’t enough ingredients to make the drug. The missing ingredients are made at a place affected by the coronavirus.
Fortunately, there are different types of drugs that can help. We are making these other drugs available.
If they are trying to say there aren't enough ingredients to make the drug then the second way is better. But that isn't the same as what the original sentence said.
The phrase 'issue with manufacturing of' covers a much wider range of problems than a shortage. Maybe they have tonnes of materials and they just need a mechanic to fix the machine for example. No shortages of anything except people in the right place at the right time.
> Special English is a controlled version of the English language first used on 19 October 1959... World news and other programs are read one-third slower than regular English. (paraphrased from Wikipedia)
This is fascinating. I had trouble understanding the FDA's article until I slowed down and imagined a newscaster reading it. (It's still really hard to skim, though. Weird phrasing.)
I work for a health-related company, and to us it's called "Health literate."
But you're right: the idea is to write as plainly and clearly as possible so that the largest number of people can understand.
As for why the drug or manufacturer wasn't named, my guess is that it's so that there isn't a run on a particular medication causing unnecessary shortages.
I'm also interested in this question of "Special $LANG" for other languages. For German learners, DW has a daily podcast Langsam gesprochene Nachrichten ("News read slowly"), which I have found invaluable.
I'm glad to see this posted. When you dumb down the vast majority of media that your population consumes and make all of it essentially predigested, the average person becomes substantially less able to parse technical literature or think critically, because in the majority of the media they consume there is little required in the way of thinking.
Making things accessible to the lowest common denominator has a generally negative effect on society IMO. This is one major reason why our U.S. schools have been falling behind for decades - they are too focused on not leaving the poor performers behind, at the cost of limiting the achievement of those above the average.
I had a lot of trouble understanding it until I read it 5 times. All the words just turn to soup in my brain as I read them... It's like the information density of the text isn't high enough to keep my attention or something.
The first sentence in particular has a very "the department of redundancy department" structure. Also took me a few tries to realize what it was actually saying.
Music production is usually Mac at this point, and that requires wired headphones/speakers. Not sure it’s big enough of a market that Apple wouldn’t drop it anyway, but Bluetooth latencies are too high to be a substitute there.
Anyone doing pro music work will have a USB interface anyway.
But, like, people listen to music and watch videos on their laptops, and it would be absurd to force them to use bluetooth headphones for no good reason.
This statement isn't supported by any real evidence. In fact, a lot of actual research supports the idea of keto for improving outcomes in some types of cancer.
Again, the important thing is ATP, and you can get it from either glucose or ketone bodies:
> In fact, the late George Cahill did an experiment many years ago (probably would never get IRB approval to do such an experiment today) to demonstrate how ketones can offset glucose in the brain. Subjects with very high levels of B-OHB (about 5-7 mM) were injected with insulin until glucose levels reached 1 mM (about 19 mg/dL)! A normal person would fall into a coma at glucose levels below about 40 mg/dL and die by the time blood glucose reached 1 mM. These subjects were completely asymptomatic and 100% neurologically functional.
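A quick sanity check of the units in that quote; the molar mass of glucose is the only number added here:

```python
# Converting blood glucose from mmol/L (mM) to mg/dL using the
# molar mass of glucose (~180.16 g/mol).

GLUCOSE_MOLAR_MASS = 180.16  # g/mol

def mm_to_mg_dl(mmol_per_l):
    """Convert a glucose concentration from mmol/L to mg/dL."""
    mg_per_l = mmol_per_l * GLUCOSE_MOLAR_MASS  # 1 mmol -> 180.16 mg
    return mg_per_l / 10                        # 1 L = 10 dL

print(round(mm_to_mg_dl(1), 1))   # 18.0, matching the quote's "about 19 mg/dL"
print(round(mm_to_mg_dl(5), 1))   # 90.1, a mid-normal fasting glucose
```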
no, you're both oversimplifying. the brain needs some glucose to sustain itself, a little bit, according to present day research. yes, it can principally function on ketones but not /exclusively/.
it is not controversial that the brain can survive long periods of time on principally ketones, but what has not yet been proven (for obvious ethical reasons) is whether or not the brain can sustain itself with absolutely no glucose whatsoever.
again, the ratios are key. i'm not contesting that ketones can be used as the principal fuel source, just that "the important thing is ATP" because the brain needs some glucose.
I am so over goal-less MMOs with simplistic pastel graphics. We've done this so many times before. How is Horizon any different from Second Life? It's really unclear. Both are ugly.
I am a huge fan of the idea of the Matrix, and I don't even think it needs to be gamified to be fun, but the graphics are such a big part of the appeal of VR for me. I would pay a lot of money just for a photorealistic VR house with a beautiful view out the windows and in-world access to my computer - a better place to hack than my usual Starbucks. Why can't we have something like the Unreal tech demos? It's clear that beautiful worlds are possible now: https://www.theverge.com/2019/3/20/18273832/epic-unreal-engi...
If a VR world that looked like the first video in that article existed, it would be huge. But the intro video for Facebook Horizon? I am completely unsold.
They're possible now, but you also need expensive hardware to run them. This is meant to run on the Oculus Quest. Give it a few more years (and probably a few more after that) and portable ultrarealism will get here eventually.
You don't need expensive hardware to do better than glossy pastels, at least for desktop gaming. I don't know what the requirements are for VR, but even turning modern games down to low/medium settings can still look pretty stunning if you haven't gamed in a while.
This reminds me of higher-res Miis from the Wii, which was dated tech even at the time.
That said, the most important thing people should be asking is: is it fun? Because if you aren't gonna go super realistic then you had better nail the gameplay (Nintendo or Blizzard style).
What if it's targeted at kids/teens/young people and is lightweight "casual" playing experience? Wouldn't this look be preferred?
Sometimes ultra-realism takes the fun nature out of things.
The Simpsons and South Park could easily use high-tech graphics, but the simple nature of it all is what makes them appealing. Same with Minecraft or even Zelda.
It's also easy to build expansive worlds with breakable objects that don't require tons of physics and special effects to make something like a box exploding look realistic.
The Nintendo Switch is by no means powerful hardware. But Breath of the Wild looks beautiful on it because of the art style.
I really don't think VR needs to wait half a decade for hardware to catch up. Existing systems can create beautiful experiences with the right artistic decisions
I 100% agree. I'd say Breath of the Wild looks incomparably better than this art style, though. Something about this art style feels so... sterile? Maybe I'm just out of touch with what kids are playing these days, although as another commenter said, VR is contraindicated for children.
A technology like Google Stadia will probably solve this. Nobody will need expensive hardware to run it. The graphics will be generated in the cloud and streamed to the VR headset. Creating ever more powerful hardware to run better graphics is a dead end.
The speed of light says no. “The cloud” will always have too much latency to do acceptable vr. To meet your deadline for a 120hz video frame, your signal can travel at most 1500 miles in a vacuum, closer to 1000 miles in copper/fiber.
> “The cloud” will always have too much latency to do acceptable vr.
Perhaps but you haven't established that.
> To meet your deadline for a 120hz video frame, your signal can travel at most 1500 miles in a vacuum, closer to 1000 miles in copper/fiber.
Edge computing is a feature of some modern clouds, and certainly is intended to have much lower round trip distance than that to at least major markets, specifically to reduce latency.
Admittedly, public cloud offerings don't tend currently to have compute resource suitable for hosting VR in their edge offerings, but it's certainly something a major cloud player supporting their own gaming/VR offering could do, and for that matter a feature that it is easy enough for public cloud vendors to offer if there was sufficient demand.
Edge computing is a thing, but it really doesn’t help that much: That 1000 miles has to include the signal path inside your gpu to generate the frame and assumes no data loss in transit.
If it takes 4ms to generate a frame you’re down to < 500 miles from the data center. If you need to pad for packet loss you’re down even further.
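Rough numbers, spelled out. The 4 ms render time and the ~2/3-c fiber speed are this thread's assumptions; real networks add encoding, routing, and queuing overhead on top, and since the frame has to come back down after the input goes up, a round trip halves the usable one-way distance:

```python
# Back-of-the-envelope latency budget for cloud-rendered VR at 120 Hz.

C_VACUUM_MI_S = 186_282               # speed of light in vacuum, miles/s
C_FIBER_MI_S = C_VACUUM_MI_S * 2 / 3  # ~2/3 c in optical fiber

frame_budget_ms = 1000 / 120     # one 120 Hz frame: ~8.33 ms
render_ms = 4.0                  # assumed time to render the frame
transit_ms = frame_budget_ms - render_ms

# Total distance the signal can cover in the remaining time...
total_path_miles = C_FIBER_MI_S * transit_ms / 1000
# ...and since input must go up and the frame must come back,
# the data center can be at most half that away.
max_dc_distance = total_path_miles / 2

print(round(frame_budget_ms, 2))    # 8.33
print(round(total_path_miles))      # 538
print(round(max_dc_distance))       # 269
```

So counting the round trip makes the budget even tighter than the "<500 miles" above.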
> If it takes 4ms to generate a frame you’re down to < 500 miles from the data center. If you need to pad for packet loss you’re down even further.
As I understand it, edge computing locations tend to be in most major cities in the regions covered, providing tens-of-miles distances to most people in major markets.
Global birth rate is around 150,000 people per day. At Facebook scale that’s not one sucker born every minute but tens of thousands; millions of new Facebook users born every month.
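The arithmetic, spelled out (the 150,000/day figure is the rough number quoted above):

```python
# Checking the scale behind "not one sucker born every minute but
# tens of thousands; millions... every month".

births_per_day = 150_000            # rough global figure from the comment

per_minute = births_per_day / (24 * 60)
per_month = births_per_day * 30

print(round(per_minute))   # ~104: "one born every minute" scaled ~100x
print(per_month)           # 4,500,000: millions of potential users per month
```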
> Global birth rate is around 150,000 people per day. At Facebook scale that’s not one sucker born every minute but tens of thousands; millions of new Facebook users born every month.
Yes, but how many of them are going to waste time playing pointless games like Horizon online? How many people that would play such a game haven't already? To say that there are a lot of Facebook users and that that naturally means a lot of players is strange given that VR headsets are still a luxury item and that Facebook Horizon doesn't essentially offer anything novel outside that.
> They never played second life or minecraft.
> "We" have done this, most humans haven't.
Similarly, most humans haven't played Horizon, and there is as far as I'm concerned no compelling reason to believe that a lot of people will. Who would waste their time in that disgusting looking world and present themselves using a bland Wii-like avatar?
I really enjoyed SL for a few years. At one point I was in some role-play world that involved hunting, or being hunted, with bow and arrow or melee weapons. I had played plenty of other purpose-built FPSs but enjoyed that SL game more despite the crap graphics, laggy scripts, weapons, hacks, etc. It felt more visceral.
I never spent a cent inside of an FPS but probably spent a few thousand dollars in SL.
Sure, but the parent post was talking about the graphics. Second Life looked good when it launched but is so old that it looks outdated. It tried to look realistic, while both Horizon and Rec Room go with the Mii look. My point is that Rec Room is a better comparison, since it uses the same graphics style, is way more popular and up-to-date, and is also VR.
It's going to be interesting to see how Facebook handles the (legal) fringe and subverts creativity when it doesn't fit their business model.
Any such VR world should be openly developed and not be provided by a corporation who can selectively put anything in there and inhibit anything going against company policies.
Or why not compare it to Second Life's foray into VR, "Sansar"? I haven't tried it in about 9 months, but it's definitely an improvement over SL, albeit full of bugs.
I don't think this is actually true; being close to some deep learning research groups, it's really surprising how powerful GPUs have gotten. Improvements over the last few years have been very nonlinear; it's kind of astonishing. And, internet latency has also gotten very good. I bet it's totally possible to make something like this now, though most people don't realize it yet!
Do you see a lot of games that run well in 4k@120hz? VR is much more demanding. Maybe if you spend $20k on a box with 8 GPUs it will be able to run it. But it’s not just GPUs, you also need much better headsets (much higher refresh rates, 8k+ resolution, wide FOV, very good eye tracking for the foveated rendering, much higher dynamic range). Most importantly, creating all that photorealistic content is a huge effort, and you need a lot of it for VR. How much are you willing to pay for it?
Tell that to the people who get nauseous from VR. There are some people that will get nauseous from VR no matter what, and then there are people of varying sensitivity. The higher refresh rate, lower latency, improved FOV, and improved optics will all help more people be able to experience VR at all. And it will allow everybody to experience it for longer periods of time more comfortably.
Your parent post is true. This isn't about PC GPUs, where the cooling hardware alone weighs almost as much as the entire standalone headset. These headsets are based on mobile phone hardware and need to deliver stereo rendering at 75 to 90 Hz to keep the user comfortable. Also, heat dissipation is a lot harder for a box strapped to a face, which can't be allowed to reach 50 to 60 degrees Celsius. You need to make lots of compromises for that.
> it's really surprising how powerful GPUs have gotten. Improvements over the last few years have been very nonlinear
These improvements did not come at fixed cost, though. Prices for top-of-the-line GPUs have been increasing (as well as die sizes, power consumption, and manufacturer margin).
Together with Moore's law, which already brings nonlinear improvements, the available computational power has increased a lot.
However, those GPUs are very expensive. Most people just can't afford them. And if few people can afford them, it's hard to justify paying a whole army of developers (like AAA game studios) to build those kinds of experiences.
I had that exact coffee-break argument some years ago and have a Slack reminder "play some VR" at 9:00 Sunday, December 7th, 2025. I might have to revise that.
6 years from now? Maybe. It depends on Apple - if they release a VR headset by then, it might become an iPhone moment for VR. However it seems like they are focusing on AR, rather than VR, so who knows.
There is no chance they are detecting extracellular action potentials with their giant skin surface electrodes. This is 100% EMG, which is a potential produced by muscle tissue. They might be able to detect very subtle EMG signals that don't correspond to visual movement, but they for sure aren't detecting APs.
Super interesting. At Burning Man this year it seemed like a significant percent of the city had goTennas, which had never been true before. I also remember reading about a big mesh in Brooklyn being a thing recently. I bet we’re going to be hearing more about practical mesh networks in the coming years.
Reminds me of Nintendo's PictoChat, a peer-to-peer, physical proximity-based chat application that was built into the DS. I've written about it on here before [0]:
> [You could] chat on one of four available channels with nearby users over local wireless, by exchanging text and hand-drawn pictures.
> At about the peak of its popularity—right before smartphones became truly prevalent among kids my age—I was doing middle school science fairs. This involved lots of standing around waiting in front of your trifold posterboard while judges slowly worked their way along the rows and rows of other projects. We weren't supposed to leave our own stations in the meantime, and they were spaced out enough to make talking with your neighbors inconvenient. Instead, the whole room was on PictoChat, filling up all four channels with streams of chatter and doodles and mutual commiseration on our anxieties over presenting to the judges.
For a company known for perpetually trailing behind its competitors in online services, Nintendo was oddly ahead of its time here (and with the Download Play feature, too: peer-to-peer software distribution and wireless networking!).
I think a few things hampered Nintendo. Their (laudable) commitment to backwards compatibility led to them getting stuck behind the curve technologically -- the POWER architecture they stuck with from the GameCube (when it was a mostly mainstream choice) to the Wii U (when it really wasn't), and the older pre-Cortex ARM designs they were using in the 3DS years after Cortex was shipping, left Nintendo in the position of not being able to really ride the bulk of the improvements happening in the marketplace. (It's notable that the Switch is an abandonment of hardware-based backwards compatibility in favor of Nvidia internals that wouldn't look out of place in an Android tablet.)
And because Nintendo prides themselves on being a family company, in much the way that Disney does, they had less of a tolerance for looking the other way at just how bad other users can be. Microsoft would just throw out an "Online interactions not ESRB rated" warning and moderate the worst of it after the fact when building Xbox Live. I don't think Nintendo ever saw that as an acceptable option.