AI is the best thing that happened to America in the last decade, and I dearly hope that politicians don't try to ruin it the way they're ruining other parts of the country.
I respect some of Bernie's positions, but his stance on tech is dangerous.
AI is the best thing that happened for a relatively small number of privileged people, most of them located in the USA. This includes the programmers who may have benefited from using AI assistants.
AI has already stolen great amounts of money from a very large number of people all around the world, due to the huge increases in the prices of DRAM, SSDs and HDDs.
Moreover, there have already been a great number of layoffs which, rightly or not, have been blamed on AI. AI may not have been the true reason for them, but it has certainly provided a convenient justification.
There is no doubt that the number of people who have already been harmed by AI greatly exceeds the number of people who have benefited from AI.
There is no reason to believe that this trend will not continue.
When used in the right way, there is no doubt that LLMs and other ML/AI tools can drive significant progress. But recent history makes it seem almost certain that they will be used in the wrong way more often than in the right way, so most people will be affected negatively, not positively.
The problem is not AI itself, but the fact that AI is a tool controlled by extremely evil people, e.g. Sam Altman and Larry Ellison. It is very unlikely that this will change and this is the reason why AI will do more harm than good.
(There are a lot of examples that prove that individuals like those that I have named are truly evil, but I will just quote from TFA: "Larry Ellison predicts an AI-powered surveillance state in which “citizens will be on their best behavior, because we’re constantly recording and reporting everything that is going on”". Even only this is enough to prove that Ellison is an enemy.)
> AI has already stolen great amounts of money from a very large number of people all around the world, due to the huge increases in the prices of DRAM, SSDs and HDDs.
Well, it's not like AI investment money is coming out of thin air: ultimately, normies are buying the stuff that AI-enhanced companies produce, which lets those companies funnel the money to the actual winners/1% who truly benefit. I don't think it's fair to say that AI has been only negative for the normies, when they have to be the ones willingly feeding the beast with their money for the whole thing to perpetuate.
That said I obviously also see a bunch of negative consequences, and perhaps agree that the negatives outweigh the positives.
AI is certainly powerful, but despite tech CEO whitewashing, none of them are planning for how the economy will recover from a potential devastation of white collar jobs. Token bills fund rich investors & executives, not everyday Americans.
For AI to give me abundant free time & happiness, I need to have money, and I don’t see UBI anywhere on OpenAI’s roadmap.
Do you really think it's OpenAI's job to create UBI? Surely, if you feel that it's a good idea, then it should be the government who sets it up.
We can't just magically freeze the economy in time. If we conduct our industries inefficiently just to keep jobs around, we won't be competitive on a global market.
I have strong concerns around the inflationary effects of UBI, but whatever the solution there is, it's not the responsibility of private companies to organize their own welfare systems.
Those of us who watched FB play the “move fast and break things” card, and are now watching the predicted effects of that play out, think people like YOU are dangerous, and we respect people like Bernie for trying to pump the brakes (knowing the last 10 years have been downhill).
"Legibility" must be the wrong word because I can't understand what the author is talking about. Is he saying that the overuse of abstractions is ruining corporate culture? Or is he saying that the uniformity of corporate processes is becoming overbearing?
I think this needs to be re-written with different terminology.
"...you don't care about code review"
Code review is one of the things I care about most! In fact, now that we're in an age where many code changes are generated by LLMs, I think code reviews are far more important than they used to be.
"Processes adopted by a company aren't about the end result of the work. Their stated goals and their actual goals are always distinct."
Wrong again. Often the goal is exactly as stated, for better or for worse. Let's take incident reviews as an example: their goal is to reduce the occurrence of emergency incidents in the future by learning from the mistakes that led to incidents in the past. There's no doublethink involved.
I would suggest that people stop overfocusing on benchmarks, and give this a try. Gemma 4 is performing really well for me, and seems to hallucinate much less than other models I tried in this size range.
This is pretty impressive. I think this sort of thing is a perfect fit for agentic coding because you can compare the generated assembly afterwards as a safeguard/test. Plus, even if the code is messy, you can always ask the LLMs to do some cleanup passes afterwards.
This looks great, but I'm wondering how effective this would be for full model weights rather than just the KV cache. Their paper only gives results for the KV cache use case, which strikes me as strange since the algos are claimed to be near optimal.
This is really cool research, but I'm wondering how much it slows down inference. The readme says that it's "...distinguished by zero overhead (no learned components, no entropy coding)" but does that really mean that this is a "free win"?
It's a very interesting concept, and I signed up to try it. However, after seeing the landing page, my first question was:
"Where's the data on accuracy?"
Backtesting is difficult to do correctly with LLMs, but because this is marketed as being for macro investing, I would expect to see a level of rigor and quantitative analysis consistent with that.
The Monte Carlo simulation engine sounds really cool, but is there evidence to indicate that it generates superior results to expert predictions, or to LLMs alone?
I actually think it would be totally fine for your beta version to have low accuracy numbers. After all, this seems to be something in the very early stages. But to have no quantitative analysis of your system's performance definitely makes me uneasy to trust it.
> because this is marketed as being for macro investing, I would expect to see a level of rigor and quantitative analysis consistent with that.
Thanks for bringing this up. While we talk about Soros' forecasts and compare them against those of an LLM, in the end Soros is not a forecasting tool; it's an analytical framework.
There is a gap between quant modeling and geopolitical analysis that we seek to fill. Specifically, quant models are great at capturing statistical regularities in financial time series but typically treat geopolitical shocks as exogenous noise. Meanwhile, geopolitical analyses in the policy and intelligence communities (with the exception of Bueno de Mesquita [BdM]'s work) provide deep contextual reasoning but rarely produce probabilistic scenario structures or asset-level transmission mappings that can directly inform capital allocation.
We will be shortly publishing a technical preprint laying out the Soros framework in full, but the TL;DR is: we model geopolitical events (or crises in the literature) as partially observed ("fog of war") stochastic games with multiple actors jostling for control over resources. We map out actors across various axes (think of these as actor embeddings), identify key decision points, and enumerate paths across them to estimate scenario probabilities. The scenarios in turn have associated transmission flows and market implications. We will evaluate those as mentioned in the sibling comment. Happy to discuss more.
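To make the "enumerate paths across decision points to estimate scenario probabilities" idea concrete, here is a minimal, purely illustrative Python sketch. The decision points, outcomes, probabilities, and the naive independence assumption used when multiplying them are all made up for illustration; this is not how Soros is actually implemented:

    from itertools import product

    # Hypothetical decision points, each with possible outcomes and probabilities.
    decision_points = {
        "sanctions":  [("tightened", 0.6), ("status_quo", 0.4)],
        "escalation": [("ceasefire", 0.3), ("prolonged", 0.7)],
    }

    # Enumerate every path through the decision points; treat the path
    # probability as the product of its step probabilities (a naive
    # independence assumption made only for this sketch).
    scenarios = []
    for combo in product(*decision_points.values()):
        outcomes = dict(zip(decision_points, (o for o, _ in combo)))
        p = 1.0
        for _, prob in combo:
            p *= prob
        scenarios.append((outcomes, p))

    for outcomes, p in sorted(scenarios, key=lambda s: -s[1]):
        print(outcomes, f"p={p:.2f}")

Each enumerated scenario would then carry its own transmission flows and market implications, as described above.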
First, thank you so much for signing up to try out Soros!
You are absolutely right, of course, to ask about accuracy. TL;DR: we don't have any formal calibration data yet.
The reason why is interesting, though, and it strikes at the heart of global macro investing in particular: things change, often, and sometimes dramatically. Basically, geopolitical "events" are really smeared across time (and sometimes space). Each event update can lead to a cascade of new scenarios branching off and older ones dying out, each with implications on capital flow. It's difficult to disentangle, which is why our preference has been to enable the system itself to monitor feeds, but also update its alerts as it deems fit, and re-run the analysis when it feels there's been enough of a change of state (pun not intended).
One markets-focused eval we have been building towards (and apparently you have been thinking of as well) is comparing against LLMs. Our plan is to run simultaneous comparisons against a variety of frontier models, armed with the same information that we provide Soros, but without the structural framework and simulation engine we've built. Ideally we want to map out the Pareto frontier of model capability vs realized returns, examine performance over horizons, asset classes, and so on, and have concrete numbers on where Soros pushes the curve outwards.
This is being built :), and we hope to get there in the coming few weeks!
It's OK, but it's not the best. There are models that do better; I'd use it for some basic tasks, but not for actually complex tasks like query generation and retrieval.
A mobile failover would be cheaper and would give you better connectivity in heavy rain.
A 4G dongle can be purchased for $15, rather than $200 for a Starlink Mini. Then, let's say your main internet source fails and you need to actually use the backup plan beyond the standby amount of 0.5 Mbps. That will cost you a minimum of $50 for Starlink, versus roughly $25 for a month of unlimited cell service. As for standby costs, you can find phone plans for $5 per month that give a small amount of fast data, as opposed to Starlink's unlimited amount of slow data.
But of course this only works for areas that actually have cell service.
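For a rough sense of scale, here is the comparison above as first-year numbers. The only figure not taken from the comment is the Starlink Standby monthly price, which is assumed at $10/month purely as a placeholder; plug in the real price for your region:

    # Rough first-year cost sketch using the figures above.
    STARLINK_STANDBY = 10   # $/month, assumed placeholder - not a quoted price
    OUTAGE_MONTHS = 2       # months where the backup actually gets used

    starlink = 200 + (12 - OUTAGE_MONTHS) * STARLINK_STANDBY + OUTAGE_MONTHS * 50
    cellular = 15 + (12 - OUTAGE_MONTHS) * 5 + OUTAGE_MONTHS * 25

    print(starlink, cellular)   # 400 vs 115 under these assumptions

The gap obviously narrows or widens depending on hardware choice and how many months the backup actually carries traffic.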
I pay $25 for my backup 5G internet - but unlike a mobile plan, it's actually unlimited at 300 Mbps, and I don't have to resort to TTL shenanigans and such to use it for my whole network. It's just plugged into one of the ports on my router and provides it with a real public IPv4 address. Ran it for a few days when the fiber dropped out and consumed 200GB without complaint from either myself or the ISP.
The bottom of the page does give some details about what "unlimited data" means here in the UK between the different carriers. Some cap speeds, some monitor usage and then either turf you off on "fair use" grounds or do traffic management/shaping. The general rule seems to be that 650GB in 6 months is just about the limit of what is OK.
That wouldn't be anywhere near enough for me. Looking at my router I see I've downloaded 522GB in the last 34 days alone.
> Romania reportedly has unlimited for 4€ but I don't know which operator.
Orange Yoxo is the only one that is actually unlimited; all the others have fine print somewhere saying "up to X GB/month, then bandwidth is severely throttled".
I'm using the 4.9€ plan for a mountain webcam[1] and they have been true to their word, no throttling so far.
I mean it's more to do with the cool factor of using a satellite, not practical concerns. Practically a mobile failover is superior if you have coverage.
Wouldn't a widespread ground-based infrastructure outage also take out (or at least severely degrade) Starlink in the affected region if people were to widely use it as a backup solution?
Starlink has been known to carry traffic over lasers from Southern Africa to Europe and from New Zealand to the Eastern USA. During the power outage in Spain/Portugal they proactively moved traffic to the UK.
Local failures don't matter unless your country doesn't allow landing user traffic in other countries (Indonesia, Bangladesh).
If you're in a rural area (and heck, even in an urban area), the primary ISP of a region dropping is likely to cause a lot of congestion from cellphones falling back to the operator network.
I found it quite absurd that Spectrum (my cable operator) wants to sell me a modem with integrated 5G/4G backup knowing that as soon as the cable plant drops, hundreds of local phones are going to congest the network as well and my "Invincible WiFi(tm)" will end up dead as a dodo.
I'll just throw a Peplink up and throw the cable and Starlink into it and run that as my load balancer.
Managing wireless at a large corporate campus, we're tucked away far enough that we have a couple of cell towers for the operators on site.
If our site wireless dies, it’s a near instant logjam as we watch 1500 phones and cellular devices on our WiFi alone dump back to the macro network for data to the sole tower on campus.
Out-of-band management also becomes an immediate nightmare in this scenario, when we need to swim upstream of the phones.
> A mobile failover would be cheaper and would give you better connectivity in heavy rain.
When I was living at the rural seaside (literally grapes growing in front of the sea: nice place), a bad electricity outage would take down everything, including the only cell tower we were connected to. So no Internet, no mobile phone. Nothing but the laptop's batteries.
There are also people whose ISP is the same company that provides their phone number: about a year ago in my country (a highly modern, western EU country) a major carrier went down for a few hours. Electricity kept working, but all the people on that ISP and mobile carrier were out of luck. Most shops couldn't accept payments anymore (except cash, but people don't use that much here).
Failover on mobile is, for many, the same thing as no failover at all: you may as well not even bother.
Satellite failover, on the other hand, is much harder to disrupt.
The issue is that mobile is easily overloaded if those around you are also failing over onto it. There are only so many channels available per sector. In my experience, when one of the two incumbent carriers in my area goes down, mobile is immediately useless as a backup.
Starlink seems to provision capacity by locking your service to one address at a time; presumably, this means they have enough capacity for the customers in each physical area. By contrast, mobile networks have to contend with highly mobile terminals and highly volatile demand.
I would wager that today’s Starlink is better able to cope during a fixed line outage in an area simply because they at least have already provisioned capacity for the subscribers in that area, whereas mobile operators operate closer to capacity limits at all times and do not have the ability to scale when everyone is tethering suddenly.
I don't think mobility matters during a broadband outage. The problem is people failing over to cellular. (If anything, mobile terminals may require cellular providers to provision extra headroom, which helps during an outage.)
Starlink will have the same problem unless it provisions extra capacity for users on standby plans. The plan is so cheap that I can't imagine they're provisioning much.
I'm currently using 4G as a backup, and the Starlink Standby plan would definitely be cheaper. Only by a few dollars, but still cheaper, and unlike the cellular plan it has unlimited bandwidth; with the cell plan I'm relying on rollover data accumulating during periods of non-use to cover the times it's actually used.