Geoffrey Hinton: was considered a quack and an outsider for YEARS because of his ideas on AI... the breakthrough he spawned was done on a shoestring of a budget (read: home pc).
Edit: I forgot John Yudkin: Pure White and Deadly, talking about how bad sugar is for you in 1972...
Rejected by the mainstream academics, and in a brutal way, happens a LOT more than we think.
We started with Gecko in 2015, but it lost on many dimensions in a head to head comparison enumerating gaps vs. Chrome. DRM is just one example. We don't like DRM, but users want Netflix to work, and Google makes WideVine free-as-in-beer; whereas at the time, Mozilla had an Adobe deal that did not extend to non-Firefox Gecko embeddings.
So we switched to chromium/Blink in late 2015. Much later, when I visited Apple in early 2017, a devrel friend asked why we couldn't use WebKit. A WebKit founder in the meeting agreed with me that there was no way for Brave to do so on Windows w/o running out of capital. DRM again was an issue too, without WideVine. Don’t blame startup for not carrying a full engine — that needs deep pockets. While MS does have deep enough pockets, it is starting by using chromium/Blink and slow-forking.
The Megahertz Wars were an exciting time. Going from 75 MHz to 200 MHz meant that everything (CPU limited) ran 2x as fast (or better with architectural improvements).
Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.
It’s the harmless version of “my tribe battles your tribe” for thousands of years, without the bloodshed. We’ve evolved to enjoy competition in general.
Not everyone of course. But I find sports fans to be not that different from chess fans for example, in their passion, armchair strategy, and sheer emotional ups and downs.
My personal favorite sport is Formula 1. It tickles all the same parts of our sports fans brains, but also tickles my nerd brain with the strategy, lap math, and all the precision and tech (apart from the fact that I personally looooove driving and Kart racing)
About the tech, you’d be amazed at the amount of tech involved in F1. Just the bandwidth used for telemetry. The supercomputer simulations performed during races, etc. and that’s just the computer tech.
I think the conversation needs to change from "can't run software of our choice" to "can't participate in society without an apple or google account". I have been living with a de-googled android phone for a number of years, and it is getting harder and harder, while at the same time operating without certain "apps" is becoming more difficult.
For example, by bank (abn amro) still allows online banking on desktop via a physical auth device, but they are actively pushing for login only via their app. I called their support line for a lost card, and had to go through to second level support because I didn't have the app. If they get their way, eventually an apple or google account will be mandatory to have a bank account with them.
My kid goes to a school that outsourced all communication via an app. They have a web version, but it's barely usable. The app doesn't run without certain google libs installed. Again, to participate in school communication about my kid effectively requires an apple or google account.
I feel like the conversation we should be having is that we are sleepwalking into a world where to participate in society you must have an account with either apple or google. If you decide you don't want a relationship with either of those companies you will be extremely disadvantaged.
I guess it's the end of days, if tags have stopped blinking.
> And the beast shall come forth surrounded by a roiling cloud of vengeance. The house of the unbelievers shall be razed and they shall be scorched to the earth. Their tags shall blink until the end of days.
— from The Book of Mozilla, 12:10
All: if you can't respond in a non-violent way, please don't post until you can.
By non-violent I mean not celebrating violence nor excusing it, but also more than that: I mean metabolizing the violence you feel in yourself, until you no longer need to express it aggressively.
The feelings we all have about violence are strong and fully human and I'm not judging them. I believe it's our responsibility to each carry our own share of these feelings, rather than firing them at others, including in the petty forms that aggression takes on an internet forum.
If you don't share that belief, that's fine, but we do need you to follow the site guidelines when commenting here, and they certainly cover the above request. So if you're going to comment, please make sure you're familiar with and following them: https://news.ycombinator.com/newsguidelines.html.
I don't understand how slivers of stock is supposed to incentivize anyone to do anything.
Amazon has 1.5 million employees. Say that it's a completely fair co-op and I have a 1-in-1.5-millionth share of the whole company. Their market cap is about 2.5T, so this is about 1.6 million USD in stock that I own. (By amazing coincidence, their market cap in USD is about the square of the employee headcount)
But if I'm a rank-and-file employee with nobody under me, then doubling my production could only be equivalent to adding one more 1-in-1.5-millionth to the company's value, right? Equivalent to hiring one more employee at my level.
For that impossible extraordinary 80-hours-per-week double effort, my stock would go up... a dollar, right? Roughly 1-in-1.5-millionth of my 1.6 million dollars of stock.
I think it's a joke. I think "stock incentivizes people to work harder" is a little joke that people tell each other so that labor will be pacified with company stock and leftists will bicker about co-ops instead of saying the quiet part which is that people just want more money
I just don't see any math in which stock isn't basically a tragedy of the commons for boots-on-the-ground workers. If I was paid for exactly the labor I do, doubling my effort doubles my paycheck. If I have stock, some of that revenue is spread to everyone else who has stock. Giving everyone stock doesn't incentivize anyone any more, right? What am I overlooking?
The argument seems flawed to me: by "killing the web", they refer to the example of a company adding SEO'd information to their website to lure in traffic from web searches.
However, me personally, I don't want to be lured into some web store when I'm looking for some vaguely related information. Luckily, there's tons of information on the web provided not by commercial entities but by volunteers: wikipedia, forum users (e.g. StackOverflow), blogs. (Sure, some people run blogs as a source of income, but I think that's a small percentage of all bloggers.)
Have you ever looked for a specific recipe just to end up on someone's cooking website where they first tell your their life story before - after scrolling for a half a day - you'll finally find what you've actually come there for (the recipe!) at the bottom of their page? Well, if that was gone, I'd say good riddance!
"But you don't get it", you might interject, "it's not that the boilerplate will disappear in the future, the whole goddamn blog page will disappear, including the recipe you're looking for." Yeah, I get it, sure. But I also have an answer for that: "oh, well" (ymmv).
My point is, I don't mind if less commercial stuff is going to be sustainable in a future version of the web. I'm old enough to have experience the geocities version of the early web that consisted of enthusiasts being online not for commercial interests but for fun. It was less polished and less professional, for sure, but less interesting? I don't think so.
Nice effort but this isn’t interesting at all. You skipped the most interesting part; parsing http. This is beejs networking tutorial with writing a file to a socket.
Harsh? Maybe, but you’re posting this to a site with some of the most talented developers on planet. Real talk, sorry.
You give a long list of features that I don't want. And then go on to encourage everyone to switch text editors, and adopt a specific plugin that happens to work in the way that you personally like.
As a vim user, this is kind of what I have come to expect from emacs users. Honestly, I'm glad that you've found something that works well for you. But I hope that some day you internalize the fact that other people aren't you, and they shouldn't always be "encouraged" to give up their existing solutions to do things in the way that you've decided is perfect.
I'm so interested in this topic, for a weird reason.
Since I was a kid, I've thought I was "prone to migraines", and ascribed various triggers to them - sun exposure, heat, physical exertion, mental exertion, etc. I'd get a migraine sometimes after a long hike on a weekend - and also a long business meeting entirely indoors in an air-conditioned space.
Only when I was around 35, did I figure something out. All these situations lead to me getting dehydrated without any obvious accompanying feeling of thirst. Hiking all day will do it - walking around an outdoor shopping mall on a hot afternoon - or sitting in an all-day business meeting focused on the work at hand and forgetting to drink. And all these situations lead to a migraine - my only "migraine" trigger is simple dehydration, nothing more complicated.
The weird thing is, it took me a long time (decades) to put this together, because I just figured that I couldn't be dehydrated if I wasn't thirsty, and I had no association between "feeling thirsty" and getting a migraine.
I get what I consider normally thirsty in other circumstances, but somehow there's a failure mode where my body doesn't warn me. So now I just remember to chug lots of water (and electrolytes) if I'm exerting myself even if I don't really feel thirsty, and I can systematically avoid triggering migraines.
Now that I understand it the association is quite clear and obvious in retrospect.
My father wanted to open a butcher shop when he was 25, he was given a large loan by my grandfather to do so. He was already a master of his trade at this point and I am sure he had a deep insight into the industry and the practices of the time. However, I think that if my granddad had used the "Coffee Beans Procedure", there would have been a lot of questions that he would not have been able to answer.
My father is no longer a butcher, he sold the shop after ~25 years, working every day to afford our family a comfortable life and having enough money to pay for a restaurant that he wanted to run. Again, no one asked about where the coffee beans would come from, and after ~10 years he closed the restaurant after again working tirelessly to support himself, his children and his new grandchildren. He had the money to buy kitchen equipment for a newly built restaurant that he has now been running for 5 years.
To make a long story short, he is certainly crazy and he is doing what he wants and, on some level, is meant to do. But if your takeaway from this article is that you need to unpack everything and know everything to the smallest detail, you might get lost or discouraged by the complexity. You can't plan it all out.
While the courts, supposedly, focus on what the law actually says, remember that Wickard v Filburn (1942) established that growing a plant on your own property for your own personal use is "interstate commerce".
I don't know a lot about law, but I at least know that ruling on what the "actual law is" is selective, and usually selective in a way that is beneficial for the rich and powerful.
HTML as a mail format is a horrifying mess. What you want is a rich text format for displaying static text and maybe some images and links and stuff. What we got is the entirety of the modern web application development environment stuffed into our mail clients, with maybe 1/100th the attention to standards compliance and bug fixing that real browsers get, and a metric ton of "Oh Wait Not That" workarounds to plug the obvious security gaps inherent in the "run web apps from any attacker who has your email address" metaphor.
This is one of the big reasons why email has pretty much died for casual use. Even in work environments almost everyone uses chat clients these days.
Can this ever work? I understand what you're trying to do here, but this is a lot like trying to sanitize user-provided Javascript before passing it to a trusted eval(). That approach has never, ever worked.
It seems weird that your MCP would be the security boundary here. To me, the problem seems pretty clear: in a realistic agent setup doing automated queries against a production database (or a database with production data in it), there should be one LLM context that is reading tickets, and another LLM context that can drive MCP SQL calls, and then agent code in between those contexts to enforce invariants.
I get that you can't do that with Cursor; Cursor has just one context. But that's why pointing Cursor at an MCP hooked up to a production database is an insane thing to do.
It seemed like a good idea in 01981; the purported expansion of MIPS was "Microprocessor without Interlocked Pipeline Stages", although of course it's a pun on "millions of instructions per second". By just omitting the interlock logic necessary to detect branch hazards and putting the responsibility on the compiler, you get a chip that can run faster with less transistors. IBM's 45000-transistor 32-bit RISC "ROMP" was fabbed for use in IBM products that year, which gives you an idea of how precious silicon area was at the time.
Stanford MIPS was extremely influential, which was undoubtedly a major factor in many RISC architectures copying the delay-slot feature, including SPARC, the PA-RISC, and the i860. But the delay slot really only simplifies a particular narrow range of microarchitectures, those with almost exactly the same pipeline structure as the original. If you want to lengthen the pipeline, either you have to add the interlocks back in, or you have to add extra delay slots, breaking binary compatibility. So delay slots fell out of favor fairly quickly in the 80s. Maybe they were never a good tradeoff.
One of the main things pushing people to RISC in the 80s was virtual memory, specifically, the necessity of being able to restart a faulted instruction after a page fault. (See Mashey's masterful explanation of why this doomed the VAX in https://yarchive.net/comp/vax.html.) RISC architectures generally didn't have multiple memory accesses or multiple writes per instruction (ARM being a notable exception), so all the information you needed to restart the failed instruction successfully was in the saved program counter.
But delay slots pose a problem here! Suppose the faulting instruction is the delay-slot instruction following a branch. The next instruction to execute after resuming that one could either be the instruction that was branched to, or the instruction at the address after the delay-slot instruction, depending on whether the branch was taken or not. That means you need to either take the fault before the branch, or the fault handler needs to save at least the branch-taken bit. I've never programmed a page-fault handler for MIPS, the SPARC, PA-RISC, or the i860, so I don't know how they handle this, but it seems like it implies extra implementation complexity of precisely the kind Hennessy was trying to weasel out of.
The WP page also mentions that MIPS had load delay slots, where the datum you loaded wasn't available in the very next instruction. I'm reminded that the Tera MTA actually had a variable number of load delay slots, specified in a field in the load instruction, to allow the compiler to allow as many instructions as it could for the memory reference to come back from RAM over the packet-switching network. (The CPU would then stall your thread if the load took longer than the allotted number of instructions, but the idea was that a compiler that prefetched enough stuff into your thread's huge register set could make such stalls very rare.)
It's always weird to see Quakerism be mentioned somewhere else. I grew up Quaker and still sometimes attend Quaker meeting, and I related to his ceiling-tile counting; I used to count the wooden boards that formed the ceiling of our meetinghouse.
The best part about Quakerism, in my opinion, is that it teaches a very hearty disrespect of un-earned authority without teaching disrespect for the concept of authority itself. One of my favorite anecdotes is a group of Quakers who refused to doff their hats for the King, as they only doff their hats for God.
There's another old practice of refusing to swear on the Bible before telling the truth, as that would imply that they weren't telling the truth before they were sworn in.
I find the inclusion of Zen in this article is interesting, as you won't find the word "Holy" or "God", used, and "Spirit" is only used twice, once to comment on how he felt pressured to receive a message from it. The original purpose of Quaker silent worship was to remove the church-imposed barrier between man and God (the "Holy Spirit") so that anyone could be a mouthpiece for the wishes and desires of the Spirit. Modern American Quakers, especially the ones who write in Friends Journal, tend to be pretty secular.
It's the 'authenticity' issue of generative AI tat troubles me rather than the content of the viewpoints.
If these same ideas were expressed by Vtubers (virtual youtubers, anime-like filters for people who want to do to-camera video but are shy or protective of their privacy), it would not be troubling, as everyone understands that fictionalized characters are a form of puppetry an can focus on the content of the argument.
But using generative video to simulate ordinary people expressing those ideas is a way of hijacking people's neural responses. Just pick the demographic you wish to micro-target (young/middle/old, working/middle/upper class, etc. etc. etc.) and generate attractive-looking exemplars for messages you want to promote and ugly-looking exemplars for those you wish to discredit.
The answer is pretty simple here - hire CodeWeavers to work on supporting your game in Proton/Wine rather than some other porting shop doing an old rewrite-style port.
What is curious to me is that there's a possibility that a single plant in conjunction with natural oscillations caused enough trouble to start a doom scenario.
Oscillation -> damping -> possibly faulty equipment and possibly lack of power plants to absorve the reactive load -> 0 voltage in two countries and some neighbouring regions
There's also the possibility that Portugal put too much demand on the market due to negative prices, but I'm not sure if it was explained how much that had an effect on the whole thing.
The article made me think deeper about what rubs me the wrong way about the whole movement
I think there is some inherent tension btwn being "rational" about things and trying to reason about things from first principle.. And the general absolutist tone of the community. The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
In the Pre-AI days this was sort of tolerable, but since then.. The frothing at the mouth convinced of the end of the world.. Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
There was some Ubuntu (or Linux) forum where I had asked a question and I wanted an app or something (I can't recall now) which was easier to use and do repeatedly. Most of the people were replying with stuff like "why can't you just do <something that involves lots of CLI and more than an hour ro so>" or on the lines of it.
I, someone extremely new to Linux (hell, new to computers), was bewildered. Then a commenter replied with something that helped me and exactly what I needed. He added a note directed towards others which went something like - the battle for Linux as THE desktop OS was sabotaged by its most ardent practitioners.
One of my favorite things about the JVM ecosystem is how stable it is. A 5-year-old library will almost certainly Just Work. And Clojure very much follows the same spirit. There's a lot of great, useful libraries that haven't been updated in years... not because they've been abandoned but because they're _done_ and just don't require active maintenance.
Immutability as a cultural value, not just a data structure.
This isnt a new thing though.
Cantor: https://en.wikipedia.org/wiki/Controversy_over_Cantor%27s_th... they didnt just reject him, they basically publicly beat him down, and drove him away from math and into depression.
David Bohm: https://en.wikipedia.org/wiki/Quantum_potential spent years on the outside for having his ideas on this.
Geoffrey Hinton: was considered a quack and an outsider for YEARS because of his ideas on AI... the breakthrough he spawned was done on a shoestring of a budget (read: home pc).
Edit: I forgot John Yudkin: Pure White and Deadly, talking about how bad sugar is for you in 1972...
Rejected by the mainstream academics, and in a brutal way, happens a LOT more than we think.