
I'm really glad to see an article like this. I've worked in the space for a while (Fluid Framework) and there's a growing number of libraries addressing realtime collab. One of the key things that many folks miss is that building a collaborative app with real time coauthoring is tricky. Setting up a websocket and hoping for the best won't work.

The libraries are also not functionally equivalent: some use OT, some use CRDTs, some persist state, some are basically websocket wrappers, and they offer fairly different perf guarantees in both memory and latency. The very different capabilities make it complicated to evaluate all the tools at once.
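To make the CRDT side of that distinction concrete, here is a toy sketch (my own illustration, not taken from any of the libraries below) of the simplest CRDT, a grow-only counter, where replicas converge by taking per-replica maximums on merge:

```javascript
// Toy G-Counter CRDT: each replica increments only its own slot, and
// merging two states takes the per-replica maximum. Merge is commutative,
// associative, and idempotent, so no coordination is needed.
function increment(state, replicaId) {
  return { ...state, [replicaId]: (state[replicaId] || 0) + 1 };
}

function merge(a, b) {
  const out = { ...a };
  for (const [id, count] of Object.entries(b)) {
    out[id] = Math.max(out[id] || 0, count);
  }
  return out;
}

function value(state) {
  return Object.values(state).reduce((sum, n) => sum + n, 0);
}

// Two replicas increment concurrently, then sync:
let a = increment({}, "a");      // { a: 1 }
let b = increment(increment({}, "b"), "b"); // { b: 2 }
console.log(value(merge(a, b))); // 3
```

Real text CRDTs (Yjs, Automerge) are vastly more involved, but the core idea is the same: design the data type so that concurrent edits always merge deterministically.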

Obviously I'm partial to the Fluid Framework, but not many realtime coauthoring libraries have made it as easy to get started as Replicache. Kudos to them!

A few solutions with notes...

  - Fluid Framework - My old work... service announced at Microsoft Build '21 and will be available on Azure
  - Yjs - CRDTs. Great integration with many open source projects (no service)
  - Automerge - CRDTs. Started by Martin Kleppmann, used by many at Ink & Switch (no service)
  - Replicache - Seen here, founder has done a great job with previous dev tools (service integration)
  - Codox.io - Written by Chengzheng Sun, who is super impressive and wrote one of my fav CRDT/OT papers
  - Chronofold - CRDTs. Oriented towards versioned text. I'm mostly unfamiliar
  - Convergence.io - Looks good, but I haven't dug in
  - Liveblocks.io - Seems to focus on live interactions without storing state
  - derbyjs - Somewhat defunct. Cool, early effort.
  - ShareJS/ShareDB - Somewhat defunct, but the code and thinking is very readable/understandable and there are good OSS integrations
  - Firebase - Not the typical model people think of for RTC, but frequently used nonetheless
I should add... I talk to many folks in the space. People are very welcoming and excited to help each other. Really fun space right now.

I must ask those who are still sticking with Firefox over Chromium-based browsers, especially those who do so solely to stay off a Google web monopoly: to what end? When and how do you think Mozilla will ever do anything to help the web? They integrate all of Google's non-standards just as quickly as Google does, take Google money, and then pull adamantly anti-user stunts like this one, the Pocket integration, and the incessant VPN shilling. And it's significantly harder to make a good version of Firefox: you have to mess with userscripts or profiles or LibreWolf, or cross your fingers and hope some nebulous omnipotent distro maintainers just take care of it. With Chromium there are several builds readily available with the spying and suck stripped out, because most of it is blobs or API keys, very easy to just exclude at build time. See https://chromium.woolyss.com. I make this comment partially because this is one of the more egregious things Mozilla has done recently, but also because I've recently switched to Chromium myself and the whole experience is much faster and smoother, so I recommend everyone do it. Happy to be debunked.

In chess there's an idiom for this: "long think, wrong think." It's a quite common phenomenon that very good players ruin positions by over-analyzing rather than playing with their good instinct. There's a related situation: the hardest games to win are already-won positions, because there are so many ways to win that people will on occasion start doing something really stupid, akin to the example of having too much choice in the article.

I think a nice collective analogue to this is Alfred Whitehead's observation that 'civilisation advances by extending the number of important operations which we can perform without thinking of them'. Progress is being made by holistically integrating knowledge in a way that makes it sort of ambient.

It also reminds me of a slightly snarky article about why all the people in the rationalist cult never seem to actually be successful at anything other than rationalism. It's precisely because consciously thinking is easy; it's the integration of knowledge into the whole that's difficult but actually necessary.


> It’s less Chomsky and more Orwell.

As Neil Postman pointed out, it's less Orwell and more Huxley.

“What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny "failed to take into account man's almost infinite appetite for distractions.""


We use a combination of Markdown + PlantUML, CSV, YAML, Structurizr DSL, and Gherkin feature files under Git version control next to the code to structure the requirements and examples. A GitHub Actions continuous integration workflow does validations and compiles reader-friendly documentation with MkDocs. We are experimenting with consolidating more and more of this into feature files.
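As a minimal sketch of what such a workflow could look like (the step names, tools, and paths here are illustrative, not our actual configuration):

```yaml
# Hypothetical docs CI: validate the structured data, then build the site.
name: docs
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install mkdocs yamllint
      - run: yamllint docs/          # validate YAML data files
      - run: mkdocs build --strict   # fail the build on broken links/refs
```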

We structure the project and its related projects hierarchically along service design packages, similar to the “class-responsibility-collaboration” breakdown in object-oriented design.

All of our stream-aligned team collaborates continuously on this data as part of sprints, including analysts, managers, software and QA engineers. We recently started to collaborate with the enabling Risk & Compliance team on the same data, and started doing compliance audits using the generated, Git hash-versioned reports.

Our other teams use similar combinations of data, mostly centered around Confluence spaces which enable some form of traceability due to bi-directional linking.


This doesn't only apply to the way JWTs are usually used for sessions (no server-side persistence). The default session store for Devise (Rails) and .NET Identity is cookies, on the client. They are encrypted with a secret key and decrypted for authentication. Identity in particular allows you to store any number of "claims" in the cookie, such as a username or role. Because the cookies are signed and HttpOnly, this is safe from attackers, but this method, along with pretty much any method that isn't storing some sort of state on the server, has the same 3 problems listed in the article:

   1. Logout doesn’t really log you out!
   2. Blocking users doesn’t immediately block them.
   3. Session data can be stale.
I know there are ways around this with a really fast refresh time, or as I've heard, storing some sort of signature in the cookie, but I personally prefer a plain old server-side session store with something like Redis, or even just an in-memory HashMap. Authentication doesn't have to be that complicated.

JSX [1] is essentially a syntax fix for something people have been doing for a long time in JavaScript with DOM building libraries.

Having extensively used the nested-function-call (MochiKit.DOM [2]) and nested JSON-ML-ish array (dombuilder [3]) variants of DOM builders in the past, I find JSX's syntax a lot more friendly to write and maintain.

[1] http://facebook.github.io/jsx/ [2] http://mochi.github.io/mochikit/doc/html/MochiKit/DOM.html [3] https://github.com/creationix/dombuilder
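For flavor, here is a toy version of the nested-array (JSON-ML-ish) style — this is the shape of the idea, not dombuilder's actual API:

```javascript
// Toy JSON-ML-ish renderer: ["tag", {attrs}, ...children] -> HTML string.
function render(node) {
  if (typeof node === "string") return node;
  const [tag, attrs, ...children] = node;
  const attrStr = Object.entries(attrs || {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${tag}${attrStr}>${children.map(render).join("")}</${tag}>`;
}

const tree = ["ul", { class: "nav" },
  ["li", {}, "Home"],
  ["li", {}, "About"],
];
console.log(render(tree));
// <ul class="nav"><li>Home</li><li>About</li></ul>
```

The appeal of JSX is that the source reads like the output markup, instead of like the data structure that produces it.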


This seems like a misconception. It's clear that Apple was brewing the SDK for developer adoption and that their crowing about web apps was a stop-gap measure.

Are we really imagining Apple brought out a public SDK, set up the app approval system, certification, all that stuff, and updated Xcode to support it all on a whim in under a year (starting with iPhoneOS 2), just because Cydia existed? No way, José.

Apple are good but even they can't pull all that out of their ass overnight. Web was clearly a stop-gap because the SDK wasn't ready and iPhoneOS was, at the time, an outlier in purposefully not supporting Java ME apps.


TLDR: The claims and descriptions in these Navy patents exactly match the “leaked technical descriptions” from many conspiracy theories over the past 30-40 years claiming that the US government has secret craft capable of defying the current known limits of aerodynamics and propulsion.

Antigravity conspiracy theories have been a favorite source of fascination/amusement for me for a long time (see my username), and I cannot describe how eerie and weird it feels to read official Navy patents and claims by the ex-head of Lockheed Martin Skunk Works literally corroborating some of the conspiracy theories I’ve heard going around for decades. (Yet the skeptic in my brain still won’t allow me to fully believe it until I see a working prototype.)

The whole story here is just so utterly bizarre; the deeper you dig, the weirder it gets. These TheDrive articles seem to do a great job of investigating and reporting on the facts without too much speculation, and I highly recommend diving down the rabbit hole of linked articles (especially on this connected “To The Stars Academy” organization making similar claims of incredible technology by a panel of founders with astoundingly impressive and official credentials).

To add to the strangeness of the Navy patents: the actual patent contents read almost like gibberish, or the kind of pseudoscientific technobabble you’d write to add scientific explanations of space ships to a sci-fi novel. If the technical descriptions and phenomena are real, then these patents really are describing or hinting at new physics, but in a way that misuses existing terms and re-explains obvious basics of physics with what seems like amateurish imprecision (like referring to cross products as “multiplication”) — which just doesn’t make sense at all given the credentials of those vouching for the patents.

Yet, the descriptions match almost exactly the rumors and conspiracy theories of how electrogravitic propulsion systems of secret military craft worked. And, they describe something that should be testable even without a room temperature superconductor (one could take a charged super capacitor and spin it and/or vibrate it at extreme frequencies and see if this has any measurable effect on its inertial or gravitational mass), which intrigues me.

Look into the “TR-3B” or “Aurora project” conspiracy theory and you’ll find tales and descriptions of the craft’s antigravity drive that are absolutely identical to what is described in these patents: a superconducting medium carrying an extremely high charge density, rotated and vibrated at incredibly high frequencies — and this is claimed to somehow reduce the inertial mass of the surrounding area.

This TR-3B conspiracy theory is decades old, at least, and it matches these official government patents exactly.

It’s almost as if someone took some of the most intriguing rumors or conspiracy theories of secret government craft from decades ago, and started filing patents from extremely official government sources on exactly the kind of tech that was rumored to exist; yet in a way that sounds much like pseudoscience to anyone educated in modern physics.

This is the kind of thing which, based on the content itself, you’d immediately dismiss as a crackpot conspiracy theory technobabble. But the highly official credentials of the source of these patents, and of the people claiming or hinting that the tech is real (the ex-head of Lockheed Martin skunkworks — it doesn’t get much more credible than that) makes it impossible to ignore in that way.

So I honestly don’t know what to make of any of this. It would be interesting to try experimenting with some of the testable claims that these patents describe.


When I was a hiring manager at my previous company, I gave an interview with an exact description of what I wanted coded. No algorithm tricks (you could solve it faster with a certain paradigm, but it wasn't necessary); in fact, this is a problem I remember having to solve for homework in an intro HS CS class 25 years ago. I was checking to see if the candidate could:

1. read documents.

2. ask questions.

3. identify corner cases.

4. identify why the problem was hard.

5. actually try out their code instead of just thinking about it.

One candidate used a very troublesome technique to try to solve it, and proudly declared, "I am sure this works." I knew it didn't, but it took me one hour to find the bug, and to find it I had to write a property test. This is SUPER dangerous behavior IMO. Code reviews will not catch everything, so if you have a dishonest coder you can wind up in trouble. He got hired onto another team anyway ("this is literally the best hash table I have ever seen").

Unfortunately all of the candidates were clearly folks who had practiced their leetcode skills. Nobody completed the problem correctly. However, I was okay with partial answers that fulfilled some of the softer qualities I was looking for. The other folks who were interviewing gave one of the candidates who bombed my part rave reviews. I got steamrolled into hiring him onto my team, and sure enough he didn't read documents and he didn't try out his code. In the month that I worked with him he did not push any code. I left.


I trust Mozilla way less than Google. Google has never pulled a Mr. Robot (3rd party advertisement integration) or a Pocket (forced 3rd party extension integration) on me like Mozilla has. There are way more eyeballs on Google, so if they do something I don't like, I'll know about it and how to fix it or work around it. I also don't like Mozilla's culture of political ideology at all. Finally, they had the swing vote for getting rid of WebSQL and they pushed that freaking piece of crap IndexedDB instead (among other bad decisions).

Beyond that, there are lots of things about Firefox that just annoy me. Off the top of my head: the search box being on the bottom and taking up the whole horizontal width of the view box; the inconsistent menus (context menu sub-menus fly out on hover, the program menu sub-menus do not); the boxy design; and the native title bar, which results in less vertical space. The only thing I'd want from Firefox would be containers, but I just use separate Chrome profiles to get the same effect.

I'm also a developer and I vastly prefer to use, test with and develop against something from the Chromium/Blink/WebKit lineage over Gecko. I want a mono-culture and to that end, I basically only test with Chrome and Safari. Everybody else can either keep up or die (or fork Chromium, customize it and maintain that). There's absolutely no problem with every Linux distro using the Linux kernel mono-culture, so why would there be a problem with a common browser engine/infrastructure mono-culture?

If I have to switch, I'd much rather go with something like Vivaldi or Brave. I doubt it will come to that, though, as I will probably only have to pause Chrome updates for a brief time while new extensions come out to keep blocking ads. The Manifest v3 limitation is in the number of rules (30k per extension), so I bet that each filter list will just be its own extension. There is no limitation on cosmetic filtering via injected JS.


Is REST perfect? Probably not. But it certainly does its job, and that's why most middle-tier APIs these days, whether it be those that power web applications or mobile apps, are in REST, with the backend itself being written in a variety of languages (PHP, Python, Java, C#, etc).

I will admit that core REST doesn't support basic "querying" functionality, or things like pagination, filtering, and sorting, which is why there are sets of standards, or best practices, on top of REST that aim to standardize those commonly used patterns. [1] [2] [3]

Finally, REST, whether by design or not, follows the KISS principle (Keep It Simple, Stupid), and that's probably why it's gained as much traction as it has over the years.

Want to get a user object? GET /user/<id>
Want to update it? POST or PUT /user/<id>
Get a list of all users? GET /user/
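Those mappings are essentially the whole contract. A toy in-memory sketch (illustrative names, no real framework) of how little machinery that takes:

```javascript
// Minimal in-memory "users" resource to illustrate the REST mapping.
const users = new Map([[1, { id: 1, name: "Ada" }]]);

function handle(method, path, body) {
  const match = path.match(/^\/user\/(\d+)$/);
  if (method === "GET" && path === "/user/") {
    return [...users.values()];                  // GET /user/  -> list all
  }
  if (match) {
    const id = Number(match[1]);
    if (method === "GET") return users.get(id);  // GET /user/<id> -> fetch one
    if (method === "PUT") {                      // PUT /user/<id> -> update
      users.set(id, { id, ...body });
      return users.get(id);
    }
  }
  return null; // a real server would respond 404 here
}
```

The point isn't the router; it's that the URL structure itself documents the API.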

If you look at just the "basic" examples for GraphQL, you will understand why it's never going to replace REST in its current form.

[1] Microsoft's OData: https://www.odata.org/ [2] https://www.moesif.com/blog/technical/api-design/REST-API-De... [3] https://stackoverflow.com/questions/207477/restful-url-desig...


I don’t think you will ever be able to change the cursor to an arrow pointer because Apple is more interested in being different than they are interested in being useful.

This became very clear to me when I realized that you have never been able to change even the color of the pointer in macOS, never mind the shape. I wanted it to be white so it would stand out more. Apple doesn’t care; they are too busy projecting style to care about users’ petty practical needs.

