
MPC-HC is just as good as VLC and maybe a little less cluttered. It's Windows only of course, but if that isn't a limiting factor, then it's a very good alternative to VLC in my opinion.


From the home page:

"MPC-HC is not under development since 2017. Please switch to something else."


It's still actively being developed, the author just created a new project page for it for some reason. You can find it at https://github.com/clsid2/mpc-hc


The version packaged with K-lite is frequently updated to this day, last release today.


Sorry, but no. Electron is the absolute worst piece of technology that I have come across in at least the last 20 years. It's not only that it is bloated, ridiculously resource hungry and terribly inconsistent because every program has its own set of UI controls, but it's also just absolutely plain terrible to work with:

The APIs are a complete mess, half of them are not working properly, things constantly get broken between even minor version revisions, the documentation is usually outdated and incorrect and you need to plaster your code with OS-specific if-else statements all over the place because many of Electron's APIs behave differently on different OSes. The whole thing is just a complete disaster.


Agreed completely. In terms of the absolute worst piece of technology, there have been no applications that have even been shipped successfully with it, and every developer who uses it is probably encountering some form of Stockholm syndrome, or they're inexperienced enough as developers to not see the value of native applications.


Yes, just no.


I wish I could agree but from my experience, DDG's search results aren't really that great. Often even worse than Google's.

And another private company is not the answer I believe. We need something more drastic, an open-source search engine organized as a genuine non-profit organization. Something like that. Otherwise, whatever replaces Google will just turn into another Google as soon as it gets any momentum.


I think open source will be tough because you're going to need a lot of saints to work on a search engine of Google's caliber.

Maybe an alternative revenue model instead of ads.


Consortium of universities, perhaps? Every top school (globally) kicks in some design and development time. It seems odd that the most critical link to access information on the planet is not the product of academia. With a country’s skin in the global game, there may be better leverage to keep it free and open for their citizens.


Because Google is not interested in serving the best possible search results but rather in serving those that will make them the most money.


That's what it's still somewhat good for. I mostly use it to read about random history stuff I'm interested in, like medieval Europe etc., but avoid it completely for anything contemporary because it always has an incredibly left-leaning bias.

I'm thinking of just getting a subscription for a real encyclopedia though because Wikipedia is often rather low quality in my opinion.


> Incredibly left-leaning bias

Read: fact-leaning bias


If the facts support leftism, you should be strongly in favour of a completely neutral retelling of the facts and let reality speak for itself.


Which I am. As is implicit in my comment, for those informed enough to interpret it accurately.


But when articles written about right-leaning figures are filled with condemnation instead of dry facts, while articles about left-leaning figures are laudatory instead of drily factual, it creates the opposite effect.


> Are we the baddies?


I'm not talking about Nazis, just like how I'm also not talking about Marxists. I'm talking about Western politicians within the current Overton window.


>Read: fact-leaning bias

That is if you redefine what the words 'fact' or 'fact-checking' mean in the style of Orwellian newspeak.


Actually, no. Nice try, though!


Medieval history is often caricaturish as taught in school. I would not presume Wikipedia is free from the influence of centuries of Protestant and Enlightenment smears.


> I mean, I assume that for some reason the platforms just buy the rights for the dubbed version, but don't the subbed original versions also have a ton of demand (maybe more than the dubbed version)?

Why would it have a ton of demand? Nobody in Germany wants to watch subbed original versions for the same reason nobody in the US wants to do that. The German market is big enough so that everything gets localized and people are used to that and expect it.


On that note, Catherine the Great had her own husband and several other people brutally murdered to illegitimately usurp the Russian throne and was a terribly repressive and authoritarian despot, even if she liked to claim otherwise.


Oh yea let's build an OS where we can just import our kernel-mode drivers from the repository of quality code that is NPM. I mean what could possibly go wrong, right?

I'm sorry but we need less NodeJS and JavaScript in this world, not more.


You're just picking something to hate and it's not even correct.

It's not true that OS==kernel. The kernel is Linux, and this is replacing the GNU stuff around it with something else. The result is a new kind of OS. But nobody said that any privileged code will be written in JS.


Well, I can still develop like this in ASP.NET if I wanted to, and probably in many other languages/frameworks too, so it's not exactly unique.


> XMPP is fundamentally flawed

How is it fundamentally flawed? Can you elaborate?


I'm not going to claim that it's fundamentally flawed, but here's an anecdote. Many years ago, when XML was having its day in the sun, long before it was sidelined by the simplicity of REST and JSON, I was at a JavaOne convention listening to a speaker present on some new XML parsing API. After the talk, I approached the presenter for some post-talk Q&A to ask how one might use the API to parse the Jabber protocol, which may or may not be relevant to what XMPP is today (I haven't been keeping up.)

The presenter was unfamiliar with the protocol, so I had to describe how the XML document was opened when you establish a connection, and how elements keep getting appended to it, and how the "XML document" isn't really completed until you're all done and the connection is terminated.

They looked at me like I had two heads.

To them, XML didn't make any sense at all unless you have the entire document available all at once. After all, how on earth could one ever apply an XSLT transform to it, right!?

Good times.
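For anyone unfamiliar, the stream described above looks roughly like this (a sketch based on the Jabber/XMPP stream model, with the standard stream namespaces; the outer document stays open for the life of the connection):

```xml
<?xml version='1.0'?>
<stream:stream xmlns='jabber:client'
               xmlns:stream='http://etherx.jabber.org/streams'>
  <!-- stanzas arrive one at a time, possibly hours apart -->
  <message to='alice@example.com'><body>hi</body></message>
  <presence/>
  <!-- ... -->
</stream:stream> <!-- closing tag only sent when the connection ends -->
```

So from a classical parser's point of view, the "document" is perpetually unfinished until the connection closes.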


> After all, how on earth could one ever apply an XSLT transform to it, right!?

There are streaming APIs for XML, and XSLT 3.0 can do streaming as well; Saxon has implemented it, for example[1]. I am aware that you are talking about the past, but the XML world also moves forward, albeit slowly, since the community has gotten much smaller.

[1]: https://www.saxonica.com/html/documentation10/sourcedocs/str...
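As a small illustration of the streaming-API point, Python's standard library offers a pull-style streaming parse via `xml.etree.ElementTree.iterparse`, which hands you elements as they complete rather than requiring the whole document up front (a minimal sketch, not Saxon or XSLT):

```python
# Sketch: pull-style streaming XML parsing with Python's stdlib.
# Elements are processed (and freed) as soon as they complete,
# instead of materializing the whole document first.
import io
import xml.etree.ElementTree as ET

doc = io.StringIO("<log><entry>a</entry><entry>b</entry></log>")

entries = []
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "entry":
        entries.append(elem.text)
        elem.clear()  # release the element's subtree as we go

print(entries)  # ['a', 'b']
```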


It was common to have to parse below the XML level (counting angle brackets) to split each stanza out, then parse the stanzas as separate XML documents.

You'd also have to explicitly turn XML namespace support _off_, since so many systems didn't actually support them. XML defines well-formedness and namespace-well-formedness as two different things, and you didn't want to completely drop communication because the other side was sending a message that didn't meet the more stringent requirement.

Some implementations would figure out ways to incrementally disrupt and extract elements from the DOM - but this would sometimes cause resource leaks due to the design of the W3C DOM itself.

The expat parser had explicit support for parsing Jabber/XMPP messages very early on, and was by far the most commonly used XML component for building libraries.


That's what Jabber is? Streaming XML? Whoa boy...


I really wanted to like XMPP back in the day, but honestly I always ended up feeling that the protocol is just bad. This idea of opening an XML document at the beginning of the stream, then only allowing a subset of XML, and all the mess with the xmlns, all of it bringing really nothing to the table except complexity.

I think ideally, in a good protocol, the server should not have to parse the content of messages that are not targeted at itself (only the metadata needed for routing). The XML mess makes that impossible, since you have to validate the full document.

At the time I think this page was a good summary of the issues https://about.psyc.eu/XMPP No idea if this is still relevant though.


I've built an XMPP client for an internal application. My impression was the protocol was overly complex, starting with the lower level problems you describe ("streaming XML.")

It's been years since I've looked at it. Maybe things are better now.


There were several members of the core team, pre-XMPP effort in IETF, who wanted to change it to have framing. If not a length-prefixed model, a nil-separated one.
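A length-prefixed framing of the kind described might have looked something like the following (purely illustrative; nothing like this was adopted, and the function names are made up). The point is that a router can slice out payloads without parsing any XML:

```python
# Sketch of the length-prefixed framing idea: each stanza travels as
# "<byte-length> <payload>", so intermediaries can skip payloads
# without touching an XML parser. Hypothetical, never standardized.
def frame(stanza: bytes) -> bytes:
    return str(len(stanza)).encode() + b" " + stanza

def unframe(buf: bytes):
    """Yield complete stanzas from a buffer of concatenated frames."""
    while buf:
        length, _, rest = buf.partition(b" ")
        n = int(length)
        yield rest[:n]
        buf = rest[n:]

wire = frame(b"<message/>") + frame(b"<presence/>")
print(list(unframe(wire)))  # [b'<message/>', b'<presence/>']
```

A nil-separated model would instead delimit stanzas with a 0x00 byte, trading random access for simpler sender-side code.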

There were ideas to separate out the addressing/routing from the actual messaging, so that servers did not need to process XML and so that messages did not necessarily need to be XML.

There were even some very preliminary ideas on using the servers to potentially negotiate peer connections for arbitrary traffic.

Back in the early 2000s there were a _lot_ of self-hosted Jabber servers though, and there was push-back from early commercial interests on any protocol-breaking changes resetting adoption to zero. This resulted in Jabber 1.0 pretty much becoming the basis of the XMPP RFC, with XMPP adding new authentication techniques and internationalized JIDs.

Later on, there were efforts to establish alternative transports and forms to accomplish some of this: HTTP endpoints to poll for messages, JSON mappings of the core messages, etc.

I would argue against PSYC's claims (or would have, back in the day) that the usage of XML in XMPP is not proper, however. It's proper; it just wasn't the best idea.


Well, I guess it kinda makes sense? Beats having to reinvent a tokenization format from scratch?

Of course, smaller messages (à la Matrix) probably make more sense.


Ultimately the issue is that XML is a document markup language, not a purpose-built streaming protocol. You can kind of force it into that role, but the parser ends up being overly complex and error-prone. It's like basing a chat app around creating an MS Word document for every message and sending it across the network.


I still find it amusing that XMPP by design violates XML spec, thus requiring its own custom XML parser.


It does not violate the spec, but it violates the expectations of a lot of XML tooling for sure.


It uses a subset of the XML spec, it does not need a custom parser, any existing parser will do.

