Hacker News | MatthewWilkes's comments

That video was enough for me to lose all interest in this channel. Misrepresenting basic maths with something obtuse that is obviously false pushes people away from the discipline.


Surprising that the only mention of intellectual property in the FAQ and the legal terms page is about copyright that the service claims in the outputs, not restrictions on the inputs.


The ToS for this claims that you have to pay for it to use the results commercially, but I couldn't find the pricing for it anywhere or any way to sign up.

Interesting, as the primary use case for this seems to be something along the lines of editing music to fit into video projects.


I plan to implement paid features later with a basic free tier. I am still trying to figure that out. I've updated the terms to be more clear about that!


Ah, I see. Nitpick: It currently says "Do not upload rights that you do not have the rights to".


Whoops, thank you, will update!


Since there is no login mechanism, I used the anonymous feedback section to request their agreements with BMI and ASCAP for creating licensed mechanical renditions of copyrighted materials.


This is a good point, I have updated the site to only allow you to view content you submitted and also made it clear that you need to have the rights to the audio to edit it.


One of the first things the author of the article does is break it down into the 5 code points and explain their individual meanings.


Your point being, what exactly?

If the user gives you what, as far as they're concerned, is a glyph, and you return a completely different glyph, you've mangled their data.
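A minimal sketch of the concern, using Python's standard `unicodedata` module (the ligature example here is illustrative, not the specific sequence from the article): compatibility normalization can silently replace what the user sees as one glyph with different code points.

```python
import unicodedata

# U+FB01 LATIN SMALL LIGATURE FI: one code point, one glyph to the user.
s = "ﬁ"
assert len(s) == 1

# NFKC "compatibility" normalization maps it to the two ASCII letters
# "fi" -- different code points, and arguably a different glyph.
normalized = unicodedata.normalize("NFKC", s)
assert normalized == "fi"
assert len(normalized) == 2
```

Whether that substitution counts as mangling depends on the application, which is exactly the disagreement in this thread.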


A Google image search for each will probably be best. They are relatively technical terms within highways, most people wouldn't be able to describe them as well as the person you replied to, so there aren't all that many good diagrams comparing the types.


I believe that one of the responsibilities of the board of auditors is hearing complaints against officeholders for waste of public funds.


My Samsung Ultrawide has a 3840x1080 resolution. Once or twice it's been misdetected as normal horizontal resolution, when switching PIP off. I suspect you've been the victim of a driver/host issue.


That's bad. That max horizontal resolution is no better than a 4K monitor’s.


I was very involved in GSoC from an open source perspective for many years. I think this is the best tweak they've made in years. There is so much talent outside of the University system, and widening this will give lots of motivated people opportunities they otherwise wouldn't have.


I don't speak for Google, however I was involved in Summer of Code from the open source side when this decision was made. The reason given was that they were concerned that the amounts were having a disproportionate impact on some students, paying significantly more than the jobs it was meant to compete with, to the extent that getting into Summer of Code was one of the main objectives for students rather than a nice-to-have.

I can tell you that when it was announced it was a very unpopular move with the mentoring organisations, where the strong feeling was that equal work deserves equal pay, but ultimately the decision for mentoring orgs is to participate or not, and the vast majority chose to participate.


From the article "WhatsApp can read some of your messages if the recipient reports them."

Is this surprising? Any third party can read part of an e2e encrypted communication if one of the participants forwards it.


"AI" running on every client can automatically flag messages and send them to moderators.

> Most can agree that violent imagery and CSAM should be monitored and reported; Facebook and Pornhub regularly generate media scandals for not moderating enough. But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.


The problem here is that the third party controls the software on both ends of the communication. And that software can send the messages to this party without the participants knowingly triggering it.

The article says that by reporting a user, the software on the site of the reporting user silently sends data to WhatsApp. The reporting user does not know what data is sent.


It's not silent; when you report messages there is a prompt that tells the reporter what is happening.

Take a look: https://twitter.com/WABetaInfo/status/1435221936888483847


Yeah it's not really too surprising, except that maybe the scale of what gets shared with Facebook is a bit unclear.

I'm not too sure at what point the artificial intelligence program gets involved though.


'A bit unclear' as in they got it half right, or as in simply unknown?!


The article quotes the terms of service which say that they send:

>"the most recent messages”

and

>“information on your recent interactions with the reported user.”

which is unclear, but not unknown, and as far as this article claims they don't actually send anything else, though they do combine it with whatever metadata they have on the users involved.


There seem to be two things happening.

When a user reports a post it is (unsurprisingly) forwarded to the moderators.

Additionally, there is some kind of AI CSAM detector, which automatically forwards posts.

In both cases, it also forwards the previous five messages from the thread to the moderators.


> Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems.

> Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.

From the actual ProPublica report. If their published understanding is correct, E2EE is not broken, but rather end users who are one of the ends of E2EE are sending the decrypted content to be moderated. The AI bit is a filter to reduce the amount of content passed on to human moderators.
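The flow described in the report can be sketched as a small simulation. This is a hypothetical model (the class and function names are mine, not WhatsApp's): the reporting client already holds plaintext on its end of the E2EE session, so it can forward the reported message plus the four preceding ones without any decryption by the server.

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted on this client: it is one E2EE endpoint


@dataclass
class Conversation:
    messages: list = field(default_factory=list)


def build_report(conv: Conversation, reported_index: int) -> list:
    """Collect the reported message plus up to four preceding ones,
    mirroring the five-message window ProPublica describes."""
    start = max(0, reported_index - 4)
    return conv.messages[start:reported_index + 1]


conv = Conversation([Message("alice", f"msg {i}") for i in range(10)])
payload = build_report(conv, reported_index=7)
assert len(payload) == 5  # the offending message and the four before it
```

Nothing here breaks the encryption in transit; the "leak" is one endpoint choosing to send its own plaintext to a third party.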

From near that second quote:

> Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive.

That part is AI driven, but my reading is that the moderators do not get access to the encrypted data (the actual messages) only the behavior patterns, and from that make a determination of what to do.


Correct me if I'm wrong but unless the "AI CSAM detector" is running on the client, it simply cannot be e2e encrypted.


It sounds like the only unencrypted data that the moderators see is sent from an endpoint (a user clicking "report"). After that, an AI looks at the reports and prioritizes ones that look like they might be CSAM.


Yes, so I assumed it is running on the client, but for all I know they could be encrypting the message and sending an image hash to Facebook.
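To illustrate the "image hash" idea: production systems like PhotoDNA use robust perceptual hashes, but a toy average-hash conveys the principle. This is a self-contained sketch, not Facebook's actual algorithm: reduce an image to a tiny grayscale grid, record which cells are brighter than the mean, and compare fingerprints by Hamming distance so near-identical images still match.

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. an 8x8 thumbnail.
    Returns a 64-bit fingerprint: one bit per cell, set if the cell is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
brighter = [[min(255, v + 3) for v in row] for row in img]  # slight re-encode
# A small uniform brightness change barely moves the fingerprint:
assert hamming(average_hash(img), average_hash(brighter)) <= 8
```

The appeal for a client-side check is that only the fingerprint, not the image, would need to leave the device, though as the parent notes, nothing published confirms WhatsApp works this way.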


It looks like the AI stuff applies to the groups content which is not E2E.


> But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.

It's not just when a recipient reports them it seems but also when they have been flagged by their algorithm. If that were true, the claim that the conversation is e2e encrypted simply cannot be true, unless the algorithm runs on the client.


Just think about the sheer number of people that report stuff.

Given that Facebook has fewer than 1k moderators, do you honestly think that they'd just let the moderators sift through everything manually?

Obviously you'd classify stuff first; checking against known images is easy. Classifying new images is a lot harder, plus the ethics of training and labelling a dataset for accurate detection is pretty hard, and it's almost impossible to do legally.

I suspect the next best thing is detecting nudity and the age of the subject, and taking the hit that you're going to prioritise a lot of malicious reports rather than genuine ones.


It sounds to me that there’s actually an algorithm between the report and the moderator to control the volume of manual moderation.

What you’re describing doesn’t work with E2E encryption. I really doubt it works that way.


>What you’re describing doesn’t work with E2E encryption

This smells like message franking, but I can’t be sure.

https://eprint.iacr.org/2017/664.pdf
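A toy sketch of the franking idea from that paper (the real construction uses a committing AEAD; plain HMAC here is a simplification, and all names are mine): the sender commits to the plaintext with a per-message key, the server binds that commitment into its own tag without ever seeing plaintext, and on report the recipient reveals the plaintext and franking key so the server can verify the message is genuine.

```python
import hashlib
import hmac
import os


def frank(plaintext: bytes):
    """Sender side: commit to the plaintext under a fresh franking key."""
    kf = os.urandom(32)  # per-message franking key, sent inside the E2EE payload
    commitment = hmac.new(kf, plaintext, hashlib.sha256).digest()
    return kf, commitment


def server_tag(server_key: bytes, commitment: bytes, metadata: bytes) -> bytes:
    """Server side: sign the opaque commitment plus routing metadata."""
    return hmac.new(server_key, commitment + metadata, hashlib.sha256).digest()


def verify_report(server_key, plaintext, kf, commitment, metadata, tag) -> bool:
    """On report: check the revealed plaintext matches the commitment,
    and that the commitment really transited this server."""
    ok_commit = hmac.compare_digest(
        commitment, hmac.new(kf, plaintext, hashlib.sha256).digest())
    ok_tag = hmac.compare_digest(
        tag, hmac.new(server_key, commitment + metadata, hashlib.sha256).digest())
    return ok_commit and ok_tag


sk = os.urandom(32)
kf, c = frank(b"abusive message")
t = server_tag(sk, c, b"from:alice to:bob")
assert verify_report(sk, b"abusive message", kf, c, b"from:alice to:bob", t)
assert not verify_report(sk, b"forged message", kf, c, b"from:alice to:bob", t)
```

The point relevant to this thread: franking lets the server authenticate a reported message without ever being able to read unreported ones.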


Is there any particular reason to believe it's not running on the client?


Because that would mean running a fairly large model on underpowered hardware. It would also mean that you could never actually trust the output.

It's far simpler to run it server-side on the reported message.


E2E is useless if the software is not open source, especially if you don't trust the vendor.


Even just the Kindles: I carried mine with me so much when it first came out, as an emergency communications channel.

