That video was enough for me to lose all interest in this channel. Misrepresenting basic maths with something obtuse and obviously false pushes people away from the discipline.
Surprising that the only mention of intellectual property in the FAQ and the legal terms page is the copyright the service claims over the outputs, not any restrictions on the inputs.
The ToS for this says you have to pay to use the results commercially, but I couldn't find pricing anywhere, or any way to sign up.
Interesting, as the primary use case for this seems to be something along the lines of editing music to fit into video projects.
I plan to implement paid features later with a basic free tier; I am still trying to figure that out. I've updated the terms to be clearer about that!
Since there is no login mechanism, I used the anonymous feedback section to request their agreements with BMI and ASCAP for creating licensed mechanical renditions of copyrighted material.
This is a good point. I have updated the site so you can only view content you submitted, and also made it clear that you need to have the rights to the audio in order to edit it.
A Google image search for each will probably be best. They are relatively technical terms within highway engineering, and most people wouldn't be able to describe them as well as the person you replied to, so there aren't all that many good diagrams comparing the types.
My Samsung ultrawide has a 3840x1080 resolution. Once or twice it's been misdetected as a normal horizontal resolution when switching PiP off. I suspect you've been the victim of a driver/host issue.
I was very involved in GSoC from an open source perspective for many years. I think this is the best tweak they've made in years. There is so much talent outside of the University system, and widening this will give lots of motivated people opportunities they otherwise wouldn't have.
I don't speak for Google; however, I was involved in Summer of Code from the open source side when this decision was made. The reason given was concern that the stipends were having a disproportionate impact on some students, paying significantly more than the jobs the programme was meant to compete with, to the extent that getting into Summer of Code had become one of students' main objectives rather than a nice-to-have.
I can tell you that when it was announced it was a very unpopular move with the mentoring organisations, where the strong feeling was that equal work deserves equal pay. Ultimately, though, the only decision mentoring orgs get to make is whether to participate, and the vast majority chose to.
"AI" running on every client can automatically flag messages and send them to moderators.
> Most can agree that violent imagery and CSAM should be monitored and reported; Facebook and Pornhub regularly generate media scandals for not moderating enough. But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.
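For what it's worth, here is a purely hypothetical sketch of what that could mean mechanically; the local classifier and the upload channel are my assumptions, not anything the article confirms. The point is that the wire stays E2E-encrypted while content can still reach moderators.

```python
# Hypothetical client-side flagging: a local model scores plaintext the app
# already holds, and hits are forwarded out-of-band. Not confirmed behaviour.
def scan_messages(messages, classifier, send_to_moderators, threshold=0.9):
    """Run an on-device classifier over locally available plaintext."""
    for msg in messages:
        if classifier(msg) >= threshold:  # scoring happens on the client
            send_to_moderators(msg)       # plaintext leaves the device anyway
```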
The problem here is that a third party controls the software on both ends of the communication, and that software can send the messages to that party without the participants knowingly triggering it.
The article says that when you report a user, the software on the reporting user's side silently sends data to WhatsApp. The reporting user does not know exactly what data is sent.
The article quotes the terms of service which say that they send:
>"the most recent messages”
and
>“information on your recent interactions with the reported user.”
which is unclear but not unknown, and as far as the article claims they don't actually send anything else, though they do combine it with whatever metadata they already have on the users involved.
> Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems.
> Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service. This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
From the actual ProPublica report. If their published understanding is correct, E2EE is not broken; rather, end users, who are themselves one of the ends of the E2EE, are sending the decrypted content to be moderated. The AI bit is a filter to reduce the amount of content passed on to human moderators.
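To make that concrete, here is a minimal sketch of the reporting flow as I read the ProPublica description. The client necessarily holds the plaintext (it has to, in order to display messages), so "report" can simply upload the flagged message plus the four before it over an ordinary TLS connection; the E2EE channel between the participants is untouched. The endpoint URL and field names below are made up for illustration.

```python
# Hypothetical sketch of the "report" flow: E2EE stays intact because the
# reporting client already holds the plaintext and forwards it out-of-band.
# MODERATION_URL and the payload shape are assumptions, not WhatsApp's API.
import json
import urllib.request

MODERATION_URL = "https://moderation.example/report"  # illustrative endpoint

def report_message(thread, offending_index, reporter_id):
    """Upload the flagged message and the four preceding ones, decrypted."""
    start = max(0, offending_index - 4)
    payload = {
        "reporter": reporter_id,
        "messages": thread[start:offending_index + 1],  # already plaintext locally
    }
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # plain HTTPS upload, separate from the E2EE channel
```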
From near that second quote:
> Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive.
That part is AI-driven, but my reading is that the moderators do not get access to the encrypted data (the actual messages), only the behavior patterns, and make a determination of what to do from those.
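As a rough illustration of what that metadata-only scanning might look like (the quote's own example: a new account rapidly sending out a high volume of chats), here is a sketch; all thresholds and field names are my assumptions, not anything from the report.

```python
# Sketch of a metadata-only heuristic: no message content is ever read,
# only unencrypted account/traffic data. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_days: int
    messages_last_hour: int
    distinct_recipients_last_hour: int

def spam_score(meta: AccountMetadata) -> float:
    """Score suspicion from behavior patterns alone."""
    score = 0.0
    if meta.account_age_days < 2:
        score += 0.4  # brand-new account
    if meta.messages_last_hour > 100:
        score += 0.4  # unusually high volume
    if meta.distinct_recipients_last_hour > 50:
        score += 0.2  # fanning out to many recipients
    return score  # above some cutoff, the account enters a "proactive" queue
```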
It sounds like the only unencrypted data that the moderators see is sent from an endpoint (a user clicking "report"). After that, an AI looks at the report and prioritizes the ones that look like they might be CSAM.
> But WhatsApp moderators told ProPublica that the app’s artificial intelligence program sends moderators an inordinate number of harmless posts, like children in bathtubs. Once the flagged content reaches them, ProPublica reports that moderators can see the last five messages in a thread.
It seems it's not just when a recipient reports them, but also when they have been flagged by an algorithm. If that's true, the claim that the conversation is E2E encrypted simply cannot be true, unless the algorithm runs on the client.
Just think about the sheer number of people that report stuff.
Given that Facebook has fewer than 1k moderators, do you honestly think they'd just let the moderators sift through everything manually?
Obviously you'd classify stuff first; checking against known images is easy (see the sketch below). Classifying new images is a lot harder, plus training and labelling a dataset for accurate detection raises hard ethical questions and is almost impossible to do legally.
I suspect the next best thing is detecting nudity and estimating the age of the subject, and taking the hit that you're going to prioritise a lot of malicious reports along with the genuine ones.
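For the "checking against known images" part, a minimal sketch: in practice this is done with perceptual hashes (PhotoDNA and the like) that survive resizing and re-encoding, so the exact SHA-256 match below is a deliberate simplification.

```python
# Simplified known-image check: hash the upload and look it up in a vetted
# hash list. Real systems use perceptual hashing rather than cryptographic
# hashes, so near-duplicates also match; this is an exact-match sketch.
import hashlib

KNOWN_HASHES: set[str] = set()  # would be populated from a vetted hash database

def is_known_image(image_bytes: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```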