Look. I'm a small startup employee. I have a teeny tiny perspective here. But frankly, the idea that Netflix could just take some off-the-shelf widget and stuff it in their network to solve a problem... It's an absurd statement even to me. And if there's anyone it should apply to, it would be a little startup company that needs to focus on its core area.
Every off-the-shelf component on the market needs institutional knowledge to implement, operate, and maintain it. Even Apple's "it just works" mantra is pretty laughable in the cold light of day. Very rarely in my experience do you ever get to just benefit from someone else's hard work in production without having an idea of how to properly implement, operate, and maintain it.
And that's at my little tiny ant scale. To call the problem of streaming "solved" for Netflix... given my guess at the context from the GP post?
I just don't think this perspective is realistic at all.
> the idea that Netflix could just take some off the shelf widget and stuff it in their network to solve a problem
Right. They have to hire one of the companies that does this. Each of YouTube, Twitch (Amazon), Facebook, and TikTok has, I believe, handled 10+ million streams. The last two don't compete with Netflix.
I believe this is the spirit of the "solved problem" comment: not that the solution is an off-the-shelf widget, but that if it has ever been solved, then that solution could technically be used again, even if organizing the right people is exorbitantly expensive.
There are multiple companies offering this capability today whose work could be hidden behind company branding within a few weeks. This was a problem of Netflix just not being set up for live streaming but thinking they could handle it.
It's funny because in a crowded group channel there is feedback, so long as you are part of the channel. But I think even in those circumstances, you don't interpret the feedback you get from the noise of everybody else as the same thing as you sending just "your one message".
Humans are really bad at understanding distributed harms.
Honestly, the idea of a valuable communication channel getting abused for selfish purposes feels like it needs its own law. I'd happily call it csharps law. Maybe it's already got a name. We have the idea of spam, but it's vague, nebulous, and doesn't concretely identify the systems and forces in place that lead to this inevitable outcome. It casts this outcome as not even a problem of individuals, but something like "the problem is someone sent me a message I didn't want." As if, had that person not done that, this wouldn't be a problem.
I think this is important because it feels like an endless surprise to everyone that this keeps happening. It feels like we have to cover the same ground again and again in discussions about it, and it feels like if we could tackle this problem more generally, the benefits to society at large would be massive.
Product reviews are valuable; producers capture reviewers and spam fake reviews.
Email is valuable. Spam nearly destroyed it until we migrated the entire decentralized system to Google.
Public discussions like these are valuable, and God knows how much work Hacker News does to moderate all this.
None of this feels like it's designed to resist this problem.
I do find ChatGPT is helpful in laying out the formulas you should use in the contexts they should be used in. Something I personally know I have no business doing.
... At the same time, I'm also aware of how hilariously dumb ChatGPT can be in deep technical contexts. I've taken to saying that when it comes to a technical topic, ChatGPT will confidently tell you the wrong thing to do 50% of the time, but that's fine, because it will give you the terms and context you can use to audit its solution yourself. Even if you don't understand the answers, you can easily have ChatGPT explain the gaps (again, 50%), but giving real information contextualizes the conversation better. I would expect this to improve accuracy, and my personal experience bears this out.
There was a brief window of time where the strongest chess players were centaurs: humans who assessed AI suggestions. Quickly even they found themselves outperformed by engines.
I suspect this is happening here as well. And I also suspect it's going to take a good deal longer.
I want to ask and answer two of my own questions here:
1. Why clone Jeff's voice?
When I was messing with Stable Diffusion using Automatic1111's interface, I noticed it came with a big list of artists to add to the prompt to stylize the image in some way. There was a big row in the media about AI art reproducing artists' work, and many artists came forward feeling it was a personal attack. But... I mean, the truth is more general than that. When I pressed a button to insert a random name into a prompt, my goal was not "yes, give me this person's art for free", it was "style this somehow".
I wasn't personally interested in any particular artist, I honestly would have preferred a bunch of sliders.
Jeff here is clearly a good speaker. That's a practiced talent and voice actors exist because it's hard. Elecrow wanted a voice over and they wanted it to be as good as they could make it. Jeff is very good. So did they want Jeff?
I think what they really wanted was a good and cogent narration with the tenor of a person. Not a machine making noises that sound like English. If they had an easy way to get that, we wouldn't be talking about it here.
2. What function does copyright serve?
Well. I think a reasonable argument would be that if people were able to reproduce your work for free, you would quickly find yourself without a monetary incentive to make more of it.
So. What happens if you combine answer 1 with answer 2?
I think it leads to: "We should consider making it illegal to automatically reproduce the work of an artisan." You know, the Luddite argument. An argument that has been perceived to be, more or less, settled.
So it seems to me: for individuals, harms matter, and for society, they don't.
For 1), it seems clear that there's a heavy overlap between Jeff's market and Elecrow's, and it's difficult to see that as a coincidence.
If someone cloned both Shaq's voice and Jeff's, and used them to endorse sneakers, I think it's a fair assumption that Shaq would see this as a business risk, and Jeff... I'm going to go out on a limb and assume he'd probably find it hilarious. Using Jeff's voice for sneakers would be more akin to your example of finding a midwestern voice with a useful corpus. Using Shaq's would be a much more obviously targeted appropriation.
What we're looking at here appears to be exactly this scenario, except this is Jeff's niche, not Shaq's. Using Shaq's voice for SBCs and related products would feel quite absurd - using Jeff's feels like a much more obviously targeted appropriation.
>I think what they really wanted was a good and cogent narration with the tenor of a person. Not a machine making noises that sound like english. If they had an easy way to get that, we wouldn't be talking about it here.
I think the general assumption is that they wanted to, at the very least, strongly imply his endorsement of the product or video.
Which I would say they did effectively. If I had happened on a clip of one of these videos outside the context of this controversy, I could have easily gotten the impression he was working with the vendor.
>> But... I mean the truth is more general than that. When I pressed a button to insert a random name into a prompt, my goal was not "yes give me this person's art for free", it was "style this somehow".
Yeah, and that's the problem. The style of an artist is a developed thing. To think that one could borrow your style not through learning and caring, but through mathematically analyzing the widths, and colors, and patterns and applying it to random noise — that's kinda insulting.
If nobody cares about my real work, why do they care about using my style, then? Develop your own and train your AI on that, if there really isn't any difference.
People say that AI learns how a human would. But a human wouldn't (couldn't!) learn like an AI can. He can't look at the pixels, can't mechanically churn through patterns. If someone can learn from art like AI learns art, I would also be opposed to them learning anything from me :D
> Jeff here is clearly a good speaker. That's a practiced talent and voice actors exist because it's hard. Elecrow wanted a voice over and they wanted it to be as good as they could make it. Jeff is very good. So did they want Jeff?
Jeff has worked with and endorsed some of their products before, so that puts a wrench in the theory of "well, they just picked a clean voice" and makes this almost litigable.
>I think it leads to: "We should consider making it illegal to automatically reproduce the work of an artisan." You know, the Luddite argument. An argument that has been perceived to be, more or less, settled.
There's the labor argument: people whose voices are sampled should get a residual on the products they are used for. Combine that with some sort of limitation of liability for the subject when AI is used and we'd have a win-win.
But that requires money, and companies don't want to pay other people. So we arrive at an impasse that leads to the Luddite argument: take the ball and go home if you don't want to pay. The fact that this comes into so few people's minds shows how successful companies are at casting off the idea of residuals.
I appreciate your post here and I'm glad you shared, because it's an example of a distributed harm. One of millions to shake out of this incident, that doesn't have a dollar figure, so it doesn't really "count".
To illustrate:
If I were to do something horrible like kick a 3-year-old's knee out and cripple them for life, I would be rightly labeled a monster.
But if I were to, say, advocate for education reform that pushes American Sign Language out of schools, so that deaf children grow up without a developmental language? We don't have words for that, and if we did, none of them would come near the cumulative scope and harm of that act.
We simply do not address distributed harms correctly. And a big part of it is that we don't, we can't, see all the tangible harms they cause.
Blaming an entire country's population specifically, one that elects representatives through a first-past-the-post system, strikes me as a bit divorced from reality. Canadians do not live in a direct democracy; they don't explicitly vote on every policy, and they get to have less than a single bit of informational influence on their government every 4 years. There's blame to go around, for sure. But pegging it entirely on the citizenry is just not realistic.
To achieve this, users declare emojis inline in text using a known format the browser recognizes, which includes a content hash of the emoji they wish to use.
eg. ":T15PXExNem0xX:"
Browsers maintain a local emoji datastore and use a DHT to fetch any emojis seen in text but not present locally.
You may submit a picture to your browser to create an emoji, but it must be a 128x128 webp image. The browser calculates the hash, and puts it in your local datastore. You may now use this emoji anywhere.
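A minimal sketch of how a browser might derive and recognize these inline tokens, assuming SHA-256 over the image bytes and a truncated hex encoding (the hash algorithm, encoding, and token length are my assumptions; the source only specifies the colon-delimited shape, e.g. ":T15PXExNem0xX:"):

```python
import hashlib
import re

# Colon-delimited content-hash tokens embedded in ordinary text.
# The character class and length bounds here are illustrative guesses.
TOKEN_RE = re.compile(r":([A-Za-z0-9]{13,64}):")

def emoji_token(webp_bytes: bytes, length: int = 13) -> str:
    """Derive an inline token from an emoji's raw image bytes (sketch)."""
    digest = hashlib.sha256(webp_bytes).hexdigest()
    return f":{digest[:length]}:"

def find_emoji_hashes(text: str) -> list[str]:
    """Return every content hash referenced inline in a chunk of text,
    so the browser can check its local datastore and query the DHT
    for anything missing."""
    return TOKEN_RE.findall(text)
```

The key property is that the token is derived from the image content itself, so anyone in the network can serve the bytes and the receiver can verify them against the hash.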
A few things to think through here:
1. It would be nice to have a non-profit come forward and run a big, reliable server acting as a hub for the emoji database. The nice thing is, basically anyone can step forward at any time and provide servers to the network. Anyone can serve any emoji to anyone, as they are addressed by their hash.
2. Regarding offensive or hateful emojis, I think "What if this is used to promote hate?" is a good question to be asking. But try to keep in mind that the system I am outlining is meant to be communication infrastructure, not a platform. Think of it like a different way to write text. This system in and of itself isn't a platform at all, as the only way to discover and use emojis is to see someone use them somewhere else. You can browse your local datastore, but that's it.
3. What about REALLY hateful or outright illegal content? IPFS is a guiding light here (in more ways than one, if people are familiar with the project). Lists of violating content can be made by third-party organizations, and browsers could be configured to subscribe to those lists, very similar to the way ad-blocking lists work. Apparatus could be constructed from there to support DMCA and abuse reports and to distribute block lists. These subscription lists are user-configurable, in case that apparatus is captured or abused and users feel the need to revolt.
There are likely more issues than this, but I don't think any of them are insurmountable. That's the rough outline: instead of relying on the Unicode Consortium, take advantage of content-hash distribution networks to create user-defined emojis instead.
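The blocklist mechanism in point 3 can be sketched simply: the browser unions the hash lists from whichever subscriptions the user has configured and consults that set before rendering. (The data shapes and function names here are my assumptions; the source only describes subscribable third-party lists.)

```python
def merge_blocklists(subscriptions: dict[str, set[str]]) -> set[str]:
    """Union the content hashes from every blocklist the user subscribes to.
    Keys are subscription names (e.g. a third-party org); values are the
    hashes that org has flagged."""
    merged: set[str] = set()
    for hashes in subscriptions.values():
        merged |= hashes
    return merged

def should_render(content_hash: str, blocked: set[str]) -> bool:
    """An emoji is fetched and rendered only if no subscribed list
    has flagged its content hash."""
    return content_hash not in blocked
```

Because subscriptions are user-configurable, dropping a captured or abusive list is just removing its key before the merge.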
I think about a system like this a lot when stuff like this comes up. Because I ask: what would the most popular examples in such a database look like? Do we think they would be close to what the Unicode Consortium laid out?
It's really clear that Hacker News puts its thumb on the scale of pretty much everything in a pointedly opaque way. It's easy to see this in action: go down to the bottom of a comments section and you'll notice examples of older comments with negative total votes sitting above newer comments with positive votes. Makes me wonder, is Hacker News applying global weights to users? If I post on a page, is there some metric I don't get to see that just says "this person starts with an effective -2 votes"?
This is just the top in the last 24 hours, or you can switch it to the last week to catch up. Plus, the search is pretty nice and very fast, so if you're looking for something specific it's convenient. This sorts explicitly in order of votes and nothing else. It's a lot better.
I'd tolerate all this rank fiddling better if it were transparent as to why things were being sorted the way they are. But that's not going to happen. Make the best of it you can.
Normally things work quite well, with manual interventions by moderators explained in thread. However, something seems to have gone wrong this time. Usually a new model from OpenAI attracts more than 73 comments! I'm missing the depth of discussion and analysis that usually occurs here.