- Facts are claims that cite matter-of-fact reporting, public data, sequences of events, etc.
- Opinion statements are ones where it's clearly the author's perspective or take on what's being reported; the embedded reasons show the explanations in context.
- Dubious claims usually involve either exaggeration of some sort or manipulative argumentation, including extreme emotional language, appeals to authority, and the like.
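The three labels above can be captured as a tiny annotation schema. This is a hypothetical sketch (the names `ClaimType` and `Annotation` are my own, not from the system described), just to show how a labeled span with its rationale might be represented:

```python
from dataclasses import dataclass
from enum import Enum

class ClaimType(Enum):
    FACT = "fact"          # matter-of-fact reporting, public data, event sequences
    OPINION = "opinion"    # the author's own take, with its reasons in context
    DUBIOUS = "dubious"    # exaggeration or manipulative argumentation

@dataclass
class Annotation:
    span: str              # the quoted text being labeled
    claim_type: ClaimType
    rationale: str         # why the annotator chose this label

note = Annotation(
    span="Everyone agrees this is the worst policy in history.",
    claim_type=ClaimType.DUBIOUS,
    rationale="sweeping exaggeration plus an appeal to consensus",
)
print(note.claim_type.value)  # dubious
```

Keeping the rationale alongside the label is what lets other users (or an AI system) audit why a span was flagged.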
I appreciate the question and I understand the concern. Our algorithm balances for this by surfacing and allowing discovery of novel ideas even if they don't yet carry the strongest weight; as the source of an idea gets validated over time, their track record becomes a proxy for the quality of their future ideas.
The idea is that by relying on their eventual track record rather than just an individual claim, we can surface and reward novel ideas that have merit too.
Hey - would love to connect about this. I'm going to be in a group of founders where others might be interested as well, so I could even organize a group session. Please get in touch.
I watched this story and couldn't believe it when the actor responds to the anchor,
"That I know of that have lost money (to scammers impersonating him)? It's in the 100s...
I see people come to my appearances and look at me like we've had a relationship online for a couple of years and I'm like, no, I'm so sorry, I don't know who you are. It's so sad, and you see the devastation."
youtube.com/watch?v=ghmvOP6Daso
At 1:52:00 in this DOAC video Steven says his team spends 30% of their time sorting through deepfake ads, to the extent he had to hire someone whose exclusive job is to spot scam videos and report them to FB etc:
I feel like there's a big undercurrent brewing but because the individual damages are not high enough and victims have limited recourse, nothing significant happens.
I think it's time to build a new system - something that can annotate the post the user is on, if at least one other savvy user (or AI system) can pick up on the uncanny signals. This YouTube video about the "Walker Family" sham on Facebook is particularly relevant here:
@knicholes & @pil0u - I am working on a system that would prevent this exact same scenario. I appreciate the docs write up, given that you were personally impacted by this and are passionate about it, I'd love to speak.
I feel like the scale at which this is happening across the internet must be staggering, but this is small-scale, effectively un-reportable theft: who would the average person even go to if they willingly sent the money? And they'd also have to get over the embarrassment of having fallen for it.
What really got me thinking about the scale of this is watching the deepfake discussion at 1:51:46 in this video (at 1:52:00 he says his team spends 30% of their time sorting through deepfake ads, to the extent he had to hire someone whose exclusive job is to spot these scam videos and report them to FB etc):
What are your thoughts on solving this by inverting the problem? Instead of having to vet every single stream, tweet, etc. to check if it's legit, the idea is that you shouldn't "trust" what you're seeing unless it's explicitly endorsed via a signature from the original creator.
Obviously, if it's coming from their official channels the "signature" can be more obvious, but a layer that facilitates this could do a lot of good imo.
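The default-deny idea above can be sketched in a few lines. A real system would use asymmetric signatures (e.g. Ed25519 via a library like PyNaCl) so anyone could verify content against the creator's public key; the HMAC below is a symmetric stand-in used only because it ships with the standard library, and `CREATOR_KEY` is hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret held by the original creator. With real public-key
# signatures, verification would need only the creator's *public* key.
CREATOR_KEY = b"creator-secret"

def sign(content: bytes, key: bytes) -> str:
    """Produce an endorsement tag for a piece of content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def is_endorsed(content: bytes, signature: str, key: bytes) -> bool:
    """Default-deny: content is untrusted unless the signature checks out."""
    return hmac.compare_digest(sign(content, key), signature)

post = b"Official announcement from the creator"
sig = sign(post, CREATOR_KEY)

print(is_endorsed(post, sig, CREATOR_KEY))            # True
print(is_endorsed(b"deepfake ad", sig, CREATOR_KEY))  # False
```

The point is the inversion: the verification layer never has to recognize a deepfake, it only has to notice the absence of a valid endorsement.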
do you have a point of view of this type of collaborative approach applied to other areas, for example, collective understanding for groups of people? We are working on something in that space.
The amount I have to say on this topic would be inappropriate for a Hacker News comment, but here are some brief, unstructured thoughts.
For collaboration I believe that _lineage_ is important: not just a one-shot output artifact but a series of outputs connected in a graph. It is the difference between a single intervention/change and a _process_, and it provides a record that can act as an audit trail. In this "lineage", as I would call it, there are conversations with LLMs (prompts + context) and there are outputs.
Let's imagine the original topic, audio, with the understanding that the abstract idea could apply to anything (including mental health). I have a conversation with an LLM about some melodic ideas and the output is a score. I take the score and add it as context to a new conversation with an LLM and the output is a demo. I take the demo and the score then add it to a new conversation with an LLM and the output is a rhythm section. etc.
What we are describing here is an evolving _process_ of collaboration. We change our view from "I did this one thing, here is the result" to "I am _doing_ this set of things over time".
The output of that "doing" is literally a graph. You have multiple inputs to each node (conversation/context) which can be traced back to initial "seed" elements.
From a collaborative perspective, each node in this graph is somewhat independent. One person can create the score. Another person can take the score and create a demo. etc.
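The score-to-demo-to-rhythm-section example above is literally a small DAG, and tracing any node back to its seeds gives the audit trail. A minimal sketch, with node names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One lineage step: a conversation (prompt + context) and its output."""
    name: str
    output: str
    parents: list = field(default_factory=list)  # nodes whose outputs fed this one

def trace(node: Node, depth: int = 0) -> list:
    """Walk back from a node to its seed inputs: the audit trail."""
    lines = ["  " * depth + f"{node.name} -> {node.output}"]
    for parent in node.parents:
        lines.extend(trace(parent, depth + 1))
    return lines

# The music example: each step consumes earlier outputs as context.
melody = Node("melody chat", "score")
demo = Node("demo chat", "demo", parents=[melody])
rhythm = Node("rhythm chat", "rhythm section", parents=[demo, melody])

print("\n".join(trace(rhythm)))
```

Because each node only needs its parents' outputs as context, different people can own different nodes, which is exactly what makes the process collaborative.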
I resonate with this in a major way. A couple of times now we have received feedback on the product, branding, etc., and I've gone from not even considering any changes to having the new direction fully implemented in the product within 48 hours. That was unthinkable when I was part of a "machine" inside a larger enterprise, however nimble it may have been. The internal friction that comes up whenever you're considering a product change - a kind of PTSD from the drawn-out discussions in my prior roles - almost needs to be unlearned and coaxed out!
Feels like we’re not just building products but also unlearning layers of corporate conditioning. And maybe, in that process, we’re slowly rebuilding ourselves too: quieter, leaner, but closer to the core.
I've actually thought about doing this - everyone works remotely for the day from one of the group's houses. It sounds like something that would benefit us all.