Because written English makes so much sense normally. God forbid someone has to figure out the ambiguous pronunciation of those particular words. It seems like a silly thing to me to provide extra guidance on.
I feel like asking the image generators to mark AI images is the wrong way to go about it. It's like trying to maintain a blocklist. It seems better to me to have the major camera manufacturers or cell phones cryptographically sign their images as real.
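The signing idea, at its core, looks something like this. A toy sketch only: real provenance proposals (e.g. C2PA) use asymmetric keys held in a secure element so anyone can verify without being able to forge, whereas the shared HMAC key here is just a stand-in to show the mechanics. All names are made up.

```python
# Sketch: camera signs image bytes at capture time; a verifier checks later.
# Real hardware would sign with a private key in a secure element and publish
# the public key; HMAC (symmetric) is used here only to keep the demo stdlib-only.
import hashlib
import hmac

DEVICE_KEY = b"key-burned-into-the-camera"  # hypothetical device secret

def sign_capture(image_bytes: bytes) -> bytes:
    """Produce a signature attached to the image at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Check that the image is byte-for-byte what the camera produced."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"\xff\xd8...jpeg bytes..."
sig = sign_capture(photo)
assert verify_capture(photo, sig)             # untouched image verifies
assert not verify_capture(photo + b"x", sig)  # any edit breaks the signature
```

Note the limitation the replies point out: the signature only proves the bytes came out of that camera, not that the scene in front of the lens was real.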
I feel like this idea comes up often and in my opinion it doesn't solve anything. Take a picture of an AI image and you've made this approach useless. Which then goes to the argument of "well you'll see it's a picture of a picture," to which I will say there are plenty of ways to make this not appear so. The ultimate form of this argument is that you can eventually project light directly into the photosensors, or otherwise hack the input between the photosensors and whatever digital magic turns light into a JPG on your phone.
SynthID survives basic transforms including screenshots/photos, although it can of course be defeated. Even so, it helps with the laziest fakes, of which there seem to be a lot - I've seen several widely shared pieces of misinformation over the past couple of months that failed a SynthID check.
Anyways I think approaching the problem from both directions is probably good.
As someone trying to think about OAuth apps at our SaaS, it certainly is very hard.
Do any marketplaces have a good approach here? I know Cloudflare, after their similar Salesloft issue, has proposed proxying all 3rd party OAuth and API traffic through them. But that feels a little bit like trading one threat vector for another.
Other than standard good practices like narrow scopes, shorter expirations, maybe OAuth Client secret rotation, etc, I'm not sure what else can be done. Maybe allowlisting IP addresses that the requests associated with a given client can come from?
This was probably partly a Google refresh token theft (given the length of the access). No inside info, just looking at how the attack occurred.
OAuth 2.1[0] (an RFC that has been around longer than I've been at my employer) recommends some protections around refresh tokens, either making them sender constrained (tied to the client application by public/private key cryptography) or one-time use with revocation if it is used multiple times.
This is recommended for public clients, but I think it makes sense for all clients.
The first option is more difficult to implement, but is similar to the IP address solution you suggest. More robust though.
The second option would have made this attack more difficult because the refresh token held by the legit client, context.ai, would have stopped working, presumably triggering someone to look into why and wonder if the tokens had been stolen.
What? This makes no sense to me. What's the threat model where you'd rather the OAuth flow result in the client app getting fake data?
If you reject the permissions the client already doesn't hear about it because the callback redirect isn't invoked (or at least, there's no reason for it to be, but that's up to you).
> What are you to do: say no, and then not use the app?
Um, yes? That's literally the point of what's happening. The app is asking for permissions because it needs them to do whatever it's doing. If you don't want to give it access to the data then there's no reason to use the app.
Siri first needs to fulfill the promise from the Apple Intelligence keynote. In this context, the small beans are things like setting timers and playing music reliably. AI was pitched as a true assistant who understood your whole digital life.
Nobody is going to hand control of their home to a system that was the dumbest smart assistant 14 years ago and is still behind everyone else.
It’s amazing to me that Apple announced vaporware that they didn’t know how to build yet. Nobody did, but Apple usually bides their time making it work before the reveal.
This is about user replaceable batteries, which is subtly different for me. Batteries that can be replaced by a shop, with some specialized tools and knowledge are important to me from a sustainability perspective since that extends the life of the phone.
Batteries that can be popped out and replaced by your average consumer are something beyond that, and have certain consumer benefits like being able to bring along a backup or something, but aren't that important to me.
Co-founder of CoachUp, a two-sided marketplace between sports coaches and athletes, here.
We started in a single city (Boston), and just the coach side. The non-technical founder was himself a coach and had lots of friends who were coaches, so it was easy to get the first ~50 coaches across a handful of sports. From there we focused heavily on getting coaches to sign up with the pitch that it was free to them, gives them a nice profile / landing page they could put on their website or business cards or tell people about, and would eventually even start driving leads.
I think the cost/benefit was there for the coaches: little cost (short application) and some minor benefit even without the athletes.
This is exactly what I was hoping to hear, someone who actually did it. Starting with one side where the cost/benefit was obvious and letting the other side follow makes a lot of sense. How long before the athlete side started coming in organically?
> This is the third time that I have to ask you to remove the issue that was there for more than 20 hours. What is going on here?
I don't know if you're giving this as something you've actually given Claude, but I don't think it's a good way of using Claude.
It's not a collaborator who's having a bad day where a little empathy might make him feel better and realize his error. It's a token generator based on a prompt which includes all chat history. If you have three examples of the bad approach in the history, in a format that looks like Claude doing work, it will totally pollute it! And even worse with auto-compaction where you don't know exactly what of those false starts is getting summarized into its context.
You have to treat this like a tool and understand how it works.
If Claude is going down a wrong path it's better to cancel and rewind and improve the previous addition to the prompt. You don't want it to generate a bunch of misleading tokens for itself and leave it in the context window indefinitely!
> I don't know if you're giving this as something you've actually given Claude, but I don't think it's a good way of using Claude.
That wasn't the full prompt, I trimmed it for clarity, but I agree with everything you said and that's how I actually use it.
I have a proxy logging everything sent to and from Claude in a structured way, which is precisely what let me do that compaction analysis in the first place.
When Claude goes off track, I don't tell it "you did something wrong". I ask it to analyze the tool outputs and the exchange so far and let it reconcile the discrepancy itself. That tends to work better than narrating the error to it.
The venting messages like that one are honestly for me, not for Claude. I know it's a tool. But it also behaves and communicates like a person, and that's a design choice that comes from Anthropic, not from me. What I've found is that writing something like that and then following it with proper instructions works fine in practice: Claude either ignores the venting or briefly acknowledges it and moves on. The actual output isn't affected. It's just how I process frustration without breaking the workflow.
It wouldn't prevent the admin page from exfiltrating data, though, right? Like, POSTing whatever data is loaded on the page to an arbitrary attacker controlled website.
That would require the logged in user to do something stupid. That’s like saying what’s to prevent the authorized user from emailing his credentials to a random person.
Does anyone know what's included in "datacenter capex"? In particular, does that include spending for associated power generation? Because whether or not the AI craze pans out, if we've built a whole bunch of power plants (and especially solar, wind, hydro, etc) that would be a big win.
You can't run a data center on solar or wind (even w/ batteries included). Everything they're building runs on gas & coal like what Musk got running for xAI.
You can and _must_ if you want competitive costs. Musk famously overpaid in order to get speed of deployment.
I was reading geohot's musings about building a data center and doing so cost effectively, and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that... you might still come out ahead.
And that dude is anything but a green fanatic. But he's a pragmatist.
That’s because Rs let NIMBYs and the fossil fuel lobby call the shots, and Ds let NIMBYs and degrowthers call the shots. I bet China isn’t powering their datacenters with gas turbines.