Hacker News | saagarjha's comments

Because I assume they want to be able to use it, not be banned forever.

I don’t know if I’d want to do business with a company after being treated like that.

That’s because Anthropic does not consider their model as having personality but rather that it simulates the experience of an abstract entity named Claude.

That sounds really interesting, but my google-fu is not up to the task here; I'm getting pages and pages of nonsense asking whether Claude is conscious. Can you elaborate?

I actually think this is pretty straightforward if you think of it as something like

  class Claude {
      void greet() { System.out.println("Hi, I'm Claude!"); }
  }

  Claude anthropicInstance = new Claude();
  anthropicInstance.greet();
Just like a "Cat" object in Java is supposed to behave like a cat but is not one: there is no way for Cat@439f5b3d to "be" a cat, only to act like one. When Anthropic spins up a model and "runs" it, they are asking the matrix multiplications to simulate the concept of a person named Claude. It is not conscious, but it is supposed to simulate a person who is conscious. At least that is how they view it, anyway.

You can read the latest Claude Constitution plus more info here:

https://www.anthropic.com/news/claude-new-constitution


Super Smash Bros Brawl does this for replays too. I remember being a child who had just learned how computers worked, very confused about how such a long video (which I knew to be "big") could possibly fit in such a small number of "blocks" on the Wii while screenshots were larger. I think the newer games do this too, but they have issues: the game can be updated, and then old replays no longer work.
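For readers unfamiliar with the trick: replays can stay tiny because a deterministic game can be reconstructed from its inputs alone, so only the per-frame inputs are saved, not video. A toy sketch of the idea (everything here is hypothetical illustration, not Nintendo's actual format):

```java
import java.util.ArrayList;
import java.util.List;

// Toy deterministic "game": the state evolves purely from per-frame inputs.
class ReplayDemo {
    // Advance the game state by one frame given that frame's input.
    static int step(int state, byte input) {
        return state * 31 + input; // any deterministic update rule works
    }

    public static void main(String[] args) {
        byte[] inputs = {1, 0, 2, 2, 1, 0, 3}; // one input byte per frame
        List<Byte> replayFile = new ArrayList<>();

        // Live session: run the game, recording only the inputs.
        int liveState = 0;
        for (byte in : inputs) {
            replayFile.add(in);
            liveState = step(liveState, in);
        }

        // Playback: re-run the same simulation from the recorded inputs.
        int replayState = 0;
        for (byte in : replayFile) {
            replayState = step(replayState, in);
        }

        // Because the simulation is deterministic, the replay reproduces
        // the exact same final state from a few bytes per frame -- far
        // smaller than any screenshot, let alone stored video.
        System.out.println(liveState == replayState); // prints "true"
    }
}
```

This is also why updates break replays: if a patch changes the `step` logic at all, replaying old inputs diverges from what originally happened.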

It's probably downvoted because it sounds somewhat nonorganic.

Even with the em-dash. New account and other comments seem to be here-and-there. Maybe LLM with some editing after.

Fixed

I once left a company after deploying a fix to solve a rare crash due to a data race and only figured out if it worked after I had started the new job by poking my old coworkers about it.

I'm curious what this offers over just building the host-side code to be native?

My quick guess is that this approach offers near-zero overhead for the GPU to access data inside the sandbox, with all the security/privacy benefits of sandboxing.

Yes: for plain local inference, not much; native is the obvious choice.

The value would be in actor processes, where you can delegate inference without paying the 'copy tax' for crossing the sandbox boundary.

So, less "inference engine" and more "tmux for AI agents".

Think pausing, moving, resuming, and swapping the model backend.

I scoped the post to memory architecture, since it was the least obvious part ... will follow up with one about the actor model aspect.
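A minimal sketch of the "no copy tax" idea mentioned above, assuming the sandboxed producer and the host consumer can both map the same memory region (all names here are hypothetical illustration, not this project's actual mechanism). With a shared mapping, the consumer reads tensor bytes in place rather than serializing them across the boundary:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class SharedBufferDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical shared region backing a tensor buffer.
        Path shared = Path.of("shared_tensor.bin");
        try (FileChannel ch = FileChannel.open(shared,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // "Sandbox side": writes tensor bytes directly into the mapping.
            MappedByteBuffer writer = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            writer.putFloat(0, 3.14f);

            // "Host side": maps the same region read-only. No payload copy
            // crosses the boundary; only the mapping itself is shared.
            MappedByteBuffer reader = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
            System.out.println(reader.getFloat(0)); // prints "3.14"
        }
        Files.deleteIfExists(shared); // clean up the demo file
    }
}
```

In a real system the two mappings would live in separate processes, with the sandbox policy allowing only this one region to be shared; the put/get pattern is the same.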


I'm a little confused what an actor process is. To me a process is inherently local?

For one thing, it's a lot easier to distribute a webpage than a native app

This doesn't work with webpages though

I somehow missed that tidbit

Wrong acquisition.

You can confirm that the people who say things are in a position to know.

There are better ways to analyze on-screen content than images.
