Hacker News | quotemstr's comments

This is why the EAs, and their almost comic-book-villain projects like "control AI dot com" cannot be allowed to win. One private company gatekeeping access to revolutionary technology is riskier than any consequence of the technology itself.

Having done a quick search of "control AI dot com", it seems their intent is to educate lawmakers & government in order to aid development of a strong regulatory framework around frontier AI development.

Not sure how this is consistent with "One private company gatekeeping access to revolutionary technology"?


> strong regulatory framework around frontier AI development

You have to decode feel-good words into the concrete policy. The EAs believe that the state should prohibit entities not aligned with their philosophy from developing AIs beyond a certain power level.


And what is malicious about that ideology? I think EAs tend to like the smell of their farts way too much, but their views on AI safety don't seem so bad. I think their thoughts on hypothetical super intelligence or AGI are too focused on control (alignment) and should also focus on AI welfare, but that's more a point of disagreement that I doubt they'd try to forbid.

Couldn't agree more. The "safest" AI company is actually the biggest liability. I hope other companies make a move soon.

No it isn't lol. The consequence of the technology literally includes human extinction. I prefer 0 companies, but I'll take 1 over 5.

> Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available.

All the more reason somebody else will.

Thank God for capitalism.


Come on, Anthropic, I desperately need this better model to debug my print function /s

Pick a technology --- AR, robotics, AVs, SMRs, the cookie header --- and you'll find a well-funded and sanctimonious ecosystem of NGOs, regulatory bodies, and compliance departments dedicated to ensuring nobody uses it.

The pretext for these bans is always that unassailable cluster of feel-good yet vague virtues like privacy or the environment that you can make mean anything you want, but the reality on the Continent is just a rotating series of excuses for the catechism of "no, non, nein".

And it's never enough to just regulate the EU. Oh, no. The EU is the world's moral guardian, a "regulatory superpower", humanity's conscience. Obviously EU regulation should apply worldwide. The rest of humanity can't be trusted to care about privacy and the environment enough, right?

Well, I'm sick of it. How about they start saying ja to something? How about they talk about HOW we incorporate fledgling technological capabilities into society instead of trying to freeze our information environment in 2008 amber?

At this point, when thinking about how we deploy new technology, I'm inclined to just leave Europe behind. Seal it off from the world of innovation with firewall rules and geofencing. The alternative is to suspend technology, the only thing that's ever in all history improved the human condition, for the sake of small-minded, small-hearted people who like mankind less than they love nein.


Saying ja/oui to something is saying nein/non to something else.

If all you have is taking sides with what ought to be discussed and controlled, rather than left to run wild at the expense of most people, that's a choice that is yours.


https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...

https://cybernews.com/privacy/meta-flo-period-data-privacy-l...

https://violationtracker.goodjobsfirst.org/mega_scandal/Onli...

https://apnews.com/article/google-smartphone-surveillance-ve...

https://www.security.org/identity-theft/breach/equifax/

4 major incidents and a site referring to dozens more, and that's just a few minutes of searching.

And that's only tech companies' (or tech-related) misconduct. If we broaden the scope to corporations in general, I'm pretty sure I would hit a post text limit before I even got through a quarter of them.

It's like the old saying goes: "Every regulation is written in blood." Regs don't exist because someone doesn't want technology to progress. They exist because companies have shown time and again, as far back as you'd like to go, that they are not responsible members of society. They're willing to do anything in the name of profits, including mass privacy violations, abusing customers, and in extreme cases allowing people to die.


And who was harmed, precisely, and how? The EU sanctimony complex regularly cites these things as if each were an infosec Chernobyl, but I've yet to see a real-world harm come from these incidents. The advocates say they're harmful because they violate privacy rules, and we need the privacy rules lest companies cause harm by violating them. It's circular. The rules are made up. They do not correspond to the prevention of suffering on the part of real people in the real world.

Even if we were to grant that these alleged privacy disasters caused harm, we'd have to balance them against the lost advantages of refusing to deploy the enabling technologies. It's like banning telephones on account of every crime anyone's ever organized over a phone call.


I personally know at least half a dozen people who were victims of identity theft thanks to data breaches. Costing them thousands of dollars and countless hours...

Regardless, your argument is predicated on the idea that violations of privacy and mass data collection are somehow fundamental to these services; in most cases they are not. Google and Facebook don't need to hoover up all your data to sell or use to advertise to you. They choose to, and the vast majority of users were/are unaware of it.

Beyond that, several of the articles I linked are for either negligence (failing to fix known issues) or collecting/using data without consent.


Rules are all made up (as tech is) for the purpose of enabling society and lowering suffering. Who was harmed? Everyone whose private personal information has been leaked without consent. Who was harmed? Everyone who has been manipulated into voting. How has the damage not been diffuse and probabilistically significant? (Otherwise, why would Cambridge Analytica even be funded and paid for? Or the whole advertising industry, for that matter?)

And, a fundamental right does not need an existing harm to be justified into existence: it is a right as first principle.


> [Privacy] is a right as first principle

If you want to axiomize privacy, you can; that's a coherent philosophical position, but it's one I find curious. You're arguing that privacy breaches are harmful not because they cause harm, but because they are harm. Why is privacy, not progress, the summum bonum?


Privacy is a fundamental right, not the end of everything.

And you axiomize progress.

Although the question isn’t one against the other. It is whether progress justifies treating people as objects, as data providers without consent. That’s not a curious axiom, that’s the basis of all rights-based systems since 1948. Or 1785 (Kant). Or 1215 (Habeas Corpus). Or 1750 BCE (Hammurabi code).


While I agree on some points - the sanctimonious regulatory-industrial complex feeding a large army of NGOs etc - I don't see this issue - always-on camera glasses feeding their data to the data parasites out to monetise every aspect of your life and death - as the best one to vent your frustrations on. I shun these companies - from Google to the Fruit Factory, from TCFKAFacebook to the current crop of 'AI' incumbents - and I certainly don't want others with their always-on faceware to fill the void my absence in their registers has left. Yes, I know about 'shadow accounts' which these companies supposedly have on people like me, but at least I can try to reduce their ability to build up a profile on me. I cannot do that if the denizens of whatever locality I happen to show my face in happily signal my presence to their digital overlords.

Neat!

> This is probably due to the way larger numbers are tokenised, as big numbers can be split up into arbitrary forms. Take the integer 123456789. A BPE tokenizer (e.g., GPT-style) might split it like: ‘123’ ‘456’ ‘789’ or: ‘12’ ‘345’ ‘67’ ‘89’

One of the craziest LLM hacks that doesn't get love is https://polymathic-ai.org/blog/xval/

xVal basically says "tokenizing numbers is hard: what if instead of outputting tokens that combine to represent numbers, we just output the numbers themselves, right there in the output embedding?"

It works! Imagine you're discussing math with someone. Instead of saying "x is twenty five, which is large" in words, you'd say "x is", then switch to making a whistling noise in which the pitch of your whistle, in its position within your output frequency range, communicated the concept of 25.00 +/- epsilon. Then you'd resume speech and say "which is large".

I think the sentiment is that today's models are big and well-trained enough that receiving and delivering quantities as tokens representing numbers doesn't hurt capabilities much, but I'm still fascinated by xVal's much more elegant approach.


I was having some issues with IP address representation; this might solve it

> I wonder if there is a more general solution that can make models spend more compute on making important choices

There's a lot of work going on in various streams towards making it possible to vary compute per-token, dynamically, e.g. universal transformers. Maybe one day it'll work well enough to beat conventional techniques.


It's usually the case that the more strident someone is in a blog post decrying innovation, the more wrong he is. The current article is no exception.

It's possible to define your own string_view workalike that has a c_str() and binds to whatever is stringlike and has a c_str(). It's a few hundred lines of code. You don't have to live with the double indirection.


I think the article is trying to address the question from the title. Defining your own string_view workalike is probably possible, but I'm not sure if it's OK to use it in a public API, for example. Choosing to use const string & may be more suitable.

Or wait until P3655 ships, which will bring std::wcstring_view.

Am I the only one who doesn't like PEGs and prefers EBNF-style parser generators? The order-dependence of PEG alternatives and the lack of ambiguity detection are footguns, IMHO

We really need ZKPs of humanity

No, we really don't. We don't need worldcoin, we don't need papers, please. We just don't.

"Prove your humanity/age/other properties" with this mechanism quickly goes places you do not want it to go.


No, it doesn't go places we "do not want it to go". What part of zero knowledge doesn't make sense? How precisely does a free, unlinkable, multi-vendor, open-source cryptographic attestation of recent humanity create something terrible?

It would behoove people to engage with the substance of attestation proposals. It's lazy to state that any verification scheme whatsoever is equivalent to a panopticon, dystopia as thought-terminating cliche.

We really do have the technology now to attest biographical details in such a way that whoever attests to a fact about you can't learn the use to which you put that attestation and in such a way that the person who verifies your attestation can see it's genuine without learning anything about you except that one bit of information you disclose.

And no, such a ZK scheme does not turn instantly into some megacorp extracting monopoly rents from some kind of internet participation toll booth. Why would this outcome be inevitable? We have plenty of examples of fair and open ecosystems. It's just lazy to assert right out of the gate that any attestation scheme is going to be captured.

So, please, can we stop casting every scheme whatsoever for verifying facts about actors as the East German villain in a Cold War movie? We're talking about something totally different.


The ZK part isn't the problem. The "attestation of recent humanity" part is. Who attests? What happens when someone can't get attested?

You've been to the doctor recently, right? Given them your SSN? Every identity system ever built was going to be scoped || voluntary. None of them stayed that way.

Once you have the identity mechanism, "Oh it's zero knowledge! So let's use it for your age! Have you ever been convicted?" which leads to "mandated by employers" which leads to...

We've seen this goddamn movie before. Let's just skip it this time? Please?


The part where FAANG does the usual Embrace, Extend, Extinguish; the masses don't care or understand, and we get yet another "sign in with..." that isn't open source nor zero-knowledge in practice and monetizes your every move. And probably at least one of the vendors has a massive leak that reveals a half-assed, or even deliberately flawed, implementation.

> quickly goes places you do not want it to go.

Which places?


Sure. I'll provide an API to provide mine to your bot for $1 each time.

That was ballsy! But, sadly, it was a temporary hack. Both Voyagers have degrading, unfixable thrusters. The rubber diaphragms in the hydrazine fuel tanks are degrading, shedding silicon dioxide (i.e. sand) microparticles into the thruster fuel. These particles are gradually clogging the thruster nozzles and reducing their thrust. Eventually, thrust will decline to the point that they could fire the thrusters all day long and still not impart enough momentum to point the probes at Earth. Once that happens, we'll lose contact with the probes.

They'd switched away from the primary thrusters in 2004 due to this degradation. Now the backups are so degraded that the primary thrusters are better again in comparison.

Thruster clogging will kill the Voyagers in about five years if nothing else gets them first. The least degraded thruster nozzles are down to 2% of their diameter --- 0.035mm of free-flow diameter remaining.

The Voyagers will probably celebrate their 50th anniversary, but not much beyond that. :-(

Kind of ignominious to be done in not by the inexorable decline of radioactivity but by an everyday materials science error of the sort we make on earth all the time. In the 1970s, we knew how to make hydrazine-compatible rubber. We just didn't use it for the Voyagers.


They're still functioning after ten times the Voyagers' projected lifetime; I can't call that an error.

Upvoted. Sooner or later the Grim Reaper comes for us all.

But why? You can do everything contracts do in your own code, yes? Why make it a language feature? I'm not against growing the language, but I don't see the necessity of this specific feature having new syntax.

Pre- and postconditions are actually part of the function signature, i.e. they are visible to the caller. For example, static analyzers could detect contract violations just by looking at the callsite, without needing access to the actual function implementation. The pre- and postconditions can also be shown in IDE tooltips. You can't do this with your own contracts implementation.

Finally, it certainly helps to have a standardized mechanisms instead of everyone rolling their own, especially with multiple libraries.


Is a pointer parameter an input, output, or both?

Input.

You are passing in a memory location that can be read from or written to.

That’s it.


In terms of contract in a function, you might be passing the pointer to the function so that the function can write to the provided pointer address. Input/output isn't specifying calling convention (there's fastcall for that) - it is specifying the intent of the function. Otherwise every single parameter to a function would be an input because the function takes it and uses it...

I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.

This seems like an extension of that.


Annotation sounds good (as long as it is enforced or honored), which is the best you can do in C++.

A language like C# has true directional parameters. C only truly has “input”


A pointer doesn't necessarily point to memory.

A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or any other assortment of hardware.

No. Neither in the language (NULL exists) nor necessarily on real CPUs.

NULL exists on real CPUs. Maybe you meant nullptr which is a very different thing, don't confuse the two.

I don't agree. Null is an artefact of the type system and the type system evaporates at runtime. Even C's NULL macro just expands to zero which is defined in the type system as the null pointer.

Address zero exists in the CPU, but that's not the null pointer, that's an embarrassment if you happen to need to talk about address zero in a language where that has the same spelling as a null pointer because you can't say what you meant.


NULL doesn't expand to zero on some weird systems. These days zero is special on most hardware, so having zero and nullptr be the same is important, even though on some of them zero is also a legal address.

Historically C's null pointer literal, provided as the pre-processor constant NULL, is the integer literal 0 (optionally cast to a void pointer in newer standards) even though the hardware representation may not be the zero address.

It's OK that you didn't know this if you mostly write C++, and somewhat OK even if you mostly write C but stick to pre-defined stuff like that NULL constant. But if you write important tools in or for C, this was a pretty important gap in your understanding.

In C23 the committee gave C the C++ nullptr constant, and the associated nullptr_t type, and basically rewrote history to make this entire mess, in reality the fault of C++ now "because it's for compatibility with C". This is a pretty routine outcome, you can see that WG14 members who are sick of this tend to just walk away from the committee because fighting it is largely futile and they could just retire and write in C89 or even K&R C without thinking about Bjarne at all.


You can point to a register which is certainly not memory.

Contracts are about specifying static properties of the system, not dynamic properties. Features like assert *check* (if enabled) those properties at runtime. static_assert comes closer, but it’s still an awkward way of expressing Hoare triples; and the main property I’m looking for is the ability to easily extract and consider Hoare triples from build-time tooling. There are hacky ways to do this today, but they’re not unique hacky ways, so they don’t compose across different tools and across code written to different hacks.

The common argument for a language feature is for standardization of how you express invariants and pre/post conditions so that tools (mostly static tooling and optimizers) can be designed around them.

But like modules and concepts the committee has opted for staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well designed types and exceptions.


DIY contracts don't compose when mixing code using different DIY implementations. Some aspects of contracts have global semantics.
