Hacker News | pxc's comments

Does this affect editor integrations that use the SDK for ACP support via claude-agent-acp¹, like Zed's Claude integration and Emacs' agent-shell?

--

https://github.com/agentclientprotocol/claude-agent-acp/rele...


I was pretty interested until I got to this part:

> Another reason that a no-exceptions policy is important: If students with disabilities are permitted to use laptops and AI, a significant percentage of other students will most likely find a way to get the same allowances, rendering the ban useless. I witnessed this time and again when I was a professor—students without disabilities finding ways to use disability accommodations for their own benefit. Professors I know who are still in the classroom have told me that this remains a serious problem.

This would be a huge problem for students with severe and uncorrectable visual impairments. People with degenerative eye diseases already have to relearn how to do every single thing in their life over and over and over. What works for them today will inevitably fail, and they have to start over.

But physical impairments like this are also difficult to fake and easy to discern accurately. It's already the case that disability services at many universities only grants you accommodations that have something to do with your actual condition.

There are also some things that are just difficult to accommodate without technology. For instance, my sister physically cannot read paper. Paper is not capable of contrast ratios that work for her. The only things she can even sometimes read are OLED screens in dark mode, with absolutely black backgrounds; she requires an extremely high contrast ratio. She doesn't know braille (which most blind people don't, these days) because she was not blind as a little girl.

Committed cheaters will be able to cheat anyway; contemporary AI is great at OCR. You'll successfully punish honest disabled people with a policy like this but you won't stop serious cheaters.


The author did not outright suggest banning all technology. They even linked to a digital typewriter. After the very paragraph you quote, they suggest instead offering a more human-centric approach to helping disabled people. It's not a huge leap to suggest that your sister could continue to learn with those two solutions combined: a disability tutor plus an OLED screen.

You don't have to agree with their precise solution, and in fact I'm not sure whether I do. However, I found the article useful because it got me thinking about the universe of things we could be considering, if we really do think AI is poised to destroy education as we know it.

I don’t know about anyone else here, but I wasn’t educated simply because I was at college. I did all of the reading and studying on my own. The classes weren’t very interesting, most of my TAs didn’t speak the language well at all, and neither did half the professors.

I enjoyed my time, I made a lot of lifelong friends, and figured out how to live on my own. My buddies that enrolled in boot camp instead of college learned all those same skills, for free.

Education won’t be ruined or blemished by LLMs; the whole thing was a joke to begin with. What ruined college was unlimited student loans… and all of our best and brightest running the colleges, fleecing students for money. It’s pathetic, evil, and somehow still endorsed.


Sounds like you just had terrible professors, because most of mine were good and we learned quite a bit in classes, at least I did. I distinctly remember one professor who, every class, would meander discussion over many topics and then find a way to bring them all together at the very end, crystallizing all of these disparate thoughts into one cohesive theory. And he did that every single class that semester. It was a marvel to behold.

What he is referring to are perfectly good students whose parents will go shopping for a medical diagnosis so that their child can get "accommodation" like extra time to complete tests.

The problem is that this is treating the symptom rather than the cause. The symptom is that cheating for college admission and achievement is too effective. The cause is that college admission and achievement has become high stakes, and it absolutely should not be.


Yeah, this proposal is likely straight up illegal.

If this is true, shouldn't LLMs perform way worse when working in Chinese than in English? Seems like an easy thing to study, since there are so many Chinese LLMs that can work in both Chinese and English.

Do LLMs generally perform better in verbose languages than they do in concise ones?


Are you saying Chinese is more concise than English? Chinese poetry is concise, but that can be true in any language. For LLMs, it depends on the tokenizer. Chinese models are of course more Chinese-friendly and so would encode the same sentence with fewer tokens than Western models.

> Are you saying Chinese is more concise than English?

Yeah, definitely. It lacks case and verb conjugations, plus whole classes of filler words, and words themselves are on average substantially shorter. If you listen to or read a hyper-literal, word-for-word translation of Chinese speech into English (you can find fun videos of this on Chinese social media), it even resembles "caveman speech" for those reasons.

If you look at translated texts and compare the English versions to the Chinese ones, the Chinese versions are substantially shorter. Same if you compare localization strings in your favorite open-source project.

It's also part of why Chinese apps are so information-dense, and why localizing to other languages often requires reorganizing the layout itself— languages like English just aren't as information-dense, pixel for pixel.

The difference is especially profound for vernacular Chinese, which is why Chinese people often note that text which "has a machine translation flavor" is over-specified and gratuitously prolix.

Maybe some of this washes out in LLMs due to tokenization differences. But Chinese texts are typically shorter than English texts and it extends to prose as well as poetry.

But yeah, this is standard stuff: Chinese is more concise and more contextual/ambiguous. Compared with English, more of the semantic work falls on interpretation and less on the writing/speaking.

Do you speak Chinese and experience the differences between Chinese and English differently? I'm a native English speaker and only a beginner in Chinese but I've formed these views in discussion with Chinese people who know some English as well.


Chinese omits articles, verbs aren't conjugated, and individual characters carry more meaning than English letters, but other than those differences I don't have the impression that Chinese communication is inherently more concise. Some forms of official speech are wordy. Writing is denser, but the amount of information conveyed through speech is about the same. There are jokes about ambiguous words or phrases in both Chinese and English. So I was surprised at your take, but no objection to your points above. Ancient Chinese, on the other hand, is extremely concise, but so are other ancient languages like Hebrew, although in a different way. So it seems that ancient languages are compressed but challenging and modern languages have unpacked the compression for ease of understanding.

That's a really interesting point about Ancient Chinese and other ancient scripts. I'd love to learn more about that.

I'm also more curious about tokenizers for LLMs than I've ever been before, both for Chinese and English. I feel like to understand I'll need to look at some concrete examples, since sometimes tokenization can be per word or per character or sometimes chunks that are in between.
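One way to build intuition here, without committing to any particular model's tokenizer, is to compare raw character and UTF-8 byte counts for parallel sentences. This is a stdlib-only sketch; the sentence pairs are my own illustrative translations, and real subword tokenizers will of course count differently than either measure:

```python
# Rough illustration of why the unit of counting matters: the same
# sentence pair looks very different depending on whether you count
# characters, UTF-8 bytes, or (in a real LLM) learned subword tokens.
# The English/Chinese pairs below are assumptions for illustration.

pairs = [
    ("Thank you very much.", "非常感谢"),
    ("He is reading a book.", "他在看书"),
]

for en, zh in pairs:
    print(f"{en!r}: {len(en)} chars, {len(en.encode('utf-8'))} bytes")
    print(f"{zh!r}: {len(zh)} chars, {len(zh.encode('utf-8'))} bytes")
```

Note that the Chinese versions are much shorter in characters but comparable in UTF-8 bytes, since each CJK character encodes to three bytes; a byte-level tokenizer and a character-level view would therefore disagree sharply about which language is "denser."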


Maybe it should be treated like on-call duty and have the load spread between existing engineers on some kind of schedule, maybe with some extra comp as incentive because it's boring and will take more effort/time in the "easy case" compared to pager duty.
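A rotation like that could be generated the same way on-call schedules often are, with a simple round-robin over the team. A minimal stdlib sketch, where the engineer names, the weekly cadence, and the start date are all placeholders:

```python
from itertools import cycle
from datetime import date, timedelta

def review_rotation(engineers, start, weeks):
    """Assign one engineer per week, round-robin, like an on-call schedule.
    The roster order and the one-week slot length are assumptions."""
    slots = []
    rot = cycle(engineers)
    for i in range(weeks):
        week_start = start + timedelta(weeks=i)
        slots.append((week_start.isoformat(), next(rot)))
    return slots

schedule = review_rotation(["alice", "bob", "carol"], date(2025, 1, 6), 4)
for week, name in schedule:
    print(week, name)  # e.g. "2025-01-06 alice"
```

In practice you'd likely want to persist the cycle position across scheduling runs so the same person doesn't always land on the first week, but the core is just a cycle over the roster.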

Humans just don't commit the same kinds of booboos as LLMs do. My team at work recently started using LLM agents for coding and I have since seen WTFs that I know no human would ever write.

It's not all bad! It's also enormously fun. I've been able to work on things I'd been putting off forever. When I can use LLM agents, I less often feel paralyzed by perfectionism, which is probably the biggest productivity boost I get. My own code has not decreased in quality, and I think that for the truly important things, neither has that of my colleagues.

But LLMs don't make junior dev mistakes. They make "my brain has worms in it" mistakes.


Some editor integrations are a bit like this already, where during use you don't actually touch the built-in TUI even for prompting or viewing the output and approving permissions requests.

I imagine how they treat these things will be contextual and maybe inconsistent. There aren't really hard lines between what they presumably want integrated editors to do and what generic tools that sit a layer above the vendors' agent TUIs do.


And you can also use the long-context version on their cheaper plans, whereas with Anthropic it's only available to enterprise and Max customers.

I feel the same way about Cisco Umbrella where I work.

The worst breakage by far is protocol breakage; basically anything that uses HTTP as a basis for building some other protocol gets broken all the time. None of the people implementing it seem aware. They buy the vendor's claim that it's "transparent", when in fact even "inspect/trace-only" modes often break all kinds of shit.

I've seen Umbrella break:

  - Git
  - RubyGems
  - `go mod`
  - OrbStack
  - Matrix
  - Cargo
  - all JDKs
  - Nix
  - Pkgsrc
  - all VMs
and probably some other things I'm forgetting. When this breakage is reported, the first round of replies is typically "I visited that domain in my enterprise-managed browser and it's not blocked". That is, of course, a useless and irrelevant test.

Often it takes hours to even fully diagnose the breakage with enough confidence to point the finger at that tool and not some other endpoint security tool.
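One quick diagnostic that helps narrow this down: an intercepting proxy has to re-sign TLS connections with its own CA, so the issuer on the certificate a host actually presents gives it away. A sketch using only Python's `ssl` stdlib module; the hostnames and expected-issuer names are assumptions you'd maintain yourself, not anything Umbrella-specific:

```python
import socket
import ssl

def issuer_common_name(cert: dict):
    """Extract the issuer CN from a cert dict in the shape returned by
    ssl.SSLSocket.getpeercert(): a tuple of RDNs, each a tuple of
    (key, value) pairs."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def looks_intercepted(hostname, expected_issuers):
    """Connect to `hostname` and compare the presented issuer CN against a
    known-good set (which you'd record once from an uninspected network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cn = issuer_common_name(tls.getpeercert())
    return cn not in expected_issuers

# Offline demonstration of the parsing step, using getpeercert()'s format:
sample = {"issuer": ((("countryName", "US"),),
                     (("organizationName", "Example CA"),),
                     (("commonName", "Example Intermediate CA"),))}
print(issuer_common_name(sample))
```

If the issuer CN comes back as the corporate inspection CA rather than a public one, you've found your culprit without spending hours ruling out every other endpoint agent.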

I'm not sure if the people buying and deploying tools in this category don't know how much stuff it breaks or just don't care. But the breakage is everywhere and nobody seems prepared for it.


If they pass what closed models today can do by much, they'll be "good enough" for what I want to do with them. I imagine that's true for many people.

If you're visually impaired, you can hit it even with just a few icons on a 14" laptop. Fonts at anything other than tiny sizes + overloaded menus + even a handful of app icons means I always hit this unless I'm docked.

Hacky menu bar modification tools are basically an accessibility requirement for me, and my vision isn't even that bad. (Best corrected is 20/30 or 20/40 or so.) People with serious impairments are totally screwed by this on macOS, sometimes even with large external monitors.

