I think maybe infra is limited only at hyperscalers. For the rest of us it's just a question of how much capacity we want to rent from the hyperscalers.
It's kind of a recent cloud-native mindset; back in the day, when you ran your own hardware, scaling and capacity were always top of mind. Looks like AI compute might be like that again, for the time being.
Honestly, the only thing really keeping me from watching shorts is the perplexing UX decision to not show the channel name as part of the preview tile. As basic Internet hygiene it just feels real bad to click on a video without the tiniest bit of idea about its provenance. For that reason I never do and have always just wanted to hide Shorts altogether.
I don't watch shorts because of the type of content it encourages - short clickbait content for the attention deficit.
I want long-form, well-researched, well-put-together content. That type of content takes a long time to produce, unfortunately, and the YouTube algorithm doesn't favour it.
That's where off-platform, trustworthy sources are important.
Over the years, I've curated a lot of subscriptions on YouTube to such long-form content (things like Primitive Technology, Smarter Every Day, etc).
It simply sucks that YouTube has gutted the UI for subscriptions - the layout is horrible, I can't scan it since they removed the list view, and sometimes subscribed channels' videos don't even show up there - you have to visit the channel page directly to see the new videos.
It's almost as if they want people to rely only on the home page and the recommended videos from their algo.
Honestly I'd sometimes like a checkbox to ignore any results uploaded after, say, 2023. Or else to only see 'verified non-AI' content. There is nothing I'd ever search YouTube for where an AI-generated video would be an acceptable answer.
I use the Control Panel for YouTube plugin - there's an option to make any Short play in the traditional player. It also lets you customize the thumbnail grid sizes. It's a little overaggressive with its defaults, but you can turn them off. It has a lot of options.
If TikTok does it that way (I have no idea if it does actually), YouTube obviously has to copy that! If a fishy channel name stops you from clicking on a short, that hurts your engagement, and that's the last thing social media companies want.
As a way to get my feet wet vibe coding, I made https://seeitwritten.com with that idea. That by capturing how you write with all its fits and starts you can show that a human wrote it. So, sort of recorded live-streaming. But I'm thinking that a sufficiently cute agent could be prompted to write something and re-write something in a convincing manner. I'm not so sure about that, though, since their corpus is completed text rather than text in action.
Most of what makes writing a medium worth engaging with is how its presentation is causally insulated from its creation. Well, that is true of other media: film has a whole history of production that you, the viewer, don't witness - being its main difference from theater. But with writing said production cost is trivial, and so is editing: the author doesn't need to commit to a sentence like a director must commit to a shot. This is integral to the identity of the medium, and is what allows writing to be what speech is to cinema: considerably more polished, high-budget, and well-edited conversation with an assumed reader.
When you take that away, or make the writer conscious of how their each edit is being surveilled, you do lose that ability to freely revise your thoughts, degrading it back into a form of lightly edited monologue. Whether that is a good or bad thing is irrelevant, but it does result in a much different kind of writing. All the while, the collected writing history itself offers very low SNR: it does contain some divergent possibilities, but so do orders of magnitude more meaningless mistakes, attention lapses, and runaway sentences - all that writing is defined by omitting.
But assuming most writers use the keyboard just as some use the cursor to follow their gaze, it does at least impart a cognitive fingerprint, useful for light authenticity detection (unless the author is just rewriting a finished thought they plagiarized from memory) but also for profiling.
Wow, I read the whole thing without noticing that.
But as someone who came of age in the AIM / ICQ / IRC days, it feels pretty normal. That's just how we wrote. I still fall into it by accident when the context is right and I'm not thinking about it (eg Slack at work). I hope youngsters aren't judging me for it.
Haha, yeah, my eyes glazed over immediately on the issue. This was absolutely someone telling their Claude Code to investigate why they ran out of tokens and open the issue.
Good chance it's not real or misdiagnosed. But it gives me some degree of schadenfreude to see it happening to the Claude Code repo.
It's your Claude speaking to their Claude, which is fair, but it makes this whole discussion a bit dumb since we're basically talking about two bots arguing with each other.
This was part of Sam Altman's (supposed) concerns about AI not being open and equally available. In a dystopian future it might be their cluster of 1000 agents using a GWh of power to argue against your open-weights agent, which has to run on an M5.
I know exactly how you feel. Looks like it was 10 (?!) years ago [0]
> Ah, APL/J/K. Time for my annual crisis of thinking everything I've ever learned about programming is wrong...
Still, though, I'm always happy when it comes up on HN for a little discussion. As I recall there were a couple people working on a new OS or something based on K, I think. I wonder whatever happened to that.
As for this particular post, I get how `x^x*/:x:2_!100` works now (it's cute!), but it seems pretty wasteful. It's generating 10,000 products to filter out of the list of 100 integers. But 99 x 99 isn't anywhere near a number in the original 2..100 list! You only need to go up to 2 x 49, 3 x 33, etc. I wonder if there's more of a "triangular" shape you could generate instead of the full table.
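One way to sketch that "triangular" idea (in Python rather than K, as an illustration only): pair each i only with j >= i, and stop as soon as i*j leaves the range, so you never touch the far corner of the table.

```python
LIMIT = 100
xs = range(2, LIMIT)  # K's 2_!100 gives 2..99

# Only pair i with j >= i, and only while i*j stays below LIMIT,
# so the product table is triangular and bounded instead of ~98x98:
# i=2 pairs with j up to 49, i=3 up to 33, and so on.
composites = {i * j for i in xs for j in range(i, (LIMIT - 1) // i + 1)}

# Same "except" step as K's x^...: keep whatever isn't a product.
primes = [n for n in xs if n not in composites]
print(primes)
```

Every composite below 100 has a factor pair with the smaller factor at most 9, so the bounded triangle still covers them all.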
For my use I prefer just a raw CLI. As long as it's built following conventions (e.g. using cobra for a Go app) then the agent will just natively know how to use it, by which I mean how to progressively learn what it needs by reading the `help` output. In that case you don't need a skill or anything. Just say "I want this information, use the xyz app". It will then try `xyz --help` or `xyz help` or a variant, just like a human would, see the subcommands, do `xyz help subcommand` and eventually find what it needs to do the job. Good tools provide an OAuth flow like `xyz login`, which will open a browser window where you can determine which resources you want to give the CLI (and thereby the agent) access to.
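To sketch the convention (a hypothetical `xyz` tool, here with Python's argparse rather than cobra): every subcommand documents itself, so an agent can discover usage by walking `xyz --help` and then `xyz <subcommand> --help`, just like a human would.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical "xyz" CLI; the point is that help text is generated
    # for every level, so `--help` output is always there to read.
    parser = argparse.ArgumentParser(prog="xyz", description="Example tool")
    sub = parser.add_subparsers(dest="command")

    fetch = sub.add_parser("fetch", help="Download a resource by id")
    fetch.add_argument("id", help="Resource identifier")

    sub.add_parser("login", help="Start a browser OAuth flow")
    return parser

if __name__ == "__main__":
    build_parser().parse_args()
```

An agent pointed at this binary needs no skill file: the subcommand list and argument docs are all recoverable from the help output alone.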
This only works for people using agents themselves on computers they control, rather than, e.g., the Claude web app, but is a good chunk of my usage.
I think people are either over- or underthinking the auth piece, though. The agent should have access to its own token. CLIs, MCPs, and even raw API requests all work this way. I don't think MCPs provide any further security. You should assume the agent can access anything in its environment and do everything the credential permits. You don't want to give your more powerful credential to the MCP server and hope that the MCP server somehow restricts the agent to doing less (it can probably find the credential and make out-of-band calls if it wants). The only way I think it could work like that is how... is it Sprite that does it?... where you give the agent a fake token and have an off-machine proxy that it goes through, which MitMs the request and injects the real credential.
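A minimal sketch of that credential-injection idea (all token values and names hypothetical): the agent only ever holds a placeholder, and a proxy off the agent's machine rewrites the Authorization header on the way out. In practice this function would sit inside the MitM proxy; here it is reduced to the header-swapping step.

```python
FAKE_TOKEN = "agent-placeholder-token"   # the only credential the agent sees
REAL_TOKEN = "real-secret-token"         # lives only on the proxy host

def inject_credential(headers: dict) -> dict:
    """Swap the agent's placeholder for the real token; refuse anything else."""
    if headers.get("Authorization") != f"Bearer {FAKE_TOKEN}":
        raise PermissionError("unknown token")
    out = dict(headers)
    out["Authorization"] = f"Bearer {REAL_TOKEN}"
    return out
```

Because the real token never touches the agent's environment, there is nothing for it to find and use out-of-band.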
But in the context of this discussion, Atlassian has a CLI tool, acli. I'm not quite following why that wouldn't have worked here. As a normal CLI you have all the power you need over it, and the LLM could have used it to fetch all the relevant pages and save to disk, sample a couple to determine the regular format, and then write a script to extract out what they needed, right? Maybe I don't understand the use case you're describing.
Not all agents are running in your CLI or even in any CLI, which is why people are arguing past each other all over the topic of MCP.
I implemented this in an agent which runs in the browser (in our internal equivalent of ChatGPT or Claude's web UI), connecting directly to Atlassian MCP.