Hacker News | moojacob's comments

For reference, the author's (Matt Shumer) AI business (HyperWrite) is a hundred small LLM wrappers that do things like:

- "transform complex topics into easy-to-understand explanations."

- "edit and transform images using simple text descriptions."

- "summarizes a research article, and answers specific questions about it."

You can see all of them here: https://www.hyperwriteai.com/aitools.

HyperWrite also has a Markdown editor with an AI copilot sidebar that seems a little more substantial: https://www.hyperwriteai.com/ai-document-editor.

I don't know enough to disprove Matt, but I don't know why anyone should listen to him. There are far smarter people who have reached similar conclusions.


Well, I did read it, and I did listen to him. Assuming he isn't lying about his anecdotal evidence, he did a very good job opening my eyes to just how fast the AI models are moving, and how the disruption can, and likely will, hit the public before they realize it.


My guess is it's based on Arch Linux. Om-arch-y.


They call it chat.


The author can't fork Rails because "the amount of work that goes into maintaining this ecosystem is enormous and expensive."

Rails IS FREE TO USE. If you want to improve test-driven development, do the work yourself. Or start a company and dedicate 40% of your extremely well-paid engineers' time to open-source code others can use for free.

37signals and Shopify make the decisions because THEY DO THE WORK. I am happy to sit back and freeload off of their contributions, even if I disagree with DHH's and Tobi's political opinions.


This happened to me over a decade ago. Medication was a godsend, and then I burned out. I remember sitting down to do work and not being able to start anything so I would pull up a dumb io game.

So I went off it, and for the next 5 years I still couldn't focus. It got worse, actually. I did a lot of caffeine. After COVID I started working out, and then suddenly, for the first time ever, I could focus. As long as I skip caffeine, work out, and sleep, I am sharp. I've done great work in the past couple of years, but I do feel cheated that Adderall stole time from me. I wonder where I would be in my career if I hadn't burned out.


Enough with big data! Who's working on small data? https://www.youtube.com/watch?v=eDr6_cMtfdA&pp=ygUKc21hbGwgZ...


I think it's fun that they always get in the media like this. They know how to play the game.


I've been working on an app that writes customer emails so small businesses can focus on what energizes them.

It makes answering customer emails 10x easier.

The magic is training templates: templates that get suggested (and eventually auto-selected), then personalized by an LLM for every reply.

Every reply sent trains it to auto-select that training template for future similar customer emails.
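The actual implementation isn't described, but a minimal sketch of how that auto-selection loop could work (all names and the keyword-overlap scoring are hypothetical, invented for illustration):

```ruby
# Hypothetical sketch: score each saved training template against an
# incoming email by keyword overlap, weighted by how often past
# replies actually chose that template.

def score(template, email_text)
  words = email_text.downcase.split
  overlap = (template[:keywords] & words).size
  overlap * (1 + template[:uses])
end

def pick_template(templates, email_text)
  best = templates.max_by { |t| score(t, email_text) }
  score(best, email_text) > 0 ? best : nil
end

templates = [
  { name: "refund",   keywords: %w[refund return money],        uses: 3 },
  { name: "shipping", keywords: %w[shipping tracking delivery], uses: 1 },
]

choice = pick_template(templates, "Where is my tracking number? Delivery is late")
# Each sent reply would then bump choice[:uses], so similar future
# emails auto-select this template before the LLM personalizes it.
```

In practice you'd use embeddings rather than keyword overlap, but the feedback loop is the same: every sent reply reinforces the template-to-email mapping.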

The stack is Ruby on Rails and Postgres hosted on DigitalOcean. The LLM currently is Kimi K2 hosted on Groq.

https://vipreply.ai


They already are. I have been using Kimi K2. It is 90% as good as Sonnet, and on Groq it's 3x faster at 1/5th the price.


What kind of GPU setup are you using for Kimi?


Sounds like he's using it on Groq, no self hosting.


No, I don't self host. I use Groq.com as my provider.


That's an interesting name to choose. For a second there, I thought Grok had enabled third-party models.


Grok 1 was released in 2024, Groq was founded in 2016.


Isn't it Q4 quantized on Groq?


Groq says it's FP16.


This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this; if anyone knows better, please correct me.)


I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:

  In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.

  Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.

  “My bank account hates me now,” she typed into ChatGPT.

  “You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”

It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

Via https://news.ycombinator.com/item?id=42710976


You should check out the book Palo Alto if you haven't. Malcolm Harris should write an epilogue about this era in tech history.

You'd probably like how the book's author structures his thesis around what the "Palo Alto" system is.

Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's also nothing new; some variant of this story has been coming out of California for a good 200+ years now.

You write in a similar manner as the author.


I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”

Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.


>> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

> I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”

I think the conversation is about the reverse scenario.

As you say, people are just pulling the levers to raise "average messages per day".

One day, someone noticed that vulnerable people were being impacted.

When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".

So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".


This is a purposefully naive take.

They're chasing whales: the 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for reward-based systems (sin-tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc.).

OpenAI and Sam are very aware of who is using their system for what. They just don't care because $$$ first then forgiveness later.


> It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.


The solution is regulation.

It's not perfect, but it's better than letting unregulated predatory business practices continue to victimize vulnerable people.


I think so. Such a situation is a market failure.


I'd be interested to learn what fraction of ChatGPT revenue is from this kind of user.


OpenAI absolutely does that. That's what led to the absurd sycophancy (https://www.bbc.com/news/articles/cn4jnwdvg9qo) that they then pulled back on.


One way or another, they did. Maybe they convinced themselves they weren't doing it that aggressively, but if this is what drives market share, of course they will be optimizing for it.

