Hacker News | dgrin91's comments

Here is a hard question: how could Stack Overflow succeed in a post-ChatGPT era? Obviously the new CEO and leadership have been total trash and have squandered the site's goodwill and user loyalty, but if I were CEO instead, I don't know how I would save the ship.

Doubling down on how it was done in the 'good old days' probably wouldn't work, because you would slowly bleed users to AI. Selling data to AI companies might work for a bit, but I would guess the sales value of SO's data has quickly diminishing returns. So what is their path forward?


That's a hard one. SO's hostility to newbies, like that of any expert community, comes from longstanding users having seen the basic questions thousands of times and understandably not wanting to answer variations of them over and over. For the newbies, though, those questions genuinely are new, and they don't yet have the routine knowledge of where to look, or how to even look, for solutions in the first place.

In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO for the harder questions that are still general enough to apply to others. LLMs seem to be getting pretty good at those as well, though, so I don't know where that leaves us.

SO for discussions of taste? "I have these two options to build this; how should I approach it?" They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that: a user asks a question, an LLM answers it, the user is unsure about the answer, so it gets posted as an SO thread where the rest of the userbase can nitpick or correct the LLM response.

Edit: I also seem to remember they had a job portal in the sidebar for a while, what happened to that? Seems like a reasonable revenue stream that is also useful to users.


> In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO for the harder questions that are still general enough to apply to others.

I think the deeper question is how SO would get paid for that.

Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)

Even in your ideal world, newbies and experts would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for that.

The same issue faces Wikipedia. Wikipedia isn't funded by commercial advertisers, but it is funded by donations, which are driven by its own banner ads. If LLMs just answer questions based on Wikipedia data, the user won't see the Wikipedia banner asking them to donate; they may not even know that Wikipedia was the source of the information, so they never develop the fondness for Wikipedia that gets users excited enough to donate.

This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.


Oh, I was thinking more of: user enters question into SO -> LLM answers on SO -> user evaluates whether the LLM answer was sufficient (or the system itself judges whether the answer is also interesting to other users?) -> question + answer combo made public, judged by other users.

There are of course several huge issues with this, but that's why I prefaced it with "ideal world" hahaha

The biggest of which is why most users would want their questions publicized if the ChatGPT answer off the Stack Overflow platform would be enough, or even better.

Or how existing users and question-answering volunteers would feel about just being cleanup crew and training data for LLMs.


Depends what you mean by "succeed". Commercial viability for anything remotely approaching the design intent of SO is probably impossible. But anyone can just start building a useful Q&A database and hope others stumble on it. The point is that experts, who have been through many years of trying to help beginners in the pre-LLM era, also know what questions to ask, and how to phrase them, and how to disentangle the concerns that beginners have.

Or at least they should. I think too many people get into a routine of letting themselves get angry about the repetitiveness of the questions they're answering, and then somehow getting addicted to that.


Be chatbot-first, I guess. I had envisioned a portal where you land on the front page and drop your question in a box. It would do some RAG over the SO question database, then try to answer your question, and you could chat back and forth with it. If you figured out your problem, you would have the option to turn the exchange into a question-answer pair with help from the AI. If you didn't figure out your problem, it would be turned into just a question, which would then show up for the experts of SO to answer. Something like that.

They should focus on high-quality expert answers.

Now that we have LLMs, I don't need basic questions answered. I do still need hard questions answered by experts, and AI has normalized paying money for Q&A.

I would definitely pay for a "human ChatGPT" service where the answers are written by experts who get paid per answer, e.g. grad students. Then they can resell this data to AI companies. Or maybe the economics are such that they can take enough money from AI companies to pay the experts and I don't need to pay anything at all.

This won't bring in as much money as advertising used to, but that business model is dead anyway. There's no future for a Q&A site at the low end.


Ideally, slowly grinding down duplicates into canonicals, keeping the ones whose answers are subject to change (with developments in languages and tools) up to date, removing cruft, and making it more like a library (à la Rosetta Code) that's easy to find things in.

And a change of form, from questions asked primarily as a means to an end for one person, to Q&A pairs written as reference material.

And requests for comment on which approach would be the most idiomatic, or whether one has fallen into an XY trap, or other things that rely on human 'taste' rather than LLMs' blithe march of obedience.


I’m not aware of SO’s plans to remain profitable and relevant, but I do know they have an enterprise offering. I’ve seen ads on LinkedIn recently for MCP functionality tied to the enterprise SO offering that lets you use it as a knowledge base. I could see that potentially being a path to stay relevant.

The place I work at tried using an SO enterprise instance and it was quite ineffective. We didn't have the toxicity of the public instance, but generally having a Q&A forum double as a knowledge base is an oddball format that doesn't work out. Adding AI integration is not likely to compensate for that.

> How could Stack Overflow succeed in a post-ChatGPT era?

As a data source for LLMs, and by becoming the place someone goes when ChatGPT can't produce a sufficient answer.


It will turn into a meme subreddit and/or die. What else is there?

Allow AI to ask questions. Since the point of the site is to build a knowledge base, you don't really need humans to be that involved. Humans running into problems and then asking questions was just one way to do this in the pre-AI era. Now, with AI, we can reevaluate whether we really need humans as much as we did.

What I don't get is how this makes economic sense. Isn't there a $100k fee for H-1Bs now? So 3k H-1Bs would cost $300M, before you even start paying salaries.

These numbers can get wonky fast. E.g., dev salaries in Ukraine are pegged to the dollar, so they get free raises as the exchange rate plummets. At the same time, the cost of living in Ukraine was low before the war and has only gotten lower. Oh, and taxes are pretty low in Ukraine.

So I'm pretty sure Ukrainian devs will end up with some of the top salaries in all of Ukraine, but the externalities of that are large.


>> free raises as the exchange rates plummet

Also, developers are paid at the official exchange rate while everything imported is priced at the market rate. That's a 10% cut right there.

>> the cost of living in Ukraine was low before the war and only got lower

This is not true. Some low-skilled services are cheap, but high-skilled ones are not.

Or are you talking about cost of housing near the frontline?

Dev salaries in Ukraine are also down since the start of the war, as no new projects are being outsourced to the country.


I'd call "getting conscripted into joining a fucking war" a rather large externality!

This is pretty low quality marketing spam.

Offline signing? All signing is offline. It's not even a Bitcoin thing... it's just how the math works.


Yup. Is this post from 2019?


Absolutely fantastic. I actually laughed out loud a few times.

My only suggestion is to make the shuffle animation shorter. At first I thought you were actually doing some server work when I clicked it, and got concerned.

Also if you sell these in real life I would buy them.


Speaking of the card purchases: give me some time, I will do my best.

I had just assumed the slow animation was buying time to call out to an LLM.

Shuffle animation speed was addressed :)

Satellite images are not always real-time. Also, satellites can be affected by things like cloud cover.


For tracking military ships it's much better to use radar-imaging satellites (e.g. see [0]). They can cover a larger area, see ships really well, and are almost unaffected by weather.

I would not be surprised if China has a constellation of such satellites to track US carriers, and it may be why the Pentagon keeps them relatively far from Iran, since it's likely that China confidentially shares targeting information with Iran.

[0]: https://www.esa.int/Applications/Observing_the_Earth/Coperni...


China has Huanjing [0], which is officially for "environmental monitoring" but almost certainly has enough resolution to track large ships (at least in the later versions; apparently the early versions had poor resolution).

And even if they didn't, Russia has Kondor [1], which is explicitly military, and we know they have been sharing data with Iran.

[0] https://en.wikipedia.org/wiki/Huanjing_(satellite)
[1] https://en.wikipedia.org/wiki/Kondor_(satellite)


Strava tracks can also be spoofed, and you have no guarantee they appear on a schedule either. I just find this to be on the sensationalist side of "data" journalism, lacking any sort of contextualization or threat-level assessment. Unless there is evidence of more sensitive locations that have not been published alongside this story, it looks like a seriously unserious case of journalism to me.


Heh, establishing an "opsec failure guy" on the boat with software on his Garmin that can be activated on days with special secrecy demands to translate his runs to a plausible fake location? I like that idea. It would actually fit a one-off like the Charles de Gaulle quite nicely!


They are usually called Public Affairs Officers :D


Clouds only affect a narrow range of the electromagnetic spectrum. Plenty of satellite constellations use synthetic aperture radar, for example, which can see ships regardless of cloud cover. There are gaps in revisit rates, especially over the ocean, but even that has come way down.


And it's $100 minimum... at least in NYC. Right now it's $20-25 a head, and that doesn't include transportation or food.


I think that's pretty unlikely. The nuke areas are very far inland, and it's impractical from a political point of view to put that many troops that far inland.

I think more likely they may seize Kharg island. It has direct strategic value (~95% of oil exports) and is a much simpler target. This is why they bombed it today - shaping operations.


One of the scary things is that not even this really works. Ignoring supply-chain attacks, most people treat any client as an effective black box. When was the last time you read through the code of a messaging app? How do you know it's safe? Maybe _you_ read through it, but 99% of people don't.


And even if you did read through every line of code, it is super easy to hide a deliberate bug which entirely breaks encryption.

E.g. the Debian OpenSSL random number generator bug.


Honestly, I was surprised by this. It accurately got my GPU and specs without asking for any permissions. I didn't realize I was exposing this info.


Why were you surprised?

You can check out here how it does that: https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/s...

To detect NVIDIA GPUs, for example: https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/s...

In this case it just runs the command "nvidia-smi".

Note: llmfit is not web-based.
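For anyone curious, the approach is basically "shell out and parse". Here's a minimal Python sketch of the same idea (llmfit itself is Rust, and these function names are made up for illustration, but the `nvidia-smi` query flags are real):

```python
import shutil
import subprocess


def parse_smi_output(output: str) -> list[tuple[str, str]]:
    """Parse 'name, memory.total' CSV lines as emitted by
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader."""
    gpus = []
    for line in output.splitlines():
        if not line.strip():
            continue
        # Split only on the first comma: GPU names can't contain one,
        # but being conservative here costs nothing.
        name, mem = line.split(",", 1)
        gpus.append((name.strip(), mem.strip()))
    return gpus


def detect_nvidia_gpus() -> list[tuple[str, str]]:
    """Return (name, total_memory) per GPU, or [] if nvidia-smi is absent."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver/tooling installed
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True,
        text=True,
    )
    return parse_smi_output(result.stdout) if result.returncode == 0 else []
```

No permissions prompt involved: any userspace process can run this, which is the point being made elsewhere in the thread.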


I run LibreWolf, which is configured to ask me before a site can use WebGL, which is commonly used for fingerprinting. I got the popup on this site, so I assume that's how they're doing it.


How could it not? That information is always available to userspace.


"Available to userspace" is a much different thing than "available to every website that wants it, even in private mode".

I too was a little surprised by this. My browser (Vivaldi) makes a big deal about how privacy-conscious it is, but apparently browser fingerprinting is not on its radar.


We switched to talking about llmfit in this subthread; it runs as native code.


It's pretty hard to avoid GPU fingerprinting if you have WebGL/WebGPU enabled.


Do you mean the OPs website? Mine's way off.

> Estimates based on browser APIs. Actual specs may vary

