Imagine you elect a president because he promises he will end wars instead of starting them. And after he's elected, he does exactly the opposite. Philosophical question: was he elected 'duly'?
Executive orders and other decrees that read like a demented toddler wrote them, announced in the middle of the night on a third-rate social media site... as it's always been.
Who should never have been allowed to run, given his felony criminal conviction, his plot to tamper with the election, and of course the assault on the Capitol. And that's before we even add on the Epstein stuff, or his latest war.
It’s pretty clear Trump is a threat and everyone in his orbit needs to end up in prison.
I’m not really surprised. The US (and its allies) has made a concerted effort over a number of decades to turn them into the third world. The current sitting US president has threatened to blast them into “oblivion” and “back to the Stone Ages, where they belong”. A lot of the imagery of Middle Eastern countries seen in the West is of the places they’ve collectively destroyed.
I mean... you could? AI comes in all kinds of forms. It's been around practically since ELIZA. What is (not) here to stay are the techbros who think every problem can be solved with LLMs. I imagine that once the bubble bursts and the LLM hype is gone, AI will go back to exactly what it was before ChatGPT came along. After all, IMO it's quite true that the AIs nobody talks about are the AIs actually doing good or interesting things. All of those AIs have been pushed to the back seat because LLMs have taken the driver's and passenger's seats, but the AIs working on cures for cancer (assuming we don't already have such a cure and it just isn't profitable enough to talk about/market), for example, are still being advanced.
I agree on that part as well, but saying that AI will go back to what it was before ChatGPT came along is false. LLMs will still be a standalone product and will be taken for granted. People will (maybe? hopefully?) eventually learn to use them properly and not generate tons of slop for the sake of using AI. Many "AI companies" will disappear from the face of the Earth. But our reality has changed.
LLMs will not be just a standalone product. The models will continue to get embedded deep into software stacks, as they already are today. For example, if you're using a relatively modern smartphone, you have a bunch of transformer models powering local inference for things like image recognition and classification, segmentation, autocomplete, typing suggestions, search suggestions, etc. If you're using Firefox and opted into it, you have local models used to e.g. summarize the contents of a page when you long-click on a link. Etc.
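To make the "embedded local inference" point concrete, here's a minimal sketch of fully on-device image classification using the Hugging Face transformers library (the model name and image path are just illustrative, not what any phone actually ships):

    # Weights download once; inference then runs entirely locally -- no API calls.
    from transformers import pipeline

    classifier = pipeline("image-classification",
                          model="google/vit-base-patch16-224")
    print(classifier("photo.jpg"))
    # e.g. [{'label': 'tabby, tabby cat', 'score': 0.93}, ...]

Shipping products wrap the same idea in platform runtimes (Core ML, TFLite, ONNX Runtime) rather than Python, but the shape of the pipeline is the same.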
LLMs are "little people on a chip", a new kind of component, capable of general problem-solving. They can be tuned and trimmed to specialize in specific classes of problems, at a great reduction in size and compute requirements. The big models will stick around as part of the user interface, but small models are increasingly going to show up everywhere in computational paths as we test out and try new use cases. There's so much low-hanging fruit to pick that we're still going to see massive transformations in our computing experience, even if new model R&D stalled today.
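As a rough sketch of the "tuned and trimmed" part: starting from an already-distilled model and applying PyTorch's dynamic int8 quantization is one common way to shrink a specialist model for cheap CPU inference (the model name is just an example of a small task-specialized model):

    import torch
    from transformers import AutoModelForSequenceClassification

    # DistilBERT is itself a trimmed-down BERT, fine-tuned here for sentiment.
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased-finetuned-sst-2-english")

    # Quantize the Linear layers to int8: roughly 4x smaller weights, faster on CPU.
    small = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8)

Distillation, pruning, and quantization stack on top of each other, which is how you get from a frontier model down to something that fits in a keyboard app.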
Um, what makes this "just asking questions"-style dig "legitimate," while my point that nobody ever asks that about any other software is dismissed as "unproductive"? If anything, slapping labels on things without giving any reason at all is what's unproductive.