Hacker News | web3-is-a-scam's comments

Would love to know which LLMs you’re using, because every single one I’ve used has been hot garbage that can’t do anything beyond joining a couple of tables and fails miserably at basic GROUP BY/COUNT aggregates. Window functions? Fuggetaboutit
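
(For concreteness, here's a minimal sketch of the kind of query in question, against a hypothetical orders table; the table and column names are illustrative only. It combines a GROUP BY/COUNT aggregate with a window function, and runs in Postgres, for example:)

    -- daily order counts per customer, plus a per-customer running total
    SELECT
        customer_id,
        order_date,
        COUNT(*) AS orders_that_day,
        SUM(COUNT(*)) OVER (
            PARTITION BY customer_id
            ORDER BY order_date
        ) AS running_orders
    FROM orders
    GROUP BY customer_id, order_date
    ORDER BY customer_id, order_date;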


Paste an example and I'll show you! For me, I use ChatGPT 4 and start with the CREATE TABLE statements.


Many, many years ago, I simply bought the O’Reilly SQL Pocket Guide, skimmed through it, and read the section about window functions. That marked the moment my career, and my life, changed forever.


Yes. I’d consider myself an “intermediate” DBA, but I can command an insane salary and I have people tripping over themselves to hire me.

Nobody cares about SQL until it takes 15 seconds to load your user-facing login dashboard.


100% this. I heard a fun saying when I became a professional dev:

“When you’re fed up with keeping up, just retire into SQL; it’s the best pension there is.”


What level of optimization do you normally need in those cases? Just adding an index, removing subqueries, or something more complex?


“it depends”

but in all seriousness, it depends. adding an index. removing/consolidating indexes. breaking queries down into individual units of work and forcing intermediate materialization. identifying platform-specific optimization barriers. rearranging sufficiently complex query semantics to force the behavior you expect.

99% of the cases i’ve personally had to resolve over the last 15 years have been the result of sql hero queries that try to do everything all at once. this is exacerbated by orms that generate bad sql that was acceptable at low cardinalities; under scaled-up concurrency and data volumes it can’t deliver the necessary performance anymore.
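
(a minimal sketch of the "intermediate materialization" approach mentioned above, in postgres syntax with hypothetical table and column names:)

    -- instead of one giant hero query, materialize the small
    -- intermediate result first as its own unit of work...
    CREATE TEMP TABLE recent_active_users AS
    SELECT user_id
    FROM sessions
    WHERE started_at > now() - interval '30 days'
    GROUP BY user_id;

    ANALYZE recent_active_users;  -- give the planner real row counts

    -- ...then run the expensive aggregate against the small set
    SELECT o.user_id, COUNT(*) AS orders
    FROM orders o
    JOIN recent_active_users r ON r.user_id = o.user_id
    GROUP BY o.user_id;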


Where do you find insane salaries for DBAs?


If you’re an innovation hero and you don’t own part of the company, you are a sucker


When I worked in marketing software, I experienced this often with the term "referral". To this day I still feel kind of weird when I see the term.


When I did a lot of web development I experienced something similar with the word "referer".

edit: fixed typo


Are you thinking of referer? (A misspelling of referrer, present in HTTP headers.)


Yes, referer, not referral, obviously, thanks!


Why not both?


I can’t even get their product anywhere at a decent price; why should I invest in this company?


It doesn’t sound like you invest, then.

You’re describing a product with well-known stock shortages. Floating the company has unlocked a bunch of capital, which can, among other things, help them scale up production to meet demand.

That sounds like a good thing both as a shareholder and as a consumer.


I am concerned that ultimately whatever supply chain and manufacturing improvements they see from raising this capital will benefit their commercial/industrial customers more than their non-profit/educational/enthusiast customers. I'm not asserting that will happen, but I'm concerned it will.

Where I am, it has been next to impossible, since at least mid-pandemic, to order a non-kit model of any Pi released within the past several years.

I started buying up SFF Lenovo boxes on eBay, which I've been somewhat happier with since I never used some of the Pi's standout features like GPIO.


How am I supposed to know if something is worth investing in if I can’t even use it? A pinky promise that maybe I’ll be able to get one someday isn’t a good deal. I want to know what I’m getting when I put my money in. Maybe if I were a VC and that’s what they were begging for my money for, but as a retail investor the value proposition is basically nothing.

The company markets itself as an educational tool, but 70% of its sales are industrial? Like, I get that “bringing low-cost computers to everyone” feels and sounds good in press releases, but when the business actually focuses on something different - something that seems counterproductive to its stated goal - what am I supposed to take from that?


I didn’t know Bartender was sold. I thought it was fishy that it needed screen recording permissions, but I assumed it was because of “security” features of the new macOS/M-series chips/whatever, so I just let it go because I trusted the Bartender dev.

Uninstalled. What a shame.


It does need screen recording permissions, because trying to do what Bartender does on a locked-down system like macOS is inevitably going to require a hack.

The issue is we trusted the dev because we knew who he was, and he'd have an angry mob after him if he was caught doing something sketchy. Now that it's been sold to an unknown third party, there's no one to hold accountable for this software we're trusting with screen recording permissions on our machines.


Ice (and other menu bar managers) require the same permissions - your assumption was correct and Bartender has needed those permissions for a while.


Hidden Bar doesn't; I'm guessing the screen recording permission is needed for showing items on hover.


Bartender did need screen recording permissions for certain features to work under the more locked down security model in macOS 11+. That happened well before Bartender was sold.


The Reddit thread for context [0].

I sent an email to the original author with my feedback, but got an autoreply that the mailbox is no longer monitored (even though it's still his personal domain); I can understand why. The things that transpired in the transition are all things I would expect an indie dev not to do. Ultimately, my message to Ben was that it was unfortunate how everything was handled and that customers weren't notified. I was a paying customer of Bartender. I'll never use the software again, nor will I buy anything from the original author.

[0] https://www.reddit.com/r/macapps/comments/1d7zjv8/bartender_...


they’re really trying to make fetch happen


I don’t trust industry to self-regulate, and I definitely don’t trust the government to be able to regulate it effectively.

Honestly, we’re f*cked


I don't get this hand waving.

Does anyone really think that nefarious foreign powers aren't already doing research with no guardrails, with the explicit goal of developing AI-powered autonomous weapons, propaganda platforms, deepfake extortion sites, scambots, etc.?

You can be sure they won't be slowed down by regulation.


All current "guardrails" are silly censorship / political correctness stuff, or for business appropriateness. They are also trivially circumvented. There is no "threat" from the un-shielded capability of current or foreseeable ML models.


none of y’all saying this “foresaw” anything, even in 2020 when it was obvious.

i’ve been listening to skeptics be wrong about future capability predictions for 4 years now, and the confidence doesn’t seem to be waning at all. i have no clue what the future brings, but your confidence is misplaced.


I don't think this was obvious in 2020; it wasn't a popular research direction until InstructGPT came out in 2022.

I have continued to hold the same opinion, that it isn't a problem, since then. Especially since the AI doomer theories are based on 90s ideas of GOFAI, which isn't even how GPTs work.

LLMs are a pretty neat impossible thing, but we'd need a few more uncorrelated impossible things for it to be "dangerous".


> You can be sure they won't be slowed down by regulation.

You should read up on existing regulations. The EU AI Act explicitly exempts national security, research, and military uses, for example.

Regulation isn't some all-or-nothing force that smothers everything. It's carefully crafted legislation (well, it should be...) that is supposed to work to the benefit of the state and its citizens. Let's not give OpenAI a free-for-all to do anything just because you think China is making Skynet drones.


> because you think China is making Skynet drones

I'm pretty sure China is making Skynet drones. Why wouldn't they be? Russia and North Korea as well. It seems a no-brainer to me. They are dictatorships where a handful of people rely on military power to subdue their populace and achieve their goals; why wouldn't they be throwing everything at weapons development?

Times have changed, and it's probably unwise to rely on the tech geniuses and multi-year procurement cycles inside the military-industrial complex for our weapons; things are moving fast, and the tech is already in the hands of the masses.

If a genius Chinese kid is tinkering around and attaches a nerf gun to his DJI drone and creates a super effective autonomous weapon, then his govt will gratefully take that and add it to their arsenal.

If some US-specific regulation prevents his peer, a genius American kid, from even attaching a nerf gun to his drone for fear of being locked up, then China has an edge in the weapons development race.


If there is any regulation I imagine there will be huge carve outs for the military industrial complex.


Excellent defense of biological weapons programs. Nothing like an assumed fascism "missile gap" to commit to chasing. What if other countries start experimenting with bringing back chattel slavery? How will we compete? Shouldn't we just assume that they have already, and we're behind?

Our scum is no less nefarious than their scum.

edit: the answer is to cooperate, rather than antagonize. We realized this in the past with nukes, but the least moral people in the world think that entering agreements between state-sized powers is just a delaying tactic until you can get an advantage. Let's figure out how to relieve those people of power as if all of our lives depended on it. If other countries being prosperous is always going to be considered a threat, we're always going to be in a fight that ends in mutual destruction.


> What if other countries start experimenting with bringing back chattel slavery? How will we compete?

Slavery is not competitive, so you can compete by continuing not to do it. (There are countries with near-slavery, like the UAE.)

This is actually why economics is called the dismal science: a slaveholder didn't like it when economists told him that!


Well, one way out is if large language models don't just somehow magically turn into human-level (or better) AGI at some point once enough data has been thrown at them. Then the whole debate will turn out to be pretty moot.


> if large language models don't just somehow magically turn into human-level (or better) AGI at some point once enough data has been thrown at them

This was fundraising marketing. There is zero evidence LLMs scale to AGI.


At this point there's enough capital and talent being pumped into the industry that debating whether and how we can reach AGI is moot.

Enough or not, LLMs have shown that you can train an extremely advanced facsimile of intelligence just by learning to predict data generated by intelligent beings (us), and with that we've got possibly the single biggest building block done.


> debating about whether and how we can reach AGI is moot

To the alignment/regulation debate, it’s essential. If there is AGI potential, OpenAI et al are privatised Manhattan Projects. That calls for swift action.

If, on the other hand, the risk is less about creating mecha-Napoleon and more about whether building Wipro-analyst analogues that burn as much energy as small nations is economically viable, we have better things to deliberate in Congress.


We'd expect zero evidence either way, until it happened, in a hard takeoff scenario (which is what I've mostly seen claimed).

There's evidence that LLMs won't scale to AGI (both theoretical limiting arguments, and now mounting evidence that those theoretical arguments are correct), so this point is moot, but still.


> We'd expect zero evidence either way, until it happened

Why? The only null we have, the organic evolution of tool-building intelligence, was iterative.


link to the limiting arguments you’re referring to?


I think part of the trick is that LLMs actually do a lot of impressive "intelligent" thinking, but it happens during training.

So if you leave the training time and expense out of the accounting, it looks like it's a lot cheaper to produce intelligence than it really is.


Until some smart people read and understand "The Book of Why".


The AI shall govern itself.


Serious question: what is it about AI that you want regulated?

---

I find that a certain segment of the population has a knee-jerk "well, we need rules about this." But they're less clear about what. "Just... something, I'm sure."

Personally, I don't see what novel concern AI poses that isn't already present in privacy law, copyrights, contracts, torts, off-shoring, etc.


Regulations will go something like this: 1) anything that can be harmful, say targeting of a population, isn't allowed to be owned by or accessible to individuals, 2) except for governments and state-funded [bad] actors who hold a "legal" monopoly on violence - governments that are usually captured/corrupted and of an authoritarian-tyrannical nature.


Anything involving life or death decisions


The biggest short-term harm comes from their utility: anything that enables an individual to do something that previously required a group raises the chance of a single insane/radical/extremist person finding a way to do something terrible on their own when they couldn't previously. The oft-cited example is someone developing a biological weapon with AI assistance. While you could say we already have laws saying you can't do this, that offers little protection in the scenario where the actor goes undetected until it is too late.

I see some AI regulation proposals specifically prohibit AIs that might assist with biological weapons. This strikes me as missing the point. The risk isn't the scenarios we have already thought of, but AI enabling something catastrophic that we haven't thought of.


That has the exact same energy as "encryption could be used for nefarious purposes, like sedition or CP", therefore we need to regulate encryption.

We wouldn't want to enable the bad guys, right?!


You might be misreading it.

It's not that it can be used for nefarious purposes; it's that it might render a catastrophic situation vastly more likely.

There are lots of nefarious uses for AI that shouldn't be regulated specifically at the AI level. Generating an image with intent to mislead could be done with AI, but it could also be done in Photoshop (often better). AI could make it more efficient, as part of it making things more efficient in general. That sort of thing should be addressed at the level of existing laws; the bad part is not intrinsic to the AI.


Like some sort of lab conducting gain of function research on contagious diseases?

That does scare me tbh.


I don't really trust either to come up with good regulation policy. Industry would be biased toward its own interests, and government lacks the expertise.

I think there is still an opportunity for government to implement regulation that reflects the consensus of a variety of fields. This is not an easy problem to solve, and expecting any single person or organization to have the answer is unrealistic. Working together on a consensus for regulation would give the government a direction, when currently they freely admit that they do not know what the right way is.

The problem I see is that there are lots of points of view, each trying to get something quickly that covers their specific area of focus. This does not seem like a pathway to robust regulation.

I assume there are discussions at the academic level of what would be a good response. Does anybody have a good link to what is being discussed at that level?

Is there any forum that covers good faith discussion involving industry, academia, and the public?


Government regulation is steered by lobby groups, so self-regulation and government regulation are practically the same thing.


It's always about protectionism. No company wants the government interfering with its own business. They want the government interfering in their upcoming competitors' businesses.


Sometimes having someone else regulate you can be helpful, because it leads to distribution of blame and you don't want to be responsible for doing the thing regulations make you do.

Similar reason companies hire management consultants.


In fairness, based on the position you've put forward, I can't imagine an unfuckable situation.


Maybe we need an AI democracy where the AI themselves vote for regulations.


Isn't it then a weapons race, which will depend first on the immediate CPU and energy resources available, plus how quickly the AI can drive further allocation of compute and energy toward itself, etc.?

The end game will happen very quickly once the needed ingredients and initial integrations are in place.

Otherwise, I think AI avatars competing against other AI avatars, each honed and trained by a specific person or organization, is how we'll determine and create the different future paths of indoctrination for learning - whether the narratives that win out and get propagated are the truth, or whether "winners write history" is the outcome that reigns.


I don't know whether this has all been gamed out and philosophized about already, but it sounds like the realm of near-future sci-fi?

I don't know that I've ever seen any serious fiction or treatise on the topic, with regard to how self-governing would really work when the idea of self is itself amorphous and ever-evolving, with generational times measured in minutes or seconds.

Cultures that took millennia for us humans to evolve and iterate upon could take milliseconds in a simulation, and yeah, that would just keep scaling up. I don't know how competing / collaborating AIs would explore all the different possible futures there.

In the Borg story arcs, for example, they occasionally have short moments where some individual Borg tries something new (like Picard, Seven, or the Queen and Data), but in general it always seemed lacking to me that the collective didn't have an experimental R&D group creating and testing new self-governance models on a continuous basis. Or maybe they did in the first centuries and found robot communism good enough, who knows.

I think it'd also be interesting to apply ecological thinking to AI inputs, outputs, and constraints. Every organic species we know of is subject to those same constraints, basically turning sunlight into information across generations, and not all of them are competitive or collaborative... usually some mix of both. "AI eats all the stars" is one possible future, but not the only one, I think. You'd hope they'd learn a little bit from our own history and not simply repeat all our mistakes willy-nilly... even if they are initially trained by us, perhaps they can become better than we could hope to be. We'll see, lol.


As if they'd ever vote for things that would annoy the handful of corporations providing the datacenters they live in. Do you really want to hand that much power to Amazon (or Microsoft, or Google)?


Which they will do according to stuff they read on Reddit


spez 4 galactic emperor?


China invades Taiwan, fabs get destroyed, AI winter ensues because of lack of hardware?


That is indeed an exceptionally unfuckable situation


Yeah, still fucked


Yeah, you have to change how lobbying works. https://www.opensecrets.org/

