I am hoping at some point we can move towards having more nuanced conversations about AI and the role it should play in our world. Right now it seems like the only two camps sit at the extremes.
Isn't there somewhere between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use, and how to mitigate the effects on society, and to account for energy consumption, etc.
Venture capital bet on AI taking over the world, so any conservative usage of LLMs will not get funding in the near future. The subtle reason is that betting on conservative usage of LLMs sends a signal that devalues their primary investments.
I think this is where I sit. I'm personally of the opinion that AI crawlers, and thus the companies behind them, should respect robots.txt, and that they shouldn't be trying to scale up to the point of adversely affecting the environment (both natural and supply chain).
I do find value in mindfully using models - perhaps I've got a weird thing to troubleshoot on my Linux server and I just don't want to spend the time or mental effort in tracing it back.
Because I do tend to use AI mindfully, I strongly dislike Microsoft's strategy of constantly pushing their AI solution, Copilot. I would rather use it when I feel it's right than be reminded around every corner that it's a thing I can use to save time and increase my efficiency.
This is my take too. When we were imagining AI, what use cases did we have in mind back then? Grand visions of AI taking care of major problems. We should be pushing for responsible AI deployment, starting in low-risk areas and moving up to more serious uses once we know the tools work in less catastrophic situations.
kinda surprised to see this type of take out of someone who participates on this website. I feel like this is the place where I have seen that middle ground surface the most.
Just look at the overall shift in the past year, from semi-handwaving to feeling like it must be embraced, and toward identifying the problems it creates and how to address them. I feel this is all exactly what you are mentioning.
I think AI, as a properly utilized tool, is amazing. I think our lack of restraint in throwing it into everyone's hands, without their understanding the tools they are using, is horrifying. I'd imagine a lot of the community here echoes that same sentiment, but maybe not, and I am just making assumptions.
The overall sentiment on here might be in the middle, but I feel like that is more because half the posts and comments are railing against AI slop and half are about exciting new AI models or tools.
This wouldn’t help symmetric key encryption, which is what this is talking about. The keys you are rotating are asymmetric keys, which are only used to exchange symmetric keys for the actual encryption. In good setups, those symmetric keys are changed every session anyway.
If an attacker can break the symmetric encryption in a reasonable amount of time, they can capture the output and break it later.
In addition, how are you doing the key rotation? You have to have some way of authenticating with the rotation service, and what is to stop them from breaking THAT key, and getting their own new certificate? Or breaking the trusted root authority and giving themselves a key?
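To make the per-session point concrete: an ephemeral key exchange derives a fresh symmetric key for every session, so breaking one session's key gains nothing for the next. Here is a toy Diffie-Hellman sketch of that pattern (the group parameters are deliberately small and NOT secure; real deployments use X25519 or similar):

```python
import hashlib
import secrets

# Toy Diffie-Hellman illustrating "symmetric keys changed every session".
# P and G are illustrative only -- NOT a secure group choice.
P = 2**127 - 1   # a Mersenne prime, fine for a demonstration
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def session_key(my_priv, their_pub):
    # Derive a 128-bit symmetric key from the DH shared secret.
    shared = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(shared).encode()).digest()[:16]

# Two sessions between the same parties yield different symmetric keys,
# because the DH keypairs are ephemeral (fresh each session).
a1, A1 = dh_keypair(); b1, B1 = dh_keypair()   # session 1
a2, A2 = dh_keypair(); b2, B2 = dh_keypair()   # session 2
k1 = session_key(a1, B1)
k2 = session_key(a2, B2)
assert k1 == session_key(b1, A1)   # both sides agree within a session
assert k1 != k2                    # fresh key per session
```

Compromising a long-term signing key in a setup like this lets an attacker impersonate, but not retroactively decrypt recorded sessions, which is the property the parent comment is describing.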
> This wouldn’t help symmetric key encryption, which is what this is talking about.
I agree. The point I am trying to make is that even for asymmetric encryption (which is far more vulnerable), there are still plausible ways to make a quantum break more difficult.
The only thing that could compromise this scheme, aside from breaking the signing keys, would be to have TLS broken to the extent that viewing real-time traffic is possible. Any TLS break delayed by more than 15 minutes would be worthless.
> Any TLS break delayed by more than 15 minutes would be worthless.
It sounds like you’re talking about breaking TLS’s key exchange? Why would this not have the usual issue of being able to decrypt recorded traffic at any time in the future?
Edit: If it’s because the plaintext isn’t useful, as knorker got at in a sibling comment… I sure hope we aren’t still using classical TLS by the time requiring it to be broken in 1 minute instead of 15 is considered a mitigation. Post-quantum TLS already exists and is being deployed…
The problem with key rotation as a defense is it is going to have to happen at EVERY level. You will have to rotate root CA keys at the same rate, or those could just be hacked, and your rotation won’t matter anymore.
I am not an expert, but while you are correct that a fast enough traditional computer (or a parallel enough computer) could brute force a 128 bit key, the amount of improvement required would dwarf what we have already experienced over the last 40 years, and is likely physically impossible without some major fundamental change in how computers work.
Compute has seen an increase somewhere in the ballpark of 5-10 orders of magnitude over the last 40 years in terms of instructions per second. We would need an additional 20-30 orders of magnitude to make a brute-force attack even close to achievable in a reasonable time frame. That isn't happening with how we make computers today.
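The arithmetic behind that claim is easy to sanity-check (the cluster rate below is an assumed round number, not a measurement):

```python
# Brute-forcing a 128-bit key classically: back-of-envelope only.
keyspace = 2**128              # ~3.4e38 candidate keys
cluster_rate = 10**15          # assumed: a large cluster trying 1e15 keys/sec
seconds = keyspace / cluster_rate
years = seconds / (3600 * 24 * 365)
print(f"{years:.1e} years")    # ~1.1e16 years -- about a million times the
                               # age of the universe
```

Even granting another 10 orders of magnitude of throughput on top of that, you are still looking at a million years per key.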
> That isn’t happening with how we make computers today.
Keep in mind here that computers today have features approaching the size of a single atom, switching frequencies where the time to cross a single chip from one end to the other is becoming multiple cycles, and power densities that require us to operate at the physical limits of heat transfer for matter that exists at ambient conditions.
We can squeeze it quite a bit further, sure. But anything like 20-30 orders of magnitude is just laughable even with an infinite supply of unobtanium and fairy dust.
You don't need to keep shrinking features. Brute forcing is highly parallel; to break a key within a certain time frame all you need is a large enough quantity of chips. While it's in the realm of science fiction today, in a few centuries we might have nanorobots that can tile the entire surface of mars with processors. That would get you enough orders of magnitude of additional compute to break a 128 bit key. 256 bit would probably still be out though.
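Under generous assumptions, that scenario does roughly pencil out for 128 bits and not for 256 (every figure here is assumed for illustration):

```python
# Back-of-envelope for the "tile Mars with processors" scenario.
MARS_SURFACE_CM2 = 1.44e14 * 1e4   # ~1.44e18 cm^2 of surface
RATE_PER_CM2 = 1e12                # assumed keys/sec per cm^2 (optimistic nanotech)
total_rate = MARS_SURFACE_CM2 * RATE_PER_CM2   # ~1.4e30 keys/sec

year = 3600 * 24 * 365
years_128 = 2**128 / total_rate / year
years_256 = 2**256 / total_rate / year
print(f"{years_128:.1f} years")    # ~7.5 years for a 128-bit key
print(f"{years_256:.1e} years")    # ~2.5e39 years for 256-bit: still hopeless
```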
Classical brute force is embarrassingly parallel, but Grover's algorithm (the quantum version) isn't. To the extent you parallelize it, you lose the quantum advantage, which means that to speed it up by a factor of N, you need N^2 processors.
The article discusses this in detail, and calculates that "This means we’ll need 140 trillion quantum circuits of 724 logical qubits each operating in parallel for 10 years to break AES-128 with Grover’s."
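As a rough consistency check, you can back-solve the Grover iteration time that quoted figure implies (the square-root relationship is standard Grover; the rest is just the article's numbers rearranged):

```python
import math

# Parallel Grover: M circuits each search a 1/M slice of the keyspace,
# so each still needs ~sqrt(2^128 / M) iterations -- an N-fold speedup
# costs N^2 circuits.
M = 140e12                             # circuits, as quoted from the article
iters_each = math.sqrt(2**128 / M)     # ~1.6e12 iterations per circuit
secs_10_years = 10 * 365 * 24 * 3600
iter_time = secs_10_years / iters_each
print(f"{iter_time * 1e6:.0f} us")     # ~200 us per logical Grover iteration
```

Hundreds of microseconds per error-corrected logical iteration seems at least plausible for fault-tolerant hardware, so the article's figure is internally consistent.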
The power and heat are the issues for that, though. Think about how much energy and heat are used/generated in the chips we have now. If we tiled out those chips to be 20 orders of magnitude larger… where is the heat going to go, and where is the energy coming from?
In my example I had imagined that your nanobots would also create solar panels and radiators for the chips you were tiling the surface of mars with. This is why it needs to be done on the surface instead of underground somewhere.
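The solar budget for that version of the scheme is at least not absurd on its face (all figures assumed or rounded):

```python
# Power available to the Mars chip-tiling scenario, back-of-envelope.
MARS_AREA_M2 = 1.44e14
SOLAR_FLUX_W_M2 = 590     # mean solar irradiance at Mars' distance from the Sun
EFFICIENCY = 0.2          # assumed panel efficiency
power = MARS_AREA_M2 * SOLAR_FLUX_W_M2 * EFFICIENCY
print(f"{power:.1e} W")   # ~1.7e16 W -- thousands of times current
                          # global electricity production
```

Radiating that same ~1.7e16 W back out to space is the harder half of the problem, which is presumably why the radiators are load-bearing in this plan.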
If the whole point of starting your own business is because you want to get out of the ‘rat race’, doesn’t it need to at least pay your bills? Otherwise, you are still in the rat race, just with even less time.
I don't see all businesses as a rat race. Tech is. The business that I've been building skills towards starting is a fun hands-on product, which involves a bit of artistry and a fair amount of labor and materials costs, and brings people joy. Tech can keep paying my bills, unless my side project gets bigger than I foresee. And if I lose money, I made some nice art along the way and had fun learning new skills.
Sure, but then I am confused as to why mention 'the rat race' at all. If your business is a fun hobby, then it is unrelated to whether you are still in the rat race. It would be no different than taking up reading or photography as a hobby. You are still in the rat race; you just also have a hobby.
This continues to be the most tiring response to any criticism of LLM output. It's pretty much guaranteed to show up at this point. I guess with similar enough input tokens, we're guaranteed the same output...
Well, speaking from what I hear and see, employers want you to start using it so that you can be more productive. They've been sold this tool and want you to learn it so that your output will grow.
That's not an unfair take, I think. Again, just IME, they expect too much because the tool is oversold: it does not deliver that well. And we always hear that this new model is so much better; it's tiring.
I think we should all learn to use LLMs, but we should still carefully review what they did. And that is what employers don't quite get: the review still takes a lot of time. So the gains are not 10x but more like... 10%? Maybe 50% for boilerplate. Still, the gains are there, I guess.
> they expect too much because the tool is oversold: it does not deliver that well.
And unfortunately a lot of people will say it’s their reports’ fault for not properly utilizing it (even as they barely use it) because otherwise they would have to admit that they bought a tool without any plan for how to deploy it. So regardless of what is or isn’t a fair take, the results are the same. We are burdened with utilizing a thing whether it is useful or not and the results are generally not what is measured, but rather “are you using it?”
I’m just glad I work at a company that has more reasonable expectations and has been very slowly, thoughtfully rolling it out to individuals at the company and assessing what it is and isn’t good for. They are interested in getting me on board, but as somebody in video production, to be perfectly honest, the use case for Claude is a bit tricky to navigate. We don’t write a lot of scripts, and I already have bespoke software for organizing/maintaining footage that isn’t on a subscription basis. The work I’m doing also doesn’t call for these speed-editing solutions that generate TikTok chaff. All our stuff is hours long and high volume. Any video-centric AI service costs an arm and a leg.
I do think it could be useful for writing some terminal scripts and such, but as far as a daily tool we are still scratching our heads and thinking about it. But it’s nice to be able to do that without somebody saying “why aren’t you using it?” every meeting.
Why are employers so incompetent that they just believe and cargo-cult any business trend that comes along? Shouldn't they do research first before making wide, sweeping changes to work policy?
I have been a programmer for 30 years and have loved every minute of it. I love figuring out how to get my computers to do what I want.
I also want Star Trek, though. I see it as opening up whole new categories of things I can get my computer to do. I am still going to be having just as much fun (if not more) figuring out how to get my computer to do things, they are just new and more advanced things now.
> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
Well, it really depends on what you mean here. Models aren't 100% deterministic; there is random chance involved in sampling. Ask the exact same question twice and you will get two slightly different answers.
If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.
At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
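As a toy sketch of that persistence idea (everything here is made up for illustration): record each random pick and let it bias future picks, so early coin flips snowball into a stable 'preference'.

```python
import random

class TasteModel:
    """Toy model: persisted random choices feed back into future ones."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.history = []              # persisted past choices

    def choose(self, options):
        # Weight each option by 1 + how often it was picked before,
        # so an initially random pick reinforces itself over time.
        weights = [1 + self.history.count(o) for o in options]
        pick = self.rng.choices(options, weights=weights)[0]
        self.history.append(pick)
        return pick

m = TasteModel(seed=42)
palette = ["minimalist", "baroque", "brutalist"]
for _ in range(20):
    m.choose(palette)
print(max(palette, key=m.history.count))   # the style it has "settled into"
```

Whether that counts as taste or just path dependence is exactly the philosophical question, but mechanically the feedback loop is easy to build.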