Do you guys geek out like me about EDC/multitools? Pretty happy with this new mini flagship. Wish Victorinox would iterate/innovate on their stuff the way NextTool does. SAK innovation is "removing a blade"; NextTool is busy inventing better tiny scissors.
Taught myself to use a sewing machine. Then I made my own EDC wallet thing. Basically a zipper pouch that can fit a lot of things while keeping them spread out in my front pocket.
I've had a version of this in my front pocket for like 9 months now: https://share.zight.com/wbu487ew Yes, it's big, but it's the most comfortable form of a big wallet.
It's funny though. I can't help but feel the pull to try and make the hobby a business. But then it probably becomes unfun. My brain just can't not think that way.
Did the same two years ago, it's such an underrated skill. There's a good amount of complexity that goes into making an item without just following a pattern.
I recommend going through the basics (Tock Custom has a nice energy [0]), then picking up a fairly complex pattern for a common piece of clothing. Of course there's also r/myog.
Yeah, it's funny how many times I basically remade the same damn thing, just fine-tuning it a half inch wider or tweaking a seam allowance.
I also can't believe how tedious cutting fabric is. Even for a tiny project like this it was such a pain in the ass, even with nice circular cutters and mats and rulers. I'm now tempted to get a Cricut to make the cutting easier.
Ah, that makes sense. Is this something where you do it once and you're done? Or do you re-finetune based on performance or reviews you get back from the client, i.e. the client doesn't like something so you go back for another cycle of fine-tuning?
Also, is this something that's a pain in the ass to manage multiple versions of the model? One (maybe more in draft mode) for each client?
We do one finetune on the base model to iron out a few of its problems, like plastic skin and its poor understanding of visual terms and reproduction. It also really helps it understand the normal maps we use for perspective templating.
What we are mostly producing are LoRAs, and we put them through a staged training process. The first stage is all about the textures, the second stage focuses on the product itself, and the last stage dials in the exact perspectives we need.
Despite what the research out there says, we actually get better results sticking with LoRAs instead of LoKRs. The pain is generating the dataset, because you have to adapt it for every product; the actual training is basically fire-and-forget.
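For anyone unfamiliar with the mechanics being described: a LoRA leaves the base weights frozen and trains a small low-rank update, which is why you can stack a staged process on top of one base model. Here's a toy sketch of just the merge math in plain Python (my own illustration, not the commenter's pipeline):

```python
# Toy illustration of the LoRA idea: the adapted weight is
# W + (alpha / r) * (B @ A), where A (r x d_in) and B (d_out x r)
# are the small trained matrices and W stays frozen. Each stage of
# a staged pipeline would fit its own A/B pair against frozen W.

def matmul(B, A):
    """Plain-list matrix multiply: (m x r) @ (r x n) -> (m x n)."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(B))]

def merge_lora(W, A, B, alpha=1.0, r=1):
    """Merge a trained low-rank update into the frozen weight W."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))]
            for i in range(len(W))]

# Rank-1 example: a 2x2 identity weight plus an outer-product update.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]      # d_out x r
A = [[3.0, 4.0]]        # r x d_in
print(merge_lora(W, A, B))   # [[4.0, 4.0], [6.0, 9.0]]
```

The practical upshot, per the comment: the per-product cost lives entirely in the dataset, since the same frozen base and the same training loop get reused for every adapter.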
the morning after the launch i just randomly went onto their livestream, and one of the astronauts was asking mission control for help with the gopros and iPhone cameras they apparently also have. he was struggling to get a properly exposed photo with those; he said they were coming out super overexposed, but the D5 was working nominally. mission control said they'd get back to them with ideas on adjusting the gopros and iPhones. it was funny to hear they're trying "new" tech and struggling with it up in space, while that 2005 D5 is still the champ :)
The SLR-like cameras have a bunch of manual modes so you can 'force' them to get something captured, and you can then perhaps 'fix it in post'.
Modern tech allows more people to capture more things more easily, but when the automation fails there aren't really many manual modes to fall back on.
he was struggling to get a properly exposed photo with those. he said they were coming out super overexposed.
This is exactly what newbies experience when trying to photograph the moon from Earth. It's not intuitively obvious, but the light coming off the moon is essentially full-daylight bright. But the moon is small against a very black background, and depending on how the auto-exposure operates, this often leads the camera to conclude that the scene as a whole needs a lot more exposure.
I imagine that photographing the Earth when a significant part of what's in view is experiencing daytime is very much the same thing.
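To put rough numbers on the moon case (my own back-of-envelope using the common "looney 11" rule of thumb, not anything from the thread):

```python
import math

def exposure_value(aperture, shutter_s):
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(aperture ** 2 / shutter_s)

# "Looney 11" rule of thumb: the sunlit moon is roughly correct at
# f/11 with a shutter of 1/ISO, e.g. f/11 at 1/100 s for ISO 100.
moon_ev = exposure_value(11, 1 / 100)     # ~13.6

# An averaging meter dominated by black sky might instead pick
# something like f/4 at 1/8 s (hypothetical numbers for illustration):
metered_ev = exposure_value(4, 1 / 8)     # 7.0 exactly

print(f"moon needs ~EV {moon_ev:.1f}, meter chose EV {metered_ev:.1f}")
print(f"that's ~{moon_ev - metered_ev:.0f} stops of overexposure on the moon")
```

With those illustrative meter numbers the moon would be blown out by six or seven stops, which matches the "super overexposed" description.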
You have to wonder how unserious this can get. Given the unimaginable cost of this mission, they're faffing around like your typical aunt with a Windows Home laptop and an iPhone? Seriously?
I'll echo that "sheesh" in the other comment, too. They're so unserious compared to those super serious Apollo guys[1], right? After all, the Apollo folk never would've smuggled contraband for fun on the Moon[2]!
Similarly, I feel like book publishers are about to become a thriving business again. With any given book now most likely a bot creation, trusting "Random House" sounds like something more of us will start paying attention to, to make sure we're buying a human-made thing.
Are you asking about the 3 body problem version of this? Spoiler alert: The folks doing the eradicating aren't spending much time/energy/anything on eradicating. It's one large missile through space.
I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process as we've done as explorers on this planet.
So really, in 3BP it's inexpensive to eradicate, but insanely expensive to risk getting the intentions of any other civilization you encounter wrong. They might kill you.
(again, this is just my interpretation of what 3BP said)
i am absolutely on the fence here. i do like the cleanup ai can do on my rambling. but yes, i'm tempted to just leave it rambly, misspelled, etc. i find myself swearing more in my writing, just to give it more signal that: yeah, this probably aint an ai talking (writing) like this to you :) and yes, caps, barely.
sorry, i didn't mean to say that screenshotting is the only thing this agent is doing. just that it's one thing my agent does that has this neat property. i also have a host of other things going on when it does need to grab and understand the contents of the page. the screenshot is used in conjunction with the html to navigate and find things. it also handles the things this particular test tries (hidden divs, aria-hidden, etc.), and it tries to tell the model what's trusted and untrusted.
but the big thing i have in here is simply a cross-domain check. if the agent is about to navigate away from the current domain, we alert the user to the change. this is all in a browser context too, so the browser's csrf protections are also being relied on. but it's the cross-domain navigation i'm really worried about and trying to make sure i've super hardened. this is admittedly the trickiest part in a browser. i feel like browsers are going to need a new "non-origin" kind of flow that knows an agent is browsing and does something like blocking and confirming natively.
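a minimal sketch of what a cross-domain guard like that might look like (assumed shape, not the commenter's actual code; a production agent should compare registered domains via the Public Suffix List, e.g. the tldextract package, rather than this naive two-label heuristic):

```python
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Naive registered-domain guess: last two host labels.

    Good enough for a sketch (app.example.com -> example.com) but
    wrong for suffixes like .co.uk; use the Public Suffix List in
    real code.
    """
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def needs_confirmation(current_url: str, next_url: str) -> bool:
    """True when a navigation would leave the current registered domain,
    i.e. the point where the agent should stop and ask the user."""
    return registered_domain(current_url) != registered_domain(next_url)

print(needs_confirmation("https://app.example.com/a", "https://example.com/b"))  # False
print(needs_confirmation("https://example.com/a", "https://evil.com/steal"))     # True
```

the interesting policy question is the one raised above: whether the block-and-confirm step should live in the agent at all, or natively in the browser.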
I'm about to launch an agent I made. Got an A+. One big reason it did so well, right or wrong, is that the agent screenshots sites and uses those to interpret what the hell is going on. That obviously removes the secret injections you can't see visually. But it also has some nice properties: it understands the structure of the page after it's been rendered and messed with by JavaScript. e.g. "Click on an article" makes more sense from the image than traversing the page content looking for random links to click. Of course, it's kinda slow :)
That's a really interesting edge case - screenshot-based agents sidestep the entire attack surface because they never process raw HTML. All 10 attacks here are text/DOM-level. A visual-only agent would need a completely different attack vector (like rendered misleading text or optical tricks). Might be worth exploring as a v2.
Yeah, I was instantly thinking about what kinds of optical tricks you could play on the LLM in this case.
I was looking at some posts not long ago where LLMs were falling for the same kinds of optical illusions that humans do; in this case, the same color appearing to be a different color depending on whether it's contrasted against light or dark surroundings.
If the attacker knows what model you're using, then it's very likely they could craft attacks against it based on information like this. What those attacks look like still needs to be explored. If I could be arsed to do it, I'd start by injecting noise patterns into images that could be interpreted as text.
the author obviously isn't wrong. it's easy to fall into this trap, and it does take willpower to get out of it. and the AI (christ, i'm going to sound like they paid me) can actually be a tool to get there.
i was working for months on an entity resolution system at work. i inherited its basic algo: locality-sensitive hashing. basically breaking a word up into little chunks and comparing the chunk fingerprints to see which strings matched(ish). but it was slow, blew up memory constraints, and was full of false negatives (didn't find matches).
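for context, the chunk-fingerprint idea reads roughly like character shingling plus set similarity. a toy sketch of that flavor of matching (my illustration of the general technique, not the inherited production code, which would bucket fingerprints to avoid all-pairs comparisons):

```python
def shingles(s: str, n: int = 3) -> set:
    """Break a string into its set of lowercase character n-grams
    ("chunks"). The set acts as a crude fingerprint of the string."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: overlap of the two chunk sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def match_ish(x: str, y: str, threshold: float = 0.4) -> bool:
    """Call two strings a candidate match when their chunk sets
    overlap enough. Tuning the threshold trades false positives
    against the false negatives complained about above."""
    return jaccard(shingles(x), shingles(y)) >= threshold

print(match_ish("Acme Corporation", "Acme Corp"))   # True
print(match_ish("Acme Corporation", "Zebra Ltd"))   # False
```

the "locality-sensitive" part that makes real LSH fast is hashing shingle sets into short signatures (e.g. MinHash) and only comparing strings that collide in a bucket, which is exactly where the speed and memory tuning described here gets hard.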
of course i had claude dig through this to help me, and it would find things, and would come up with solutions super fast, ones where i couldn't immediately comprehend from its diff how it got there.
but here's a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode. Not lazy mode:
1. everyone wants one-shot solutions. instead, do the opposite: just focus on fixing one small step at a time, so you have time to grok what the frig just happened.
2. instead of asking claude for code immediately, ask for architectural thoughts. not claude "plans", but choices. "claude, this sql model is slow and grows out of our memory box. what options are on the table to fix this?" and now go back and forth getting the pros and cons of the fixes. don't just ask "make this faster". of course this is the slower way to work with claude, but it will get you to a solution you understand more deeply and avoid the hallucinations where it decides "oh, just add where 1!=1 to your sql and it will be super fast".
3. sign yourself up to explain what you just built. not just to get through a code review: you're going to give a lunch-and-learn teaching others how the algorithms or code you just wrote work. you better believe you'll force yourself to internalize the stuff claude came up with easily. i gave multiple presentations all over our company, and to our acquirers, on how this complicated thing worked. I HAD TO UNDERSTAND. there's no way i could show up and be like "i have no idea why we wrote that algorithm that way".
4. get claude to teach it to you over and over and over again. if you spot a thing you don't really know yet, like what the hell this algorithm is doing, make it show you in agonizingly slow detail how the concept works. didn't sink in? do it again. and again. ask it for the 5-year-old explanation. yes, we have a super smart, overconfident, and naive engineer here, but we also have a teacher we can berate with questions, one who never tires of trying to teach us something, no matter how stupid we can be or sound.
Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode, I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I've actually, personally invented within it.