> barring the few missteps like the crush ad
Ideally I should be one of the people offended by the ad, but I actually liked it and the concept. Of course, YMMV.
I doubt I'll upgrade anytime soon, but at this rate, when I do need to, I feel like I'm just going to get a Max because of how happy I've been with the base M1 Pro (I just upgraded the RAM a step).
That is, unless I just suddenly decide to replace the battery (relatively cheap from an Apple-authorized store compared to a full upgrade).
My guess is the leap from Intel to M1 was significant enough to justify an upgrade, while M1 to M2/M3 wasn't. I'm personally on an M1 and use it heavily, but I don't think I need the M4 jump either.
At least in the image generators I've used, it takes quite a bit of experimentation to find positive and negative prompts that produce both the result you want and decent quality.
Part of it, if I understand correctly, is that the model has no actual concept of grammar and meaning. If you say "a table with 6 legs", it doesn't see that as a full concept; it sees words or segments of words. So "legs" can end up being the legs of people (it might make a table with human or animal legs), or it might just insert random legs into the scene, because it has no real understanding of the description.
So people find ways to coerce/influence it toward the right result.
It's also important to know which models/LoRAs/etc. were used, since not all data sets generate good images for whatever your topic is.
If you're using ChatGPT, you can add a prompt (I've set it in my rules) to "fact check from other websites" when asking it about things I'm not an expert in. It then provides some links, which I open up. I've found that to be a lot more efficient than searching on Google directly, especially for the very specific questions I sometimes have.
Half the websites it surfaces are ones I wouldn't have found on Google, and they're relatively high-signal for what I'm looking for (including very niche blogs from experts in the field).
Two of my research advisers' physics PhD dissertations had Hopfield as their primary reference. I'm also a PhD candidate working on one right now (no longer my primary reference because of all the developments since), but I can trace several of my main references back to his papers.
> an llm invented a novel solution to an important problem in programming.
This will likely not happen, and it's a terrible benchmark for productivity anyway. The advantage of using LLMs as a coding partner is that I have more time to generate "novel solutions" myself, since a lot of the low-level stuff I'd otherwise have to write can be done in less than half the time.
I never trust it to write code I couldn't write myself, or anything I can't test or verify.
It's not about generating more lines of code but about having more time to think about the more mentally demanding stuff. It's like having a junior developer who works really fast, but whose output I still need to check.
As a very personal benchmark: at work I know how much time it takes me to build repetitive things from scratch that can't be automated. By my estimate, the LLM saves me around an hour or two of coding a day. That doesn't replace my time, but those 20–40 hours a month saved are well worth the roughly 20 bucks I pay for it.
tl;dr: it won't make anything novel, but it gives me more time to make things that are.