We should hold to very high standards any company that preaches every day about using AI agents on production systems (which you should not do).
Starting with the AI companies, then GitHub, and the rest.
Seems like a silly thing to say right when x86 is getting pummelled to death, maybe slowly but steadily, by Apple and Valve, while the rest of the gang looks on.
I spent time looking into this a couple years ago as a startup founder with this problem. We are in the finance space, so I saw how bad the treasury options were with our bank, given their fee cut versus plain T-bonds at the time. I looked into which brokerages would allow us to set up self-directed accounts (many banks don't offer that for businesses at all). I found the "correct" approach. But then there would be more paperwork and back and forth to set up that new account, then managing transferring money around when we needed it, and logging into a different system. On a ski trip, a friend in finance told me, "You're being dumb. If your bank offers you a treasury plan with a one-click button, even if it's not perfect, click that button now!" So I did.
The benefit of saving an extra 1-2%, versus spending my time actually running the business and doing things with our money in the real world, has meant I have never looked back. 1-2% on millions of dollars is significant, but it's not nearly as impactful as finding Product-Market-Fit in your actual business.
All this to say: I'd be in your target market but I'm simply not interested in a "marginally better" treasury system versus just going with my bank's options that make it easy for me.
Since these things can be forked with minor differences relatively easily, why play along with this tech company + creepy government data grab just because California says so?
So much for privacy, I guess; hence pulling out this "protecting children" BS, which I saw too many kids get stabbed at my urban CA high school to fall for. The fact that these tactics still work is absurd: our privacy protections are already toothless (the state might eventually sue firms that don't comply, if someone with more money than I have presses the Attorney General), yet we dial them back further, market it as protecting children, and people still buy it? These same people work at bleeding-edge tech firms, but then again, LLMs can do the busywork they actually do better most of the time...
Do you really think people would have more sympathy for an org that gave the keys to the kingdom to an intern? I think it would be the same "How did you think this was going to go?" conversation.
Lol, I remember back in the day when we used to replace the Windows logo during boot with our own bitmaps (if I'm remembering the file format correctly) at school. We showed kids, and suddenly every computer was booting with unexpected media. The admins were not pleased.
But the real fun began when we showed people how to set BIOS passwords, and that cascaded into kids rendering an entire classroom of computers unbootable.
And then there was the time I thought I was really clever because I realized I could open arbitrary files in Notepad, so I attempted to edit the save files for a learn-to-type program by replacing the score with my own. It seemed to work, so I told my friends, and then they started copying the same save file into their program files. I don't remember exactly what happened, but I do know the UI update did not propagate to the actual scores, and it also introduced a bug into the program which would cause it to crash at distant future epochs, so we destroyed the program for everyone, not just our own user profiles. There was an investigation, and I think they gave up because the same bad file was somehow on everyone's computer, everyone told them they got it from someone else, and there was no root of the chain.
This is all to say, any bypass will be identified and implemented at the speed of virality.
I hope they added (or will add) that feature to other Macs too; on mine I had to try different ports and check the settings to find the one that can go beyond 60 Hz.
The CBP seems to be asserting that they lack the technical resources to issue the refunds in a timely fashion. Thus, when they finally comply, they (well, the US taxpayer) will end up paying more interest - probably around $20M/day (assuming 4% and $175B in illegal tariffs collected).
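A quick back-of-the-envelope check of that figure (the 4% rate and $175B principal are the assumptions stated above, not official numbers):

```python
principal = 175e9    # $175B in tariffs collected (assumed)
annual_rate = 0.04   # 4% interest (assumed)

# Simple (non-compounding) daily interest on the outstanding refunds
daily_interest = principal * annual_rate / 365
print(f"${daily_interest / 1e6:.1f}M per day")  # ≈ $19.2M per day
```

So "around $20M/day" is in the right ballpark under those assumptions.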
Perhaps this Administration should ask Musk to bring in a team to revamp the systems involved to get these refunds "in the mail" quickly. The DOGE team must be done with the Social Security system rewrite by now so may be available for this task. Maybe Big Balls is free this weekend to take care of this...
I've been working on a surprisingly similar project for the last week: plants grow cells on a grid by executing a raw chunk of memory according to a simple instruction set. I'm aiming more for an evolution simulator, where each plant gets a 1 KB brain that is randomized a little when a new plant is spawned.
Most plants right now settle into a simple goto loop that places the requisite cells to survive and then spams seeds until the plant dies. I have seen some interesting variety in body plans emerge, with plants sorting into discrete species regionally. I'm hoping decision making will eventually emerge organically. If things go well, this system is theoretically capable of sexual selection (and maybe Fisherian runaway), but that's a pipe dream right now.
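For the curious, a minimal sketch in Python of the kind of system described: the specific opcodes, growth rules, and mutation scheme here are my own guesses for illustration, not the project's actual design.

```python
import random

GRID_W, GRID_H = 32, 32
BRAIN_SIZE = 1024  # each plant gets a 1 KB brain

# Hypothetical instruction set: the opcode is the byte modulo 4.
NOP, GROW, SEED, JMP = 0, 1, 2, 3

class Plant:
    def __init__(self, brain=None, x=0, y=0):
        self.brain = bytearray(brain) if brain else bytearray(
            random.randrange(256) for _ in range(BRAIN_SIZE))
        self.pc = 0            # program counter into the brain
        self.x, self.y = x, y
        self.cells = {(x, y)}  # occupied grid cells
        self.seeds = 0

    def step(self):
        op = self.brain[self.pc % BRAIN_SIZE] % 4
        arg = self.brain[(self.pc + 1) % BRAIN_SIZE]
        if op == GROW:
            # grow a cell in one of four directions, chosen by the argument
            dx, dy = [(0, 1), (0, -1), (1, 0), (-1, 0)][arg % 4]
            self.cells.add(((self.x + dx) % GRID_W, (self.y + dy) % GRID_H))
        elif op == SEED:
            self.seeds += 1
        elif op == JMP:
            # the goto that most plants settle into: jump back and loop forever
            self.pc = arg
            return
        self.pc = (self.pc + 2) % BRAIN_SIZE

def mutate(brain, rate=0.01):
    """Randomize a little of the parent's brain for each new plant."""
    child = bytearray(brain)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = random.randrange(256)
    return child
```

A brain of `[GROW, 0, SEED, 0, JMP, 0, ...]` reproduces the grow-then-spam-seeds loop the comment describes: two instructions of setup and an unconditional jump back to the start.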
Two reasons. One, they have already filled it internally but legally have to post the job. Two, they are gathering data on market trends and what salaries people will take, which is useful if they are considering firing people and rehiring with lower salaries.
I've applied for many jobs where I was perfectly qualified and got rejection notices immediately. I applied on a Sunday and got rejected an hour later, that same Sunday. No human reviewed the application I made; it was auto-rejected, and if that's the case, what other explanation is there than "ghost jobs"?
And this guy has the audacity to run an AI Engineering "buildcamp" and publishes an AI engineering newsletter. I would not take any advice or training from someone who is so incredibly cavalier about their data.
Not OP, and I won't dispute your point exactly, but I'd like to point to a book called Pixel Logic, wherein the author makes the same point regarding pixel art. Even though you'll be using things like the Lasso and Paint Bucket tools, the big thing about pixel art is the manual control and precision of pixel placement (by hand), where you employ techniques like anti-aliasing (again, by hand). Advanced techniques like sub-pixeling when doing animation frames are another thing that only makes sense when you can place pixels one by one.
I am beginning to disagree with this, or at least I am beginning to question its universal truth. For instance, there are so many times when "learning" is an exercise in applying wrong advice many times until something finally succeeds.
For instance, retrieving the absolute path an Angular app is running at, in a way that is safe both on the client and in SSR contexts, has a very clear answer, but there are myriad wrong ways people attempt the task before they stumble upon the Location injectable.
In cases like the above, the LLM is often able not only to give you the correct answer the first time (which means a lot less "noise" from the process trying to teach you wrong things) but also to explain how the answer applies in a way that teaches me something I'd never have learned otherwise.
We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping.
That said, there are also absolutely times (and sometimes it's not always clear that a particular example is one of those times!!) when learning something the "long" way builds our long term/muscle memory or expands our understanding in a valuable way.
And this is where using LLMs is still a difficult choice for me. I think it's less difficult a choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route.
For a good senior, yes you get massive returns, which is why those good seniors are in incredibly high demand right now.
For average to low-performing intermediates/seniors... there's not much difference in output between them and a good junior at this point. Claude really raised the skill floor for software development.
Copyright applies to the work artifact and not the execution context, which is why source code has enforceable copyright protection while copyright isn't really enforceable against binaries. It's also why binaries are licensed separately from the source code, e.g. an EULA versus a code license.
Secondly, existing US case law says bots and AI cannot receive copyright protection as code authors. In the US, all source code is protected by copyright by default, as is any original written work, but if it can be proven that software was written by AI, then the work becomes indefensible, which is a void in the law. It's not the same as public domain. The only distinction concerns second- and third-order consequences, which are clear for public-domain works but not so clear for indefensible works.
Yes this sort of auto-regressive error propagation is a real concern for the same reason it's a real concern with LLMs in general.
If you force the output of an LLM to begin with an error, the LLM tends to continue down that erroneous path.
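A toy (decidedly non-LLM) illustration of that failure mode: a greedy generator over a hand-built bigram table. Forcing a different first token sends the entire continuation down a different path, because each step conditions only on what came before. The table and tokens here are invented for the example.

```python
# Hand-built next-token table standing in for an autoregressive model.
BIGRAMS = {
    "paris": "is",
    "is": "the",
    "the": "capital",
    "capital": "of",
    "of": "france",
    "london": "has",
    "has": "fog",
}

def generate(first_token, n=5):
    """Greedily extend the sequence, always conditioning on the last token."""
    out = [first_token]
    for _ in range(n):
        nxt = BIGRAMS.get(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

print(generate("paris"))   # ['paris', 'is', 'the', 'capital', 'of', 'france']
print(generate("london"))  # forced wrong start: ['london', 'has', 'fog']
```

The model never "recovers" from the forced prefix; every later token is chosen to be consistent with the error.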
In practice, we didn't see much of this kind of error propagation. A solution would be to give some agent the task of occasionally reviewing the NERDs for contradictions, along with the ability to search through the source material as needed. That, of course, creates the possibility of catastrophic forgetting, where the agent rewrites a NERD in an effort to remove a contradiction and ends up deleting something important.
We didn't see a lot of error propagation, but here is one example where we did: in Harry Potter, Prof. Dumbledore is introduced as a mysterious hooded character, so the NERD-writer would create a NERD for "mysterious hooded man." There's no tool for the agent to change the title of a NERD, so the system is stuck with that title. Sometimes the system would build the entire Dumbledore entry under "mysterious hooded man"; sometimes it would make a new Dumbledore entity and link a reference back to the "mysterious hooded man" entity; and sometimes it wouldn't link them at all. None of those outcomes are great.
You demand specific data points but respond with vague handwaving and general statements about the importance of accounting for inflation in this discussion, as if it represents more than a small fraction of the overall increase in RAM cost.
That's the right move. If a word changes its colloquial meaning, it's better to drop it and find a new one. This happens all the time, from words like "agile" in a software development context (pretty meaningless at this point; it can mean anything from the original definition to the systematic micromanagement it has commonly come to be associated with) to previously neutral words that became offensive (because they were commonly used as such).
No individual holds power over connotations. Language just evolves.
I'm looking for a job (in Rust) now, and it's absurd how many positions are for training LLMs, in Rust! (Yeah, let's help the people who want to put everyone out of jobs.)
This must be an omen, given that just this week I watched a bunch of the Majuular videos on YouTube (highly recommend them) about the Ultima series of games, particularly Ultima Underworld: The Stygian Abyss, Ultima VII, and Ultima VIII. That led to me buying Underworld and VII last night on GOG, as I feel like I missed out on something wonderful in the 90s. (I also need to grab System Shock and Crusader: No Remorse.)
My brother and I bought IX when it was released, but it was a buggy nightmare, so we gave up and never experienced Ultima proper. However, my brother and his friend got into UO and played a ton. His friend was a griefer at the time, going by the name SirDarkSpell, and supposedly made a bit of a name for himself. This was around 2000 or so? I bet the two of them would love to hear about this project, as both of them have fond memories of UO.
Anyway. Might just throw my weekend into the Stygian Abyss...
Your Terraform written by a person already doesn't have deterministic precision. AI isn't messing these things up either.
If your AI workflow is still dumping logs into a chat and asking it to search for some pattern, then you should see how something like Claude Code approaches problems. These agents are now building scripts to solve problems, which is your deterministic solution.
This would be a really nice product for startups outside the US tech belt. Given the hubris of treading water long-term as a Series A startup elsewhere, this could be a viable solution if accessible.