My current favorite bit of corp speak is “we can’t wait to share it with you”. It seems like it’s in literally every product announcement. At least this one is positive.
Some notes:
- based on GPT-3.5; essentially, the test was “how well can GPT produce ML code” (tune hyperparameters, build off of case studies)
- did not compare to the human case, only to other ML models (unless “human” is considered perfect, in which case GPT got 86%. Although I don’t think a human would perform at 100% on the benchmark)
Alternatively, LLMs will have gotten good enough that prompts can be general enough not to require tuned skills, i.e. the average person can prompt well enough to get great results.
I’ve been unable to replicate this. Could you please show me an example? I ask everyone who makes this claim and have yet to see a concrete example. I just can’t get it to do anything useful for me. I feel like I’m missing the boat!
It won’t write your entire program, and you have to already know enough code to recognize when it gave you garbage, but... I find I can have it tackle small chunks and in some cases even glue them together in a usable way. It can often remind me of strategies I would not have thought to use, good or bad. It can also do some basic debugging, including spotting things my tired eyes often miss. That said, you kind of already need to be able to code or you won’t know the wheat from the chaff. It feels a lot like managing a remote ESL contractor you will never meet.
Could you explain that “two tables” issue again? I’m trying to think of a way that DB constraints (I’m using Postgres, fwiw) couldn’t handle denormalization, but imo there’s a constraint for everything. Force a 1:1 relationship, or add a check constraint so you literally cannot denormalize.
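To illustrate the constraint-based approach above: a minimal sketch of forcing a 1:1 relationship with a UNIQUE foreign key, so a second row for the same parent is rejected at the database level. This uses SQLite via Python's stdlib for a self-contained example; the table names (`users`, `profiles`) are hypothetical, and the equivalent Postgres DDL is essentially the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this; Postgres enforces FKs by default

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
# UNIQUE on the foreign key column means at most one profile per user: a 1:1 relationship
conn.execute("""
    CREATE TABLE profiles (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL UNIQUE REFERENCES users(id),
        bio     TEXT
    )
""")

conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
conn.execute("INSERT INTO profiles (user_id, bio) VALUES (1, 'first profile')")

# Attempting a second profile for the same user violates the UNIQUE constraint
try:
    conn.execute("INSERT INTO profiles (user_id, bio) VALUES (1, 'second profile')")
    result = "duplicate allowed"
except sqlite3.IntegrityError:
    result = "duplicate rejected"
print(result)
```

In Postgres you could additionally attach a `CHECK` constraint (or a trigger, for cross-row conditions) to reject rows that would duplicate data held elsewhere.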