I have an old-ass GTX 1070 and it runs just fine; it's not the fastest, but it works. A lot of these cases are just PEBCAKs and should just be removed from the internet. It's been made so easy that a literal child could install and use SD.
Well, as long as they have a recent-ish Nvidia card. RIP AMD users.
The 3090 I had was actually somewhat slower than dreamstudio.ai, but the nice thing was that the set of Python tools I got with it was scriptable, so I could do things like create an image, use that as input for another one, and so on, then make them into an animation. There's a bit of a tax in initializing Python and loading the ML model on each iteration, but if I ever get my hands on a 4090 I'm sure I can solve that.
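The chaining part is just a loop where each output seeds the next run; a minimal sketch of the pattern (the `generate` function here is a stand-in for a real img2img call, e.g. via the diffusers library — loading the actual pipeline once up front is also how you'd avoid paying the model-load tax per frame):

```python
def generate(prompt, init_image):
    # Placeholder: a real version would call an img2img pipeline that was
    # loaded once, outside the loop, instead of per iteration.
    return f"frame({init_image}->{prompt})"

def make_animation(prompt, first_image, steps):
    # Each generated image becomes the init image for the next step.
    frames = [first_image]
    for _ in range(steps):
        frames.append(generate(prompt, frames[-1]))
    return frames

frames = make_animation("castle at dusk", "seed.png", 3)
```

The resulting `frames` list can then be stitched into an animation with whatever video tooling you like.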
>Take a look at the video demo. It takes natural text in a box and generates code. Copilot was super-autocomplete, so the interface was writing code in an IDE that it filled out for you.
No it wasn't; you can literally describe, in natural text, what you want in a comment and Copilot will do its best to generate a complete method based on that comment. It only seemed so autocomplete-like because it focused on the "helping the developer" part.
I'm fairly sure Copilot could have shown something similar if they had done a demo where you could make something visual easily, like HTML + JavaScript/TypeScript/whatever scripting language. They're using exactly the same model (Codex) after all.
How did it mess up the "make every fifth line bold" prompt?
Also, to follow up on the original comment: AI demos are nice, but as a student of history I think there are still fundamental challenges with these systems. My skepticism is about how much prompting is really required, and how it can understand higher-level semantics like code refactoring, reproducible examples, large-scale design patterns, etc.
This synthesis of sequential symbolic processes and probabilistic neural generation is really exciting though.
When the amount of human code editing and tweaking for complex programs goes down from hours to seconds, that's when I'll be impressed and scared.
Yeah, definitely. I guess my point was that converting natural language to source code can be even more valuable for people who don't know how to code but want to perform actions more complicated than a simple button press. For example, I often find myself doing regex-based find-and-replace-alls in text files, and even that feels inefficient while also being over the heads of the vast majority of users. I'd imagine there are a lot of people out there spending many hours manually editing documents and spreadsheets.
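For instance, a scripted version of that kind of regex replace-all is just a few lines of Python (the pattern and text here are purely illustrative):

```python
import re

text = "order #17, order #42, order #9"

# Wrap every order number in brackets: "#17" -> "#[17]".
# \d+ captures the digits; \1 re-inserts them in the replacement.
result = re.sub(r"#(\d+)", r"#[\1]", text)
print(result)  # order #[17], order #[42], order #[9]
```

Trivial for a programmer, but exactly the kind of thing most document editors would spend an hour doing by hand — which is where natural-language-to-code could help.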
It is only able to translate small instructions into code. I think it will take a while to get to a situation where you can just give it a list of requirements and it spits out a working program.
Hell, it messed up when they gave it the instruction "make every fifth line bold" in the Word API part of their demo: it bolded the first line of every paragraph (each only 4 lines long) instead of every fifth line.
It didn't mess up the instruction "make every fifth line bold". The blank spaces between each "paragraph" are empty lines, so it counted them too. I think this is perfectly reasonable behavior; it's what I would have done absent further instructions, too.
You can see it in the generated code on the bottom right during that part of the demo. It loops over the lines and bolds them when index % 5 == 0.
Edit: I guess with the 1-based indexing of natural language, the code actually bolds lines 1, 6, etc. So arguably it should have done index % 5 == 4 instead, to bold lines 5, 10, etc. But funnily enough, if it had done that, it would have bolded all the empty lines, so it would have seemed like it didn't do anything.
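You can check the off-by-one with a quick sketch (the document layout here is made up: 4-line paragraphs separated by empty strings standing in for the blank lines):

```python
# Two 4-line paragraphs plus the start of a third, with empty lines between.
lines = ["a1", "a2", "a3", "a4", "", "b1", "b2", "b3", "b4", "", "c1"]

# What the demo's code did: 0-based index % 5 == 0.
demo = [i for i, _ in enumerate(lines) if i % 5 == 0]

# The arguably intended rule: bold lines 5, 10, ... (1-based), i.e. % 5 == 4.
intended = [i for i, _ in enumerate(lines) if i % 5 == 4]

print(demo)      # [0, 5, 10] -> 1-based lines 1, 6, 11: first line of each paragraph
print(intended)  # [4, 9]     -> 1-based lines 5, 10: exactly the empty lines here
```

So with this layout the demo's rule hits the first line of every paragraph, and the "correct" rule would have hit nothing but blanks.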
I found the whole UI/sandbox they created the most interesting part. Now don't get me wrong, the tech is certainly great and all, but I really didn't have the feeling I saw or learned more than I already knew from what was shown with GitHub Copilot, although I was kinda impressed, if it really is as simple as they stated, at how it is able to adapt to new APIs.
It's a shame they limited the demo to relatively simple instructions.
>I've also seen a couple demonstrations of people generating simple apps with just a description.
And yet none of those people have released a demo, even though some said they would, multiple times. I'm still quite sceptical of those demonstrations until I get to try it myself.
While the human is doing most of the work, this looks more like teaching another human than coding in a formally specified, machine-parseable programming language.
On one hand, this could lower the barrier to entry for programming. On the other hand, it seems to leverage (and train) the same skills that are needed to express ideas clearly to humans, which is arguably much more broadly applicable than learning a traditional programming language.
That's actually kinda cute, and oddly off-putting at the same time.
I really am curious now about what happens on the back end, in the model itself, and whether it is actually learning. I really should follow Gwern and his circle; it's a lot better than the blind and baseless hype/criticism floating around the internet.
They commercialize the API. The model is so large you can't run it on a single GPU, AFAIK. OpenAI developed infrastructure to run the model efficiently, so many companies would rather use the API than deploy the model on their own hardware.
They will probably release the baseline model, not a model optimized for deployment. There are many possible optimizations, such as precision reduction, pruning, and distillation, and they don't have to share those.