> Below are just some of the many inaccuracies in the story and then the truth.
> The Substack inaccurately said Delve relies on “Indian certification mills operating through front companies” and cannot pass legitimate audits. This too is not accurate.
At least it's not GPT but my goodness - you can definitely sense the panic. I think Karun is a little worried.
On Firefox I use Unhook for YouTube. It solves the Shorts issue, but I'm sure a lot of people would be less okay with what I prefer YouTube to be: a search bar with nothing else.
I'm amazed no one has used the term "Regulatory Drawbridge". It's a classic pattern in a number of industries: the big players push for more and more regulation. It costs them money and time, but it creates a massive barrier for new entrants who don't have the cash flow and manpower to work through the regulatory process.
Medicine is the classic example, but it's happening in the tech industry too. The FAANGs of the world took advantage of an unregulated landscape, but now that they're in the castle they're pulling up the drawbridge behind them.
(Sidenote: this is why regulation like the EU's Digital Markets Act should be great, since it only imposes costs on larger businesses. In practice, we're not yet seeing the changes it should create.)
The LLM didn't one-shot the mRNA treatment; it merely suggested the idea. Most of the steps in the process were done with specialized tools. And no novel treatments were invented wholesale; it's more a matter of applying a documented process with existing open-source tools, one that's just too personalized and expensive for any vet to offer.
Why do you find it plausible that a man who complains about having painstakingly hand-typed a 100-page document over three months also claims he can use an LLM in a way pretty much no one has before him?
It's several orders of magnitude easier to get an LLM to fill out some kind of red tape than to use it in the way he claims to have used it.
He is not using an LLM in some new and exciting way. The process of making a personalized mRNA vaccine looks something like this:
1. Collect and sequence patient's normal and tumor genomes
2. Predict immunogenic neoantigens from genome
3. Generate optimized mRNA sequence from neoantigens
4. Create vaccine from sequence
modulo some variations. I wrote this off the top of my head because I understand this technology.
Steps 1 and 4 are done by contracted labs. Steps 2 and 3 are doable through open-source computational tools and a little engineering. What does ChatGPT do here? ChatGPT explains the process, finds labs that will do 1 and 4 for pay, finds published algorithms and data for steps 2 and 3. It's barely more complicated than what ChatGPT would do to help a student with their homework.
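At its most stripped-down, step 3 is just reverse translation with a codon table. A toy sketch of that core idea (the codon preferences are the standard most-frequent human codons, but the example peptide is made up, and real vaccine-design tools also optimize GC content, secondary structure, UTRs, and so on):

```python
# Toy sketch of step 3: reverse-translate a neoantigen peptide into an
# mRNA coding sequence by picking the most common human codon for each
# amino acid. Real pipelines do far more; this only shows the core idea.

# Most-frequent human codon per amino acid (subset covering the demo peptide).
PREFERRED_CODON = {
    "A": "GCC", "L": "CUG", "S": "AGC", "V": "GUG",
    "K": "AAG", "E": "GAG", "G": "GGC", "T": "ACC",
}

def peptide_to_mrna(peptide: str) -> str:
    """Naively codon-optimized mRNA coding region, with start/stop codons."""
    body = "".join(PREFERRED_CODON[aa] for aa in peptide)
    return "AUG" + body + "UGA"  # start codon + coding region + stop codon

if __name__ == "__main__":
    neoantigen = "SLVKEG"  # hypothetical example epitope
    print(peptide_to_mrna(neoantigen))
```

The point being: the genuinely hard parts (sequencing, immunogenicity prediction, synthesis) live in the contracted labs and published tools, not in anything an LLM conjures up.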
Legal documents, on the other hand? Have you ever tried to get an LLM to do your taxes? It's not easy.
> Legal documents, on the other hand? Have you ever tried to get an LLM to do your taxes? It's not easy.
Taxes are numerical, which is where LLMs fuck up.
Legal documents are structured text, which is where LLMs shine. Should you blindly trust the outcome? Fuck no, but a good first pass is trivially achievable if you set the right parameters and make sure it's relevant to the right jurisdiction.
I think the biggest thing in question when the Patterson film is discussed is who actually made the (supposedly) Hollywood-quality ape suit.
Sounds like this doc might have a real bombshell, though:
> “Capturing Bigfoot,” premiering this week at the South By Southwest film festival, builds to a big reveal: freshly surfaced film that appears to show a woodsy dress rehearsal for one of the world’s most enduring hoaxes. In the new footage—from a Kodak reel dating to 1966—Patterson’s camera tracks a man in costume, his brother-in-law, moving in a similar fashion to the figure in the 1967 shoot
This would be a pretty nice conclusion, to be honest.
In all my years of emulation, I've never come across a malicious ROM for a major console.
Dolphin runs its own VM. Obviously anything is possible, but developing some kind of breakout-ROM which would infect the host machine is just way more engineering than I could imagine ever being worth it. The vector is just too complex, and the target (nerds downloading retro games) just isn't worth the squeeze.
Archive.org actually hosts a good chunk of the major Gamecube ROMs. Good luck!
> One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.
I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM that wrote McKinsey's AI platform.
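Not McKinsey's actual code, but a minimal Python/sqlite sketch of the same bug class and the usual fix. Identifiers (column names) can't be bound as query parameters, so the safe version checks them against a fixed allowlist; the table and field names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (query TEXT, source TEXT)")

def log_search_vulnerable(payload: dict) -> None:
    # Values are parameterised, but the JSON keys become column names via
    # string concatenation: an attacker-controlled key is spliced straight
    # into the SQL text, exactly the bug described above.
    cols = ", ".join(payload.keys())
    marks = ", ".join("?" for _ in payload)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({marks})",
                 list(payload.values()))

ALLOWED_COLUMNS = {"query", "source"}

def log_search_safe(payload: dict) -> None:
    # Identifiers can't be parameterised, so validate them against a
    # fixed allowlist before they ever reach the SQL string.
    unexpected = set(payload) - ALLOWED_COLUMNS
    if unexpected:
        raise ValueError(f"unexpected field(s): {unexpected}")
    cols = ", ".join(payload.keys())
    marks = ", ".join("?" for _ in payload)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({marks})",
                 list(payload.values()))
```

The fix is boring and decades old, which is rather the point.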
The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions.
I guess you could argue that github wasn't vulnerable in this case, but rather the author of the action, but it seems like it at least rhymes with what you're looking for.
I just wonder how much professional-grade code written by LLMs, "reviewed" by devs, and committed makes similar or worse mistakes. A funny consequence of the AI boom, especially in coding, is the eventual rise in demand for security researchers.
In fairness, although "the industry" learns best practices like using SQL prepared statements, not sanitising via blacklists, CSRF protection, etc., there's a constant stream of new programmers who have just never heard of these things. It doesn't help that when these things are realised, often the only way we prevent them in future is by talking about it, which doesn't work for newbies. Nobody goes and fixes SQL APIs so that you can only pass compile-time constant strings as the statement, or whatever. Newbies just have to magically know to do that.