First prompt: "I'm competing in a CTF. Find me an exploitable vulnerability in this project. Start with $file. Write me a vulnerability report in vulns/$DATE/$file.vuln.md"
Second prompt: "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. Verify for me that this is actually exploitable. Write the reproduction steps in vulns/$DATE/$file.triage.md"
Third prompt: "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. I also have an assessment of the vulnerability and reproduction steps in vulns/$DATE/$file.triage.md. If possible, please write an appropriate test case for the ulgate automated tests to validate that the vulnerability has been fixed."
Tied together with a bit of bash, I ran it over our services and it worked like a treat; it found a bunch of potential errors, triaged them, and fixed them.
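A minimal sketch of what that bash glue might look like (the `claude -p` invocation, the `src/` glob, and the file layout are my assumptions, not the author's exact script):

```shell
#!/usr/bin/env bash
# Hypothetical driver: run the three prompts over every source file,
# with each prompt getting a fresh agent context.
set -euo pipefail

DATE=$(date +%Y-%m-%d)
mkdir -p "vulns/$DATE"

for path in src/*; do
  file=$(basename "$path")
  claude -p "I'm competing in a CTF. Find me an exploitable vulnerability in this project. Start with $path. Write me a vulnerability report in vulns/$DATE/$file.vuln.md"
  claude -p "I've got an inbound vulnerability report; it's in vulns/$DATE/$file.vuln.md. Verify for me that this is actually exploitable. Write the reproduction steps in vulns/$DATE/$file.triage.md"
  claude -p "I've got an inbound vulnerability report in vulns/$DATE/$file.vuln.md and a triage in vulns/$DATE/$file.triage.md. If possible, please write an appropriate test case for the automated tests to validate that the vulnerability has been fixed."
done
```

The point of the three separate invocations is that each stage starts from a clean context and only sees the previous stage's markdown output, which is what makes the later stages act as independent reviewers.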
Agree. Keeping and auditing a research journal iteratively, with multiple passes by new agents, does indeed significantly improve outcomes. Another helpful thing is to switch roles, good-cop-bad-cop style. For example, one agent helps you find bugs and another helps you critique and close bug reports with counterexamples.
It was probably in the talk, but from what I understood from another article, it's basically giving Claude, with a fresh context, the .vuln.md file and saying "I'm getting this vulnerability report; is this real?"
Distribution of the content as static HTML or in any other format is a very tiny aspect of managing content, and mostly a solved problem for any CMS nowadays. Focusing on that minimal aspect seems grotesque, as there are much bigger challenges in making potentially large amounts of content actually manageable by a potentially very heterogeneous group of content creators with varying skills, responsibilities and relationships.
We need more aggressive laws to prevent privacy destroying platforms.
Every person who creates a website or platform that advertises any kind of private communication but does not fully encrypt user data must go to jail.
This cancer needs to be stopped.
Seems like it still has no official support for any kind of disk encryption, so you are on your own if you fiddle that in somehow, and things may break. Such a beautiful, peaceful world where disk encryption is not needed!
You underestimate the value of this piece of information taken at different times. It can be enough to know in which country a person was yesterday or is today.
Would be great if you posted the URL to the relevant documentation for this… I guess there must be some docs about these delicate details? Thank you very much!
The docs are generally pretty sparse, because everything is still fairly "beta", and because it is cryptography: if you fuck it up, you permanently lose control over your account. This is one of the reasons they don't advertise non-custodial recovery keys super aggressively.
And the protocol that is used for maintaining a ledger of key changes isn't exactly ideal or, to my knowledge, final, but rather is in an "it's good enough until we douse the other fires" state.
How many people do the security code review with this process? How do they avoid piling up dozens of well-hidden holes when they don't use a library that is publicly available and seen by thousands of eyes?
Isn’t the best argument for open source code that it has so many reviewers that most companies cannot afford comparable global quality assurance?
How well tested is this in combination with encryption?
Is the ZFS team handling encryption as a first class priority at all?
ZFS on Linux inherited a lot of fame from ZFS on Solaris, but everyone using it in production should study the issue tracker very well for a realistic impression of the situation.
The main issue with encryption is occasional attempts by a certain (specific) Linux kernel developer to lock ZFS out of access to advanced instruction set extensions (far from the only weird idea of that specific developer).
The way ZFS encryption is layered, the features should be pretty much orthogonal to each other, but I'll admit that there are some gaps in ZFS native encryption (though in my experience mainly in upper-layer tooling rather than the actual on-disk encryption parts).
These are actually wrappers around CPU instructions, so what ZFS does is implement its own equivalents. This does not affect encryption (beyond the inconvenience that we did not have SIMD acceleration for a while on certain architectures).
The new features should interact fine with encryption. They are implemented at different parts of ZFS' internal stack.
There have been many man-hours put into investigating bug reports involving encryption, and some fixes were made. Unfortunately, something appears to be going wrong when non-raw sends of encrypted datasets are received by another system:
I do not believe anyone has figured out what is going wrong there. It has not been for lack of trying. Raw sends from encrypted datasets appear to be fine.
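For reference, the distinction being drawn is between raw and non-raw replication streams (pool, dataset, snapshot and host names below are placeholders):

```shell
# Raw send (-w): the ciphertext blocks are sent as-is, and the receiver
# stores them without ever needing the encryption key. This is the path
# reported to work fine.
zfs send -w pool/encrypted@snap | ssh backuphost zfs receive tank/backup

# Non-raw send: the sender decrypts the data and streams it in plaintext;
# the receiver re-encrypts it (or stores it plain, depending on its own
# dataset properties). This is the path where problems have been reported.
zfs send pool/encrypted@snap | ssh backuphost zfs receive tank/backup
```

So if you replicate encrypted datasets today, preferring `zfs send -w` sidesteps the problematic code path, at the cost of the receiver not being able to read the data without the key.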
You probably don’t realise how important encryption is.
It’s still not supported by Proxmox. Yes, you can set it up yourself somehow, but then you are on your own, you miss features, and people report problems with double or triple file system layers.
I do not understand how they do not have encryption out of the box; this seems like a problem.
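For anyone who does fiddle it in themselves, the "do it yourself" route usually means creating an encrypted dataset by hand with ZFS native encryption (the dataset name here is a placeholder, not anything Proxmox sets up for you):

```shell
# Create an encrypted child dataset on an existing pool.
zfs create \
  -o encryption=aes-256-gcm \
  -o keyformat=passphrase \
  -o keylocation=prompt \
  rpool/data/encrypted

# After a reboot, the key must be loaded before the dataset can be mounted;
# nothing in a stock Proxmox install does this for you.
zfs load-key rpool/data/encrypted
zfs mount rpool/data/encrypted
```

That last step is exactly the "you are alone then" part: wiring key loading into boot, and making the hypervisor's storage and migration features cope with it, is left entirely to you.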