Hacker News | ksml's comments

Thank you! It's open source, and I'd love to hear if you have any suggestions for it. Would also love to see what you're building!


Cool I'll definitely try to set it up in the coming days!

Here's my humble strace visualizer: https://lhoursquentin.github.io/visual-strace/


That's a good point. I'm hoping that this never gets hit, and if that line ever appears in the logs, then things are already broken. However, it's probably better to improve the failure mode where possible :)

[edit] and yes, since we break and don't follow the `next` pointer in the linked list, that also shouldn't cause any problems.

[edit 2] A sibling comment by cesarb pointed out that printk actually does not block, since it's important for it to be usable in critical sections to debug the kernel when it gets into trouble.


Elixir was extremely helpful to me! It didn't always help me understand _why_ code was written the way it was (hence my incorrect use of rcu_read_lock), but it was very helpful to see some examples.


I actually cannot get enough information from doing that. Crucially, I need to be able to recognize whether two file descriptors point to the same open `struct file`. (To be clear, this isn't the same as whether they're pointing to the same file path. I need to know when the two file descriptors are sharing the same cursor.) There is no way to do this using existing APIs, because there is nothing identifying a `struct file` besides the memory address of the struct. (The "open file IDs" I mention are hashes of the `struct file` address.)

I did spend a lot of time trying to avoid writing a kernel module, and this was the only way I could find to do it :)


You can use the kcmp system call with the KCMP_FILE argument to find out if two fds point to the same `struct file` (of course you must use this as the custom comparison function of a sorting algorithm so you don't end up with quadratic run time).

Linux has a project called CRIU that can save and restore processes to disk without needing additional kernel modules, so pretty much all state is already gettable and settable from user space.


I can't do that across processes, though, can I? (to see whether two processes have file descriptors pointing to the same open file) edit -- it does look like it works cross-process!

I hadn't heard of CRIU. I'll check that out. (edit: CRIU looks super useful. I think the speed/overhead of snapshotting will decide whether I can use it for this project, but I can imagine it being handy in the future regardless. Thanks for the link.)


I recommend checking out podman (or docker) - they have built-in criu support. Otherwise you’ll need some other namespacing mechanism to avoid colliding pids


Every C Playground program runs in a Docker container, so this is already perfectly set up for CRIU. I might give it a try!


Also check out the kcmp man page; it totally allows you to compare fds across pids.


Hi HN, this was my first attempt at writing any sort of kernel code. I would love to hear your thoughts on this experience and on the fixes I applied, especially from anyone with more Linux experience than me :)


Have you looked into using eBPF instead of writing a kernel module?

See http://ebpf.io for some more insights.

At the very least, it'll provide some useful tooling for you to debug problems in kernel-space.


I hadn't considered this! Can eBPF be used to access arbitrary kernel data structures, though?


Yes (to a degree) :)

Check out https://github.com/iovisor/bpftrace and the example tools/ for a taste. You'll likely want to play with kprobes/kretprobes.


This is really interesting; I hadn't realized it was so capable/general. I'll look into this. Thanks for the references!


You should also check out bpftrace, a DSL that lets you write both the kernel and userspace parts in one language, rather than the mixed Python/C approach people mostly took before. It can also output results as text or JSON for parsing.

https://github.com/iovisor/bpftrace

I would also strongly recommend Brendan Greggs book: http://www.brendangregg.com/bpf-performance-tools-book.html
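For a taste of the bpftrace DSL (a sketch; it needs root privileges and the bpftrace tool installed, and vfs_read is just one convenient kprobe target):

```
# count vfs_read() calls per process; Ctrl-C prints the map
bpftrace -e 'kprobe:vfs_read { @reads[comm] = count(); }'
```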


Seems like someone did try to get those functions exported, but the maintainer rejected it, saying that no driver should be poking so deep into fd internals. Makes sense. Your use case is kind of niche.

https://lore.kernel.org/lkml/20180730163256.GC27761@infradea...

By the way, C Playground is really helpful for teaching an OS course!


That is really interesting and good to know -- thanks for that!

I hope C Playground is helpful, and I'm building it with teaching in mind. If you teach anywhere and could find it useful, let me know!


That entire email chain was unpleasant. Are Linux maintainers typically that combative?


> ... and there's a perfectly sane solution to that - it's called git rm.

> The fundamental problem here (besides "who the hell thought that this Fine Piece Of Software belongs anywhere other than in /dev/null?") [...]

Lol, is this a group of people trying to write software, or a group of people having a dissing contest?


It’s the result of someone (appearing to be) trying to play politics to get their way, while their way is not the way the kernel works.

Sounds harsh. Now for comparison, try standing next to an electrician and suggesting alternative ways of doing things that are dangerous and wrong.


> It’s the result

It could be handled differently. The kernel author could simply say "this isn't how the kernel works, so we cannot accept this". There isn't a need to come up with wacky insults, as humorous as they may be.

> Sounds harsh. Now for comparison try standing next to an electrician and suggest alternate ways of doing things that are dangerous and wrong.

To become an electrician you take classes and become certified. How does someone become a kernel developer? I would assume by interacting with other kernel developers, suggesting ideas, getting feedback on those ideas, etc.

An electrician wiring a house is a single-person job. An open source project is a team effort, and there's a reason development takes place out in the open: so that others can contribute. If outside contributions to the project aren't allowed, why not make it a source-available project instead of open source?


You are closing your eyes and then asking someone to show you something.

If you think you know it all you don’t have to ask me.

You’ll probably find out for yourself how it works eventually. Good luck!


I certainly don't know it all. Yeah, hopefully I do continue to learn, and I hope I don't have my eyes closed. Thanks, and likewise.


Here's a hack you could use to get around the functions not being exported: https://github.com/anbox/anbox-modules/blob/master/binder/de...


This will stop working since kallsyms_lookup_name is no longer exported by recent kernels. See [1].

[1]: https://lwn.net/Articles/813350/
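A commonly cited workaround for newer kernels (a sketch only; this is kernel-module code, so it can't run standalone, but the symbol and API names are real) is to let the kprobes API resolve the address of kallsyms_lookup_name for you, since kprobes can still look up symbols by name:

```c
#include <linux/kprobes.h>

/* Plant a kprobe on kallsyms_lookup_name just to learn its address,
 * then remove the probe and keep the function pointer around. */
static unsigned long (*kallsyms_lookup_name_ptr)(const char *name);

static int resolve_kallsyms(void)
{
    struct kprobe kp = { .symbol_name = "kallsyms_lookup_name" };
    int ret = register_kprobe(&kp);
    if (ret < 0)
        return ret;
    kallsyms_lookup_name_ptr = (void *)kp.addr;
    unregister_kprobe(&kp);
    return 0;
}
```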


Oh, that's clever! I might try that. I really don't feel comfortable building my own kernel.


Definitely try to get comfortable with building a kernel eventually. You don't have to run it on your bare metal machine; you can boot test kernels in a VM. The actual test / development process is not especially different between kernel and modules.


That sounds neat and I'd love to hear about it if you ever work on it!


Thanks so much for checking this out!

Like tyoma said in the previous comment, this would be useful if you had a use case where you needed to run lots of things in parallel. Latency is much higher on GPUs (clock speeds are lower and memory access latencies are higher), and system call support will make this even worse, so this probably wouldn't fare well unless you had a use case that could exploit such a high degree of parallelism.


Seems like it would be good for anything you would put in a stream e.g. encryption/decryption, encode/decode, compress/decompress, parsing, filtering, routing, etc.


Hi HN! I'm the intern that worked on this project, and I would be happy to answer any questions here!


How hard would it be to adapt this to use when source is available? Obviously one could just use the binary, but being able to skip the lifting phase could reduce complexity. If you can compile code to LLVM IR (say, with Clang) anyway it'd be nice if the resulting tool could take that as input.


It would be doable but not trivial. We're depending on remill not only to lift binaries, but also to add instrumentation for interposing on and translating memory accesses and function calls. We could use uninstrumented LLVM IR as input, but would need to write an LLVM pass to add in equivalent instrumentation. This shouldn't be terribly hard, but we're currently focused on getting everything working with remill.


Thanks for the really interesting work and article. A couple of quick questions:

- How does the generated PTX code interface with the rest of the system? Is it embedded into some CUDA code?

- Any plans to open source?


Thanks for checking it out!

1) The generated PTX is written to a file and then dynamically loaded into the fuzzer, which is a CUDA program. Specifically, the cuModuleLoad function can be called to load a PTX file, and then cuModuleGetFunction can be used, kind of like dlsym, to get pointers to functions that were loaded from the PTX.

2) We do plan to open source! Currently the code is definitely research grade and needs some more work.
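In driver-API terms, the loading step in (1) looks roughly like this (a hedged sketch, not our actual code; `lifted.ptx` and `target_fn` are placeholder names, and context setup and error checking are omitted):

```c
#include <cuda.h>

CUmodule mod;
CUfunction fn;

cuInit(0);
/* ... create and bind a CUcontext ... */
cuModuleLoad(&mod, "lifted.ptx");           /* JIT-compiles the PTX file */
cuModuleGetFunction(&fn, mod, "target_fn"); /* like dlsym for GPU code */
/* fn can now be launched with cuLaunchKernel */
```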


How many compiler bugs have you hit so far :) ?


Haha... More than I had expected. We've hit two confirmed + one possible bug in LLVM and one bug in the PTX assembler. LLVM's PTX backend isn't fully mature yet, and I think the kind of PTX we're generating is very different from what people traditionally do with CUDA, so we are exposing quite a few edge cases in compilers that haven't been dealt with.


How does one determine when a bug is in the compiler vs. just a dumb code error? Examining compiler output?


That's been one of the biggest challenges of this internship, since I'm so used to assuming that any bugs are problems with my code or some library I'm using. In general, I'll first try to debug as I would normally debug my own code, but if inexplicable behavior keeps happening, I try to strip the code down to as small an example as possible and then look at the compiler output. In some cases (e.g. bugs with LLVM), I can just try a different compiler and see if it works (e.g. nvcc), but ptxas is the only PTX assembler out there, so confirming ptxas bugs requires much more work.

Edit: another indicator is if something works at -O0 but breaks at higher optimization levels. That could be undefined behavior in your code, but it could also suggest a bug in the optimizer. Sometimes it's helpful to fiddle with the code to figure out what causes the compiler to break. For example, with the ptxas bug, our code would work fine unless we had a long chain of function calls (even if the functions in the call chain weren't doing anything interesting). That sounds more like a compiler bug than a logic error on our part. Sometimes, you can even figure out which specific pass of the optimizer is breaking the code; LLVM has a bisect tool (the `-opt-bisect-limit` flag) that allows you to run optimization passes individually until you observe the output breaking.


How's the fidelity of code that's lifted through LLVM IR and then lowered back down to PTX?


The process is a little brittle right now, but when it works, it works. Remill (the binary lifter) sometimes has issues with certain constructs such as switch statements, and we've hit a number of LLVM and ptxas (PTX assembler) bugs as well, since LLVM's PTX backend isn't fully mature and most CUDA kernels are light on function calls and don't look like typical application code. However, when the process works, the PTX doesn't look too terribly different from the original code.


Thanks for this article, remill and libFuzzer were new things to me. Both look very useful.


Thanks for giving it a read -- I'm glad you enjoyed it!


No, it wasn't by design, but we just didn't have time to talk about it. I talked about testing extremely briefly in one lecture, but I think we will spend more time on this next time we teach the class.

Designing this class was hard because there's just so much stuff out there to talk about, and not enough time... Did you see any topic we covered that you think we could do without and talk about testing instead?


You could drop the OOP lecture and leave detailed notes on the course page instead. I'm not arguing for OOP vs. FP or anything like that, just that for either (and other) paradigms a specific testing approach might be better suited for this course. Talking about the complexity vs. safety trade-off might be a better use of the time.


Huh, that's a bummer... Not sure if there was a recording issue or if I made a mistake editing the video. Sorry about that :(

