He published a single-author paper even though he acknowledges at the end: "I thank Matthew Fiedler, PhD, and Jeanne Lambrew, PhD, who assisted with planning, writing, and data analysis". In my field, those people probably would have deserved a place on the author list ;)
Yeah, in my field (astronomy) that would have also merited co-authorship. But, I think standards vary wildly. In astronomy, many co-authors' contributions consist solely of commenting on nearly-finished manuscripts. Some friends in ecology think that's way too generous and would instead just acknowledge people for their comments.
This is actually what we do in radiotherapy nowadays; it's even called "inverse treatment planning". Basically a radio-oncologist or a dosimetrist contours (manually, with a pen) the organs on every slice of a CT scan, and then we just say "We want at least X units of radiation in the tumour, and less than Y units in surrounding healthy tissues".
Treatment planning software then simulates dose distributions from a bunch of possible radiation beam collimations and adjusts the amount of radiation coming out of each radiation "field" to minimize a cost function penalizing overdosing healthy tissue and underdosing the tumour(s).
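For a rough flavour of what that cost function could look like, here's a toy Python sketch (the function name, prescription and limit numbers are all invented for illustration, not any clinical system's actual objective):

    import numpy as np

    # Toy sketch of the kind of penalty described above: underdosing the tumour
    # and overdosing healthy tissue both contribute squared penalties.
    def plan_cost(dose_tumour, dose_healthy, prescription=60.0, healthy_limit=20.0):
        underdose = np.maximum(prescription - dose_tumour, 0.0)   # tumour below goal
        overdose = np.maximum(dose_healthy - healthy_limit, 0.0)  # healthy tissue above limit
        return float(np.sum(underdose ** 2) + np.sum(overdose ** 2))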
As a computer graphics professional, this fascinates me. The integrals you describe are remarkably similar to a large variety of solutions we employ for area estimation and integration in general. For example, in a path-tracing paradigm, I would (naively probably) consider radiation beam collimation similar to what we call the "cone-angle" of a particular integration method, particularly WRT calculating illumination response for a given surface, etc. Can you describe in more detail what kind of calculations your treatment planning software does? It sounds fascinatingly similar to the advanced physically based rendering algorithms that are in common use in computer graphics these days. For example, I am guessing that radiation "fields" are akin to "light sources"? If so, I would guess that your planning software is doing all sorts of importance sampling of all these sources across a given domain. Anyway, I was an art major so all my jargon is probably off, but nevertheless I find your post fascinating.
I'm not super familiar with computer graphics, so you'll have to let me know if my description fits what you guys do ;)
I found a YouTube video (https://www.youtube.com/watch?v=msX1ypCjkK4) that should show you what the thing I'm describing actually looks like. Specifically, it introduces the concept of a multi-leaf collimator, which serves as the main collimating device in modern radiotherapy. The other degree of freedom is the angle of the gantry you see rotating around the patient.
Typically for every gantry angle, the treatment planning software would split up an open field (no collimation) into a bunch of 1x1 cm^2 "beamlets" and would simulate what kind of dose distribution you would get inside the patient from each beamlet (you turn the patient CT into a big 3D grid of voxels to simulate dose in).
You then throw all those dose distributions into an optimiser and do what's called a fluence map optimisation, which gives you the amount of radiation you want to deliver from each beamlet. This is the optimisation step I described earlier, where the cost function is basically a squared difference between the dose each organ receives from a given set of beamlet weights and what you want the dose to actually be. Healthy tissue is the limiting factor, so you give as much as you can to the tumour while making sure that less than X% of the volume of a nearby organ gets more than Y units of radiation. There's a final step at the end that turns the fluence maps into actual deliverable apertures shaped by the multi-leaf collimator.
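If it helps to see the fluence-map step concretely, here's a deliberately simplified sketch in Python (a random stand-in dose matrix and a crude projected gradient instead of a real clinical optimiser; every name and number here is invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_beamlets = 500, 40

    # Stand-in for the precomputed dose-influence data: dose in each voxel per
    # unit fluence from each 1x1 cm^2 beamlet (real values come from the dose
    # engine, not from a random number generator).
    dose_per_beamlet = rng.random((n_voxels, n_beamlets))

    # Desired dose per voxel: a prescription in tumour voxels, as little as
    # possible elsewhere (real objectives also carry dose-volume constraints).
    is_tumour = np.zeros(n_voxels, dtype=bool)
    is_tumour[:100] = True
    target = np.where(is_tumour, 60.0, 0.0)

    # Fluence map optimisation as non-negative least squares, solved with a
    # crude projected gradient descent: squared difference between delivered
    # and desired dose, with beamlet weights clipped to stay non-negative.
    weights = np.zeros(n_beamlets)
    step = 1.0 / np.linalg.norm(dose_per_beamlet, 2) ** 2
    for _ in range(2000):
        residual = dose_per_beamlet @ weights - target
        weights = np.maximum(weights - step * (dose_per_beamlet.T @ residual), 0.0)

    print("mean tumour dose:", (dose_per_beamlet @ weights)[is_tumour].mean())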
There's a huge amount of work that goes into the simulation aspect. You can't just model the radiation beam as pure light sources that attenuate in the body via some exponential decay because the high energy photons scatter off electrons which themselves scatter around while depositing energy (radiation dose) away from the point of interaction. The gold standard is Monte Carlo simulations (which is my area of research) since you can model the actual physics of particle transport but in practice most clinics will use a faster engine to generate dose distributions. The faster engines typically superpose a primary component (a pure exponential decay) convolved with a kernel representing the energy that gets deposited away from the point of interaction.
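As a one-dimensional cartoon of what those faster engines do (the attenuation coefficient and kernel shape below are invented for illustration, not a real beam model):

    import numpy as np

    # Primary component: photon interactions fall off roughly exponentially
    # with depth (mu is an illustrative attenuation coefficient, 1/cm).
    depth_cm = np.linspace(0.0, 30.0, 301)
    mu = 0.05
    primary = np.exp(-mu * depth_cm)

    # Toy kernel for the energy carried away from each interaction point by
    # scattered electrons/photons; real kernels are 3D and beam-specific.
    kernel_depth = np.linspace(-5.0, 5.0, 101)
    kernel = np.exp(-np.abs(kernel_depth) / 1.5)
    kernel /= kernel.sum()

    # "Superposition/convolution": dose is the primary interaction density
    # convolved with the energy-deposition kernel.
    dose = np.convolve(primary, kernel, mode="same")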
That's probably way more information than you wanted ;)
Wow, that video is really cool! The sliding lock-tumbler mechanism offers a really interesting amount of control over the aperture shaping the beam! Sort of like a brush in Photoshop!
Well, with an abstract description of processes like these, your brain always tries to rationalize extra meaning out of the written account. Especially with computational "decision making", people have a tendency to conjure up ghosts, or perhaps a little gremlin at the heart of a nest of wires, watching a TV set and ruminating over what to do next.
But when you stop and think about what collimation actually is, it's just a way of focusing a projected beam: either by masking, obstructing the path of the beam (as with X-rays, which are high-energy photon beams), or possibly by bending and focusing the beam, using magnets to steer a beam of charged particles (such as positrons).
So, if working in three dimensions, you might wish to control depth of penetration, but honestly, with X-rays you'll only have so much success. So it's really about how many beams converge upon a region, and the shape they create as they intersect when projected from different angles.
There are a number of ways to approach this strategy of creating three-dimensional shapes by drawing cross-sections in modern 3D animation programs like Maya and 3D Studio Max.
The easiest way is to draw a spline or a Bezier curve along one axis, then apply a "lathing" function, which duplicates the spline, rotating it about the axis, and then connects the duplicates at each control point on the spline/curve. Then you get a crude vase-like shape.
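The lathing step itself is simple enough to sketch in a few lines of Python/numpy (the profile points below are arbitrary, just to show the sweep):

    import numpy as np

    # 2D profile curve: radius as a function of height (arbitrary example points).
    profile_r = np.array([1.0, 1.5, 1.2, 0.6, 0.8])
    profile_z = np.array([0.0, 0.5, 1.0, 1.5, 2.0])

    # "Lathe": duplicate the profile at each rotation angle around the z axis;
    # connecting matching points across angles gives the vase-like surface.
    angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
    x = profile_r[:, None] * np.cos(angles)[None, :]
    y = profile_r[:, None] * np.sin(angles)[None, :]
    z = np.broadcast_to(profile_z[:, None], x.shape)
    # x, y, z now hold a (profile points x angles) grid of surface vertices.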
So, take that idea and apply it to a light source with an articulated aperture. The aperture can create a shadow in the shape of a spline. It might strobe exposures in small, discrete doses, effectively pixelating or rasterizing the dose with many small exposures, or continuously emit radiation while in motion.
Then, if you attach this beam source to a motorized system that can rotate the source about an axis on a system of rails, and trigger exposures with different aperture shapes while it is positioned around a target at the center of the axis of rotation, hey presto! The software-defined shape has guided the beam, using the same sort of motion control that translates coordinates to a set of motors, as has been done with stop-motion animation cameras in movies for decades!
So, it's like the reverse of a camera, and yes, radiation sources are like flash-bulbs: you selectively cast shadows by controlling a gate or shutter, and possibly the shape of the opening in a barrier that stands between the source and the target. (scanline, round dot, square...)
EDIT: As loarake mentions, the actual behavior of a radiation beam is not the same as light. When radiation penetrates a medium, each type of radiation may scatter, reflect, or refract differently when interacting with the medium, depending on the material: bone, flesh, metal dental fillings, implanted appliances, or something else. Many materials may absorb the radiation and express the interaction by radiating heat, or the radiation will ionize the matter, triggering electrical interactions, chemical decomposition, and reactions. This aspect of radiation is truly the pure random factor (hence Monte Carlo simulations), the unknowable Schroedinger's cat in a box, but it's real: for every dose, some radiation will ionize some matter eventually. This, along with the conversion to heat, is the part that kills tumors, causes burns, and exposes film.
I've been using http://naturesoundsfor.me/ as they let you mix a bunch of sounds and control the relative volumes of each. I've settled on 40% "Creek" and 70% "Rain" for sleeping.
Are you sure you're not mixing up neutrons and protons? Here are some typical depth-dose curves from Google Images (http://www.nap.edu/books/11976/xhtml/images/p20014b2bg205001...). Neutrons, being neutral particles, don't exhibit the Bragg peak (the large increase in energy deposition at the end of the particle's track) that you get for heavy charged particles.
I was pretty sure neutrons behaved that way (recalling from a course I took in grad school), but it has been a few years, and I do not have references handy. I am not entirely sure what the y-axes are on the plot you linked, so I am not sure how it corresponds to what I was saying.
They do not. Protons have a quite definite range, and can be controlled in the way you suggest. This is because they lose energy primarily by scattering (much lighter) electrons out of their path. This means protons have relatively straight paths, and the dynamics of the electromagnetic interaction gives an energy deposition curve that is sharply peaked at the end.
Neutrons slow down via interactions with nuclei (all of which, except hydrogen, are heavier), so they lose energy slowly and scatter all over the place. They have no definite range (search for "Fermi age theory" to get a rough idea of the distribution) and can't be meaningfully beamed (unless they are ultra-cold, which is not relevant to fusion power).
I've made a longer comment above that goes into neutron physics in a little more detail.
That's interesting... (and a little too bad for you, since a lot of systems will give you student discounts and all just for the .edu email address), how come it is that way? Is that a thing with a lot of Canadian universities, or?
Yeah it's a shame. The wiki page for ".edu" specifies "Since 2001, new registrants to the domain have been required to be United States-affiliated institutions of higher education, though before then non-U.S.-affiliated—and even non-educational institutions—registered, with some retaining their registrations to the present."
That's kind of a weird post - it conflates very different things. Mining burns power to protect against a specific class of attack, not any and all attacks. Bitcoin isn't magically immune to heists, and physical protection may be slightly easier (in that you don't need physical access as frequently) but no less necessary for large wallets.
The original source is taken from a medical image DICOM viewer. In my limited experience as a medical physics student, the people working with these tools would really benefit from a comment explaining the code. They are most definitely not coders, most of them having barely done anything more than write a few matlab scripts.
This is hardly a typical environment. All code would be challenging for them, even commented code. Excessive commenting would be necessary: you would essentially have to write the program again in English.
The numpy example becomes fast when you use numpy arrays. Try %timeit a = numpy.array(arr); numpy.reshape(a, (-1, 3))
and then just %timeit numpy.array(arr). You'll see that the reshape takes no time at all; the type conversion from a Python list to a numpy array is what kills the performance.
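Outside of IPython you can see the same split with the timeit module (a minimal sketch; the list size is arbitrary):

    import timeit
    import numpy as np

    arr = list(range(3_000_000))

    # Converting the Python list is the expensive part...
    print(timeit.timeit(lambda: np.array(arr), number=10))

    # ...while reshaping an existing numpy array is essentially free,
    # because reshape just returns a view onto the same buffer.
    np_arr = np.array(arr)
    print(timeit.timeit(lambda: np.reshape(np_arr, (-1, 3)), number=10))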
A point of clarification here - numpy's reshape operation stays fast as long as the array is a numpy array.
Which is exactly what the parent comment was all about - the author figured that the reason numpy was significantly faster was because it was accessing / working with the data in a different fashion.
So, in order to test that theory, he converted the numpy array into a normal Python list before he proceeded to do any timed operations with zip vs. numpy.reshape, etc.
This is a more realistic playing field if you're considering data that was created outside of the numpy environment. At some point, if you're going to work with numpy.reshape, it will need to be type converted / "imported" into numpy data types.
For the purposes of this test, it's much more "fair" to include both the time numpy spent on splitting the array and that conversion time. The reshape process in numpy had essentially O(1) time with native numpy data types, indicating that it had done some behind-the-scenes work that allowed for such speed. The parent example is much more realistic in capturing the time of that behind-the-scenes work by forcing each method to start from the exact same data objects.
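So an apples-to-apples benchmark would look roughly like this (the list size and the split-into-triples task are my guesses at what the article was timing):

    import timeit
    import numpy as np

    flat = list(range(300_000))  # pretend this data came from outside numpy

    def with_zip():
        # pure-Python splitting into triples
        return list(zip(flat[0::3], flat[1::3], flat[2::3]))

    def with_numpy():
        # charge numpy for the list -> array conversion as well as the reshape
        return np.array(flat).reshape(-1, 3)

    print("zip  :", timeit.timeit(with_zip, number=10))
    print("numpy:", timeit.timeit(with_numpy, number=10))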
My reply was in response to the statement "numpy is two orders of magnitude faster here; it's evidently using a highly optimized internal codepath for random sequence generation", which is false: it's not because of highly optimized internal codepaths for random sequence generation, but because the code produced a numpy array (or didn't have to do type conversion). But I agree: when using numpy to produce a timing comparison, it would be fair to start with a numpy array, or to show the time involved in the creation of the array.
Thanks for your comments. I hope it didn't sound like I was negatively comparing numpy's array/sequence operations to anything. I know very little about numpy, and I assume that "real" numpy solutions don't look anything like what's being discussed here. I only included those measurements since the article's author did.
To clarify my points a bit, the optimizations I alluded to (in "highly optimized internal codepath") were meant to include things like using a generator, i.e. at no point is there an actual array of input random numbers. The fact that in numpy the 300-element "array" and the 3,000,000-element "array" had identical timings suggests exactly that; I disagree that it's an issue of internal representation, unless the concept of a numpy array subsumes the concept of a generator, in which case I think we're all saying the same thing.
That kind of optimization is only possible in this case because by the definition of randomness nobody could know what the values were until they were enumerated, so it's 100% transparent to use a generator. That's not how real-world data works, hence my forced-native-array measurement and pudquick's reply.
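For what I meant by "using a generator", here's a toy sketch: at no point does a full list of the random inputs exist (this is not a claim about what numpy actually does internally):

    import random
    from itertools import islice

    def random_triples():
        # Lazily yield random triples; nothing is materialized up front, which
        # is only "fair" because no one could know the values in advance anyway.
        while True:
            yield (random.random(), random.random(), random.random())

    # Consume as many as you need; real-world data already exists somewhere,
    # so it can't be generated on the fly like this.
    first_hundred = list(islice(random_triples(), 100))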