That's "relatively" easy to do. Get in a spaceship that can quickly reach a fraction of c, travel for a short time, and then return to the starting point. You will have traveled to the future from the point of view of someone at the starting point.
This is harder to do in practice than it sounds, and it touches on a common misunderstanding of the twin paradox, which is resolved by incorporating acceleration into the model: only the traveling twin turns around, so the two frames are not symmetric.
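The arithmetic behind the "travel to the future" claim is just the Lorentz factor. A minimal sketch (special relativity only, with the acceleration and turnaround phases idealized away; the speed and times are made-up examples):

```python
import math

def home_elapsed_time(proper_time_years: float, v_fraction_of_c: float) -> float:
    """Years that pass for the stay-at-home observer while the traveler's
    on-board clock records proper_time_years, cruising at the given speed.
    Acceleration phases are ignored for simplicity."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)  # Lorentz factor
    return proper_time_years * gamma

# A round trip that feels like 2 years on board, cruising at 0.9c:
print(f"{home_elapsed_time(2.0, 0.9):.2f} years pass at home")  # ~4.59 years
```

So the traveler returns having aged 2 years while roughly 4.6 years passed at the starting point.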
> In India, for example, they use the lines between the segments of the fingers to count.
> This means each digit can represent four numbers and the whole hand can represent 20.
The first part is true, but not the second. You use the thumb as a pointer to track the lines on the other four fingers, so you count up to 16 on each hand. Besides, the thumb has one line segment fewer than the other fingers.
I think this is open to interpretation. I'm from India but actually count 3 per finger (i.e. the 3 spaces on your finger between the lines), counting up to a max of 12 per hand.
> This gets used to explain entanglement but it really has absolutely nothing to do with it. This is nothing that the ancient Greeks wouldn't have known.
To be fair, this usually crops up in entanglement discussions to demonstrate why entanglement can't be used for FTL communication, not to actually explain what entanglement is.
Leaving the serious engineering difficulties involved in a solar gravitational lensing system aside, such a system doesn't necessarily help with picking up random omnidirectional broadcasts from an ETI.
Say we had an SGL telescope positioned to image an Earth-sized planet we know is in orbit around Alpha Centauri A. For ease of math we'll say this Earth-sized planet orbits at a distance such that the stellar flux it receives equals Earth's solar constant. That works out to a total power received from its star of 1.74x10^17 watts, and with the Earth's mean albedo of 0.3, about 5.22x10^16 watts reflected off into space.
We can use our SGL to image the planet around Alpha Centauri because it's reflecting 52.2 petawatts of starlight out into space. In other words, it takes the power of a star reflecting off a disc with a cross-sectional area of 1.26x10^8 square kilometers for a proposed SGL system to detect and image a planet.
The amount of power an omnidirectional antenna can possibly emit is somewhat less than 52 petawatts. Broadcast antennas output at most a few megawatts EIRP because there's no utility in blasting out hundreds of megawatts for terrestrial transmission. Such broadcasts just aren't going to be detectable even with gravitational lensing from our Centauran neighbors.
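The numbers above are easy to sanity-check. A quick sketch, assuming the standard solar constant of ~1361 W/m^2 at the planet, Earth's mean radius of 6371 km, and a generous 1 MW EIRP for the example broadcast:

```python
import math

STELLAR_FLUX = 1361.0      # W/m^2 at the planet (Earth's solar constant, assumed)
PLANET_RADIUS_M = 6.371e6  # Earth's mean radius in meters
ALBEDO = 0.3               # Earth's mean albedo

# The planet intercepts starlight over its cross-sectional disc, not its surface.
cross_section_m2 = math.pi * PLANET_RADIUS_M ** 2
intercepted_w = STELLAR_FLUX * cross_section_m2   # total power received
reflected_w = ALBEDO * intercepted_w              # power reflected into space

broadcast_w = 1e6  # a strong terrestrial broadcast, ~1 MW EIRP (assumed)

print(f"cross section: {cross_section_m2 / 1e6:.3g} km^2")  # ~1.28e8 km^2
print(f"intercepted:   {intercepted_w:.3g} W")              # ~1.74e17 W
print(f"reflected:     {reflected_w:.3g} W")                # ~5.21e16 W
print(f"broadcast is {reflected_w / broadcast_w:.1e}x weaker than the planet")
```

The roughly 5x10^10 gap between the reflected starlight and a megawatt broadcast is the core of the argument.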
Nor are gravitational lenses terribly useful for beacons since you need the right geometry between the sender, star, and receiver to use the lens. You could use lensing to increase the EIRP of your transmission but only at a specific target. Nobody outside the focal plane of the lensing system is going to benefit from the lens.
SGLs are a neat idea and an interesting topic but I don't think they solve many SETI problems.
Can this be used for building a convergent, dockable phone? What I mean is - if I have, say, a Pine Phone, can I build a dock with its own CPU and RAM, so that when I dock the phone, I get the extra horsepower from the dock, but without the dock, everything still works, albeit a little slower?
Built-in SSH agent - to store your SSH keys and make them available to your SSH clients. KeePass supported this too, but awkwardly, through an extension that needed to be installed separately.
- There are various efforts on the JVM to build multidimensional arrays; we're talking to many of them to try and figure out a strategy for the whole platform. Ditto for dataframes, though Apache Arrow looks like a good baseline.
- We're not looking at other languages outside of the JVM at the moment, but we're continuing to contribute to Tensorflow Java and ONNX Runtime to improve their Java support. We could look at pytorch inference support based on their Java API, but that overlaps pretty well with the things that ONNX Runtime supports. Do you have any suggestions?
> We're not looking at other languages outside of the JVM at the moment, but we're continuing to contribute to Tensorflow Java and ONNX Runtime to improve their Java support. We could look at pytorch inference support based on their Java API, but that overlaps pretty well with the things that ONNX Runtime supports. Do you have any suggestions?
Many models are deployed as RESTful endpoints, so quick and easy ways to deploy models as services with containers or serverless providers would be very useful - although admittedly, you might not want that in the core project; it could be a good sidecar project to this one. Given your focus on model provenance, extending that to model deployment and lifecycle management tools such as MLflow could also be very useful.