I tried using Blink shell for a while and it was ok, but when my iPad screen broke I got a Thinkpad X280 as a replacement. For emacs, programming and a lot of other things it's way better. I've just spent some time in KiCAD this evening, I couldn't do that on an iPad. Plus if something breaks it's way easier to get parts, a new screen is under $100 and I can install it myself without any tools.
The front bezel is clipped in; it's quite hard to detach but can be carefully removed, and then the screen just sits behind it with an eDP connector. Quite a contrast to an iPad.
Does anyone know what kind of range you get with BTLE sensors like this? I'd think a decent distance would be required in most cases, and BTLE is meant to be short range.
A while back I built a LoRa device and we got really good range: 10 km at least, without having to try too hard. The plan was to deploy hundreds of these, so ensuring communications without collisions was complicated. Also, this was for a controller rather than a sensor, which is much harder on the battery, as the device needs to listen periodically for updates.
That was my first question also - mentions in the replies of 100-240 m ranges, which is useful to know.
FWIW here in W. Australia 4,000 acres is a mean size for farms (with a skewed distribution) .. roughly 4 km a side for a square boundary.
I can see these being useful for occasional monitoring around a central solar-powered collector - with stakes and flags on the sensor tags so they can be found and recovered as the wheat grows.
These are the practical issues of modern grain farming (and many farms are much, much larger - the mean is skewed by many small-plot farms, while the bulk of the industry revolves around a few hundred much larger concerns).
So LoRa is probably more practical. We had to trigger a solenoid, so we needed a bit more power anyway, but it could reliably run off a Li-ion battery with a solar panel for recharging. That makes the whole package more complicated, but it worked well.
The extra range of LoRa meant I didn't have to implement a mesh protocol. I wasn't looking forward to having to test a mesh network; point-to-point is much easier.
There’s a story about someone watching Woz entering hex into the computer directly to write a program, and every now and then he’d pause for a little. They asked him why, and he said he was calculating the offset for a forward branch, which meant he had to think through all the code up to the branch destination and count the bytes. Then he could carry on and enter those bytes.
That's pretty amazing. I would do something similar but easier: invert the branch, do a jmp [word], and then continue as though nothing changed. Then, after you know the destination address, fill it into [word]. That way you don't end up running out of memory (your own, not the computer's).
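The backpatching trick described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real assembler; it uses 6502-style opcodes (BNE = 0xD0, absolute JMP = 0x4C) to show the inverted-branch-plus-placeholder pattern and the later patch:

```python
# Sketch of the backpatching trick: invert the condition so a short
# branch skips over a 3-byte absolute JMP with a placeholder address,
# then patch the address once the destination is known.
code = bytearray()

def emit(*bs):
    code.extend(bs)

# Want: "branch if equal to a not-yet-known forward address".
emit(0xD0, 0x03)          # BNE +3: skip the JMP when the condition fails
jmp_at = len(code)
emit(0x4C, 0x00, 0x00)    # JMP $0000 -- placeholder word, patched below

# ... keep entering code as though nothing changed ...
emit(0xEA, 0xEA)          # two NOPs standing in for the skipped-over code

# Destination now known; patch the placeholder (6502 is little-endian).
dest = 0x0612             # hypothetical forward destination
code[jmp_at + 1] = dest & 0xFF
code[jmp_at + 2] = (dest >> 8) & 0xFF
```

The cost is three extra bytes per forward branch, which is the trade the comment describes: a little machine memory in exchange for not having to hold the whole intervening code in your head.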
I’ve used SRP on a few projects, including for a large bank for the log in from their counter system. Once you get the algorithm sorted out it’s quite simple and it feels a whole lot better than some of the password storage systems I have seen (and still do see) implemented.
I’ll have to have a look at OPAQUE, which according to Wikipedia is a newer alternative to SRP.
Edit: Just noticed that there are quite a few more systems using these types of algorithms, including Apple HomeKit.
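For readers unfamiliar with why SRP feels better than ordinary password storage: the server keeps only a salt and a verifier, never the password. A minimal sketch of the registration step, in the style of RFC 5054 (the modulus below is a toy value I made up for illustration; real SRP uses a large safe prime from the RFC's group parameters):

```python
import hashlib
import os

# Toy SRP registration sketch. NOT secure: the modulus is tiny and
# exists only to show the shape of the computation.
N = 0xE95E4A5F737059DC60DF5991D45029409E60FC09  # toy modulus, NOT a real group
g = 2

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def register(username: str, password: str):
    salt = os.urandom(16)
    # x = H(salt | H(username ":" password)), per RFC 5054
    inner = hashlib.sha256(f"{username}:{password}".encode()).digest()
    x = H(salt, inner)
    v = pow(g, x, N)   # the verifier: the only secret the server stores
    return salt, v

salt, v = register("alice", "correct horse")
# The server-side record is (salt, v); the password itself is never
# stored and never sent during authentication.
```

The login exchange then proves knowledge of the password against v without transmitting it, which is the property the comment is getting at.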
And Atari DOS was written by the same people who wrote DOS for the Apple ][: Shepardson Microsystems. The book ‘The Atari BASIC Source Book’, written by some of the same people, is still an interesting read today.
I do miss the diversity of computing from back then. Today everything has converged on looking and acting almost the same, but back then there was so much more choice. Amiga, Atari ST, Mac, SGI, Sun, NeXT, and a whole lot more.
Yep. Nowadays a new release is likely to take features or design away, or backwards, seemingly just out of a need to be different. Like the many times Microsoft has tried to take away the Start menu. GNOME 2 to 3. Etc.
Meanwhile Plan 9 tried to pull off a logical extension of Unix, only to be abandoned. Some of those ideas live on, but probably not as extensively as Plan 9 would have allowed.
A subset of us still use plan9, usually in the form of 9front.
Every time I think of Plan 9, I get a little misty-eyed and wistful. I'm just old enough to remember time-sharing systems and mainframes. These days, looking at what's called the cloud, I'm seeing a lot of the old architecture coming back, loosely speaking. Mobile devices are like the new dumb terminals, the browser is the shell, and the cloud is our mainframe. If you want to extend the analogy further, we even pay for time in a convenient monthly subscription, whether that be Netflix's cloud or your favorite remote music service.
Circling back to Plan 9: it really was the ideal system, and with the way things are going these days, it's really only a matter of time before we see the return of Plan 9 or something built in effigy of it. I say this with a straight face: from the filesystem to the network model, from top to bottom, Plan 9 was a new paradigm that extended the original Unix system in ways we still haven't replicated today in a mainstream environment. The way the network layer and system layer intertwined to form an actual everything's-a-file system was bonkers the first time I tried it, coming from the Linux world, where this claim is often made and isn't always true.
Now that the 9 system is back in the community's hands, it's just going to take one group to be the next Inferno, put on the suits and ask for money the way Red Hat did, and you'll see the rest of the industry start to join in. It might just be my tainted opinion as a Gopher, but we're letting good stuff go to waste; we should be evolving this technology while the creators are still around to give input about their vision.
Glenda will ride again, I just hope I'm alive to see it.
Next-gen hardware virtualization + Qubes OS-like use of virtualization, but made nice-looking and frictionless for the standard user. That's the future we get.
Yes, and it definitely feels like the OS equivalent of “since you all can’t get along on your own, you each will be given your own OS-like namespace and have no direct interaction with one another.”
Simplicity (I'm a twm guy). Also the way that acme is like an emacs from the future, in that, like the Plan 9 UI, the mouse gets a lot of traffic. You can highlight words and instantiate them as commands; also, one of my favorite features is that it preemptively draws the space where my window is about to spawn.
Also, the color scheme is truly based, as the kids say.
Plan 9 and Emacs have architectures which are antithetical to the economic model of today's software industry, so in that sense I'm glad we did have Windows 95-like buggy macOS (until at least 10.4-10.5) XD
Emacs is essentially SAP + GitHub + Office (including Visio and Power Automate) for your own off-cloud usage. If you think about it.
How is it antithetical? Being able to “mount” external compute to a local terminal is a recipe for selling more hardware. I mean, both PlayStation and Xbox can be locally streamed to a phone or tablet. Granted, it did take basically 20 years for those suits to come around to the idea.
The whole obsolescence-through-a-glued-on-battery thing and all the other nastiness of modern mobiles does make a point, though. They do want their recurring revenue streams for cloud resources. But that essentially moves the mountable resources to a provider.
Convert all JS to "native" via LLVM... or just WebAssembly.
Programming languages are ONLY communication layers between human and computer; they are not some voodoo gift from god, so these layers can be made to suit human needs.
Instead of the browser downloading a blob from Facebook, your package manager will download it, and that can be done over P2P (like torrents), because you don't need it in 0.000001 s; it will sit in a cache.
And this can be signed with the current certificate-authority ecosystem, almost without change, to provide safe downloads.
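The verification half of that idea can be sketched minimally. This toy Python example shows only the integrity check against a published digest; a real package manager would verify a signature over that digest using the publisher's certificate chain, not just compare hashes. All names here are made up for illustration:

```python
import hashlib

def verify_blob(blob: bytes, expected_sha256_hex: str) -> bool:
    """Check a downloaded blob against a published digest.

    Stand-in for the signed-download check described above: in a real
    system the digest itself would be signed by the publisher and the
    signature verified against a CA-rooted certificate chain.
    """
    return hashlib.sha256(blob).hexdigest() == expected_sha256_hex

# A blob fetched over P2P is trustworthy if it matches the digest
# published through the trusted channel, regardless of which peer
# served the bytes.
blob = b"pretend this is an app bundle fetched over P2P"
digest = hashlib.sha256(blob).hexdigest()
assert verify_blob(blob, digest)
assert not verify_blob(blob + b"tampered", digest)
```

This is why the P2P transport doesn't weaken the trust model: integrity comes from the signed metadata, not from where the bytes came from.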
> I do miss the diversity of computing from back then
I’ve thought about this for a while. All sorts of different architectures, companies, protocols, etc., whereas computing seems to be a massive monoculture now. We basically have only a handful of operating systems, most of which are based on something from decades ago; basically only two architectures have really survived, and one of them just seems to be hacks piled on top of hacks.
I get that there’s a reason certain stuff won out, but it’s just boring to me personally.
Even compared to not that long ago: I look at PC cases from 10-15 years ago and see a ton of cool and unique designs, while anything from recent times seems to be reduced to “white or black case with flashy lights”.
Some people have problems with sodium lauryl sulfate. In toothpaste it gives me mouth ulcers, and in soap I get skin irritation. It’s quite hard to find SLS free toothpaste, or even shampoo, but it could be worth trying if you have unexplained skin irritation. I haven’t had one ulcer since swapping toothpaste and before that I was getting them all the time.
Like the original NeXT AppKit, where the UI was built from objects in memory, then stored on disk as archived objects. The objects could be manipulated with Interface Builder and live-tested. These concepts seem to have been lost along the way, to the point where they are no longer part of modern NeXT->Mac->iOS development.
But they also got harder to implement as UIs got more complex.
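The archived-object idea can be illustrated loosely with plain serialization (Python standing in for Objective-C; NeXT used typed archives via NSArchiver, so `pickle` here is only an analogy): the UI is a graph of live objects that gets frozen to disk and revived at launch, rather than rebuilt from code.

```python
import pickle

# Loose analogy for a NeXT-style archived UI: plain data objects
# describing a window and its views, frozen and revived intact.
button = {"class": "Button", "title": "OK", "action": "confirm:"}
window = {"class": "Window", "title": "Demo", "views": [button]}

archived = pickle.dumps(window)   # stand-in for writing a .nib archive
revived = pickle.loads(archived)  # stand-in for loading it at runtime

assert revived["views"][0]["title"] == "OK"
```

The point the comment makes is that what Interface Builder edited and what the app loaded were the same archived objects, so the design could be live-tested without a compile step.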
I live in an area of New Zealand surrounded by hydroelectric dams, and I've often thought it would be a great place for a datacenter. There'd be electricity savings from avoiding the transmission losses of getting the electricity all the way to Auckland, and land is cheaper, but mainly because there ought to be a way of using the cold water for cooling rather than electrically powered chillers. The data center could even be built underwater, possibly in a canal, to benefit from the existing water flow.
The problem is that the underwater cables land in Auckland, and for most applications, lower latency (both to the local populations and to the rest of the world) is a very important consideration.
I did hear that someone is putting in a 10 MW cryptocurrency mining "datacenter" right next to the Clyde dam. I'm not exactly happy about that.
It would be pointless to put a Chia mining operation next to a hydro power plant; the entire point of Chia is that it uses very little electricity.
They haven't even admitted it's crypto.
All that's really known is that it's a 10MW "datacenter", hooked directly into the power plant, in a part of the country without a commercial fibre backbone, and will allegedly be running workloads that can be turned off depending on power demand.
Reading between the lines, that's a crypto mining operation.
I still like the idea of running a trans-Tasman cable to Southland and building data centres down there when Rio Tinto closes down. Plenty of cheap and green power, and the extra cable would solve the latency issue and provide some extra resiliency for NZ (all international cables currently land in the North Island.)
Incorrect. All of the hydropower is used at present, so any extra marginal load added to NZ causes extra gas usage (or sometimes coal). If the lakes are full and water is being sent down spillways, then perhaps you can get free power, but that is rare.
Also note that lots of people think transmission losses are heavy, but actually they are less than 10%. For example, HVDC transmission losses are ~3% per 1,000 km (e.g. the DC link from Benmore to Wellington).
Rio Tinto aren’t closing the smelter down. They can’t just shoot people like in other countries so they have to be crafty to get what they want.
The whole “give us cheap power or we close this plant down” is just a genius power play.
People don’t want to be out of jobs, and the govt wants a good excuse to get the deal done. The politicians look like savvy negotiators standing up to this massive corporate beggar, and Rio Tinto gets discount-rate power. It’s really a win-win ruse for them.
I had an older student at my high school teach a machine language course, complete with a one page table of opcodes and their hex values so that we could type them in.
After that I purchased a copy of Programming the 6502 by Rodnay Zaks. Books took 3 months to arrive if they were a special order like that, sometimes longer. To this day it's still one of my favourite books.