Yeah, I think the chopsticks make sense. For the actual contact I would have thought a friction-based solution (wide, soft/grippy chopsticks) would be easier in practice. Too bad the metal they are using is not magnetic.
If you go this route, all devices will be running Linux. The one-OS route is kind of nice, hence the PinePhone over open Android alternatives (like GrapheneOS).
I sorted from least to most technical. I also tried to pick the least technically challenging option in each category. The Dell stuff should just work. The phone will require some tinkering for the moment.
I'm sympathetic to removing Apple and Google from my life, but this list looks so sad. You know the experience, integration, and headache of all this are going to be horrible.
Depends what you're looking to do. If you really value your privacy, yes, you're going to have to give up some convenience, but I can assure you it's really not that bad if you put in a little work on an occasional basis. It's not the constant maintenance nightmare that some people seem to expect.
I fired up Syncthing on my phone and NAS, with restic performing encrypted backups nightly to B2. Sure, you're not going to get some magical cloud image-recognition stuff, but do you really need or even want it?
Everything can integrate quite nicely, but plan to do the connecting work yourself as a one-off with some occasional maintenance a few times a year.
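To give a sense of the "connecting work": the nightly restic-to-B2 job can be as small as a sketch like this (bucket name, paths, and credential locations are placeholders; it assumes restic is installed and the repository has already been initialised with `restic init`).

```python
import os
import subprocess

# Placeholders: swap in your own bucket, paths, and credential storage.
env = {
    **os.environ,
    "RESTIC_REPOSITORY": "b2:my-backup-bucket:nas",
    "RESTIC_PASSWORD_FILE": "/root/.restic-password",  # encryption passphrase
    "B2_ACCOUNT_ID": "<b2 key id>",
    "B2_ACCOUNT_KEY": "<b2 application key>",
}

# Back up the synced data directory, then prune old snapshots.
subprocess.run(["restic", "backup", "/srv/syncthing"], env=env, check=True)
subprocess.run(
    ["restic", "forget", "--keep-daily", "7", "--keep-weekly", "4", "--prune"],
    env=env, check=True,
)
```

Wire something like that to cron or a systemd timer and it is essentially fire-and-forget.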
I used to think that privacy arguments are OK in principle, but that we aren't really losing much by sharing some personal data with Apple's or Google's algorithms. In return we got so much convenience: software and products that worked well with each other, and even shopping recommendations that were great.
But with this move, they have just crossed a huge threshold. Now it’s clear that we are hurtling into a truly dystopian future where our personal thoughts and experiences are simply not our own. America at least seems to be heading towards some form of corporate fascism that may not be dominated by an individual or a group, but will eventually lead to social ossification and societal decay by outlawing dissent.
To introduce this tech under the guise of fighting child abuse is just amazing. This will eventually lead to identification of individuals who, for example, may be sympathetic to political views that are currently out of favour.
At some point you have to swallow the inconveniences outside the garden and recognise what this is leading to.
If you had the option of buying this hardware stack through an integrated third-party brand/storefront that offered support for the products and their integration, would that make you feel differently?
That's an interesting idea. The amount of work to maintain all this, even for a techie, is considerable. Could it be set up for a "user"?
Tech support would be... interesting.
I made an example storefront a few months ago and jotted down a plan in a bout of procrastination. I have a few thoughts on a pragmatic, lazy (in the CS sense) approach.
If you or anyone else is interested, email me: ipodopt@pm.me
Laptop: Dell XPS 13, and I'm very happy with it. Maxed-out specs and clearly a higher price range.
Or: a Lenovo Yoga convertible, my second device. I just don't do games or bigger data stuff on this machine. Some design work, some photo and smaller video stuff. I love the flexibility of the convertible when working with PDFs and doing annotations by hand.
The battery on my XPS seems to be swelling and messing up the trackpad. Apparently it's a pretty common issue.
Edit: it seems to affect the Precision line too.
> The same problem is happening with the Precision 1510 line with the same batteries. I purchased 10 of these laptops for my department around the same time you did. We've had four of these failures so far in three laptops.
I have their laptop and do not recommend them. Honestly, I sort of regret the purchase.
- They lie about specifications like battery life (and other things). I get about 1 hour out of the claimed ~5 hours of battery life when web browsing.
- My laptop had a race condition at boot that would prevent it from booting 50% of the time. There was a workaround.
- Wifi had a range of maybe ten feet (not joking).
I am sure their new laptop is better; however, I do not really trust them after my interactions, especially for something more novel like a Linux phone.
On the other hand, Pine64 is very focused on their hardware stack. All their products run very similar hardware, unlike Purism's. They are moving way more product than Purism and are better liked, hence they have a stronger community. They are also much cheaper phone-wise for a similar feature set. And you can actually buy and receive the phone.
In terms of alternatives, I think System76 is pretty good desktop-wise right now. The laptops are alright. I'm waiting for their upcoming in-house laptop.
This is quite interesting. I'm writing this on their Librem 15 and can't recommend it enough. No problems with booting or anything. Battery life got shorter over time (but I never cared about it).
> Wifi had a range of maybe ten feet (not joking).
Purism is using the only existing WiFi card that works with free firmware. It is less performant than typical proprietary ones. If you don't care strongly about your freedom, you can replace the card (they are very cheap on eBay). Also, check the Purism forums for such questions. It works better than "ten feet" for me.
> On the other hand, Pine64 is very focused on their hardware stack.
And the software is provided by Purism (Phosh, for most PinePhone users). The PinePhone is great; I'm using it, too. But the Librem 5 is much more performant. Many videos show it working fine, except for the unfinished software (same as for the PinePhone).
> Purism is using the only existing WiFi card that works with free firmware. It is less performant than typical proprietary ones. If you don't care strongly about your freedom, you can replace the card (they are very cheap on eBay). Also, check the Purism forums for such questions. It works better than "ten feet" for me.
Not the WiFi card; the surrounding material, like the chassis, is attenuating the signal (the Librem 14 should have fixed this issue). I swapped mine out for a well-supported and performant Intel card and only got marginal improvements to the signal.
My "ten feet" was using it at coffee shops. I was traveling with the laptop. It was hard. The card switch did get it over the hump for this use case. Not an issue most of the time but still will have get a spot closer the router for video calls (like standup).
So the constraints for work were getting a seat close to the router AND a power plug. I ended up USB tethering with my phone a lot.
I do appreciate their contributions to the ecosystem but was wronged as a consumer. They need to be truthful.
I take it you have the v3 with the standard boot drive?
The v4 updated the screen, which burned more power. I remember telling them my real-life results and them proceeding not to update their product marketing page, while admitting it was based on the v3. I feel like they were already stretching it to begin with on the v3, but you would know.
I also got the fastest SSD they had at checkout, which I think contributed to the boot race condition. I never got a link to an upstream ticket, so I do not know if it is fixed.
When I emailed them they said they do not have a laptop in that configuration to test, haha.
Sad the high DPI displays didn't go well. My eyes can't take the 1080 vertical pixel displays that are still so common on open laptops nowadays. But I really want to like the Librems; there aren't many trustworthy laptops with kill switches out there.
I have an X1 Carbon Gen 9 with a high DPI 16:10 display, 32GB RAM, and anywhere from 4 to 12 hours of battery depending on workload. It's worth a look for people who can tolerate Lenovo's history (BIOS rootkits targeting Windows in the non-ThinkPad lines).
I think it's about time to say:
"Thank you, Apple!"
Finally these awesome projects will get the funding and the support they deserve.
Also, has anyone tried the FXtec phones? https://www.fxtec.com
I am thinking about getting the FXtec Pro1, which promises decent Ubuntu Touch support as well as LineageOS.
I feel that with the comeback of Vim, there might be a sufficient user base for devices that use the keyboard for most tasks. I miss the days when I could send a text message without taking the phone out of my pocket.
Is there a Linux laptop at 2560x1600 resolution like MacBooks? System76 still runs at 1920x1080. It really makes a difference with regard to crisp font rendering and less eye strain.
Most of them I believe should allow you to get HiDPI (aka Retina) displays.
I've been looking at replacing my now 9-year-old MacBook Pro (primarily running Manjaro GNOME as my daily driver) with a dedicated Linux laptop, and I've narrowed my selection down to the Lenovo ThinkPad P series or the Framework laptop. For the ThinkPads, the 4K display (3840x2160) is recommended, I believe (over the WQHD ones). The Framework laptop comes with a standard 2256x1504 display.
I hadn't come across that Omnia router before - it looks great! Bit of a shame it doesn't support 802.11ax, and it is more expensive than I'd like, but still...
I think the router might be my favorite open hardware piece:
- It was easier to set up than my old Asus router.
- Schematics and source are easy to access.
- It has never not worked.
- It is made by a company that is a domain registrar with a good track record on open source projects (click the "more" button in the top right and you might recognize a few projects): https://www.nic.cz/
- And if you need to do something advanced with it, you can. Mine has Bird running BGP.
The Omnia was quite expensive, but I've gotten frequent updates for the last four years. It's a nice mix of hackable and "just works".
I've turned off the WiFi at this point, and just use it as a router now. The UniFi access point that I installed provides better coverage in my house since it's easier to place in a central location.
Yeah, I figured. When waiting for LSP autocompletes in Emacs, my entry-level M1 MacBook with no gccjit is orders of magnitude faster than my almost maxed-out Lenovo with Emacs and gccjit.
The difference is so stark that I cannot bear to autocomplete-as-I-type on the Lenovo machine; it lags too much and frequently locks up.
My ThinkBook G2 14 is almost the same speed as an M1 MacBook and runs Linux without any issue. It has "only" 9 hours of battery in my use case, but that's completely fine by me.
I can't complain, but I can't compare it to the M1. I had a 2020 MacBook Pro and am currently using an XPS 13 with maxed-out specs.
The camera on the XPS is leagues below. The microphone had driver issues from the start, and it cost me two days to find a software workaround.
Other than that, I am happier in every way: keyboard, trackpad, and resolution. Performance, even with crappy corporate spyware and crapware, is definitely way better.
I thought I would miss the Mac more. Not looking back once the mic was fixed.
I saw your blog post [1] mentioning how the iPhone 12 is likely your last. Have you given any thought since then to what your next smartphone would be? Or whether you'd still use a smartphone at all?
Have you considered Purism and Fairphone or are their specs too underwhelming to consider?
With regard to your laptop, does not being able to use Mac-specific development tools (Xcode, etc.) interfere with your work in any way, or do you just limit the work you take on to projects that are friendlier to Linux?
Laptops with an "s"? How many do you have, and how many do you carry on your person? When you say you run macOS on your laptops, do you mean as a VM or on Apple hardware? Did you keep the M1 laptop you blogged about, or did you send it back?
I was thinking of using Tekton to make a CI/CD service at one point, but I would pretty much need to smash the whole k8s VM/node every time I do something and only allow one participant at a time. There are ways to run VM pods instead of containers in k8s, but there are other issues at play. It's been a sec.
Does RBAC not limit these by default? Does cert-manager not already give itself restricted permissions on install? Do I need to fix up my cluster right now? If so, do you have any example RBAC YAMLs? :D
> And when building docker images in the CI, I use google’s kaniko to build docker images from within docker without any privileges (it unpacks docker images for building and runs them inside the existing container, basically just chroot).
You can also use standalone BuildKit, which comes with the added benefit of being able to use the same builder natively on your local machine.
No, RBAC doesn't automatically do this, and many publicly available Helm charts are missing these basic security configurations. You should use Gatekeeper or similar to enforce these settings throughout your cluster.
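I don't have a canonical set of YAMLs to point at, but as a rough sketch of the kind of tightly-scoped Role you'd write by hand (the names and namespace are made up, and nothing here is specific to cert-manager), here it is via the official kubernetes Python client; the equivalent YAML carries the same rules block.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

# Hypothetical example: a Role that can only read pods and their logs in "ci".
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="ci-reader", namespace="ci"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                 # "" = the core API group
            resources=["pods", "pods/log"],
            verbs=["get", "list", "watch"],  # no create/update/delete
        ),
    ],
)

client.RbacAuthorizationV1Api().create_namespaced_role(namespace="ci", body=role)
```

Then bind it to the ServiceAccount the workload actually runs as, rather than relying on whatever broad defaults a chart ships with.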
He co-founded WhatsApp with Brian Acton. They both wanted the cash but also felt bad about selling out WhatsApp.
So Moxie founded Signal and Brian contributed: a new clean-room project under a non-profit with an endowment, made as what WhatsApp should have been if they hadn't sold out.
And to the commenter below: he worked for Jack Dorsey as head of cybersecurity, and Jack supported this project from the get-go. They probably like each other. Go figure.
I think this comment is low effort, malicious, and unreasoned.
EDIT: End to end, we do not pass enough information from the application to the OS to the hardware for perfect optimizations. We do not start with enough information, and we lose too much down the stack, especially around concurrency. I think that at the end of the day there is some atomic information needed by the hardware and OS to make optimizations about all the code running on the system.
I see the approach taken here as an effort to retain needed information by integrating compilation and OS-level functionality. I think this is correct, but we will see a shift to OSes having their own bytecode as a translation layer between higher-level languages and the hardware. I also think high-level languages lack some needed information in the first place...
I can't recall the details or find them searching online, but I recall reading back in around 2000 about an emulator that was actually faster than bare metal when emulating its own hardware. I think maybe it was a Sun Microsystems project, but I'm not sure. Does anyone else recall this?
It was almost definitely HP Dynamo. (Edit: if you combine ideas from HP Dynamo, SafeTSA JIT-optimized bytecode, and IBM's AS/400's TIMI/Technology Independent Machine Interface, you get a better version of the current Android Run Time for bytecode-distributed apps that compile ahead of time to native code and self-optimize at runtime based on low-overhead profiling.)
The really nice thing about Dynamo was that it was a relatively simple trace-based JIT compiler from native code to native code (plus a native code interpreter for non-hotspots). This meant that it would automatically inline hotspots across DLLs and through C++ virtual method dispatches (with appropriate guards to jump back to interpreter mode if the virtual method implementation didn't match or the PLT entry got modified). They didn't have to do any special-casing of the interpreter to handle virtual method calls or cross-DLL calls; it's just a natural consequence of a trace-based JIT from native code to native code.
The only downsides of something like Dynamo are (1) a bit of complexity and space usage, (2) some startup overhead due to starting in interpretive mode, and (3) if your program is abnormal in not having a roughly Zipf distribution of CPU usage, the overhead is going to be higher.
Ever since I read about Michael Franz et al.'s SafeTSA SSA-based JVM bytecode that more quickly generated higher-performing native code, I've had a long-term back-burner idea to write a C compiler that generates native code in a particular way (functions are all compiled to arrays of pointers to straight-line extended basic blocks) that makes tracing easier, and also storing a SafeTSA-like SSA bytecode along with the native code. That way, a Dynamo-like runtime wouldn't use an interpreter, and when it came to generate an optimized trace, it could skip the first step of decompiling native code to an SSA form. (Also, the SSA would be a bit cleaner as input for an optimizer, as the compilation-decompilation round trip tends to make the SSA a bit harder to optimize, as shown by Franz's modification of Pizza/JikesRVM to run both SafeTSA and JVM bytecode.) Once you have your trace, you don't need on-stack replacement to get code in a tight loop to go into the optimized trace; you just swap one pointer to native code in the function's array of basic blocks. (All basic blocks are straight-line code, so the only way to loop is to jump back to the start of the same basic block via the array of basic block pointers.)
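To make the basic-block-table idea concrete, here's a toy Python sketch (purely illustrative, not native code, and all names are made up): a "function" is a table of straight-line block callables, all control flow goes back through the table, and installing an optimized trace is just overwriting one slot.

```python
def make_counting_function():
    """A 'function' is a list of straight-line basic blocks.
    Each block returns the index of the next block, or None to return."""
    blocks = [None, None, None]

    def entry(state):        # block 0: initialise the loop counter
        state["i"] = 0
        return 1

    def loop_body(state):    # block 1: the hot loop, jumps back through the table
        state["i"] += 1
        return 1 if state["i"] < state["n"] else 2

    def exit_block(state):   # block 2: fall out of the function
        return None

    blocks[0], blocks[1], blocks[2] = entry, loop_body, exit_block
    return blocks

def run(blocks, state):
    idx = 0
    while idx is not None:   # every transfer of control is indirected through the table
        idx = blocks[idx](state)
    return state

blocks = make_counting_function()
print(run(blocks, {"n": 1000}))    # {'i': 1000, 'n': 1000}

# "Installing an optimized trace" is just swapping one table entry;
# callers never notice because they always dispatch through the table.
def fused_loop(state):             # hypothetical optimized replacement for block 1
    state["i"] = state["n"]
    return 2

blocks[1] = fused_loop
print(run(blocks, {"n": 1000}))    # same result, no per-iteration dispatch
```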
The background for HP Dynamo is that during the Unix wars, there were a bunch of RISC system vendors vying for both the high-end workstation and server markets. Sun had SPARC, SGI had MIPS, DEC had Alpha AXP (and earlier, some MIPS DECStations) and HP had PA-RISC. The HP Dynamo research project wanted to show that emulation via dynamic recompilation could be fast, so to get an apples-to-apples comparison for emulation overhead, they wrote a PA-RISC emulator for PA-RISC.
This project grew into an insanely powerful tool. It's called DynamoRIO and is still under active development and use today. It's one of the coolest technologies I've ever worked with.
It's used by the winafl fuzzer to provide basic block coverage for black box binaries.
Yes, I have poked around DynamoRIO a few times. It's now geared toward dynamically modifying binaries for various purposes from code coverage to fuzzing to performance and memory profiling.
There doesn't appear to currently be a turn-key solution similar to the original Dynamo. DynamoRIO could be used to put a small conditional tracing stub at the start of every basic block at application startup time, and then do some binary rewriting, similar to the original Dynamo, but it doesn't seem there are downloadable binaries that currently do this.
This dynamic optimization would be much easier and lower overhead (but less general) with cooperation from the compiler.
Could such a compiler include the runtime for this in the binary as an option? That might make it a lot more likely to be used by people, because it is all nice and stand-alone.
Who would benefit from this most? Is the benefit so diffuse it would almost have to be an open-source project without funding? Or could there be parties that see enough of an advantage to fund this?
I guess you could try and get a certain instruction set vendor (probably RISC-V, maybe ARM or x86 based) to have this as a boost for their chips. I guess the "functions are pointers to blocks" compilation could benefit from hardware acceleration.
You could presumably statically link in the runtime. Also, without the dynamically-optimizing runtime, it would run just fine, just a bit slower than normal native code due to the extra indirection. Lots of indirect calls also increase the chances of address mispredictions due to tag collisions in the BTB (Branch Target Buffer).
Function calls/jumps through arrays of pointers are how virtual method calls/optimized virtual method tail calls are executed. Though, in this case, the table offsets would be held in a register instead of immediate values embedded within the instruction. I'm not aware of any instruction set where they've decided it's worthwhile making instructions specifically to speed up C++ virtual member function dispatch, so I doubt they'd find optimizing this worthwhile.
Also, if things go according to plan, your hot path is a long straight run of code, with only occasional jumps through the table.
The GP only asked about CPU instructions for faster indirect jumps, but I should add that there are at least four things that would help a system designed for pervasive dynamic re-optimization of native code:
1. Two special registers (trace_position pointer and trace_limit pointer) for compact tracing of native code. If the position is less than the limit, then for all backward branches, indirect jumps, and indirect function calls, the branch target is stored at the position pointer, and the position pointer is incremented. Both trace_position and trace_limit are initialized to zero at thread start, disabling tracing. When the profiling timer handler (presumably a SIGVTALRM handler on Linux) executes, it would run some heuristic to determine whether tracing should start. If so, it would store the resumption instruction pointer at the start of a thread_local trace buffer, set trace_position to point to the second entry in the trace buffer, and set trace_limit to one past the end of the trace buffer. There is no need to implement a separate interrupt for when the trace buffer fills up; it just turns off tracing. Instead, re-optimizing the trace can be delayed until the next time the profiling timer handler is invoked. (The first sketch after this list shows the userspace shape of such a timer-driven handler.)
2. Lighter weight mechanism for profiling timers that can both be set up and handled without switching from user space to kernel space. Presumably it looks like a cycle counter register and a function pointer register that gets called when the counter hits zero. Either the size of the ABI's stack red zone would be hard-coded, or there would need to be another register for how much to decrement the stack pointer to jump over the red zone when going into the signal handler.
3. Hardware support for either unbiased reservoir sampling or a streaming N-most-frequent algorithm[0] to keep track of the instruction pointers of the instructions causing pipeline stalls. This helps static instruction scheduling for those spots where the CPU's re-order buffer isn't large enough to prevent stalls. (Lower-power processors/VLIWs typically don't execute out of order, so this would be especially useful there.) Reservoir sampling can be efficiently approximated using a linear feedback shift register PRNG logical-ANDed against a mask based on the most significant set bit in a counter (see the second sketch after this list). I'm not aware of efficient hardware approximations of a streaming N-most-frequent algorithm. One of the big problems with Itanium is that it relies on very good static instruction scheduling by the compiler, but that involves being good at guessing which memory reads are going to be cache misses. On most RISC processors, the number of source operands is less than the number of bytes per instruction, so you could actually encode which argument wasn't available in cases where, for instance, you're adding two registers that were both recently destinations of load instructions.
4. A probabilistic function call instruction. For RISC processors, the target address would be ip-relative with an offset stored as an immediate value in the instruction. The probability the function is taken would be encoded in the space usually used to indicate which registers are involved. This allows lightweight profiling by calling into a sampling stub that looks back at the function return address. Presumably some cost estimation heuristic would be used to determine the probability embedded in the instruction to make the sampling roughly weighted by cost.
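For items 1 and 2, the userspace baseline I have in mind is roughly the classic SIGVTALRM sampling trick; here's a minimal Python sketch (illustrative only, and it pays exactly the kernel round-trip that item 2 wants to eliminate):

```python
import collections
import signal

samples = collections.Counter()

def on_tick(signum, frame):
    # Record where the program was when the virtual (CPU-time) timer fired.
    # A Dynamo-style runtime would instead decide here whether to start tracing.
    samples[(frame.f_code.co_filename, frame.f_lineno)] += 1

signal.signal(signal.SIGVTALRM, on_tick)
signal.setitimer(signal.ITIMER_VIRTUAL, 0.01, 0.01)  # fire every 10ms of CPU time

# ... run the workload ...
total = sum(i * i for i in range(10_000_000))

signal.setitimer(signal.ITIMER_VIRTUAL, 0)           # disable the timer
print(samples.most_common(3))
```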
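And for item 3, this is the software shape of the LFSR-plus-mask approximation (a toy Python sketch; in hardware it's just the shift register and an AND): the n-th event is kept with probability about 2^-floor(log2(n)), which tracks the exact 1/n of single-item reservoir sampling to within a factor of two.

```python
class ApproxReservoir:
    """Keep roughly one uniformly sampled item from a stream without division:
    accept the n-th item when (LFSR state & mask(n)) == 0, where mask(n) keeps
    all bits below the most significant set bit of n."""

    def __init__(self, seed=0xACE1):
        self.lfsr = seed      # 16-bit Fibonacci LFSR state (taps 16, 14, 13, 11)
        self.count = 0
        self.sample = None

    def _step_lfsr(self):
        bit = (self.lfsr ^ (self.lfsr >> 2) ^ (self.lfsr >> 3) ^ (self.lfsr >> 5)) & 1
        self.lfsr = (self.lfsr >> 1) | (bit << 15)
        return self.lfsr

    def offer(self, item):
        self.count += 1
        mask = (1 << (self.count.bit_length() - 1)) - 1  # most-significant-set-bit mask
        if (self._step_lfsr() & mask) == 0:
            self.sample = item                           # e.g. a stalling instruction pointer

reservoir = ApproxReservoir()
for address in range(1, 100_000):    # stand-in for sampled instruction pointers
    reservoir.offer(address)
print(reservoir.sample)
```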
Unfortunately, when these things actually succeed on the market, we only get IBM and Unisys mainframes, Android, watchOS, and .NET Native/WinRT, which are always a kind of compromise of the whole idea.
I’m being somewhat facetious or jocular when I say this—and somewhat serious—but…I wonder:
Is that a reflection of the quality of the software implementation? Or is it a reflection of the hardware it’s trying to implement? Or perhaps it’s related to the hardware the emulator is running on?
Or did the emulator emulate the hardware while running on /that/ hardware? Did it pull efficiency gains out of seemingly thin air?
Maybe VMware? There was a paper about how, in some cases, emulation was faster than hardware virtualization support. This was way back when Intel's hardware virtualization support was new and VMware had already spent years optimizing software virtualization.