As an example, the AOSC OS project owners were offended and made a public post about it on Telegram: https://t.me/aosc_os/523.
It could disrupt the community: the original maintainers will likely never receive notifications for issues and pull requests created on GitCode, so those will simply be ignored.
GitCode also does not clearly mark those repos as mirrors from GitHub, especially on organization and user pages, e.g. https://archive.is/su9h5. IANAL, but this looks like impersonation to me.
A sibling comment also mentioned that CSDN is publishing machine-generated blog posts about cloned projects with a link to GitCode. I believe this is even more unethical.
> [...] and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality
The GitHub ToS permits use, display, and performance through the GitHub Service, but not reproduction anywhere other than GitHub, unless further rights are granted by a license.
This applies to any repository without a LICENSE file or a license grant in the code or README, and I could create one of those in a few minutes.
GitHub and GitCode do not provide an easy way to search for these, so you will have to scrape the API and clone the repos to find out the ratio of FOSS/restrictive/no license.
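As a hedged sketch of a first pass over the API side (assuming `curl` and `jq` are installed, and using a placeholder org name): the GitHub REST API exposes a `license` field on each repo object, which is null when its detection finds nothing, so you only need to clone the repos it can't classify.

```shell
# Tally detected licenses across an organization's repos.
# "NONE" covers both unlicensed repos and anything detection missed.
curl -s "https://api.github.com/orgs/SOME-ORG/repos?per_page=100" \
  | jq -r '.[].license.spdx_id // "NONE"' \
  | sort | uniq -c | sort -rn
```

Pagination and rate limits are left out here; for a full scrape you'd follow the `Link` headers and authenticate.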
The title reminds me of the Work Smarter loop in "Nobody Ever Gets Credit for Fixing Problems that Never Happened", but that's more about management than organic growth of efficiency.
One case where this might go wrong: if you accidentally delete a file, the deletion replicates and you're left without a backup. Replication is nice, but it is not a replacement for backups.
I feel like a filesystem with native snapshot support (zfs, btrfs), replicated once (also native in zfs) obsoletes conventional 3-2-1 backup systems. It’s technically only 2 versions, but you’re protected from all the same failure modes.
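For concreteness, a sketch of the accidental-deletion case and the snapshot escape hatch (dataset and snapshot names are made up; assumes root on a machine with an existing pool):

```shell
# Snapshot before risky work; without it, replication would
# faithfully propagate the rm to the replica as well.
zfs snapshot tank/home@before-cleanup
rm -rf /tank/home/projects          # oops
zfs rollback tank/home@before-cleanup

# Or cherry-pick files from the read-only snapshot directory:
ls /tank/home/.zfs/snapshot/before-cleanup/
```

This is what makes the snapshot-plus-replication scheme cover the "oops" failure mode that plain replication does not.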
To clarify, all data transferred from Linux ZFS to FreeBSD-based systems, or vice versa, is copied using restic or rsync.
I only use ZFS replication for Linux-to-Linux transfers, and in that case both machines run the exact same operating system and version of OpenZFS.
Eh, especially if you're just using Linux and FreeBSD (doubly so now that they're both using openzfs) it's easy enough to keep pool features compatible. Obviously you need to either pin to a compatible feature level or avoid upgrading the pool, but I don't think it's terribly hard.
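OpenZFS 2.1+ even has a first-class knob for this, for what it's worth (pool and device names made up):

```shell
# Pin the pool's feature flags to a shipped compatibility set so both
# OSes can import it; 'zpool upgrade' will then only enable features
# allowed by that set instead of everything the host supports.
zpool create -o compatibility=openzfs-2.1-linux tank mirror sda sdb
zpool get compatibility tank
```

The named sets live in `compatibility.d` files shipped with OpenZFS, so you're choosing from known-good combinations rather than tracking individual features by hand.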
This was before freebsd used openzfs, maybe easier now. But my point still stands, you are off the beaten track at a time when you want to minimize risks.
Heh, you must really have foul luck; I have been using it on FreeBSD since 2007-ish and have never had any issues. Same pool, though no longer the same disks, as they have all been replaced since. The surrounding hardware was replaced too, and FreeBSD 8 was upgraded through every version to the 14.0 I'm running now.
Never lost any data. There isn't much software that could claim that.
Meanwhile, I lost all my data on btrfs twice (I had backups). I know it is more stable now, but I certainly won't ever use it again. Even HAMMER1 (I would love to use HAMMER2, but until my server dies I will stay with FreeBSD) lost my data only once, and even then, after a debugging IRC session with Matt Dillon, I was able to recover most files.
The only thing that pisses me off is that Kubuntu doesn't support it through the installer (yes, I could set it up manually, but I've been there with Fedora and I'm sick of tracking whether the initramfs was updated, or ending up in an unbootable situation: easily solvable, but annoying), so I am now forced to use Ubuntu with KDE on my workstation. But that is not ZFS's fault.
> The only thing that pisses me off is that Kubuntu doesn't support it through the installer ...
For my personal workstation I've started experimenting with using Proxmox (it's a Debian 12 variant) as the OS because its installer supports multi-drive ZFS installation (RAIDZ1/2/10/etc) out of the box. So, boot setup is currently mirrored SSDs (with /home on mirrored NVMe drives).
Apt installing the standard desktop stuff afterwards (Nvidia drivers, KDE desktop, etc) has worked well, and it all seems happy.
That being said, I'm only 4 or 5 days into this test setup. So far so good though. :)
It's quite possible to get ZFS into weird states without too much effort when you're screwing around with the underlying devices (adding, deleting, changing things).
This seems to crop up at really inconvenient times too, like when you're trying to do something during a scheduled outage. :(
That kind of thing aside though, it's been pretty solid in my use for actual data storage.
Just don't use ZFS's native encryption + ZFS snapshots + send/recv.
Reportedly that combination is a cause of data corruption:
Hmmm, from what I understand zrepl can copy from source to backup in either push or pull mode. I would say push mode is still a very fragile way to do backups.
Imagine the source machine is compromised and the attacker decides to delete or encrypt your data, then sees there is a backup mechanism connecting to the backup machine. What prevents them from deleting or encrypting the backups as well?
You'd definitely want the backup machine to pull the snapshots, with no way to connect to it from the source machine using a user that has access to the data, or an admin account. That means no SSH keys on the source machine, and no password kept in a password manager that gets loaded on the source machine either.
Another strong method would involve 3 machines:
source --push--> replica1 <--pull-- replica2
Here source and replica1 use ZFS with snapshots, while replica2 uses a different filesystem (LVM + ext4) with its own snapshots, to guard against a replicated bug making the data unavailable. The ZFS snapshot streams could be saved as individual files on that filesystem.
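A rough sketch of the replica2 side under that scheme (host, pool, and snapshot names are made up; in practice you'd drive this with zrepl or a cron job):

```shell
# replica2 pulls: it initiates the ssh connection to replica1, so a
# compromised source holds no credentials pointing at replica2.
ssh backup@replica1 "zfs send -i tank/data@2024-01-01 tank/data@2024-01-02" \
  | zstd > /backups/tank-data_2024-01-01_2024-01-02.zfs.zst

# An LVM snapshot on replica2 then protects the stored streams themselves:
lvcreate -s -n backups-snap -L 10G vg0/backups
```

Storing raw send streams as files means replica2 never interprets them with ZFS code, which is the point of using a different filesystem there.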
> ... what prevents them from deleting or encrypting the backups as well?
With ZFS snapshots the older snapshots would still be present on the target server, in their unencrypted form.
> That means no ssh keys on the source machine ...
Typically for non-user logins (e.g. script access and similar) you take the extra step of configuring the receiving sshd to only allow a specific command for a given key.
It's a configurable ssh thing, where you add extra info to the .ssh/authorized_keys file on the destination server. With that approach, it doesn't allow general user logins while still allowing the source machine to send the data.
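For example, a line like this in `~/.ssh/authorized_keys` on the destination server (the key and the dataset are placeholders) forces a single command for that key and, via `restrict`, disables port forwarding, PTY allocation, and the rest:

```
command="zfs receive -F tank/backup/data",restrict ssh-ed25519 AAAAC3...example backup-key
```

Whatever command the source machine asks for, sshd runs the forced one instead, so the key is useless for anything but shipping the stream.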
> With ZFS snapshots the older snapshots would still be present on the target server, in their unencrypted form.
Only if you apply the best practices mentioned above in the second point of your post.
Anyway, I'd rather protect my backups as much as I can and not allow the source machine any direct access to an account on the backup server, because of possible security issues. Security is hard, and you never know when some bug might widen the attack surface. I like my backup servers to have no open ports at all. The caveat is that this only works if your primary backup server is locally and physically accessible. If you are travelling, you might want access without being physically present; one option is to keep ssh disabled while you are at home and only enable the service when you know you'll be away long enough that being unable to restore data would be a problem.
That doesn't make any sense. Native snapshots are fantastic, but they are merely an effective way to do backups, one where you trade some complexity away from the backup software and into the filesystem.
(only talking backups, snapshots of course have utility beyond that usecase as well)
> Your device eligibility for alternative app marketplaces is determined using on-device processing with only an indicator of eligibility sent to Apple. To preserve your privacy, Apple does not collect your device's location.
> If you leave the European Union for short-term travel, you'll continue to have access to alternative app marketplaces for a grace period. If you're gone for too long, you'll lose access to some features, including installing new alternative app marketplaces. Apps you installed from alternative app marketplaces will continue to function, but they can't be updated by the marketplace you downloaded it from.
> If a developer sells physical goods, serves ads in their app, or just shares an app for free, they don’t pay Apple anything.
If so, don't ask me to pay $99 every year to install my own app on a device I bought.
Apps signed with a free developer account are severely limited: they have a short validity period, and Apple enforces a limit on the number of apps that can be installed this way on a device.
To avoid that mess, design with the fail-fast principle in mind, which keeps the failure close to the spot where the error occurred.
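In shell scripts, for instance, `set -e` is the classic fail-fast switch: the script stops at the failing command instead of marching on with bad state (a toy illustration, not tied to any particular codebase):

```shell
# Without fail-fast: the error is skipped past and execution continues.
sh -c 'false; echo "kept going"'          # prints "kept going"

# With fail-fast: execution stops right where the failure happened.
sh -c 'set -e; false; echo "kept going"'  # prints nothing, exits with status 1
```

The second invocation fails at `false` itself, so the broken step is the one you see in the exit status rather than some later symptom.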