As a Canadian, I’ve been thinking since last year about migrating to non-US services and applications.
My main goal is simply to avoid giving money or data directly to US corporations. I have no illusions: these non-US services probably still benefit US companies in some ways.
Canadian alternatives are rare, and I’ve consciously decided to stay away from some of them. Most Canadian tech companies have their main customers in the US, and I feel they would happily move there if needed.
I started with this:
Gmail / Drive → Proton Mail / Drive
NameCheap / GoDaddy → Infomaniak
Google Maps → TomTom
Google Chrome → Vivaldi
Google Search → Startpage (Vivaldi default)
GitHub → Codeberg & Codefloe (for private)
I do like Proton Mail. The main thing I hate is how often the app and web versions get out of sync for read and archive states.
I’m really happy with Infomaniak, migrating all my domains was a breeze.
Vivaldi is based on Chromium, the same codebase as Chrome, but I really love all the extra customization options. It was a very easy switch.
Startpage took me some time to get used to. It’s not as good as Google, but whatever.
TomTom isn’t great, but it’s not like Maps has been great over the last few years either.
Forgejo (the software that powers Codeberg) is much better than what GitHub has become.
Next, I’m thinking of moving away from Google Photos. I’m considering pCloud for that.
> A good manager is more like a transparent umbrella. They protect the team from unnecessary stress and pressure, but don’t hide reality from them.
I'm absolutely going to steal this metaphor going forward.
Being a "transparent umbrella" does require knowing the personalities of your reports; some people do get distracted when they think higher-up decisions or unhappiness are going to affect their team. Most people, however, really appreciate the transparency. It helps them feel more in control when they know what is happening around them, and when things do change they can tie it back to something that was said previously.
No, not the AI. Just the owners of the means of production, such as AI.
The fact that capital owners successfully avoid contributing to the financing of our states and social systems is, in my view, one of the fundamental problems of our time.
Evidence suggests that about 30% of people will accept being worse off in order to inflict a greater loss on someone else. They form a plurality, with the other groups being win-win types (~20%), loss-averse pessimists (~20%), selfless volunteers (~15%), and inconsistent folks who may be confused (~15%).
Now this is just empirical observation rather than proof, but it's a good-quality observation, enough that it has heuristic value. If you admit the possibility that about a third of people are mean, then an awful lot of ongoing political phenomena become much easier to understand.
Maybe I'm being too simplistic, but I think we're mixing two distinct debates.
Today we have an extraordinary invention—comparable to the wheel in its time. That invention is: predictive inference over all human knowledge. Period. I don't like calling it "Artificial Intelligence" because it's not intelligence; it's a prediction system that can project responses by illuminating patterns across all human knowledge encapsulated in text, audio, and video. What companies like OpenAI call "reasoning" models is simply that predictive process, but in a loop packaged as a product—one of the first marvelous uses of this fascinating invention: predictive inference over all human knowledge.
When the wheel was invented, no one could have imagined that, combined with hundreds of subsequent technologies, it would enable an electric car powered by solar energy. The wheel wasn't autonomous transportation—it was a fundamental component.
I see two debates getting mixed up here:
- The debate about the current invention: A tool that makes encyclopedias "speak" by connecting patterns across all human knowledge. As a tool, that's what it is—nothing more, nothing less. Tremendously useful, but a tool.
- The debate about the future dream: What this invention might enable when combined with hundreds of technologies that don't yet exist—similar to imagining an electric car when you only have the wheel.
It seems many experts are taking positions and getting "upset" because they're mixing these two debates. Some evaluate the wheel as if it should already be a solar electric car. Others defend the wheel by saying it already IS a solar electric car. Both are right in their observations, but they're talking about different things.
LLMs are a fundamental breakthrough—the "wheel" of the information age. But discussing whether they "understand" or have "world models" is like asking whether the wheel "comprehends transportation."
On the danger of confusing capabilities: Conflating the tool with the end goal leads us to poor decisions—from over-investment to under-utilization. When we expect AGI from what is fundamentally a pattern-matching engine, we set ourselves up for disappointment and misallocation of resources. No magic, just reality.
The temporal factor: The AGI debate is a debate about the future—about what might emerge from combinations of technologies we haven't yet invented.
> Sadly, the whole culture around SV is based on libertarianism, so regulation isn't even considered.
Thiel actively supported one of the least libertarian candidates in US history. Whatever reputation he has for having libertarian views is nonsense.
No libertarian would try to control others based on his/her religious beliefs, and no libertarian would be remotely comfortable with any of the heavy handed stuff in Trump's platform.
In my view, what happened to Thiel and Musk is that they succeeded in business and everyone started respecting them and treating them like deities. They want to believe it is justified, rather than simply people trying to manipulate them, which leads to a reinvention of self in which they perceive themselves to be a bit superhuman, or important to the world. They act, they explore new areas, they act more. They usually do not experience as much reward from additional success in business; they are typically poorly socialized and fail to create a solid support network of people who know them and care about them. They realize money doesn't really help, fine food doesn't help, expensive possessions don't help. Even positions where they occupy a top hierarchical role end up feeling lacking.
What's left is the allure of tradition, religion, blood, war, progeny, and the trajectory of civilizations. They admire the brutality and decisiveness of medieval kings and the idea of theirs being destiny rather than luck. They then try to figure out how to believe they are deserving and suitable for the unique kind of destiny they realize can be theirs.
Most of us do not have to worry about hearing the voices they hear calling them to this destiny. One can see it on Elon's face. He's quick to sweat, quick to contemplate how his every decision will be more significant to the world than the entire lives of thousands.
Day after day of waiters, concierges, personal assistants, aides, advisors, trainers, masseuses, chefs, SVPs, etc., all at their absolute service. They must ask themselves, again and again, "what do I want? What do I really want?" Ultimately they realize that all they really want is to shape the world, as so many kings or prime ministers or philosophers have. But theirs is a different skill-set. In spite of their desire they are not philosophers, not kings, not literati, not demagogues.
So they struggle to become that which they are not so they can do more than order a delicious lunch and pay for everyone else's and listen to everyone's flattery.
They want to shape the world with who they are, but part of them realizes it was luck and they are not as unique as they hoped. So they find ways to feel special, like cultural supremacy, authoritarianism, buying favor with politicians or religious leaders, etc.
I replaced all the thermostats in both of my homes with Sinopé products. They're smart, integrate with locally hosted home automation, and are compatible with Zigbee networks. I purchased my first batch in late 2021 and haven't had any issues. Physical temperature controls still work if the LAN goes offline. Highly recommend.
Here's the hardware installed for on-prem home automation using the open-source Home Assistant software:
* Raspberry Pi[1] CPU, heatsink, A/C adapter, and case
* ConBee II Zigbee USB gateway[2]
* USB ADATA Micro SD card reader and USB cable
* Micro SD card (for operating system and Home Assistant)
I guess we're just going to be in the age of this conversation topic until everyone gets tired of talking about it.
Every one of these discussions boils down to the following:
- LLMs are not good at writing code on their own unless it's extremely simple or boilerplate
- LLMs can be good at helping you debug existing code
- LLMs can be good at brainstorming solutions to new problems
- The code written by LLMs always needs to be heavily reviewed for correctness, style, and design, and then typically edited down, often to half its original size or less
- LLMs' utility is high enough that they are now going to be a standard tool in the toolbox of every software engineer, but they are definitely not replacing anyone at current capability.
- New software engineers are going to suffer the most because they are least able to edit the responses, but this was also true when they wrote their own code with Stack Overflow.
- At senior level, sometimes using LLMs is going to save you a ton of time and sometimes it's going to waste your time. Net-net, it's probably positive, but there are definitely some horrible days where you spend too long going back and forth, when you should have just tried to solve the problem yourself.
The correct move is Russian-style hybrid warfare against the US. Botnets manipulating social media with pro-European viewpoints. Paying opposition politicians. Inciting unrest to paralyze the regime. Etc etc.
I have multiple system prompts that I use before getting to the actual specification.
1. I use the Socratic Coder[1] system prompt to have a back and forth conversation about the idea, which helps me hone the idea and improve it. This conversation forces me to think about several aspects of the idea and how to implement it.
2. I use the Brainstorm Specification[2] user prompt to turn that conversation into a specification.
3. I use the Brainstorm Critique[3] user prompt to critique that specification and find flaws in it which I might have missed.
4. I use a modified version of the Brainstorm Specification user prompt to refine the specification based on the critique and have a final version of the document, which I can either use on my own or feed to something like Claude Code for context.
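The four steps above can be sketched as a simple pipeline. This is a minimal illustration, not the commenter's actual tooling: `run_llm` is a placeholder for whatever chat-completion client you use, and the prompt constants stand in for the linked Socratic Coder / Brainstorm prompts.

```python
# Placeholder prompts standing in for the linked system/user prompts.
SOCRATIC_CODER = "Ask me probing questions about my idea, one at a time."
BRAINSTORM_SPEC = "Turn the conversation below into a detailed specification."
BRAINSTORM_CRITIQUE = "Critique the specification below and list its flaws."

def run_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real chat-completion call; swap in your client here."""
    return f"[{system_prompt[:20]}...] response to: {user_message[:40]}"

def idea_to_spec(idea: str) -> str:
    conversation = run_llm(SOCRATIC_CODER, idea)       # 1. hone the idea
    spec = run_llm(BRAINSTORM_SPEC, conversation)      # 2. draft a specification
    critique = run_llm(BRAINSTORM_CRITIQUE, spec)      # 3. find flaws in it
    # 4. refine the specification using the critique
    return run_llm(BRAINSTORM_SPEC, spec + "\n\nCritique:\n" + critique)

print(idea_to_spec("a CLI tool that dedupes my photo library"))
```

The point of structuring it this way is that each step's output becomes the next step's input, so the final document has already survived one round of adversarial review before any code is written.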
Doing those things improved the quality of the code and work spit out by the LLMs I use by a significant amount, but more importantly, it helped me write much better code on my own, because I now have something to guide me, whereas before I used to go in blind.
As a bonus, it also helped me decide whether an idea was worth it or not; there are times when I'm talking with the LLM and it asks me questions I don't feel like answering, which tells me I'm probably not as into that idea as I initially thought; it was just my ADHD hyperfocusing on something.
I've been running Frigate for more than two years now and it beats the hell out of any system I've tried in terms of detection speed and reliability. For context, I've tried Ring, Tapo cameras, and Eufy Security. Today I've moved away from all of them except the Tapo cameras, which now serve RTSP streams into my Frigate instance. I've also blocked them from accessing the internet, which gives the setup privacy by default.
Eufy Security started showing advertisements for their new products whenever I tapped on a motion-detection notification. They prioritize their ads over your own security, which is ridiculous. On top of that, some of the clips stored in their cloud would never open, despite the fact that I paid membership fees every month. They were also caught storing passwords and other security credentials in plain text. They were the primary motivation for me to move away from proprietary platforms and look for something self-hosted.
I got Frigate running on old hardware with hardware acceleration via an RX 550 GPU, and detection is always under one second. I wrote a small app that uses the Frigate API to grab screenshots and send me notifications via Telegram and Pushover. It's been practically maintenance-free for two years; I've only had to restart the service twice in all that time. I also tunnel from my VPS into the locally hosted Frigate instance on my home server, and it's been flawless. Thanks to this amazing project.
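A notifier along those lines can be sketched against Frigate's HTTP API (`GET /api/events`, `GET /api/events/<id>/snapshot.jpg`) and Telegram's `sendPhoto` bot method. This is an assumption-laden sketch, not the commenter's app: the Frigate base URL, bot token, and chat id are placeholders for your own setup.

```python
import json
import urllib.parse
import urllib.request

# Placeholder: point this at your own Frigate instance.
FRIGATE = "http://127.0.0.1:5000"

def snapshot_url(base: str, event_id: str) -> str:
    """Build the snapshot URL for a Frigate detection event."""
    return f"{base}/api/events/{event_id}/snapshot.jpg"

def latest_event_id(base: str) -> str:
    """Fetch the id of the most recent detection event from Frigate."""
    with urllib.request.urlopen(f"{base}/api/events?limit=1") as resp:
        return json.load(resp)[0]["id"]

def notify_telegram(token: str, chat_id: str, photo_url: str) -> None:
    """Ask Telegram's sendPhoto endpoint to fetch and deliver the snapshot."""
    params = urllib.parse.urlencode({"chat_id": chat_id, "photo": photo_url})
    urllib.request.urlopen(f"https://api.telegram.org/bot{token}/sendPhoto?{params}")

# Usage (requires a reachable Frigate instance and a Telegram bot):
# notify_telegram("<token>", "<chat-id>",
#                 snapshot_url(FRIGATE, latest_event_id(FRIGATE)))
```

Because the camera VLAN is blocked from the internet, only this small script (or a tunnel from the VPS) ever touches the outside world; the cameras themselves stay dark.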
There is no "overspending." There is only undertaxing. The debt is literally just the accumulated difference between spending and taxation.
If extreme wealth was taxed, the debt would be zero. The point isn't even to "pay for spending" but to enforce a functional social contract, and to limit the political and democratic distortions created by extreme inequality.
"Markets" should not have a veto on policy in a democracy.
Making it non-zero is a policy choice. Inflating away this debt is also a policy choice. There is nothing accidental about this.
What will not be inflated away is personal debt. That will remain linked to inflation even after the currency is revalued, and will be captured from personal assets wherever possible.
Happy long-term user, great project. Here is a list of open-source apps I use to replace Google stuff:
Aurora Store - Anonymized frontend for the Play Store
F-Droid - Open Source App Store
Obtainium - App Store for other sources (e.g. github)
Organic Maps - Open Source navigation (not as good as proprietary ones though)
SherpaTTS - Text to speech for Organic Maps
PDF Doc Scanner - Little Trickster, Open Source document scanner
Binary Eye - Barcode reader
K9 Mail / FairMail - Mail client
LocalSend - Cross Platform File Transfer
Syncthing Fork - Catfriend1 Syncthing fork to sync files
VLC Media Player - media player
KOReader - ebook reader
Voice - Paul Woitaschek, local audiobook player
AudioBookShelf - Remote audiobook player
Immich - image backup
Fossify File Manager - file manager
Substreamer / DSub - Audio streamer for navidrome self hosted server
OpenCamera - Open Source camera app
I wish I had this list from the start... Hope it helps someone :-)
Here's a theory of what's happening, both with you here in this comment section and with the rationalists in general.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically paperclip-maximizing-robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world in a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that is the foundation of all morality, and is utterly essential to preserve, because it makes sure that whatever else you are doing doesn't go awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with value that seems to be being unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to get the value of the rationalist movement but to synthesize it with the recognition of the red flags that it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being motivated to highly-scaling actions e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring about it in the first place.)
There are no moral hazards when it comes to social welfare programs. People really think there are, but every time we look we find practically no freeloaders. This idea that we have to threaten people with literal starvation to get them to be productive members of society is ironically deeply impoverished.
And if we really think that's true, why do we let people accrue wealth at all? Why do we then think that the most productive people in our society are also the richest? Shouldn't it be the opposite? I struggle to see the pillars of this moral structure in any other way than "poor people are a different breed and need stricter rules to keep them in line". Which again is super wrong! TFA cites research that shows that these kids' parents work, but their wages/bills are too low/high. Does anyone want to guess how bad those parents' jobs are? Do we need to detail the struggles working people go through (lack of health care, wildly inconsistent hours, sexual harassment and assault, etc)? The nicest thing you can say about this kind of thinking is that it's out of date.
And what is "freeloading" anyway? Kids of all backgrounds and parenting situations get to eat? Bring on the freeloading then. Who do I make the check out to?
"Entrepreneurship is like one of those carnival games where you throw darts or something.
Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.
Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about "meritocracy" and the salutary effects of hard work.
Poor kids aren't visiting the carnival. They're the ones working it."
Carville (DNC strategist) is advocating a "play dead" strategy. Let Trump implode so that he owns the inevitable failure. His base will desperately want to blame the left for not letting the policies work as intended. The less the Democrats do, the harder that is. I think a lot of Democrat politicians are going this way, and it's why Schumer rolled over on the budget.
Part of the logic here is that Trump is indeed different from other authoritarians. He's even less competent. He's blowing all his political capital on imploding the economy. He also can't understand the legal battles, so when Stephen Miller tells him they won the Supreme Court case 9-0, he believes him. This seems to have been a big wake-up call to Gorsuch, Coney Barrett, and Kavanaugh. The administration has shown its hand much too quickly, before it fully consolidated its power.
What the Democrats should be doing already is campaigning more. Run ads that are literally just Trump quotes. Show people Trump calling January the "Trump economy" before inauguration, then calling April the "Biden economy" now that he's crashed it. If Trump polls low enough, more senators will jump ship, and impeachment could be possible.
I've rarely had a comment spike my anger so high, so quickly.
Many of these loans are handed out predatorily to people barely past childhood, 6-8 years before the pre-frontal cortex that handles higher-level thinking fully develops, with no life experience, little work experience, and a wide range in quality of parenting and childhood, in a society that has said "this is how you achieve the American dream of doing better than your parents."
The loans themselves shield the lender from consequences and put risk squarely on the loan taker.
The seeming goal of these loans is the same as that of much of the structure of American society: to ensure desperate workers who have no freedom and no ability to say no to businesses.
When society offered these loans to people who would never have qualified on fundamentals (already a failure of society to invest in the future), there was an implicit promise that they would lead to a better life, and they didn't.
You can't blame society's young for believing what their elders, and society at large, tell them.
Universities failed their students, society failed students and universities, our own culture decayed in terms of earning vs receiving credentials, and the people you want to pay the cost of that are the people that the system victimized the most?
Worth reading in its entirety. The following four paragraphs, about post-WWII funding of science in Britain versus the US, are spot-on, in my view:
> Britain’s focused, centralized model using government research labs was created in a struggle for short-term survival. They achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.
> The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.
> A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, the U.S. gave universities money for the researchers’ facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S. causing other countries to complain of a “brain drain.”
> Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.
The author's most important point is at the very end of the OP:
> In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over.
The Rubicon has already been crossed. If you had asked the framers of the US constitution what, beyond all other factors (unelected powers, etc.), was the one defining trait of the government structure they wished to avoid, they'd have replied: arbitrary imprisonment and the suspension of due process.
Please don't take my word for it; hear it from the prosecutor's prosecutor: Robert H. Jackson, the SCOTUS justice, former Attorney General, and former Solicitor General who led the American prosecution of the Nazis at Nuremberg:
> No society is free where government makes one person's liberty depend upon the arbitrary will of another. Dictatorships have done this since time immemorial. They do now. Russian laws of 1934 authorized the People's Commissariat to imprison, banish and exile Russian citizens as well as "foreign subjects who are socially dangerous." Hitler's secret police were given like powers. German courts were forbidden to make any inquiry whatever as to the information on which the police acted. Our Bill of Rights was written to prevent such oppressive practices. Under it this Nation has fostered and protected individual freedom.
> The Founders abhorred arbitrary one-man imprisonments. Their belief was--our constitutional principles are--that no person of any faith, rich or poor, high or low, native or foreigner, white or colored, can have his life, liberty or property taken "without due process of law." This means to me that neither the federal police nor federal prosecutors nor any other governmental official, whatever his title, can put or keep people in prison without accountability to courts of justice. It means that individual liberty is too highly prized in this country to allow executive officials to imprison and hold people on the basis of information kept secret from courts. It means that Mezei should not be deprived of his liberty indefinitely except as the result of a fair open court hearing in which evidence is appraised by the court, not by the prosecutor.
There is a reason why citizenship was not a requirement for receiving due process under the law. Citizenships are bestowed by the government. They can be taken away by the government. The framers held certain rights to be unalienable from human beings - something that no government can take away, and that was the right to not be unjustly detained for your beliefs, your behavior, your dress, your religion or composure.
Suspending due process for anyone is fundamentally un-American. But we have crossed that threshold. What comes next is fairly inevitable - if the process isn't stopped now.
Americans are cosplaying (voting their belief system, not what they'll do, the "revealed preference"), as they do as farmers [1] [2] [3] [4], as they do as "rural Americans" [5]. It is an identity crisis for tens of millions of people [6]. Their crisis is our shared political turmoil. Happiness is reality minus expectations.
From the piece: "The people most excited about this new tariff policy tend to be those who’ve never actually made anything, because if you have, you’d know how hard the work is."
I don't see anyone mentioning that the United States needs to manage its massive national debt, currently in the trillions, by issuing Treasury securities. These securities mature at varying intervals and require continuous "rolling" or refinancing to pay off old debt with new borrowing.
Significant rollovers are expected from April through September 2025, with additional short-term maturities due by June.
Higher interest rates significantly complicate the US's ability to refinance. The cost of servicing this debt — paying interest rather than reducing principal — is already a major budget item, surpassing Medicare and approaching Defense and Social Security levels.
If rates don't come down soon, higher costs get locked in for years. The country is at risk of a debt spiral.
How can rates come down? The present uncertainty around tariffs and a potential crisis could create conditions that pressure interest rates downward before those Treasury securities mature, by influencing Federal Reserve policy.
Treasuries are considered safe during such crises. Increased demand for Treasuries pushes their prices up and yields down, effectively lowering interest rates.
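A back-of-the-envelope calculation shows why the refinancing rate matters so much. The figures here are purely illustrative (a hypothetical $7T refinanced in a year, not actual Treasury data), but the arithmetic is the point:

```python
# Illustrative only: hypothetical amount of debt refinanced in one year.
rolled_debt = 7_000_000_000_000

def annual_interest(principal: float, rate: float) -> float:
    """Simple annual interest cost on refinanced principal."""
    return principal * rate

high = annual_interest(rolled_debt, 0.045)  # rolled over at 4.5%
low = annual_interest(rolled_debt, 0.030)   # rolled over at 3.0%
print(f"extra interest per year at 4.5% vs 3.0%: ${high - low:,.0f}")
# Each percentage point on $7T of rolled debt is $70B per year in interest,
# so a 1.5-point difference costs roughly $105B annually -- paid every year
# until those securities mature again.
```

This is why a crisis that drives demand into Treasuries (lowering yields) just before a heavy rollover window would materially change the budget picture.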
This is a direct quote from a major institution behind the current administration that is well known to be a major force for choosing judicial candidates for the GOP. I understand it feels like hyperbole, but that's not because it's hyperbolic, it's because it's hard to accept. Accepting it would cause a lot of grief and grief is disabling. Accepting it creates a sense of responsibility or a sense of helplessness, neither of which feel good. Accepting it means that we aren't just in a bad situation, we are in a literal emergency. It means action is necessary now.
These are primary sources... This isn't hyperbole.
> We can't assume the worst-case outcome despite the serious danger of that outcome should it occur.
We don't have to assume the scenario to ask if it is possible. What prevents that scenario? That is the real problem. Without rule of law, there is nothing that stops it. The problem isn't that one particular story will come true, it's that there is nothing preventing any of the atrocities we have seen authoritarian regimes commit from happening because there is no red line, and no un-corrupted enforcement authority, and there is no substantive resistance. Nobody is even treating the end of rule of law as the emergency it is because it's not observable until the lack of law is abused, for example to send people to El Salvador without due process. Without due process, it could be you going to El Salvador. There have been no consequences for this. This is an emergency. We can't know for sure, but if the atrocities do start, this will have been the proof of concept.
> It is extremely hard to change the constitution.
I don't know how it was done in Turkey or Hungary, but I strongly disagree that it is hard to change it. It has been ruled that the constitution is not a document to be interpreted by you and me but by the Supreme Court authoritatively, and, for the executive branch, only by the president's DOJ authoritatively (even against the Supreme Court).
Law is just paper unless enforced. China declared its agreement with the UK "a historical document that no longer had any practical significance." That type of thing can just be declared when you have the power to do so. Who will stop you? The enforcers on your payroll?
If the supreme court rules the text on paper means something different, how is that meaningfully different than changing the words on the paper except in how tortured the justifications are?
> It is extremely hard to remove lifetime appointment judges from the opposition administration.
I really think you believe you are standing on rock when you are standing on sand. That intellectual sand is the assumption of using rule of law to justify why you have rule of law, which is the very thing in question. Without rule of law, these statements which might once have been true become hurdles, not limits. Maybe you can't dismiss a judge, but you can control which cases go to which judges and constructively dismiss them. Your budget of bad behavior is limited only by the consequences you experience...
You have to have an answer for who enforces the law as it was understood and with the intent it was written.
> Executive orders are easily reversed and laws must be repealed to undo them.
Are you seriously asserting that these tariffs and their consequences will easily be undone? I'm not trying to attack, I am just shocked that you would say this. Will trees in national parks get un-chopped down? Will trails rebuild themselves? Will government workers, like those in the USDS, come back? Will oil go back in the ground? Will rivers de-pollute themselves? Will oil execs get less rich? Will government functions be un-privatized? Will mom get her retirement money back? Will those dying people on Medicaid get their lives, or their families' money, back? Will children receiving survivors benefits get back the childhood years spent more stressed than they needed to be?
I get what you're saying in the literal sense, but I also think it is very wrong to talk about executive orders with this little weight.
> Just because the current administration is acting like it completed a successful coup doesn't mean that it actually has in reality. Don't take what they say to be truth, because they constantly lie. They would love for the opposition party to think that they have taken over and that there is no hope. All they've done is won one single election by very slim margins and tossed out a bunch of executive orders.
They are literally going through positions of authority and replacing them with loyalists. That is a coup. Here is a historian telling you that not only is it a coup, but if this happened in a foreign country you would recognize it as one: https://archive.is/fNpSS -- https://snyder.substack.com/p/of-course-its-a-coup
> The current administration loves tariffs simply because it's the most impactful lever the president can use without congressional approval besides commanding troops. The current administration literally cannot do much else in terms of lasting policy and they are too lazy and incompetent to approach congress with any actual ideas (something that the Obama and Biden administrations excelled at, e.g. the ACA and Bipartisan Infrastructure bill).
This is ignoring what is happening in the bureaucracy. They tried to simply not give states money, which, in defense of your previous points, is currently a check on power, but it doesn't have to remain so.
> The US constitutional system has survived authoritarian-minded presidents before. And of course, that's not to say the US system is perfect or even an especially fair and representative system. But it has very strong protections against permanent dictatorship or an outright coup.
Denial happens when the consequences of understanding reality cause negative emotions, so rather than accepting those negative emotions, you reject reality until reality asserts itself into your life.
Timothy Snyder would call what you said the politics of inevitability: "Politics of Inevitability, Politics of Eternity" (12m), Timothy Snyder: https://www.youtube.com/watch?v=Eghl19elKk8
It is very worth a watch, and it was recorded in 2018.
Complacency is a problem. If nobody thinks they have to do something, then nothing gets done, and what we once thought was inevitable is proven not to be. Someone should have acted, but by the time we realize that someone is us, enough power may have been consolidated to make the cost too high.
I've noticed that my son spends way too much time on YouTube or playing Minecraft and one of the few offline activities he enjoys doing on his own is coloring. And since he comes to me every time he wants a new coloring book and we spend about 10 minutes together searching for each picture, I made a website with a collection of coloring books for him. The site is very simple, but to be honest, I haven't had so much fun with the process of creation for a long time.
> a defining characteristic of the Intelligence Age will be massive prosperity
That's the sales pitch, that this will benefit all.
I'm very pro-AI, but here's the only prediction for the future I would ever make: AI will accelerate, not minimize, inequality and thus injustice, because it removes the organizational limits previously imposed by bureaucracy/coordination costs of humans.
It's not AI's fault. It's not because people are evil or weak or mean, but because the system already does so, and the system has only been constrained by inability to scale people in organizations, which is now relieved by AI.
Virtually all the advances in technology and civilization have been aimed at people capturing resources, people, and value, and recent advances have only accelerated that trend. Broader distributions of value are incidental.
Yes, the U.S. had a middle class after the war, and yes, China has lifted rural people out of technical poverty. But those are the exceptions against a background of consolidation of wealth and power worldwide, not through ideology or avarice, but through law and technology extending the reach of agency by amplifying transaction-cost differences in market power, information asymmetry, and risk burdens. The only thing that stops this is disasters like war and environmental collapse, and it's only slowed by the recalcitrance of people.
E.g., now we are at a point where people's economic and online activity is pervasively tracked, but it's impossible to determine who owns the vast majority of assets. That creates massive scale for acquiring customers but impedes legal responsibility. Nothing in economic/market theory says that's how it should be; but transaction cost economics does make clear that the asymmetry can and will be exploited, so organizations will capture governance to do so.
It's not AI's job nor even AI's focus to correct injustice, and you can't blame AI for the damage it does. But like nuclear weapons, cluster munitions, party politics, (even software waivers of liability) etc., it creates moral hazards far beyond the ability of culture to accommodate.
(Don't get me started on how blockchain's promise of smart contracts scaling to address transaction risks has devolved into proliferating fraud schemes.)
Do LLMs parse language to understand it, or is it entirely pattern matching from training data?
i.e., do the programmers teach it English, or is it 100% from training?
Because if they don't teach it English it would need to find some kind of similar pattern in existing text, and then know how to use it to modify responses, and I don't understand how it's able to do that.
For example: "Always focus on the key points in my questions to determine my intent." How is it supposed to pattern match from that sentence (i.e. finding it in training data) to the key points in the question?
First, I'm not comparing myself to that guy, but you could say I have similar "odds" with starting companies and having them succeed, and I'm a solo developer.
I've built close to 100 projects and companies that have generated over $1B in revenue combined, with only one other person (the non-tech owner). I don't have a team; I build all of these alone.
A few notes:
I have programmed for over 30,000 hours, 3x the commonly cited 10,000 hours to mastery.
I look at things in a way that I haven't really ever heard anyone else explain. I'm not sure if it's unique, but it IS the reason. Everything in my mind is a complex web of cause and effect, down to the most nuanced level. In my mind it even has a visual aspect. You have causes (knobs and dials you can turn) to produce effects.
Part of meditation is that you can learn an idea more deeply (insight). This same idea sort of applies to what I said above. People miss the magnitude of this cause and effect statement. I’ve told many people and they’re like sure cool. In my mind this statement is like standing next to the tallest mountain. The magnitude and depth is profound. It’s of this magnitude because it means you are in direct control of your own outcomes. Anything you want, is a solvable puzzle. Literally. And the deeper level of insight you feel about this idea the more you are capable of.
Now for the actual process of how to navigate this cause and effect. My mind operates on a value formula. Every single decision, word, line of code, and micro-decision is basically a tradeoff decision, not in terms of code performance but in terms of this cause-and-effect web of EVERY action in physical reality. I have an excellent ability to "project" causes outward, find the "end result", and essentially find the fastest path from A to B to get there. And this value formula always optimizes RESULTS over the other things many great programmers optimize for, like knowledge. I just have a different style. So basically I'm always analyzing every single tradeoff as if I see "threads" of reality extending from every decision and whichever path I go down. As a simple example, I might learn linear algebra and Bayesian statistics extremely intensely for 7 days to learn or build an algorithm, but then I hit a diminishing return, where I switch to something else knowing I can hire someone later to teach me and fill in gaps.
This is an extremely simple contrived example. In real life, instead of there being 2 variables there would be like 50.
I've tried all the popular yerba mate brands, smoked, flavored, Uruguayan, Argentinian, but I still prefer organic unsmoked yerba mate with stems. I brew 1/2 cup of mate with 2 cups of 150°F water and a splash of lemon juice for 30 minutes, then pour the whole thing through a Chemex coffee filter. It takes a few minutes to filter, but the result is a delicious, very caffeinated, slightly lemony tea.