I love tig. I have an ad-hoc fork of it which adds a new window displaying a preview of the selected file in the diff prelude section of the diff window, so I don't have to navigate through the single window. It makes browsing the files in a diff much easier. (https://github.com/LucasPayne/tig) I swear I will clean up the fork soon; I just implemented it in whatever way worked.
I do something similar to what you are doing with Kakoune daemon mode, but I work in the shell within a vim context. I have bound "e" in diff/stage views of tig to send a message to the current vim to open a new tab for that file at that line.
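For anyone curious, a minimal sketch of such a binding, assuming vim is compiled with +clientserver and running as a server named VIM (the exact bindings here are my guess at the setup, not copied from a real config; %(file) and %(lineno) are tig's built-in state variables):

```
# ~/.tigrc -- hypothetical sketch: open the selected file/line in a running vim
bind diff  e !sh -c 'vim --servername VIM --remote-tab-silent "+%(lineno)" "%(file)"'
bind stage e !sh -c 'vim --servername VIM --remote-tab-silent "+%(lineno)" "%(file)"'
```

The `!` prefix tells tig to run an external command, and `--remote-tab-silent` asks the existing vim server to open the file in a new tab rather than starting a fresh instance.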
I have not tried magit yet but am going to now to at least extract some great ideas from it and somehow get it in my current workflow.
I would not think of it as "techno-"pessimism, just pessimism (we are past the point of making a sci-fi distinction between reality and computer-using reality, I think, especially since people walk around using a computer). I personally am not too focused on AGI or existential threats, or on the loss of jobs, the political changes, the opportunity for exploitation, scams, AI grifting, or the misdirection of funding. It would be easy to label that as pessimism, since it is all a little removed from daily life: you can go about your day and really forget about it unless you are ethically, politically, or personally concerned.
One thing is obviously and immediately concerning though. It is unbelievable how fast ChatGPT has been normalized. Within a year, people have come to expect something like ChatGPT to always be provided for them, for the rest of their lives. Of course companies are rushing in to define what that something is, and to provide an AI service which people will use/misuse like the first page of Google results. I think this new "first page of Google" component of people's thoughts will almost certainly be abused. Do you think the first page of Google is good and formed by good incentives and empowerment of the user, considering how much of a cultural force it is (Googling something being just a normal part of your thought process)?
(I do have limited life experience. Anecdotally, I know a high school teacher who has been teaching for almost 40 years. Over time, students went from hiding their phones in the back of the class to scrolling through them freely in class, vaguely attempting increasingly simplified homework; now they have jumped on the new cool AI website to complete even basic tasks. This is extremely normal; why would a kid not use it?)
> Do you think the first page of Google is good and formed by good incentives and empowerment of the user, considering how much of a cultural force it is (Googling something being just a normal part of your thought process)?
Yes. Google is an immensely useful everyday tool available to everyone for free. I think the negative externalities it causes are laughably trivial compared to the utility provided.
Ah yup I agree, I think I am wrong to make the comparison between Google and ChatGPT here. I am not nearly as concerned about the problems with Google as with potential problems with ChatGPT (or whatever the most popular equivalent is in the future). I do think an AI service will be as important as Google in the future. I think the negative externalities this time might not be trivial considering what ChatGPT promises the user (basic Google search still is in general just a way to find some webpages or immediate info, while an AI homepage is a way to create things or communicate with others...).
I am studying physics now (with a CS and math background) and I feel obligated to get up to date on AI and to develop a good working philosophy of how it can be meaningfully used in scientific work. Not that "apply machine learning to X" approaches aren't interesting, but I don't understand the field well enough to know whether they are popping up everywhere just because people feel obligated to apply new methods. For example, the Fourier transform is deep and interesting, and there are libraries and standards and ways to transform different objects over clusters, etc., but I wouldn't say good scientific research is about finding a place to apply a Fourier transform (maybe :)). I am new to this though.
I'm nearing the end of my PhD in computer engineering, but because my uni is associated with a big physics research lab, my work ended up being pretty involved with physics. I've had to go through a similar learning experience regarding how to meaningfully apply AI (and computation in general) to perform meaningful science.
I agree that simply applying a Fourier transform isn't meaningful research, but there are fields like Fourier optics where you're effectively just doing that, with the goal of modeling wave optics.
The kinds of issues I had in mind wrt poor uses of AI are things like inverse-problem solvers that train a model as a black box and neglect to involve physics-informed feedback, so the results are of questionable value; or hyperparameter estimators that need an unrealistic amount of data or produce blatantly unrealistic estimates because they're naive and the AI "researcher" has no interest in actually understanding the physical logic of the parameters.
My research hasn't been with AI, just with simulations of certain physical systems, and one thing that was constantly drilled into me by the physicists was to be really careful about how far I stretch computational techniques, because ultimately none of this digital stuff means much if it becomes detached from reality. It can even cause actual damage because sometimes expensive purchases will be made on the basis of simulations.
Thanks for the pointer to Fourier optics! I mentioned Fourier because it is something I still don't "get" even after studying and using these methods.
I am still learning, but I have so far only applied inverse methods to systems where the physics is well-defined and the model already accepted as "real". For example, CT scanner algorithms which model the photon counters and attenuation through matter (although this can get complicated...), with the output being data in a well-defined format (like the distribution of matter). I do see people using inverse solvers or AI to derive something which isn't an "image" but a model of the physics itself (does that make sense?). The extreme version of this, which I have seen once, is a paper using AI to extract algebraic forms of PDEs from videos of fluids. To what extent does the AI user have to understand/"pre-model" the physics, and to what extent does the AI generate understanding in some way? Apologies if this is misinformed, as I am still outside the research community and don't very actively read papers.
Currently I am trying to learn background to understand electromagnetic properties of materials with an interest to apply this to computer graphics. I eventually want to learn what processes are currently done for extracting material properties from scans, etc., along the lines of the classic gonioreflectometers. In this space, from the papers I look at, it looks like AI is unavoidable and I am sure an extremely useful tool but I definitely want to be careful not to get lost in misunderstanding.
>To what extent does the AI user have to understand/"pre-model" the physics, and to what extent does the AI generate understanding in some way?
The conclusion that I reached is that you need to understand at least enough to be able to handle basic scrutiny from people in the same field. So, for example, if you come up with an AI for deriving some model of the physics of a CT scanner, you should at least understand the problem you're tackling enough to make meaningful comparisons to any popular competing approaches that might exist. That understanding should automatically encompass things like understanding how your variables are bounded/how they behave.
A contrived example: an AI intended to guess the 2 prime factors of a given number. Imagine someone building such a tool without even knowing what a prime number is, where the only benchmark for correctness is whether its outputs fit the given dataset of products of prime numbers. The tool would be clearly useless: to the AI researcher it might seem fine that it outputs 36, which is close to the intended 37, but to anyone who knows about prime numbers that would be an immediate warning sign needing further elaboration and explanation. The researcher could protest that, as an AI expert, they don't know anything about prime numbers, but that doesn't really matter in research.
Similarly, due to my non-physics background, I was (and to an extent still am) given significant leeway in my ability to respond to detailed physics questions about my work, but by now I am expected to handle questions about things that are common knowledge among physicists in this field (for example, knowing what performing a Fourier transform means in our context, understanding the high-level functioning of the key components of the machines we work with, having some intuition for when a simulation result is unrealistic, etc.). If I talked about how I've simulated some of these components but drew a complete blank when asked how my work improved upon the common-knowledge means of simulating them, I'd obviously look like a complete idiot :P
It may seem like a pretty low and obvious bar, but surprisingly I have seen many cases of researchers who didn't even care about reaching that level.
TikTok is another everything platform. If I used it, I imagine I could take any stream of thought and somehow warp it into a reason to use TikTok. It is very easy to pretend that the platform doesn't matter and is just a carrier providing the opportunity to engage in new thoughts or interactions. I made a huge mistake allowing reddit to be my go-to. I did not think twice about going from "I want to study insects" to reading the top 500 posts of all time for insect hobbyists, learning almost nothing except apparent insider knowledge and insect drama, feeling burned out, and continuing from that thinly veiled meme-cynicism-comment-meme-article-comments new-tab new-tab new-tab cycle (in the context of insects) into the rest of the reddit machine. Maybe something is wrong with my brain, but I'm going to assume that TikTok is actively trying to draw every thought and intention into its algorithm, and to leave someone scrolling TikTok rather than continuing with whatever it is they intended to do.
For some reason, here I actually read the articles. On sites like reddit it seems like a lot of the content for discussion is already in the title, and that is the launching pad for discussion in the comments. Maybe just a psychological trick, and I would like to build the habit of reading articles in every case, but I think the title length restriction helps a bit.
This is not advanced, but Rodrigues' rotation formula (https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula), which acts on vectors given an axis-angle representation of a rotation. This calculation or something similar is used all the time in 3D programming, but I assumed the derivation would be a bunch of unintuitive trig, so I never bothered with the proof until I was bored. A vector v can be thought of as acting on other vectors u via the cross product (v x u). The "v x" operator can be represented by an anti-symmetric matrix (rank two, with 3 degrees of freedom, same as v) which can be thought of as a "rotation generator". In fact, the anti-symmetric matrices form the Lie algebra which generates the rotation matrices via the matrix exponential. One way to compute the exponential is by the matrix Taylor series, and by some algebraic properties of anti-symmetric matrices (K^3 = -K when the axis is a unit vector), the expanded Taylor series can easily be rearranged into Rodrigues' rotation formula.
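The claim above is easy to check numerically: the matrix exponential of theta times the skew matrix of a unit axis should equal the Rodrigues formula R = I + sin(theta) K + (1 - cos(theta)) K^2. A quick Python/NumPy sketch (function names are mine, not from any library):

```python
import numpy as np

def skew(v):
    # The "v x" operator as a 3x3 anti-symmetric matrix: skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(axis, theta):
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2, K = skew(unit axis)
    K = skew(axis / np.linalg.norm(axis))
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def expm_taylor(A, terms=30):
    # Truncated Taylor series for the matrix exponential: sum A^n / n!
    R = np.eye(3)
    term = np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        R = R + term
    return R

axis = np.array([1.0, 2.0, 2.0])
theta = 0.7
R1 = rodrigues(axis, theta)
R2 = expm_taylor(theta * skew(axis / np.linalg.norm(axis)))
assert np.allclose(R1, R2)  # the series really does collapse to the closed form
```

Since K^3 = -K for a unit axis, every odd power of K in the series is +/-K and every even power (past 0) is +/-K^2, so the series coefficients regroup into sin(theta) and (1 - cos(theta)).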
This is on the simpler side of applied math but I think it leads well to thinking about interesting things like encodings of geometric objects (vector versus rank-constrained tensor) and to geometric algebra which is becoming more popular in game development.
I feel like something bad is happening and I am unsure how much of this is me being negative. I "quit" the internet in late 2018 due to severe addiction and years-long disengagement from my hobbies (reading, programming, thinking, music, art, ...) and only recently have started reintroducing it into my life, for example, catching up with what people do on YouTube, trying to get an idea of "what people are talking about". It feels like a fever dream, and I am unsure whether this is due to me falling back into the extremely unhealthy browsing habits I had pre-2018, and I wonder whether there is a "healthy" internet that I am not seeing.
I made a twitter account recently and followed Jonathan Blow, Sebastian Lague, and 3blue1brown. I just scrolled through the default twitter page and I got karen videos, a naked woman in a car caught cheating, israeli soldiers harassing someone, a police dog mauling someone. I made an instagram account to see 3blue1brown videos on my phone and went to the default page (I assume it knows nothing about me) and I was given videos of disabled people doing sports which are clearly meant to be "funny", with extremely hateful comments apparently from children, spelling out the N-word with separate letters, etc. That was in 3 minutes of using Instagram and I gave up.

I am interested in game development, and a google search of "game development" gives me absurd "industry knowledge" youtube videos and r/gamedev. r/gamedev is just "meta-commentary" pointing out "societal problems" of game developers, you won't make money, noob assumptions, angry at the world, even mixing in discussions of depression, self-hate. It is a mess. Where are the people talking about neat collision detection tricks?

I am aware that this is the "surface level" and I will eventually need to find and curate a variety of incoming "feeds", such as group chats with good engaged people, great forums, etc., but I just don't feel like it is "natural" to find them. I feel like I need to start a project for myself to intentionally build my own "algorithm" which leads me to find enriching content. I feel though that that is actively going against what "the internet" wants me to do. It is so easy to find such hateful things, and I am worried it will bring me back to being as depressed as I was before quitting the internet.
With the larger platforms, the more you use the platform, the more specific the feed or recommendations become. What you describe indeed sounds like the surface-level feed a new or logged-out account would get. Ignore all feeds and search for specific things, such as collision detection. Then you will come across genuinely good channels which you can subscribe to, and the feed will no longer show those crappy superficial popular topics.
However, YouTube Shorts is insanely addictive; I regret every second spent in that feature. By contrast, the rest of YouTube contains so much useful educational content, and I really have learned a lot thanks to those videos. In the last two years I stopped watching regular videos and started watching Shorts, which I find truly regrettable.
Yes, I do not follow that argument comparing modern tech to past changes. It feels like people are assuming a weird technological inevitability. The printing press was amazing, it had a huge effect on society, of course, leading down the line to some kid reading trash in class. But it is just marks on paper, while digital technology forms a "place", and a direct line of effect from tech companies to the daily habits of billions of people, overnight. I think it is completely different...