I just want to say "thank you." I run Magic Lantern on my Canon 5D Mark III (5d3) and it is such awesome software.
I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.
You're a better photographer than I am. I'm glad if ML helped you.
Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember the details for the R7, but given its age and tier, it may be one of the new-gen quad-core AArch64 cams.
I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.
C is taught as the introduction to programming in CS50x, Harvard's wildly popular MOOC for teaching programming to first-year college students and lifelong learners via the internet. Using the clang toolchain gives you much better error messages than old versions of gcc used to give. And I bet AI/LLM/copilot tools are pretty good at C given how much F/OSS is written in C.
Just to provide another data point here: C is a little easier to pick up today than it was in the 1990s or 2000s, when all you had was the K&R book and a Linux shell. I regularly recommend CS50x to newcomers to programming via a guide I wrote up as a GitHub gist. I took the CS50x course myself in 2020 (just to refresh my own memory of C after years of not using it that much), and it is very high quality.
What do you mean by "locked down computer"? Maybe something like ChromiumOS?
Might be a tough sell for the volunteer open source community ("linux & friends") to work on such an alternative "locked down" computing experience. Free and open source software is usually more focused on unlocking use cases, not locking them up.
That all said, I basically consider macOS to be a locked down computing experience. So that's my solution for older people.
It's not a perfect solution but the Apple closed ecosystem is better designed for the limited use cases of the elderly. Rely on iCloud and built-in Apple approaches to data security as much as possible.
For example, an iMac and an iPhone can get all "adulting" use cases done, including typing/receiving emails, printing documents, online banking, government services, and so on. Apple Passwords plus Face ID helps to simplify password-based security. My biggest issue is getting TOTP-based two-factor adopted. Apple Passwords supports this, but I usually have to do remote tech support to get it set up initially. It's also annoying that the current generation of iMacs doesn't support Face ID, because that would simplify authentication across the two primary platforms (desktop and mobile).
I would never use this setup myself since I like to run F/OSS everywhere as much as possible. But I am realistic about tech expectations for the elderly who just want to live their life with minimal investment in learning about data/software security.
But you're right, along with other commenters, that it's dangerous for society to rely on a monopolist technocorporate overlord (or a pair of overlords forming a de facto duopoly) for the basic administrative tasks of adult living and lawful citizenship.
I sometimes describe Instapaper as "/dev/null for web content". I reflexively share to Instapaper not to read it later, but to absolve guilt for not reading it at all. It is one of my weirdest web habits, on reflection.
OTOH, back when del.icio.us was good, I used it for roughly the same purpose.
These days, I still send links to Instapaper when they are essays or articles. I send links to Raindrop.io when they are anything else, basically anything the Instapaper text extractor would fail on. Things like repos, interactive charts/graphs, photographs, videos, etc.
I still think it is behaving roughly as /dev/null. I do sometimes think that, at least nowadays, you can ask an LLM to visit your bookmarked links and do some semantic search over them. But I guess the best use case is just saving it for later/never rather than wasting time on it now.
I tried to use various LLMs to go through my Instapaper stuff. That was probably 6-12 months ago, and none of my attempts worked well.
However, I did still find one-off AI summaries to be very helpful in getting through the backlog to get me down to 0. I now stay at 0. If there is a long article I don't feel like reading, but want to know more than the headline, I will use the AI summary in my browser. That's usually good enough to absolve the guilt, without creating more guilt by adding something to the reading list.
Instapaper has a lovely "Shuffle" sort option. I usually don't feel like reading what I saved last, and randomly picking one of the first 5 articles the Shuffle option presents ensures that I read at least something!
> I do sometimes think that, at least nowadays, you can ask an LLM to visit your bookmarked links and do some semantic search over them. But I guess the best use case is just saving it for later/never rather than wasting time on it now.
I like the idea -- to extend yours -- all the bookmarks and pages visited (or pages dwelled on for more than a minute) get full-text indexed and filed into a local LLM. Then you can query it directly, and it has the context.
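To sketch the shape of the idea, here is a minimal query-over-saved-pages loop. It uses naive bag-of-words cosine similarity as a stand-in for real embeddings or an LLM, and the page texts are hypothetical; a real build would swap in an embedding model for the scoring step.

```python
# Naive lexical stand-in for semantic search over saved pages:
# score each saved page's text against a query using cosine
# similarity over word counts. Illustrative only -- a real
# build would use embeddings or a local LLM for scoring.
import math
import re
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts, lowercased.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(pages, query, k=3):
    # pages: {url: full_text}; returns the k best-matching URLs.
    qv = vectorize(query)
    ranked = sorted(pages, key=lambda u: cosine(qv, vectorize(pages[u])),
                    reverse=True)
    return ranked[:k]
```

The "dwelled on for more than a minute" filter would just gate which pages get added to the `pages` store in the first place.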
I use https://libib.com for this use case. I didn't see it mentioned here, so figured I'd share.
I'll also mention a fun coding project that I used ChatGPT on. I created a data enriched spreadsheet out of my physical books. This could then be used to bulk import into libib for a searchable and visual digital bookshelf.
First I took photos of my bookshelves such that the spines were visible. Then I had ChatGPT's vision model transcribe the visible titles and authors, and guess the books based on that. Then I turned that into a CSV. Next I had ChatGPT generate a Python script that used the Google Books API to enrich the spreadsheet with ISBNs. Finally I bulk-uploaded the CSV with ISBNs to libib, and voila, I had a digitized library.
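The enrichment step looks roughly like this (a sketch, not the actual generated script: the row field names `title`/`author`/`isbn` are assumptions, and error handling and rate limiting are omitted). It hits the public Google Books volumes endpoint and pulls an ISBN out of the first matching result:

```python
# Hypothetical sketch of the ISBN-enrichment step: look up each
# (title, author) pair via the Google Books API and record an ISBN.
import json
import urllib.parse
import urllib.request

API = "https://www.googleapis.com/books/v1/volumes"

def build_query_url(title, author):
    # Restrict the search to the title and author fields.
    q = f"intitle:{title} inauthor:{author}"
    return API + "?" + urllib.parse.urlencode({"q": q, "maxResults": 1})

def extract_isbn13(volumes_response):
    # Pull an ISBN-13 (falling back to ISBN-10) out of a parsed
    # Google Books "volumes" response dict.
    for item in volumes_response.get("items", []):
        ids = item.get("volumeInfo", {}).get("industryIdentifiers", [])
        by_type = {i["type"]: i["identifier"] for i in ids}
        isbn = by_type.get("ISBN_13") or by_type.get("ISBN_10")
        if isbn:
            return isbn
    return None

def enrich(rows):
    # rows: list of {"title": ..., "author": ...} dicts from the CSV.
    for row in rows:
        url = build_query_url(row["title"], row["author"])
        with urllib.request.urlopen(url) as resp:
            row["isbn"] = extract_isbn13(json.load(resp))
    return rows
```

From there it's just `csv.DictWriter` back out to a file for the libib bulk import.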
Thank you for your comment.
Libib is indeed a well-established player in this space. Although it offers a lot of different features, it lacks detailed statistics/analytics regarding users' reading activity, libraries, and content. It also doesn't let you create your own data fields for storing information about books, authors, or publishers.
Regarding data extraction from photos, I considered this method initially, but decided to leave it until there was specific demand for it. Apparently people would actually use it, as another user also pointed out in a comment.
Written in Jan 2024. I sensed that the world had already moved on from reading as a core source of information and awareness of the world.
But in that essay, I tried to make the case that this is a mistake -- that the environment has never been better for deep readers, that the internet and the various sources of cheap/free long-form text can be a deep reading utopia, if properly curated.
But that's the issue. Most people click into the default. They don't curate. They don't monitor their media diet. And so they are drawn, like moths to a flame, to short-form video (especially), as well as other passive information sources that resemble TV and talk radio from prior eras.
To provide a data point from a long-time ProjectionLab user, I don't really need account linking. I use Monarch Money to link cash and CC accounts to track spending.
I use a custom Google Sheet to track retirement portfolio performance. If =GOOGLEFINANCE isn't enough, there is a nice paid extension called WiseSheets, which adds a =WISE function that fills all the gaps.
My monthly ProjectionLab process is to update the "Current Finances" values on the first of the month, using the values from the other tools. Works well enough for me!
I'm glad to hear you have a workflow for updating current finances that's already serving you well. I do plan to add a lot more automation/integration options, it's just always a question of where to invest my time to deliver the most value to the community efficiently.
Something people don't always realize instantly is that with a long-term planning tool, the point is to spend the bulk of your time focused on the future, not fixating on all the latest daily stock price fluctuations... in fact, those can sometimes be a source of noise/distraction.
I don't need/want account linking, I just want $VTSAX etc recent prices. I know it can be noise but what you're saving me from is really annoying copy/paste work when I come in quarterly to update numbers and track my progress.
sounds like that would also mean overhauling the data model to track individual positions within every account, not just balances?
and then folks would expect every account's asset allocation to be automatically derived from those positions too I imagine? that would differ a lot from how asset allocation modeling and change over time currently works in the tool.
maybe I'm missing something, but it feels like there could be a lot of complexity here that would need to be carefully weighed against the product vision and other things on the roadmap.
I think all I can do as a customer is just say what my biggest pain point is (having to copy and paste a bunch of values every time I check in with the tool) and however you think it's best to solve that, or to not solve it, is really up to you.
I think adding account linking is something you absolutely should not do; it would add a lot of complexity, and it also touches on people's security fears around third-party linking, so definitely don't do that.
Will keep using the product for sure. Nice work, and congrats on the success so far.
I really love “Sneakers.” It’s one of the best classic “hacker” films. This essay also does a nice job of breaking down a scene you might not have otherwise noticed, which has some lovely filmmaking technique and subtle sound design at play:
The ensuing despair over how to proceed is interrupted by the voice of Whistler (David Strathairn, Bay Area native), the team’s blind technician. “What did the road sound like?” he asks. “Did you go over any speed bumps? Gravel? How about a bridge?”
I nearly stood up in the theater from excitement. I’d never seen anything like it: the geography of San Francisco turned into a puzzle to be solved.
Geoffrey Hinton recently discussed how neural networks, even human ones, may be "generative" in how they recall information. Our memories of events are hazy, and change every time we recall them. Memorized rote facts can also be hazy in this way, and are subject to mix-ups and confabulations. That we "generate" memories from neural weights in our brain can also help explain why it seems like our brains store so much information. Perhaps they don't; instead they store lossy neural weights built from our past sensory experience, and we generate the rest, using metacognitive reasoning and attention over recent data/context to sort through the potential errors in memory.
As you point out, LLMs work much better when you ask them to operate on objects within their context window, especially artifacts they know how to work with, like code and text. But I think people are so trained to ask questions to the oracle and expect answers (e.g. Google), and who can blame them, that this is the UX built into people's muscle memory for open-ended text input boxes. The launch of ChatGPT Search is recognition of this. Plus, most people are being told to treat these chat boxes as strong AI rather than as text/code-processing programs with specific strengths and weaknesses.
"The Most Interesting Thing in AI - The Atlantic - Episode 1 - Machine Consciousness - with Nicholas Thompson and Geoffrey Hinton.
What if the most advanced AI models could think and respond in a way that felt like a human consciousness? How might that transform our understanding of intelligence itself? Some of the leading AI scientists believe that a super-intelligent form of this technology is only five to ten years away. This episode explores the idea of AI consciousness and delves into how the act of dreaming is connected to neural networks in unexpected ways."
More here:
https://amontalenti.com/photos
When I hang out with programmer friends and demo Magic Lantern to them, they are always blown away.