You're always welcome to try my service TurboScribe https://turboscribe.ai/ if you need a transcript of an audio/video file. It's 100% free up to 3 files per day (30 minutes per file) and the paid plan is unlimited and transcribes files up to 10 hours long each. It also supports speaker recognition, common export formats (TXT, DOCX, PDF, SRT, CSV), as well as some AI tools for working with your transcript.
Thanks! I've had good results with TurboScribe (paid plan) and appreciate having this as a service. I typically use it for 2-3 hour-long video recordings with a number of speakers, and appreciate the editing tools to clean things up before export.
HTMX powers the UI for my AI transcription product TurboScribe (https://turboscribe.ai). Dynamic UIs that change without a page refresh, lazy loading, multi-step forms/flows, etc. It's working GREAT.
My general take on HTMX is:
1) You need to have your act together on your server. Because HTMX pushes more onto your backend, you need to know what you're doing back there (with whatever tech stack you happen to be using).
I have a friend who teaches at a coding boot camp and they do not teach students about server-rendered HTML at all. Folks coming from this world are going to have a tougher time ramping up on something like HTMX.
2) HTMX is great for the 90%+ of common UI paradigms shared by most apps (form submissions, validation, error messages, partial page reloads, lazy loading, CRUD UIs, etc).
If you have a key, critical experience that demands highly dynamic or novel interactivity, you're going to find yourself reaching for either (a) extending HTMX, or (b) creating an island powered by raw JS, React, etc.
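To make point 2 concrete, the server-side half of a typical HTMX interaction is just an endpoint that returns an HTML fragment instead of JSON. A minimal sketch (framework wiring omitted; the function and markup here are illustrative, not from any real app):

```python
# The core HTMX server-side pattern: render an HTML fragment, not JSON.
# You'd wire this function to a route handler in whatever backend you use.
from html import escape

def render_items_fragment(items):
    # HTMX swaps this raw HTML into the page via hx-get/hx-target --
    # no client-side templating or JSON plumbing needed.
    rows = "".join(f"<li>{escape(name)}</li>" for name in items)
    return f"<ul>{rows}</ul>"

# The matching client side is a single attribute on an element, e.g.:
# <button hx-get="/items" hx-target="#list">Load items</button>
```

This is why the backend matters so much: the "API" of an HTMX app is a set of HTML-producing endpoints, so your server-side templating and routing need to be solid.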
--
I love HTMX. It's a fantastic tool for delivering quality product (often with much lower engineering cost).
It feels really crummy to be accused and convicted of an "offense" by an algorithm, especially without any recourse.
I once had my account with a major cloud provider terminated for "violating our terms of service". When I contacted support, they claimed that someone had gained access to my credentials.
What evidence did they have? None. I just updated a VM's metadata too frequently (about once a minute). This tripped an ML model, which caused them to automatically terminate my account and send an automated email saying that I had been a bad boy.
This took down a key part of my business for about 5 hours (while I navigated my way through layers of customer support and ultimately temporarily moved this functionality to another cloud provider). Customers were not happy.
It took about 2 weeks and multiple support tickets for the full story to come out. I got them to refund a few months of charges (amounted to several hundred dollars at the time) and restore my account. There was never any recognition that they made a mistake.
I get that companies need to resort to automated means to handle fraud or abuse. But they should also own up to it, add some humility in their automated outreach to customers ("our automated system has detected possible X" instead of "you are guilty"), provide clear escalation paths to talk to a human, and provide a way to "shield" your account (identity verification, upfront deposit of $X, etc) that forces them to contact you before any enforcement action.
In my case, I upgraded to a paid support plan ($100+ per month) in the hopes that their system would be a little less trigger-happy with my account in the future. I don't use support at all; it's purely a lame form of insurance that may or may not actually protect against anything.
When these algorithms get it wrong, it completely sucks. And since no tech company has any semblance of customer support, you're completely hosed.
With respect to Facebook: I posted a shop vac last April for $50. I got a message that I was banned from using marketplace for "violating community guidelines."
The message said that if I believed this happened in error, I could request a review.
So I did! And was denied. I did this process a few more times and each time was denied. Once I requested a review for the third (or fourth?) time, I received a message that said "Unfortunately, your account cannot be reinstated due to violating community guidelines. The review is final."
I have no idea what happened.
So now I can't use Facebook Marketplace because of some stupid error in their algorithm that can never be appealed. Which is a bummer, because I've legitimately found some good electronics finds on there (and have been able to offload things I no longer have use for).
Meanwhile, their algorithms for advertising and marketing useless stuff to us are just perfect. A passage from Yuval Noah Harari's book, "Homo Deus: A Brief History of Tomorrow" highlights this:
> A recent study commissioned by Google’s nemesis – Facebook – has indicated that already today the Facebook algorithm is a better judge of human personalities and dispositions than even people’s friends, parents and spouses. The study was conducted on 86,220 volunteers who have a Facebook account and who completed a hundred-item personality questionnaire.
> The Facebook algorithm predicted the volunteers’ answers based on monitoring their Facebook Likes – which webpages, images and clips they tagged with the Like button. The more Likes, the more accurate the predictions. The algorithm’s predictions were compared with those of work colleagues, friends, family members and spouses.
> Amazingly, the algorithm needed a set of only ten Likes in order to outperform the predictions of work colleagues. It needed seventy Likes to outperform friends, 150 Likes to outperform family members and 300 Likes to outperform spouses. In other words, if you happen to have clicked 300 Likes on your Facebook account, the Facebook algorithm can predict your opinions and desires better than your husband or wife!
We need Habeas Corpus for tech. Companies should be obliged to tell you what your violation was, and you should have the opportunity to challenge the judgment by presenting arguments and evidence.
Additionally, I think there should be a right to download your data after being banned, whether or not the ban was fair.
I think Backblaze B2 is probably the reference (which has free egress up to 3x data stored - https://www.backblaze.com/blog/2023-product-announcement/). I don't know of any public S3-compatible provider that is as cheap as $20/TB/year (roughly $0.0017/GB/mo).
As an indie dev, I recommend R2 highly. No egress is the killer feature. I started using R2 earlier this year for my AI transcription service TurboScribe (https://turboscribe.ai/). Users upload audio/video files directly to R2 buckets (sometimes many large, multi-GB files), which are then transferred to a compute provider for transcription. No vendor lock-in for my compute (ingress is free/cheap pretty much everywhere) and I can easily move workloads across multiple providers. Users can even re-download their (again, potentially large) files with a simple signed R2 URL (again, no egress fees).
I'm also a Backblaze B2 customer, which I also highly recommend and which has slightly different trade-offs (R2 is slightly faster in my experience, but B2 is 2-3x cheaper for storage, so I use it mostly for backups and other files that I'm likely to store for a long time).
The premise of Workers AI is really cool and I'm excited to see where it goes. It would need other features (custom code, custom models, etc) to make it worth considering for my needs, but I love that CF is building stuff like this.
Wanted to share a new tool I just built and thought it might be useful for some of you.
It will transcribe your audio or video files (voice memos, lectures, podcasts, interviews, whatever!) and then allow you to quickly import the transcripts into ChatGPT with a variety of initial prompts to get you started (summaries, blog posts, outlines, social media posts, etc). It's part of the AI transcription product I created (TurboScribe).
I've been using it to summarize podcasts, write initial drafts of content from voice memos (kind of like a ghost writer), and transform long audio lectures into detailed summaries (w/ section headings and reference timestamps).
I made the first 4 daily audio/video transcriptions free (up to 30 min long) to make it easy to try.
I've been working on TurboScribe (https://turboscribe.ai/) for a few weeks and preparing to "officially" launch soon.
I think high-quality AI transcription should be a lot cheaper.
It's pretty simple: Unlimited transcription (powered by Whisper) for $10/month. Upload your audio/video files, get your transcripts (and export to many common formats).
There's a free tier of 4 transcripts (i.e. up to 2 hours) per day if you'd like to check it out.
TurboScribe is especially designed for individuals who need to transcribe larger volumes of audio/video (i.e. dozens or hundreds of hours, including at the highest Whisper large_v2 quality level) or just don't want to have to worry about getting billed by the hour, hitting some ridiculously low cap, etc.
EDN also supports date and UUID literals, the former of which is a huge pain when using JSON: you have to communicate which standard you're using and remember to encode/decode on each side of the wire.
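Here's the JSON side of that pain in miniature: dates and UUIDs have no native representation, so both ends of the wire must agree on a custom encoding (ISO-8601 strings below, which is one common convention, not a standard JSON guarantees):

```python
# JSON has no date or UUID type, so serialization needs a custom
# encoder -- and the receiver has to know which fields to revive.
import json
import uuid
from datetime import date

def encode(obj):
    if isinstance(obj, date):
        return obj.isoformat()      # one convention among several
    if isinstance(obj, uuid.UUID):
        return str(obj)
    raise TypeError(f"not JSON-serializable: {obj!r}")

payload = {
    "id": uuid.UUID("12345678-1234-5678-1234-567812345678"),
    "created": date(2023, 11, 1),
}
wire = json.dumps(payload, default=encode)
# The receiver just sees plain strings; type information is gone.
decoded = json.loads(wire)
```

In EDN the same values round-trip as built-in tagged literals (#inst "2023-11-01T00:00:00Z" and #uuid "12345678-1234-5678-1234-567812345678"), so no out-of-band agreement is needed.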
Plus you can even write your own reader literals to enhance your data format. Aero does this well/horribly depending on your taste but it's a cool mechanism for declarative formats.
Yep, two types in fact: line comments with ; and a discard sequence (#_) - you can tag a form that is read (so it must be syntactically correct), but then discarded. We have edn files that are well documented with comments.
or something else? And how can I distinguish between values which just happen to look like maps but are actually not maps? And what if the service I'm talking to does it differently? What if I'm comparing or sorting two JSON values, don't I have to modify the equality/hashcode/ordering logic now to interpret
[
["key", "value"],
["otherKey", "otherValue"]
]
and
[
["otherKey", "otherValue"],
["key", "value"]
]
as being equal and implement a new hashcode/ordering which treats them as equal?
Quite a lot easier when this is just native to the format.
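The extra work can be sketched in a few lines: as plain JSON arrays the two values compare unequal, and you have to layer your own normalization (and a matching hash) on top to treat them as the same map. The as_map helper below is hypothetical glue you'd have to write yourself:

```python
# Two arrays-of-pairs that represent the "same" map, per the comment above.
a = [["key", "value"], ["otherKey", "otherValue"]]
b = [["otherKey", "otherValue"], ["key", "value"]]

# As plain JSON arrays, order matters, so they are NOT equal.
assert a != b

def as_map(pairs):
    # Interpret the array-of-pairs convention as an actual map value.
    # frozenset gives order-independent equality AND a consistent hash.
    return frozenset((k, v) for k, v in pairs)

# Only after adding this interpretation layer do they compare equal.
assert as_map(a) == as_map(b)
assert hash(as_map(a)) == hash(as_map(b))
```

In a format with native maps like EDN, {"key" "value", "otherKey" "otherValue"} written in either order is simply the same value, with equality and hashing already defined.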