I am doing something similar: I have a parser that looks for changes in documentation, matches them against the GraphQL schema, and generates code using Apollo. In a nutshell, it is a code generator written with Claude that generates more code; on failure it goes back to Claude to fix the generator and asks a human for review.
Daniel, your work is changing the world. More power to you.
I set up a pipeline for inference with OCR, full-text search, embedding and summarization of land records dating back to the 1800s. All powered by the GGUFs you generate and llama.cpp. People are so excited that they can now search the records in multiple languages that a one-minute wait to process a document seems like nothing. Thank you!
Hey, I'm really interested in your pipeline techniques. I've got some PDFs I need to get processed, but processing them in the cloud with the big providers requires redaction.
Wondering if a local model or a self-hosted one would work just as well.
I run llama.cpp with Qwen3-VL-8B-Instruct-Q4_K_S.gguf and mmproj-F16.gguf for OCR and translation. I also run llama.cpp with Qwen3-Embedding-0.6B-GGUF for embeddings. Drupal 11 with ai_provider_ollama and a custom provider ai_provider_llama (heavily derived from ai_provider_ollama), with PostgreSQL and pgvector.
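For reference, the embedding leg is just an HTTP call to llama-server plus a pgvector insert. Roughly like this sketch (the port, table and column names here are made up for illustration, not my actual setup):

    import psycopg2
    import requests

    # llama-server running Qwen3-Embedding-0.6B with --embedding (port is an assumption)
    EMBED_URL = "http://localhost:8081/v1/embeddings"

    def embed(text):
        resp = requests.post(EMBED_URL, json={"input": text}, timeout=60)
        resp.raise_for_status()
        return resp.json()["data"][0]["embedding"]

    text = "Deed of sale, parcel 12, recorded 1843"
    conn = psycopg2.connect("dbname=drupal")
    with conn, conn.cursor() as cur:
        # pgvector accepts the '[x, y, ...]' literal; the table is illustrative
        cur.execute(
            "INSERT INTO record_embeddings (body, embedding) VALUES (%s, %s::vector)",
            (text, str(embed(text))),
        )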
People on site scan the documents and upload them for archival. The directory monitor looks for new files in the archive directories, and once a new file is available, it is uploaded to Drupal. Once new content is created in Drupal, Drupal triggers the translation and embedding process through llama.cpp. Qwen3-VL-8B is also used for chat and RAG. The client is familiar with Drupal and CMSes in general and wanted to stay in a similar environment. If you are starting fresh, I would recommend looking at docling.
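For the curious, the OCR/translation trigger boils down to one multimodal chat call against llama-server. A rough sketch, not the exact code (host, port and prompt are illustrative; the server has to be started with the model plus --mmproj for this to work):

    import base64
    import requests

    def ocr_and_translate(image_path):
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={
                "temperature": 0,
                "messages": [{
                    "role": "user",
                    "content": [
                        {"type": "image_url",
                         "image_url": {"url": "data:image/png;base64," + b64}},
                        {"type": "text",
                         "text": "Transcribe this scanned record, then translate it to English."},
                    ],
                }],
            },
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    print(ocr_and_translate("scan.png"))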
Yes, they are all linked using Drupal's AI modules. I have an OpenCV application that removes the old paper look, enhances the contrast and fixes the orientation of the images before they hit llama.cpp for OCR and translation.
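The preprocessing is nothing exotic; in spirit it is close to this sketch (kernel sizes and thresholds are illustrative guesses, and the deskew angle convention varies between OpenCV versions, so flip the sign if your output gets worse):

    import cv2
    import numpy as np

    def preprocess(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

        # Flatten the "old paper" background: dividing by a heavily blurred
        # copy turns stains and uneven lighting into uniform white.
        background = cv2.GaussianBlur(img, (51, 51), 0)
        flat = cv2.divide(img, background, scale=255)

        # Boost local contrast.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        flat = clahe.apply(flat)

        # Estimate skew from the ink pixels and rotate to straighten.
        binary = cv2.threshold(flat, 0, 255,
                               cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
        coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
        angle = cv2.minAreaRect(coords)[-1]
        if angle > 45:  # minAreaRect reports (0, 90] in recent OpenCV
            angle -= 90
        h, w = flat.shape
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(flat, m, (w, h), flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)

    cv2.imwrite("cleaned.png", preprocess("scan.png"))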
Disclaimer: I'm an AI novice relative to many here. FWIW, last weekend I spent a couple of hours setting up self-hosted n8n with ollama and gemma3:4b [EDIT: not Qwen-3.5], using PDF content extraction for my PoC. 100% local workflow, no runtime dependency on cloud providers. I doubt it'd scale very well (MacBook Air M4, measly 16GB RAM), but it works as intended.
If you have a basic ARM MacBook, GLM-OCR is the best single model I have found for OCR with good table extraction and formatting. It's a compact 0.9B-parameter model, so it'll run on systems with only 8 GB of RAM.
Then you can run a single command to process your PDF:
glmocr parse example.pdf
Loading images: example.pdf
Found 1 file(s)
Starting Pipeline...
Pipeline started!
GLM-OCR initialized in self-hosted mode
Using Pipeline (enable_layout=true)...
=== Parsing: example.pdf (1/1) ===
My test document contains scanned pages from a law textbook. It's two columns of text with a lot of footnotes. It took 60 seconds to process 5 pages on an MBP with an M4 Max chip.
After it's done, you'll have a directory output/example/ that contains .md and .json files. The .md file will contain a markdown rendition of the complete document. The .json file will contain individual labeled regions from the document along with their transcriptions. If you get all the JSON objects with
"label": "table"
from the JSON file, you can get an HTML-formatted table from each "content" section of these objects.
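Something like this pulls them out (it assumes the .json is a flat list of region objects with "label" and "content" keys, as described above; adjust if your output nests regions under pages):

    import json

    with open("output/example/example.json") as f:
        regions = json.load(f)

    tables = [r["content"] for r in regions if r.get("label") == "table"]
    for i, html in enumerate(tables):
        with open(f"table_{i}.html", "w") as out:
            out.write(html)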
It might still be inaccurate -- I don't know how challenging your original tables are -- but it shouldn't be terribly slow. The tables it produced for me were good.
I have also built more complex workflows that use a mixture of OCR-specialized models and general-purpose VLMs like Qwen 3.5, along with software to coordinate and reconcile operations, but GLM-OCR by itself is the best first thing to try locally.
I also get connection timeouts on larger documents, but it automatically retries and completes. All the pages are processed when I'm done. However, I'm using the Python client SDK for larger documents rather than the basic glmocr command line tool. I'm not sure if that makes a difference.
Cool! For GLM-OCR, do you use "Option 2: Self-host with vLLM / SGLang" and in that case, am I correct that there is no internet connection involved and hence connection timeouts would be avoided entirely?
When you self-host, there's still a client/server relationship between your self-hosted inference server and the client that manages the processing of individual pages. You can get timeouts depending on the configured timeouts, the speed of your inference server, and the complexity of the pages you're processing. But you can let the client retry and/or raise the initial timeout limit if you keep running into timeouts.
That said, this is already a small and fast model when hosted via MLX on macOS. If you run the inference server with a recent NVIDIA GPU and vLLM on Linux, it should be significantly faster. The big advantage of vLLM for OCR models is its continuous batching capability. With other OCR models that I couldn't self-host on macOS, like DeepSeek 2 OCR or Chandra 2, vLLM gave dramatic throughput improvements on big documents via continuous batching when I processed 8-10 pages at a time. This is with a single 4090 GPU.
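To actually feed continuous batching, you just keep several requests in flight at once. A rough sketch against the plain HTTP API (the endpoint, model id and prompt are placeholders, not a specific library's API):

    import base64
    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/v1/chat/completions"

    def ocr_page(path):
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = requests.post(URL, json={
            "model": "glm-ocr",  # placeholder model id
            "messages": [{"role": "user", "content": [
                {"type": "image_url",
                 "image_url": {"url": "data:image/png;base64," + b64}},
                {"type": "text", "text": "Transcribe this page."},
            ]}],
        }, timeout=300)  # generous per-page timeout; retry on requests.Timeout if needed
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    pages = [f"page_{i:03d}.png" for i in range(1, 11)]
    # 8-10 in-flight requests give the server something to batch on the GPU.
    with ThreadPoolExecutor(max_workers=8) as pool:
        texts = list(pool.map(ocr_page, pages))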
Keep in mind for next time: the date you drop the mail at your local post office might not be the date it is postmarked. This impacts voting, bank fees, the IRS and more.
A KVM system is not a BMC. Running a cluster (even a fairly small one), you probably want power control (e.g. with powerman), serial console availability (e.g. with conman), metrics (e.g. via freeipmi and some monitoring solution), and probably logs for alerts. Control and monitoring need to be out of band.
(Powerman, conman, and freeipmi come from Livermore for use with serious HPC systems.)
wxWidgets is a wrapper over the native GUI toolkit. It lets you write OS-independent GUI code, and because it is a wrapper you get most of the benefits of the native toolkit, like dark mode.
> most of the benefits of the native toolkit, like dark mode.
On which OS does dark mode work with wx? Certainly not with Windows... (which, as VZ points out, isn't wx's fault, it's just that no APIs officially exist for this - explorer.exe has dark mode, but that uses completely private, undocumented APIs).
The fact that you have to change your SSID to opt out of third parties using it is... shady at best. What happens when two competing third parties have conflicting name requirements for you to opt-out?