Hacker News | olllo's comments

Hi HN,

I want to share a cool feature in Data.olllo, a local, offline, no-code data analysis tool with AI chat capabilities.

Handling time and date data can be a pain—timestamps, strings, extracting year/month, or converting formats usually require coding knowledge. With Data.olllo’s AI chat, you just describe what you want in plain English, and the AI instantly generates the correct pandas code inside a process(dfs) function you can run on your dataset.

For example, you can:

- Convert Unix timestamps to readable date-times
- Extract parts like year and month from date columns
- Parse messy string dates into proper datetime objects
- Convert datetime back to numeric timestamps
- Format datetime columns into any string style you want

Here’s a sample snippet the AI generates for converting a Unix timestamp column:

    import pandas as pd

    def process(dfs):
        df = dfs["df"]
        df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
        return df
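The other items on the list follow the same pattern. Here is a sketch in the same process(dfs) style (the column names order_date, year, month, and label are made up for illustration; errors="coerce" turns unparseable strings into NaT rather than raising):

```python
import pandas as pd

def process(dfs):
    df = dfs["df"]
    # Parse messy string dates; values that can't be parsed become NaT
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    # Extract calendar parts with the .dt accessor
    df["year"] = df["order_date"].dt.year
    df["month"] = df["order_date"].dt.month
    # Format the datetime into a custom string style
    df["label"] = df["order_date"].dt.strftime("%Y-%m")
    return df
```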

You don’t need to know pandas or write any code yourself — just type your request, and the AI does the heavy lifting, letting you explore and visualize your data faster.

Data.olllo runs 100% locally, so your data stays private, and it can handle millions of rows quickly.

If you often struggle with time data or want a fast way to analyze large CSVs without coding, give Data.olllo a try:

https://olllo.top/convert-format-datetime-ai-chat

Happy to answer questions or get feedback!

— Denis


We all have those moments: a massive dataset, full of potential, but locked behind rows, columns, and hours of manual digging. I built something to change that — not just for myself, but for anyone who’s ever stared at a CSV and thought, “There has to be a better way.”

Data.olllo is a desktop app that lets you talk to your data.

Ask in plain English:

“Which products grew the fastest this year?”
“What’s unusual about Q2 performance?”
“Show me regional trends for refunds.”

The AI assistant understands your dataset and responds with insights — tables, summaries, even charts. No Python scripts. No cloud latency. No need to upload anything. Your data stays with you. It just becomes smarter.

Behind the scenes, it uses your choice of AI (ChatGPT, Gemini, or even a local LLM), but the goal isn’t just automation — it’s flow. You and your data, in sync.

This isn’t just another data tool. It’s a new way of thinking with your information.

Try it here → https://olllo.top/AI-CSV-Analysis


I built a tool called Data.olllo that helps split huge CSV files—like multi-GB datasets—into smaller parts by size, row count, or column value. It’s 100% offline, runs on your desktop, and doesn’t require any coding.
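For anyone curious how the row-count variant works in principle, here is a hedged sketch using only Python's standard csv module, not Data.olllo's actual implementation: stream the file once and start a new part (with the header repeated) whenever the current part fills up, so memory use stays constant regardless of file size.

```python
import csv
from pathlib import Path

def split_csv(src, rows_per_file, out_dir):
    """Split src into numbered parts of at most rows_per_file data rows,
    repeating the header in each part. Streams row by row."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    parts = []
    with open(src, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        out_f, writer, count = None, None, rows_per_file
        for row in reader:
            if count >= rows_per_file:
                if out_f:
                    out_f.close()
                path = out_dir / f"part_{len(parts) + 1:03d}.csv"
                out_f = open(path, "w", newline="")
                writer = csv.writer(out_f)
                writer.writerow(header)
                parts.append(path)
                count = 0
            writer.writerow(row)
            count += 1
        if out_f:
            out_f.close()
    return parts
```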

We just published a detailed article explaining how it works: https://olllo.top/articles/article-24-Split-Huge-CSVs-in-Sec...

Would love feedback from folks who deal with messy or oversized data. What features would you want in a CSV splitter?


I’ve just relaunched https://olllo.top with a brand-new design, logo, and overall presentation. Data.olllo is a no-code data analysis tool that runs on your desktop, aimed at helping users analyze large files quickly, privately, and without needing to write code.

What’s new:

• Fresh branding and visual identity (logo, colors, layout)
• Simplified homepage with clearer descriptions of what Data.olllo does
• Improved demo experience and easier navigation
• A focus on real-time, local data handling – no uploads required

You can try it directly on Windows (7-day trial after sign-up), and I’d love to hear your feedback—whether about the design, the messaging, or the tool itself.

This is something I’ve been building personally, and I’m here to answer questions and gather thoughts.

Thanks for checking it out!

https://olllo.top


Data.olllo is a desktop tool that lets you open, filter, and visualize massive CSV datasets—all offline, with no coding and no cloud. Break free from Excel’s limits and unlock true data power.

- Open 100GB+ CSVs instantly—no RAM bottleneck
- Visualize & filter millions of rows in real time
- Convert CSV to HDF5 for blazing-fast analytics
- No coding, no cloud, no data limits
- 100% local: your data stays private
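For context on the "no RAM bottleneck" point: the general technique is to stream the file in bounded chunks instead of loading it whole. A rough pandas sketch of that idea (filter_large_csv, its parameters, and the chunk size are illustrative, not Data.olllo's API):

```python
import pandas as pd

def filter_large_csv(path, out_path, predicate, chunksize=1_000_000):
    """Filter an arbitrarily large CSV with bounded memory by processing
    chunksize rows at a time and appending matches to out_path."""
    first = True
    for chunk in pd.read_csv(path, chunksize=chunksize):
        matches = chunk[predicate(chunk)]
        # Write the header only once, then append
        matches.to_csv(out_path, mode="w" if first else "a",
                       header=first, index=False)
        first = False
```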


Have you ever struggled with a CSV that's just too big? Whether you're hitting Excel limits, uploading constraints, or simply want to work on a smaller piece of the puzzle — Data.olllo has you covered.

With just a few clicks, you can split any large CSV file by:

- File Size — Define the number of files and let Data.olllo do the rest.
- Column Values — Automatically group and split the data based on any column (e.g., Region, Category, Date).
- Direct Split (No Load) — Instantly split a massive CSV by row count without opening it first, for maximum speed and minimal memory use.
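For the column-value mode, the underlying idea is a group-by over the chosen column. A simplified sketch with pandas (split_by_column is a made-up name, and a real tool would combine this with chunked reading for files larger than RAM):

```python
import pandas as pd
from pathlib import Path

def split_by_column(path, column, out_dir):
    """Write one CSV per distinct value of `column` and return
    a mapping from value to output path."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    df = pd.read_csv(path)
    written = {}
    for value, group in df.groupby(column):
        dest = out_dir / f"{column}_{value}.csv"
        group.to_csv(dest, index=False)
        written[value] = dest
    return written
```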


Working with large datasets—50GB, 100GB, or more—is a serious challenge in most tools.

Excel can’t open files this size. Python scripts take time to load and debug. Even many "pro" data platforms get sluggish or crash outright.

That’s why Data.olllo was designed to open massive CSV and HDF5 files without breaking a sweat—up to 100GB and beyond.
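The usual way to stay responsive at that scale is to never materialize the whole table. As a minimal illustration of the principle (not Data.olllo's internals), here is a constant-memory aggregation over a CSV of any size using only the standard library:

```python
import csv

def column_sum(path, column):
    """Sum a numeric column while reading one row at a time,
    so memory use is independent of file size."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row[column])
    return total
```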


I’m curious how others here approach data tasks: Do you prefer writing SQL directly, using Python/pandas, or working with no-code/low-code tools (like Tableau, Airtable, or internal tools)?

For context, I’m building Data.olllo, a desktop app for processing CSVs locally — no upload, just fast filtering, transforming, and exploring data with a spreadsheet-like UI.

What do you personally reach for first when cleaning or analyzing a dataset?


Thanks for the feedback! Data.olllo isn't Electron-based—it's built in Python with Tkinter and CustomTkinter, so the size mainly comes from the data libraries and the embedded Python environment. I agree that keeping things lean is important, and I’m actively working on optimizing the package size further.

Appreciate the DuckDB comparison—great tool and definitely a benchmark worth learning from!


Absolutely—CSVs are still everywhere, especially for simple interchange between teams and tools. I designed Data.olllo with that in mind.

That said, I also plan to add support for Parquet and other formats soon—definitely agree it's gaining traction for larger, structured datasets.

