Hacker News | bpanon's favorites

Havana summarizes my sales calls and drafts a follow-up email automatically after each call.

https://tryhavana.com

When you do more than 5-10 sales calls a week, writing up notes and emails can take hours of your time.

It's tedious work but also must be done (otherwise you might forget what's going on in a deal when it's time to do another call down the line!).

Also, the quality of the summaries and emails must be good (clear, readable) but not necessarily great (we're not looking to win a Pulitzer here).

It's the perfect kind of task for GPT.


I actually built a search engine back in 2018 using PostgreSQL

https://austingwalters.com/fast-full-text-search-in-postgres...

It worked quite well and I still use it daily. Weighted searches on vectors are slower than my approach, but still definitely good enough.

Currently, I can search around 50m HN & Reddit comments in 200ms with PostgreSQL running on my machine.
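Not the linked approach, but a toy sketch of the general idea behind weighted full-text search — an inverted index where matches in different fields contribute different weights. The field names and weights here are made up for illustration:

```python
from collections import defaultdict

# Toy weighted full-text index. This is NOT the linked PostgreSQL technique,
# just an in-memory illustration of weighted term matching over comments.
WEIGHTS = {"title": 2.0, "body": 1.0}  # hypothetical per-field weights

class TinyIndex:
    def __init__(self):
        # term -> {doc_id -> accumulated weight}
        self.postings = defaultdict(lambda: defaultdict(float))

    def add(self, doc_id, fields):
        for field, text in fields.items():
            for term in text.lower().split():
                self.postings[term][doc_id] += WEIGHTS.get(field, 1.0)

    def search(self, query):
        scores = defaultdict(float)
        for term in query.lower().split():
            for doc_id, w in self.postings[term].items():
                scores[doc_id] += w
        return sorted(scores, key=scores.get, reverse=True)

idx = TinyIndex()
idx.add(1, {"title": "fast search in postgres", "body": "full text search"})
idx.add(2, {"title": "reddit comments", "body": "search reddit fast"})
print(idx.search("fast search"))  # -> [1, 2]: doc 1 wins via title weight
```

Postgres does something analogous with `setweight()` on `tsvector` columns and `ts_rank()`, just far faster and with proper stemming.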


Wim Hof has achieved great things practising pranayama, an ancient discipline that is actually a part of yoga. Yet he presents it as if he "discovered" it. This is pure intellectual dishonesty.

Yoga's actual name is ashtanga yoga, which means it has eight limbs[1]. Contrary to popular belief, yoga is not just about twisting your body into weird poses. That aspect is called ASANA, and what Wim Hof claims as his discovery is called pranayama, a fully developed science of breath control practised by sadhus and monks for centuries.

YAMA – Moral disciplines

NIYAMA – Positive duties

ASANA – Posture

PRANAYAMA – Breathing techniques

PRATYAHARA – Control over senses

DHARANA – Focused concentration

DHYANA – Meditative absorption

SAMADHI – Enlightenment

I think there is a strong bias going on here in the comments. Two of the comments mentioning the yogic origins of this practice are dead. So I present 2 videos as examples. The first video[2] was captured by an Indian soldier in -45 degree temperatures on the Himalayan border. The second video[3] shows another sadhu who lives naked under the snow without any protective gear.

[1] https://en.wikipedia.org/wiki/Ashtanga_(eight_limbs_of_yoga)

[2] https://youtu.be/8Xbi4Qrf9mE

[3] https://youtu.be/FcUdEkPa-2c

EDIT: Second video added.

EDIT 2: Major edit explaining yogic origins of this practice.


Someone once told me he derives his self-confidence from the dialogue with himself. It took me a few years to understand it:

1. A clear dialogue with oneself establishes certainty about the inner self

2. Certainty with the inner self enables one to see the outside in a clear way

3. That clarity contributes to confidence in one's actions.


For theoretical continuous/nonlinear/convex optimization, your #1 is the bible, together with

"Convex Optimization" by Boyd & Vandenberghe.

However, beware that both are grad textbooks. They can be tough going at times. Unfortunately, I never found undergrad textbooks I liked much, for theory.

If you're interested in discrete optimization too (the other half of math optimization), the classics are:

"Optimization Over Integers" by Bertsimas & Weismantel

"Integer and Combinatorial Optimization" by Nemhauser & Wolsey


When I launched my former bitcoin casino in 2011 (it's gone now, but it was a casino where all games, even the roulette tables, were multiplayer, on a platform built from scratch starting in '08), I handled all web requests through a server in Costa Rica that cost about $6/mo, where I had a shell corporation for $250/year. Once the front end -- the bullet containing the entire casino code, about 250kb -- loaded from Costa Rica, and once a user logged in, they were socketed to a server in the Isle of Man that handled the gaming action. Graphics and sounds were still served from the Costa Rica server.

I didn't have a gaming license in the IoM, though - that was around $400k to acquire legally. So I found a former IoM MP who was a lawyer, and he wrote a letter to the IoM gov't stating that we didn't perform random calculations on their server and thus weren't a gambling site under IoM law. Technically that meant that no dice roll or card shuffle leading to a gambling win or loss took place on that server. So the IoM server handled the socketed user interactions, chat, hand rotation and tournament stuff, plus the bitcoin daemon and deposits/withdrawals.

To avoid casino licensing, I then set up a VPS in Switzerland that did only one thing: return random numbers or shuffled decks of cards, with its own RNG. It was a quick backend curl call that would return a fresh deck or a dice roll for any given random interaction on the casino servers. The IoM server would call the Swiss server every time a hand was dealt or a wheel was spun; the user was still looking at a site served from a cheap web host in Costa Rica.

And thus... yeah, I guess I handled millions of requests a day over a $6 webserver, if you want to count it that way.
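The Swiss box's one job -- fresh decks and dice rolls from its own RNG -- could be sketched like this. The function names are hypothetical; a real service would put a vetted CSPRNG behind an HTTP endpoint:

```python
import secrets

# Hypothetical sketch of the "RNG-only" service's core logic: a fresh,
# cryptographically shuffled 52-card deck and a dice roll. Not the actual
# casino code, just an illustration of the idea.
RANKS = "23456789TJQKA"
SUITS = "cdhs"

def fresh_deck():
    deck = [r + s for s in SUITS for r in RANKS]
    # Fisher-Yates shuffle driven by a CSPRNG (secrets), not random.random()
    for i in range(len(deck) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

def dice_roll(sides=6):
    return secrets.randbelow(sides) + 1

print(len(fresh_deck()), dice_roll())
```

The point of isolating this on its own box is that the server holding game state never generates randomness, which is what the legal argument hinged on.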

At 1 year: Hang in there, only 39 more years to go until retirement.

At 5 years: Hang in there, only 35 more years to go until retirement.

At 10 years: Hang in there, only 30 more years to go until retirement.

All jokes aside, the best advice I got that covers all the years is, "When they give you your first assignment, bust your ass like you've never busted your ass before. Do it fast and perfect as you can. That will set the stage for the rest of your time there." I got into the habit and do it always. Once you show people you can "get stuff done," they will bring you with them when they move jobs. You'll always have work and you'll never have to talk to a recruiter again.

You should learn how to build systems. Build stuff in your spare time -- full applications -- so you understand what it takes to build them. Ideally you would want to build a product that people use so you understand what it takes to actually build and ship a product. Set up the server that hosts your applications. Set up the DNS records and register a domain name so you know how all that works. Also, pick an industry like healthcare or banking or even tech; that way you're not just a programmer, you're a "financial industry programmer." Each industry has its own nuances and regulations it has to deal with. Knowing these things adds value.

Mind the burnout. If you are in the US and get almost no vacation time, mind it carefully. Burnout will make you hate your job and lead to more burnout. It can last months, sometimes years.


You can change. There's a technique called Releasing outlined by David Hawkins in his book Letting Go. I've undergone a metamorphosis since starting this practice. Not 10% better, but life altering, horizon expanding shifts I never thought were possible for me. It's inner work that resolves the feelings that create your outer reality.

I'm by no means a prominent employee within the company, but I am unusually well-remunerated for my role. I attribute this to hiding about 1/3 of what I do from my manager until it's ready to demonstrate.

I usually have 1-2 projects my manager knows about and is regularly tracking, then 2-3 "skunkworks" projects in various stages of development that get prioritized based on the word around the org. For example, I might hear from our data center team at a regular meeting that they need to buy 5% more racks based on current usage projections. I might have been working on optimizing a key data saving routine that reduces usage by 10% (fictitious numbers), so this gets bumped up in priority. So when my manager asks the team the following week if they have any ideas on how to improve our code, I've already got something halfway working. My manager (and I) get to look good, and I got to amortize the work across a few months instead of suffering through crunch time.


You can lack self-confidence and still have empathy for the welfare of others.

Narcissists see other people entirely as threats or opportunities, not as separate individuals with lives and needs of their own.

They literally have no concept of empathy. Their actions are motivated by a need to destroy and devalue any perceived threats to their fragile self-image, while cultivating people, projects, and situations that make them look and feel good in front of the imaginary audience that judges everything they do.

There's no ability to think of others as equals. Everyone is either better or worse than a narcissist - not just in a passing way, but absolutely, infinitely, and obsessively.


We used historical flow meter readings (we started out with 15-minute intervals, but it worked much better with a higher sampling frequency) and used them to train Recurrent Neural Networks (RNNs) to predict which areas were likely to flood. I was the devops lead on the prototype, not the data scientist, unfortunately, so I can't give you the ins and outs, but I can tell you that we used TensorFlow together with pandas/df/timescaledb. We then displayed the results using Plotly, and all of this was stuffed inside several containers. It was a great project to work on, actually. The whole setup was pretty much a joy to work with.
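A minimal sketch of the data-preparation step implied above -- turning an evenly sampled flow-meter series into (window, next value) pairs, the usual supervised framing for an RNN time-series predictor. The readings are fabricated and the model itself is left out:

```python
import numpy as np

# Fabricated flow-meter series sampled at a fixed interval.
rng = np.random.default_rng(0)
readings = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.05, 200)

def make_windows(series, window=24):
    """Slice a 1-D series into overlapping (input window, next value) pairs.
    An RNN then learns to predict the next reading from the last `window`."""
    X = np.stack([series[i : i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y  # RNN layers expect (samples, timesteps, features)

X, y = make_windows(readings)
print(X.shape, y.shape)  # (176, 24, 1) (176,)
```

With the data in this shape, a Keras `SimpleRNN` or `LSTM` layer consumes `X` directly; the higher the sampling frequency, the more windows you get from the same history, which matches the comment's observation.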

The LMAX system is more about latency than throughput. When designing for very low latency, the result can be a system that achieves very high throughput if the appropriate techniques are employed. Single-threaded applications are very suitable for low latency because of the avoidance of lock contention and the predictability they bring.

Some means of reliable delivery of messages, to and from this single thread, is necessary to make a useful application. These messages must be delivered even in the event of system failures. To address this need, the Disruptor is employed to pipeline the replication, journalling and business logic for these messages and run them in parallel. The whole system is asynchronous and non-blocking.

In our architecture we have multiple gateway nodes that handle border security and protocol translation to and from our internal IP multi-cast binary protocol for delivery to highly redundant nodes. Lots of external connections can be multiplexed down from the outside world this way.

The 6 million TPS refers to our matching engine business logic event processor. We have other business logic processors for things like risk management and account functions. These can all communicate via our guaranteed message delivery system that can survive node failures and restarts, even across data centres.

Modern financial exchanges can process over 100K TPS and have to respond with latency in the 100s of microseconds firewall to firewall, thus including the entire internal infrastructure. Those tracking the latest developments will have seen that it is possible to have single-digit-microsecond network hops with IP multicast using user-space network stacks and RDMA over 10GigE. Even a well-tuned 1GigE stack can achieve sub-40us for a network hop. For reference, single-digit microseconds is in the same space as a context switch on a lock with the kernel arbitrating. Most financial exchanges rely on having data on multiple nodes before the transaction is secure. A number of these nodes can be asynchronously journalling the data down to disk. At LMAX we tend to have data on 3 or more nodes at any given time.

In my experience of profiling many business applications the vast majority of the time is either spent in protocol translation such as XML or JSON to business objects, or within the JDBC driver doing buffer copying and waiting on the database to respond, when the application domain is well modelled.

Often applications are not well modelled for their domain. This can result in algorithms that, rather than being O(1) for most transactions, have horrible scale-up characteristics because of inappropriate collections representing relationships. If you have the luxury of developing an in-memory application requiring high performance, it quickly becomes apparent that the cost of a CPU cache miss is the biggest limitation on latency and throughput. For this, one needs to employ data structures that exhibit good mechanical sympathy for the CPU and memory subsystem. At LMAX we have replaced most of the JDK collections with our own that are cache friendly and garbage free.

So far we have had no issue processing all transactions for a given purpose on a single thread, or holding all the live state in memory for a single node. If we ever cannot process all the transactions necessary on a single thread then we simply shard the model across threads/execution contexts. We only hold the live mutating data in-memory and archive out to database completed transactions as they are then read only.
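The single-writer idea above -- one thread owns all live state, so no locks are needed and events are simply consumed in order -- can be sketched with a toy matching engine. This is an illustration of the principle only, not the Disruptor or any actual LMAX code, and the order-book representation is deliberately naive:

```python
# Toy single-threaded matching engine: one thread owns the entire book,
# so every event is processed sequentially with zero locking.
class MatchingEngine:
    def __init__(self):
        self.bids = []    # (price, qty); best bid = max price
        self.asks = []    # (price, qty); best ask = min price
        self.trades = []  # (price, qty) fills, in order

    def on_order(self, side, price, qty):
        book, opposite = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Cross against the opposite side while prices overlap.
        while qty and opposite:
            best = min(opposite) if side == "buy" else max(opposite)
            if (side == "buy" and best[0] > price) or (side == "sell" and best[0] < price):
                break
            fill = min(qty, best[1])
            self.trades.append((best[0], fill))
            qty -= fill
            opposite.remove(best)
            if best[1] > fill:
                opposite.append((best[0], best[1] - fill))
        if qty:  # rest any unfilled remainder on the book
            book.append((price, qty))

engine = MatchingEngine()
for event in [("sell", 101, 5), ("sell", 100, 5), ("buy", 100, 3)]:
    engine.on_order(*event)  # events consumed one at a time, in order
print(engine.trades)  # [(100, 3)]
```

A production engine replaces the lists with cache-friendly structures (arrays indexed by price level, preallocated objects) precisely because, as noted above, cache misses dominate once locks are gone.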

Martin (LMAX CTO)


That's because most of those tutorials have not been written by somebody actually putting something in production.

I've been using asyncio for a while now, and you can't get away with a short introduction since:

- it's very low level

- it's full of design flaws and already has accumulated technical debt

- it requires very specific best practices to be usable

I'm not going to write a tutorial here, it would take me a few days to make a proper one, but a few pointers nobody tells you:

- asyncio solves one problem, and one problem only: when the bottleneck of your program is network IO. It's a very small domain. Most programs don't need asyncio at all. Actually many programs with a lot of network IO don't have performance problems, and hence don't need asyncio. Don't use asyncio if you don't need it: it adds complexity that is worth it only if it solves your problem.

- asyncio is mostly very low level. Unless you code your own lib or framework with it, you probably don't want to use it directly. E.G: if you want to make http requests, use aiohttp.

- use loop.run_until_complete(), not loop.run_forever(). The former will crash on any exception, making debugging easy. The latter will just display the stack trace in the console.

- talking about easy debugging, activate the various debug features when not in prod (https://docs.python.org/3/library/asyncio-dev.html#debug-mod...). Too many people code with asyncio in the dark, and don't know there are plenty of debug info available.

- await is just a way to inline a callback. When you do "await", you say 'do the stuff', and any lines of code after "await" are called when "await" is done. You can run asynchronous things without "await". "await" is just useful if you want 2 asynchronous things to happen one __after__ the other. Hence, don't use it if you want 2 asynchronous things to progress in parallel.

- if you want to run one asynchronous thing, but not "await" it, call "asyncio.ensure_future()".

- errors in "await" can just be caught with try/except. If you used ensure_future() and no "await", you'll have to attach a callback with "add_done_callback()" and check manually if the future has an exception. Yes, it sucks.

- if you want to run one blocking thing, call "loop.run_in_executor()". Careful, the signature is weird.

- CPU-intensive code blocks the event loop. loop.run_in_executor() uses threads by default, hence it doesn't protect you from that. If you have CPU-intensive code, like zipping a lot of files or calculating your own precious fibonacci, create a "ProcessPoolExecutor" and use run_in_executor() with it.

- don't use asyncio before Python 3.5.3. There is an incredibly major bug with "asyncio.get_event_loop()" that makes it unusable for anything that involves mixing threads and loops. Yep. Not a joke.

- but really, use 3.6. TCP_NODELAY is on by default and you have f-strings anyway.

- don't pass the loop around. Use asyncio.get_event_loop(). This way your code will be independent of the loop creation process.

- you do pretty much nothing yourself in asyncio. Any async magic is deep, deep down the lib. What you do is define coroutines calling the magic things with ensure_future() and await. Pretty much nothing in your own code is doing IO, it's just asking the asyncio code to do IO in a certain order.

- you see people in tutorials simulate IO by doing "asyncio.sleep()". It's because it's the easiest way to make the event loop switch context without using the network. It doesn't mean anything; it just pauses and switches, but if you see that in a tutorial, you can mentally replace it with, say, an http call to get a more realistic picture.

- asyncio comes with a lot of concepts, let's take a time to define them:

    * Future: an object holding a thing to execute, with potentially some callbacks to be called after it's executed.

    * Task: a subclass of Future. The thing to execute is a coroutine, and the coroutine is immediately scheduled in the event loop when the task is instantiated. When you do ensure_future(coroutine), it returns a Task.

    * coroutine: a generator with some syntactic sugar. Honestly, that's pretty much it. They don't do much by themselves, except you can use await in them, which is handy. You get one by calling a coroutine function.

    * coroutine function: a function declared with "async def". When you call it, it doesn't run the code of the function. Instead, it returns a coroutine. 

    * awaitable: any object with an __await__ method. This method is what the event loop uses to execute the code asynchronously. Coroutines, tasks and futures are awaitables. Now the dirty secret is this: you can write an __await__ method, but in it, you will mostly call the __await__ of some magical object from deep inside asyncio. Unless you write a framework, don't think too much about it: awaitable = stuff you can pass to ensure_future() to tell the event loop to run it. Also, you can "await" any awaitable.

    * event loop: the magic "while True" loop that takes awaitables and executes them. When the code hits "await", the event loop switches from one awaitable to another, and then goes back to it later.

    * executor: an object that takes code, executes it in a __different__ context, and returns a future you can await in your __current__ context. You will use them to run stuff in threads or separate processes, but magically await the result in your current code like it's regular asyncio. It's very handy to naturally integrate blocking code into your workflow.

    * event loop policy: the stuff that creates the loop. You can override that if you are writing a framework and want to get fancy with the loop. Don't do it. I've done it. Don't.

    * task factory: the stuff that creates the tasks. You can override that if you are writing a framework and want to get fancy with the tasks. Don't do it either.

    * protocols: abstract classes you can implement to tell asyncio __what__ to do when it establishes/loses a connection or sends/receives a packet. asyncio instantiates one protocol for each connection. Problem is: you can't use "await" in protocols, only old-fashioned callbacks.

    * transports: abstract classes you can implement to tell asyncio __how__ to establish/lose a connection or send/receive a packet.

Now, I'm putting the last point separately because if there is one thing you need to remember, it's this. It's the most underrated secret rule of asyncio. The stuff that is literally written nowhere ever, not in the doc, not in any tuto, etc.

asyncio.gather() is the most important function in asyncio
==========================================================

You see, every time you do asyncio.ensure_future() or loop.run_in_executor(), you actually do the equivalent of a GO TO. (see: https://vorpus.org/blog/notes-on-structured-concurrency-or-g...)

You have no freaking idea of when the code will start or end execution.

To stay sane, you should never, ever have a dangling awaitable anywhere. Always keep a reference to all your awaitables. Decide where in the code you think their life should end.

And at this very point, call asyncio.gather(). It will block until all awaitables are done.

E.G, don't:

    asyncio.ensure_future(bar())
    asyncio.get_event_loop().run_in_executor(None, barz)
    await asyncio.sleep(10)
    
E.G, do:

    foo = asyncio.ensure_future(bar())
    fooz = asyncio.get_event_loop().run_in_executor(None, barz)
    await asyncio.sleep(10)
    await asyncio.gather(foo, fooz)  # this is The Only True Way
   
Your code should be a meticulous tree of hierarchical calls to asyncio.gather() that delimits where things are supposed to stop. And if you think that's annoying, wait until you have to debug something whose life cycle you don't control.

Of course this gets old pretty fast, so you may want to write an abstraction layer such as https://github.com/Tygs/ayo. But I wouldn't use that one in production just yet.
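Putting the main points above together -- coroutine functions, ensure_future(), run_in_executor() for blocking code, and a closing gather() so nothing dangles -- here is a minimal runnable sketch. `fetch` and `blocking_work` are hypothetical placeholders for real IO:

```python
import asyncio
import time

async def fetch(name, delay):
    # Stand-in for real network IO (an aiohttp request, say).
    await asyncio.sleep(delay)
    return f"{name} done"

def blocking_work():
    # Stand-in for blocking code, run in the default (thread) executor.
    time.sleep(0.05)
    return "blocking done"

async def main():
    loop = asyncio.get_event_loop()
    # Schedule things without awaiting them yet -- but keep the references!
    t1 = asyncio.ensure_future(fetch("a", 0.01))
    t2 = asyncio.ensure_future(fetch("b", 0.02))
    t3 = loop.run_in_executor(None, blocking_work)
    # The single gather() is where all these awaitables are allowed to end.
    return await asyncio.gather(t1, t2, t3)

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
loop.close()
print(results)  # ['a done', 'b done', 'blocking done']
```

Note that gather() preserves argument order in its result list, regardless of which awaitable finishes first, which is exactly why it is the sane place to collect everything.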

