Hacker News | varunramesh's comments

I made a site that you can use to compare features / prices between cloud storage services - https://comparecloud.io


In 2014, I wrote a minimax-based AI to play games on Pokemon Showdown. We adapted Showdown's battle simulator for our tree search. The hardest part was syncing the local simulator state with the actual game state - the battle state in Pokemon is both complex and partially observed. Bugs in this process could result in the AI using Protect twice because the state wasn't updated with the fact that it used Protect the previous turn.

The minimax AI was able to use tactics like Pain Split, Spikes, and Magic Guard.
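The core search can be sketched as depth-limited minimax over simultaneous moves. This is a toy illustration only, not the actual showdownbot code: the `State`, `simulate`, and `evaluate` below are hypothetical stand-ins for the adapted Showdown battle simulator.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class State:
    """Toy battle state: each side has HP; moves deal fixed damage."""
    my_hp: int
    opp_hp: int

    def is_terminal(self):
        return self.my_hp <= 0 or self.opp_hp <= 0

    def legal_moves(self):
        return ["strong", "weak"]


DAMAGE = {"strong": 40, "weak": 20}


def simulate(state, my_move, opp_move):
    # Both moves resolve in the same turn, as in a Pokemon battle.
    return State(state.my_hp - DAMAGE[opp_move],
                 state.opp_hp - DAMAGE[my_move])


def evaluate(state):
    # Simple heuristic: HP differential (a real evaluation function
    # would weigh status, hazards like Spikes, type matchups, etc.).
    return state.my_hp - state.opp_hp


def minimax(state, depth):
    """Depth-limited minimax over simultaneous moves: for each of our
    moves, assume the opponent picks the reply that is worst for us,
    then choose the move with the best guaranteed outcome."""
    if depth == 0 or state.is_terminal():
        return evaluate(state), None
    best_value, best_move = float("-inf"), None
    for my_move in state.legal_moves():
        worst = min(
            minimax(simulate(state, my_move, opp_move), depth - 1)[0]
            for opp_move in state.legal_moves()
        )
        if worst > best_value:
            best_value, best_move = worst, my_move
    return best_value, best_move
```

In the real bot the tricky part is exactly what the comment above describes: keeping the simulator's `State` in sync with the partially observed server-side battle, so the search isn't reasoning from stale information.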

Writeup: https://varunramesh.net/content/documents/cs221-final-report... GitHub: https://github.com/rameshvarun/showdownbot


This is super interesting. I would like to see a real (learning, à la AlphaGo) A.I. play on Showdown OU someday, without cheating (i.e. limited to roughly the number of matches per day a real human player would play).

I think there are a few challenges not found in most other online games, one being that strategies that win on different strata of the ladder are not the same (e.g. hyper offense is the most efficient on the lower ladder, but at some point around 1500 / 1600 you will start losing using it...). Also, I wonder how well an A.I. trained on the ladder would do in a tournament (e.g. SPL), where the metagame is a bit different.

Your code looks like a good entry point for that, all that's needed is to write a new bot using ML libraries...


Considered building something like this last year, and couldn't find any prior art online. Looking forward to reading your report!


Lua is an awesome language, but there are significant gotchas, especially with regards to the behavior of ipairs and the length operator - https://blog.varunramesh.net/posts/lua-gotchas/#the-behavior...


I would love to see an official static site mode for Ghost (and Wordpress). That way we can get both a nice authoring experience and the low cost/security of S3 hosting.


This is always the first thing I look for in the new version announcements for big blogging platforms.

Would be amazing if I didn't need to learn a completely separate technology and/or hack together bridges/site "staticizers" in order to get quite basic functionality (the site being 100% static when no dynamic back-end interaction is needed).

That said, there are third-party plugins for WordPress that do this. I don't know about Ghost, but my hope is for first-class support for this.


Check out Publii, then (Open Source too). It's quite similar to WordPress in a lot of ways, but it's a desktop app that uploads static files. https://github.com/GetPublii/Publii


Thanks for posting this recommendation, I've been looking for a static blogging tool like this.


Why have I not heard of this? This sounds like a great tool.


Seems like a great way for them to bankrupt the .com


Not the same as static but you can get pretty good results by caching in a CDN. It’s also pretty easy to set up these days.


This is the first thing I googled after I saw this update.


Next.js has had support for a static site export for a long time. I’m just waiting for someone to make it into a user friendly CMS.


Right now, I am hosting my personal site and blog on S3 + Cloudflare for 6 cents / month. I use Middleman, because I feel that it offers more flexibility than Hugo.


I wonder if this person just doesn't understand Git / GitHub and was simply trying to remove the credit from their own deployment.

Still a dick move, but maybe not as bad as it seems.


Yeah, I hope it's something like that, because if he actually thinks the commit is a good idea then he really is taking the piss.


In the article, a Google spokesperson states that "The technology flags images for human review, and is for non-offensive uses only." The "killing machines" statement in the headline seems like an exaggeration when it should really refer to "surveillance machines."


The reason it is currently "for non-offensive uses only" is NOT that making the machine kill someone automatically is unethical, but that it is not accurate enough to hit only valid targets (it may hit the deployer or friendlies).

So yeah, maybe for now it's "for non-offensive uses only".

A few years in, it will be "humans control the kill trigger".

Then, eventually, it will be "humans deploy the device, then it fires and kills all by itself, just like a fire-and-forget missile, except it can hit multiple targets and be reused."

The dark side of technical advancement.


According to the paper, "The data include the infant’s month of birth, and a clinical estimate of gestation in weeks, which we use to estimate a month of conception."

On miscarriages, "We interpret these data with caution because they come from a subset of one state, and because fetal deaths are under-reported. Nevertheless, the data provide no evidence of an increase in miscarriages leading up to recessions that is anywhere near the magnitude required to explain a significant portion of the observed decrease in births." They similarly argue that abortions aren't significant enough to explain the reduction in births.


This is similar to hg histedit, where the "ui" is simply a text editor.


Are threads both concurrent and parallel (like goroutines)? In other words can they be used to parallelize CPU-heavy computation?


If you care about CPU-heavy computation, you care about not using an interpreted language to do it, because if you pay a 10x performance penalty, that turns your 16-core machine back into an effective 1-core machine. (Apparent number mismatch to account for slowdowns and general amdahl's law.) And, with no offense intended to Goby, a brand new scripting language built on top of a language like Go (already ~2x-3x slower than C in general) could easily see performance penalties of 100x or 500x vs C. (Think logarithmic here.)

Even Go, at 2x-3x slower than C, is already not a terribly great choice for true CPU-intensive loads. It's fast enough it can fit in some scenarios, but if you really start to ramp up you're going to want to switch to something else.

Edit: Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it? I find the idea that in one second, someone cares deeply enough about CPU performance to call their load "CPU-heavy" and want to learn how to use many cores to process it, but in the next second is oblivious to the issues of using languages that are very slow on the CPU to do the work, to be a very bizarre shape of concerns about performance. It's like someone asking how they can move ten tons of something from New York to LA as quickly as possible, but insisting that they will only use bicycles to do it. The fact that you may be able to work out a way to do it with only bicycles, even perhaps surprisingly quickly compared to what one's initial reaction may have been, isn't going to change the fact that it sure is weird how one moment you're concerned about doing it as quickly as possible and in the next moment completely oblivious to the performance consequences of the chosen tools.


> "If you care about CPU-heavy computation, you care about not using an interpreted language to do it, because if you pay a 10x performance penalty, that turns your 16-core machine back into an effective 1-core machine. "

> "Edit: Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it?"

You first talk about interpreters and then you talk about performance. They're not perfectly correlated. Given that modern language implementations are neither the simple token-processing state machines of the 1980s nor the simple compilers of the same period, this conflation of yours seems out of place in the 2010s. A proper interpreter like LuaJIT can not only reach very decent performance on computationally expensive stuff (1x-2x of C run time in SciMark 2, depending on the particular test, for example) but also delay computation to as late a time as possible and then generate specialized code based on the increased amount of information available. That can be done not only across modules (which static compilers still struggle with without some kind of link-time optimization) but also depending on actual data at run time (which static compilers are completely incapable of, unless they're somehow embedded into the final application - an option that, e.g., Lisp programs can use if they choose to).


"A proper interpreter like LuaJIT"

Is a JIT, not an interpreter. JITs are fundamentally different. I seriously doubt Goby is a JIT yet, to the point I'm not even going to check the source.

LuaJIT is also an outlier. I consider it a solid point in favor of the argument that if you build a language for speed from day 1, you can do pretty well and still build in a lot of nice features. Even so, I understand LuaJIT had to drop some Lua features to get there. However, if you first design your language's features with a lot of focus on convenience, and then try to make them fast without compromise, you end up in the PHP/Python/Perl/Ruby/Javascript space, where no matter how much work you put into it you hit fundamental walls. (Yes, even JS with all modern JIT'ing is not really that fast of a language.) The counterargument to your point is that LuaJIT is pretty much all alone in its position on the performance, despite the fact that other seemingly-similar languages have had orders of magnitude more work poured into their JITs.

I think there's a lot of up-and-coming languages that have learned a lot about designing for performance and while, alas, LuaJIT's future seems dim, I believe a lot of languages like Nim and Crystal and even to some extent Go have learned about how to be nicer languages than C or C++ while not giving up tons of performance. LuaJIT, in my opinion, still has a place of honor in the history of programming languages, far outsized from its actual use.

(Rustaceans may be assured I have not forgotten them, I just think Rust is coming at this from a significantly different angle.)


> Even so, I understand LuaJIT had to drop some Lua features to get there.

Not true. LuaJIT is complete Lua. The differences, besides being JIT-compiled, are: parts written in assembly, heavy optimization, and things like the FFI, which cannot be written in C89. Mainline Lua uses nothing but C89, which makes it run on almost anything; this is not the case for LuaJIT. Also, vanilla Lua is way smaller and the source code is cleaner and simpler. Divergence in the language is not because LuaJIT dropped features but because it was created while Lua was on version 5.1. Lua is now on 5.3 and LuaJIT hasn't caught up on everything yet; it is basically 5.1-compatible with some sprinkles of 5.2.


Goby doesn't have a JIT. It was created just 6 months ago, so many things come before performance improvements. And I'm not a programming-language expert, so introducing a JIT in my first language is too hard for me.


> Would someone like to explain what is wrong with the idea that people who care about CPU-heavy computation also need to care about the performance of the language they are using rather than just downmodding it?

I didn't downvote you, so I'm not sure, but maybe it was because of the condescending tone (assuming what they care about from a simple question and telling them what they should care about instead), and not answering their question about parallelism in the first place.


I didn't assume; the question contained "CPU-heavy", and the answer is relevant because it's a common misconception that you can make up for a slow language with parallelization, but you can't. If you've got a CPU-heavy task, you will find in practice that even using a lot of threads you'll be lucky to get even a 3x speedup on clock time, unless your problem is 100% strictly embarrassingly parallel. (I've tried a few times, which is where the 3x comes from. It's all I was able to get, and my tasks were very close to embarrassingly parallel, but the cruel nature of Amdahl's law is that it takes only very slight non-parallel components to wreck your speed.) It's not a viable solution in practice.
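The ceiling Amdahl's law imposes can be checked directly; a quick sketch (the serial fractions are illustrative assumptions, not measurements from the tasks described above):

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)


# Even a modest serial component caps the speedup hard on 16 cores:
for s in (0.0, 0.05, 0.25):
    print(f"serial={s:.0%}: {amdahl_speedup(s, 16):.1f}x")
```

With a 25% non-parallel component, 16 cores yield only about a 3.4x speedup, which is roughly the 3x figure above.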


I see your point now, and I agree. Maybe the original message could've been phrased differently, or maybe it was my fault for misinterpreting it :)


Didn't downvote, but perhaps a request for clarification: could you cite some benchmarks for these numbers? Especially the 100-500x speed loss for interpreted languages.


"Esp. The 100-500x speed loss for interpreted languages."

First, let me remind you that you have to think logarithmically here, not in absolute terms. 1x-5x is the same-sized range as 100x-500x, about half an order of magnitude. (Pedants will correctly observe it isn't exactly half, since half an order of magnitude is actually ~3.16, but it's close enough for estimate work.)
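The log-scale claim is easy to verify numerically (a minimal check, nothing here beyond the arithmetic in the paragraph above):

```python
import math

# On a log scale the ratio of a range's endpoints is what matters,
# so 1x-5x and 100x-500x have exactly the same width.
assert math.log10(5 / 1) == math.log10(500 / 100)

# Half an order of magnitude is 10**0.5 ~= 3.16, not 5,
# hence the "pedants" caveat.
print(round(10 ** 0.5, 2))
```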

I use 50x slower than C as my guideline for how fast conventional 1990s dynamic scripting languages running on a conventional interpreter are, based on: http://benchmarksgame.alioth.debian.org/u64q/which-programs-... You can see Erlang, Perl, Ruby, Smalltalk, Python 3, and Lua (not LuaJIT, but plain Lua; huge difference). You can find Node.js, with all the mighty power of a JIT'd language, down on the next graph all the way on the right, hanging out somewhere around 10x slower than C, which seems to be all you can practically expect from a JIT'd dynamic scripting language; excepting my comments about LuaJIT in the cousin comment, I haven't seen anything that convinces me you can go any faster for that crop of languages. Of course, if somebody produces a 2x-slower-than-C Python JIT, I'll just update my understanding rather than insist it can't exist. But at the moment I see no particular reason to think that's going to happen.

You can also see Go at 2-3x slower than C just a bit to the right. (This is why I say Go is pretty fast for a scripting language, but you can see it's not all that fast when compared to the conventionally compiled languages. It takes a non-trivial loss on both not doing a lot of optimization, and requiring a lot of indirect vtable lookups when you use interfaces heavily which C++ often avoids and Rust aggressively avoids whenever possible.)

100-500x speed loss is just an estimate for a brand-new, unoptimized scripting language... and, actually, it's a rather generous one; it could go another order of magnitude or two quite easily, especially in the very early days. While that may seem extreme, it's just another order of magnitude or so slower than the optimized dynamic scripting languages. For an unoptimized implementation, that's not necessarily a terrible estimate. As I understand it, Perl 6 is currently hanging out in the 100-500x-slower-than-C range, though I see no fundamental reason it won't catch up to the current scripting languages at the very least once it has had time to optimize. (Whether it can significantly exceed them I don't know; I don't even know that it's a goal, since the dynamic scripting languages are certainly plenty fast enough for a huge variety of tasks as-is, and that will continue to be true indefinitely.) These languages aren't "stuck" there; it just takes time to optimize.

And my final caveat is to point out that A: fast != good and slow != bad; it's merely one element of a rich and complicated story for every language. And B: while benchmarks always have a certain subjectivity to them, we are, broadly speaking, observing objective facts here that engineers responsible for creating solutions for people really, really ought to know and not dismiss because they are unwelcome. Being "insulted" at the idea that Python is meaningfully, fundamentally slower than Rust or something isn't going to change anything about the performance of your system, so it behooves you as engineers to be sure that you've lined your requirements, resources, and solutions all up correctly.


"These languages aren't "stuck" there, it's just that it takes time to optimize"

Another thing is that we have CPU architecture that is optimized for a one size fits all system when it comes to personal computing. If we truly wanted power from higher level, more expressive languages and programming systems, we would have architecture designed for those systems.

If you want to experiment with truly new programming languages and environments, you probably have to experiment with hardware too. Our present reality makes this difficult to change, which is really too bad.


But you may have chosen to use an interpreted language for other reasons. The GP's question still makes sense relative to the other options s/he may consider (like Go itself, or Ruby).

