Hacker News | aoe's comments

Offtopic: Which external monitor do you use?

I'm thinking of getting the 27" Apple one, but I'm waiting to see if they'll release a retina-resolution model anytime soon.


Not going to happen any time soon - the GPU power needed (esp from a laptop) is a few years off I think.

Don't get the Apple display - just get a Korean knock-off with the same panel for a quarter of the price...


The Thunderbolt display is expensive, but it also has a lot more than just the panel - it has a webcam, speakers, USB, firewire, ethernet, and a MagSafe connector to charge a MacBook, not to mention that gorgeous aluminum and glass body. Worth $999? Maybe not, but it's definitely not equivalent to a Korean display off eBay.


The comparable Dell panel was the same price last time I checked (and it didn't have all the extra ports/webcam/speakers).


I don't really need a webcam, speakers, USB, FireWire, Ethernet, or a MagSafe connector.

Gorgeous body would be great.

What monitor would you suggest?


You'll probably need a Mini DisplayPort to dual-link DVI adapter as well, if you're driving one of the Korean displays from a recent Mac. Apple sells one for a hundred dollars, or Monoprice has one for $70: http://www.monoprice.com/products/product.asp?c_id=104&c...

Just something to keep in mind when you're budgeting.


In a store full of big-screen TVs, laptops, tablets and every species of shining rectangle, one device emits a reality-distortion field like no other: the iMac. The display is just... perfect. After seeing the iMac 2 years ago, I switched to e-IPS (Dell 2209WA, then the Dell U2312HM), but somehow they're not in the same league. Way better than TN LCDs, of course, but not iMac/Cinema Display territory. Is e-IPS that much worse than IPS? Perhaps it's Apple's calibration that does the trick? I don't know.

I would love to find a cheap monitor which used the same panel as the iMac and had similar image quality. Any pointers?


Don't forget to factor in $70 - $100 (and the loss of a USB port) for the miniDP to dual-link DVI adapter you need to drive one of those panels.


They could always build a Thunderbolt display with a built-in GPU.


Absolutely—it's effectively a PCIe x2 external interface. There's plenty of bandwidth.


Quarter? Really? Could you link me?


Atwood did a decent write-up about them; it's a good place to start. http://www.codinghorror.com/blog/2012/07/the-ips-lcd-revolut...

Edit: and you found it.



I confess I spoiled myself with the Apple Thunderbolt Display, and I love it. But I did not research all the alternatives, and no doubt I paid the Apple premium tax.

No doubt there are better deals, but if, say, the premium is $350 over a 3-year lifetime (at least), that's a few Starbucks coffees or meals out per year for something I use for hours per day :)


Unfortunately, my city is not available as an option. They should have a location select using Google Maps or something.


Looks like a good concept, but I really can't see how this would work for functions with side effects. What if I write `File.rm("something")` and press enter?

And how would this work with, say, Ruby on Rails development?

Can anyone throw some light?


This is an issue[1] that SublimeLinter ran into when syntax checking Perl modules. Code in a BEGIN block (which gets executed at compile-time) could actually delete files when using `perl -c` to handle syntax checking:

    BEGIN { `rm -rf $ENV{HOME}` }

They had to switch to static analysis rather than relying on Perl's built-in syntax checking, to avoid executing code in BEGIN blocks.

[1] https://github.com/SublimeLinter/SublimeLinter/issues/77


As with all tools, you have to consider whether it's the right place to use it. Things like the instarepl aren't very valuable in highly side-effecting code, but being able to evaluate some block on command still is. You still have to test whether that file-removing code is doing what you think it is as you write it. That being said, you have full control over what does and doesn't end up eval'd - just don't press cmd-enter :)


> just don't press cmd-enter

That seems like a dangerous approach. People could get in the habit of hitting cmd-enter, and accidentally run some destructive code.

Perhaps you could sandbox the execution environment so destructive operations are logged but not actually performed.


There are environments that have behaved just this way for decades (emacs elisp buffers, Satimage Smile for AppleScript text windows), and no one complains or claims they're not well worth the risks. How is this different from typing destructive code anywhere, or typing command lines at the shell and just hitting enter? If you don't know approximately what your functions are doing, you should never call them under any circumstances.

In practice, this has just not been a legitimate concern. Yes, you have to look both ways before you cross a busy street; people are pretty good at looking both ways.


Yes, in SQL and filesystems, such security relies on permissions at a lower level than your shell/REPL.

(BTW, this post is written using Light Table, rather than the usual emacs. Nice feel! :)


> If you don't know approximately what your functions are doing, you should never call them under any circumstances.

Then why would I use an editor whose main selling point is how easy it is to do that?


Well, if you really want to be on the safe side of that particular power vs. safety tradeoff, the REPL code could run from (say) clojail. (https://github.com/flatland/clojail)


But why would you do that? Press enter only where it makes sense. It's just like with other tools: you can cut your finger with a knife, but that doesn't make knives less useful.


If I can do it only when it makes sense, I would default to "don't do it". So what's the point of binding it to an oh-so-convenient shortcut key when I have to think about when I can use it? How is that different from using a terminal or a Makefile to run it?


It's not that you can do it only in very specific cases. Even `File.rm("something")` usually works, because if you're removing something, you have code for creating it too, and you can quickly test both with command+enter instead of opening the whole application and trying to invoke it through your GUI or something.
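For example, the create/remove round trip described here can be exercised against a throwaway directory (a sketch; `File.rm` as written above is pseudocode — in Ruby's stdlib the corresponding calls are `File.write` and `FileUtils.rm`):

```ruby
require "tmpdir"
require "fileutils"

# Evaluate the create/remove pair against a scratch directory instead of
# real project files, so a stray cmd-enter can't destroy anything.
Dir.mktmpdir do |dir|
  path = File.join(dir, "something")

  File.write(path, "scratch data")   # the "create" half
  raise "create failed" unless File.exist?(path)

  FileUtils.rm(path)                 # the "remove" half
  raise "remove failed" if File.exist?(path)
end
```

The temp directory is cleaned up when the block exits, so the whole pair is safe to re-evaluate as often as you like.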


If `File.rm("something")` is undoable, then it's not a problem, right? We're talking about developer tools anyway; it's not inconceivable that a mock environment would be set up for the program, with undoable side effects or whatnot. However, this would require serious programming-model changes, which are probably not in the scope of Light Table.


Which is a shame. As joesb points out above, this severely reduces the benefits of the "instant execution&feedback" features.

Given that this is an alpha release, maybe it's still time to rethink that approach and allow for sandboxed environments.


There's absolutely nothing preventing a sandboxed environment :) Connections are just processes that talk over TCP; if the client wants to sandbox whatever it evals, it can definitely do that.


Sorry I wasn't clear, but the ability to "undo" an operation definitely requires a programming-model change. The sandbox is only one part of that.

Rolling your own magic sandbox isn't going to help much without more buy-in from the environment (though I'm sure most of us test in sandboxes anyway).


Even GHC's (Haskell) REPL executes side-effects.


I'd expect that instead of eval, you'd want to run some sort of unit test for effectful code.


The child process could be run inside a chroot jail, or a virtual machine.


To that end, the developer could anticipate that their code has a lot of side effects and just run Light Table in a test VM, with snapshots to hop back to in case they really mess up. Setting one up isn't hard, and you only pay the overhead cost of a VM or separate test machine if your use case demands it.


Umm, so retrieving at 1MB/s is around $2/GB?


Transferring 1 MB/sec for 1 hour = 3.52 GB per hour

Assume you stored 1 TB (1024 GB) and need to retrieve all of it.

Free allowance per day = (1024 * 0.05)/30 = 1.71 GB/day

Free hourly transfer allowance = 1.71/24 = 0.07 GB/hour

Billable hourly transfer rate = 3.52 - 0.07 = 3.44 GB/hour

Retrieval fee = 3.44 GB/hour * 720 hours * $0.01/GB ≈ $24.80

Based on info here: https://aws.amazon.com/glacier/faqs/#How_much_data_can_I_ret...
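The arithmetic above can be reproduced in a few lines. This is a sketch of the pricing formula as described in the linked FAQ (peak billable hourly rate, charged at $0.01/GB for every hour of the month); the inputs are this thread's example numbers, not official constants:

```ruby
stored_gb       = 1024.0                         # 1 TB archived
rate_gb_hour    = 3600.0 / 1024                  # 1 MB/s sustained ~= 3.52 GB/hour
free_gb_day     = stored_gb * 0.05 / 30          # 5% of stored data, prorated daily
free_gb_hour    = free_gb_day / 24               # ~0.07 GB/hour
billable_hourly = rate_gb_hour - free_gb_hour    # ~3.44 GB/hour

# Glacier bills the peak billable hourly rate across all 720 hours of the month.
retrieval_fee = billable_hourly * 720 * 0.01
puts format("$%.2f", retrieval_fee)              # => $24.80
```

Carrying full precision through gives about $24.80; note this is the retrieval fee only, before the separate bandwidth charge discussed below.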


In addition to the ~$25 retrieval fee there will also be a ~$120 bandwidth fee, so we're looking at ~$145. And it will take 12 days to complete at this speed.

Bumping up the speed by a factor of 10 will bump up the retrieval fee by the same factor, so $250 + $120 = $370 total restore cost, in a little over a day.

Hm. I wonder if you can bill the insurance company for this in case of fire etc.

Alternatively, it would be nice if Amazon allowed several accounts to pool together their retrieval allowance - it's not likely that all of my friends will have their house burn down at the same time.


It's actually a bit more complicated than that. You get 5% of your data back for free every month, but it's prorated at a daily rate of (5% of your data)/(number of days in the month). If you store 1 TB you can retrieve ~51 GB/month = ~1.7 GB/day.

If you go over that, it's more expensive, but they prorate it based on how much free data you get and how fast you were going while you downloaded it.


Still, even if that's the maximum you can be charged for a small backup, it's too much.


Slightly off-topic, but when will we see such an implementation of Ruby (most importantly, speed)?

Rubinius looks like a similar project, but it's not even as fast as MRI 1.9.2/3.


So, does anyone have a list of the major changes coming in 2.0?

Btw, slightly misleading title. It kind of implies that 2.0.0 is out.


According to Matz last year,

"The version number goes up to 2.0 but the changes are rather small. Smaller than the ones we made in 1.9."

http://www.rubyinside.com/ruby-2-0-implementation-work-begin...

Here's a Quora thread with links to a presentation by Matz and a summary by Yehuda Katz: http://www.quora.com/Ruby-programming-language/What-are-the-...

"Language improvements:

- Named arguments.. 1.step(by: 2, to: 10) { ... }

- Selector namespaces (unclear to me whether this differs from refinements as described by Katz)

- Multiple inheritance

Interpreter Improvements:

- Incremental performance improvements over 1.9's VM

- Better compatibility with non-unix environments and small/constrained devices (embeddable)

- Sandboxed VM's (VM per thread)"

Matz's presentation: http://www.youtube.com/watch?feature=player_embedded&v=t...

Yehuda's summary of "refinements": http://yehudakatz.com/2010/11/30/ruby-2-0-refinements-in-pra...


> * Sandboxed VM's (VM per thread)

This will not make it into Ruby 2.0.0:

https://bugs.ruby-lang.org/issues/7003


> Named arguments

yes, though it's basically an optional argument hash. AFAIK, you can't do required named arguments without weird hacks, or specifically checking the arguments. For example:

    irb(main):007:0> def foo(bar: bar, baz: Object.new); [bar, baz]; end
    => nil
    irb(main):008:0> foo(bar: 1)
    => [1, #<Object:0x007fcaa40db4e0>]
    irb(main):009:0> foo(baz: 1)
    NameError: undefined local variable or method `bar' for main:Object
    	from (irb):7:in `foo'
    	from (irb):9
    	from /Users/aaron/.local/bin/irb:12:in `<main>'
    irb(main):010:0>
This hack makes the "bar" parameter required, but only because the value is evaluated when the method is called, and you get a NameError (rather than an ArgumentError).

> Selector namespaces

yes, but it's called refinements. You can see how they're used here: https://github.com/ruby/ruby/blob/trunk/test/ruby/test_refin... (sorry for the link to a test, I'm feeling lazy ;-) )

> Multiple inheritance

Sorry, there won't be multiple inheritance.

> Incremental performance improvements over 1.9's VM

yes, ko1 has been working on removing / optimizing bytecodes in the VM.

> Better compatibility with non-unix environments and small/constrained devices (embeddable)

I don't know of any work on this other than mruby, which isn't MRI.

> Sandboxed VM's (VM per thread)

nope. https://bugs.ruby-lang.org/issues/7003

Other stuff:

* DTrace probes

* Better within-Ruby tracing https://bugs.ruby-lang.org/issues/6895

I run edge ruby against rails daily. The main incompatibilities I've hit in Ruby 2.0 are what methods respond_to? searches (I've blogged about that here: http://tenderlovemaking.com/2012/09/07/protected-methods-and... ), and the `Config` constant has been removed (which is sometimes an issue for C extensions).

EDIT

Just thought of this for the required args:

    irb(main):001:0> def foo(bar: (raise ArgumentError), foo: Object.new); [bar, foo]; end
    => nil
    irb(main):002:0> foo(bar: 1)
    => [1, #<Object:0x007fc65a882f48>]
    irb(main):003:0> foo(foo: 1)
    ArgumentError: ArgumentError
    	from (irb):1:in `foo'
    	from (irb):3
    	from /Users/aaron/.local/bin/irb:12:in `<main>'
    irb(main):004:0>
You could probably define a private method like `required` or some such, like this:

    irb(main):001:0> def required(name); raise ArgumentError, "missing param: %s" % name; end
    => nil
    irb(main):002:0> def foo(bar: required(:bar), foo: Object.new); [bar, foo]; end
    => nil
    irb(main):003:0> foo(bar: 1)
    => [1, #<Object:0x007fcfea143338>]
    irb(main):004:0> foo(foo: 1)
    ArgumentError: missing param: bar
    	from (irb):1:in `required'
    	from (irb):2:in `foo'
    	from (irb):4
    	from /Users/aaron/.local/bin/irb:12:in `<main>'
    irb(main):005:0>


> > Multiple inheritance

> Sorry, there won't be multiple inheritance.

Thank all that is holy in the world.


Amen to that.

C++ pretty much ruined that party for everyone.


So it's syntactic sugar? What a shame. The whole point is having the language handle it, so its default behavior is well-known and uniform, instead of implementing umpteen patterns and validations[0].

[0] e.g. assert_valid_keys (raising ArgumentError on a mismatch, so it's really only for the options={} pattern): http://api.rubyonrails.org/classes/Hash.html#method-i-assert...


Well, it does actually define locals. But, IIRC, it uses the symbol hash parsing productions, so the default values are required. The only way to specify required parameters (without the above hacks) is to do a traditional method definition:

    irb(main):001:0> def foo(bar, baz: Object.new); [bar, baz]; end
    => nil
    irb(main):002:0> foo(1)
    => [1, #<Object:0x007fbc39161ed8>]
    irb(main):003:0> foo(1, baz: 10)
    => [1, 10]
    irb(main):004:0> foo(baz: 10)
    ArgumentError: wrong number of arguments (0 for 1)
    	from (irb):1:in `foo'
    	from (irb):4
    	from /Users/aaron/.local/bin/irb:12:in `<main>'
    irb(main):005:0>


I really like the required(name) solution.

  module RequiredParameter
    refine Kernel do
      def required(name)
        raise ArgumentError, ...
      end
      private :required
    end
  end
Can't wait to test this code. :P


Another nice VM improvement would be the introduction of unboxed floats on 64-bit architectures (akin to Fixnums).

https://github.com/ruby/ruby/commit/b3b5e626ad69bf22be3228f8...


Multiple inheritance is the major one for me... I've been waiting a long time for this.


What does multiple inheritance provide that isn't better achieved through composition/modules?


Agreed. Didn't we learn from C++ that multiple inheritance is good in theory but a horrible idea in practice? Ruby already lets you be 'magical' in too many ways.


> Didn't we learn from C++ that multiple inheritance is good in theory but in practice a horrible idea?

No, we learned from C++ that C++ multiple inheritance is an atrocity. Python's MI works in a clear and obvious way, similar[0] to how the Ruby inheritance chain is clear and obvious. MI really doesn't fit Ruby though, and IMHO would look too bolted on.

Most of the time the problem is people beating the platform they develop on into submission, instead of embracing it by getting a clear picture of what happens.

[0] As _why said[1], python and ruby are damn close: you can easily get something similar to Ruby's modules and inject them dynamically in the inheritance chain: https://gist.github.com/3951273

[1] https://github.com/whymirror/unholy


It is also available in Eiffel, OCaml and Python.

So there are some good ways to make use of it.

But I agree that interfaces/traits are a better way of dealing with the multiple-inheritance scenario.


Perl (5 & 6) also has multiple inheritance. Current best practice in the Perl community is to avoid using MI and instead make use of roles.

ref: https://metacpan.org/module/Moose::Role | http://en.wikipedia.org/wiki/Perl_6#Roles | http://modernperlbooks.com/mt/2009/05/perl-roles-versus-inhe...


Magic is bad, but multiple inheritance isn't magic; it's clear and obvious, the only potential wrinkle being method resolution order when two superclasses define the same method, which is fine as long as it's consistent and documented.

Almost all of the problems with multiple inheritance in C++ are to do with the static type of an instance, and go away in a language where everything is virtual by default, as in ruby.
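Ruby's module system already exhibits exactly this kind of consistent, documented resolution order: `Module#ancestors` shows the lookup chain, and the most recently included module wins. A minimal sketch with made-up names:

```ruby
module Walkable
  def move; "walking"; end
end

module Swimmable
  def move; "swimming"; end
end

class Duck
  include Walkable
  include Swimmable   # included last, so it sits closest in the lookup chain
end

# The ancestor chain is linear and inspectable: later includes win.
p Duck.ancestors.first(3)  # => [Duck, Swimmable, Walkable]
p Duck.new.move            # => "swimming"
```

This is the "consistent and documented" method resolution that MI needs; modules give Ruby most of its benefit without the diamond-shaped ambiguity.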


Why is that even something worth working on? I find modules work pretty well.


Sometimes you want to express multiple is-a relationships at the same time. In that case, composition is a hack, and multiple inheritance is a better match for the concept.

I prefer having more options how my objects behave than fewer.

But, since Ruby 2.0 does not and will not have multiple inheritance, this is all off-topic.


I'm seeing contradictory information about this. Is multiple inheritance in or out?


I believe this also includes the bitmap-marking GC changes, which will make the GC copy-on-write friendly. This is pretty important for a lot of folks running Ruby web servers.

See, for example:

http://patshaughnessy.net/2012/3/23/why-you-should-be-excite...


This is probably the most exciting part to me: Having copy-on-write without having to run REE.


You can have this right now. We backported the GC; Shopify is running this Ruby version in production.

https://gist.github.com/1688857


That's great! I hope this will work on 1.9.3-p286 as well?


Yes, the Falcon patches work like a charm against p286.



I'm guessing the recent comments widget is just doing an ORDER BY date DESC LIMIT 10 or something? With an index on date, I don't see how this could be slow.

How long does the query take?

How many rows do you have?


It appears that he has 188,590 rows. It definitely sounds like a configuration issue. Maybe the MySQL table cache is bogged down, or the query cache is full or malfunctioning. Could be a lot of things.

Sorting that into a temp table, and caching that in memory would be really fast on a server with reasonable RAM and proper indexes.


InnoDB helps a lot. MyISAM likes to perform joins on disk if a table has a TEXT field ... even if you're not using that field in your query.


Are there any other pros/cons in switching to Slim? Other than the syntax of course.


I'd say speed and syntax are the main reasons. I've used both Haml and Slim over 5 or so projects, and the jump doesn't feel all that significant. If you know Haml, you can pick up Slim within a couple of days at most.


I'm considering switching to Slim from Haml (not because of this benchmark, but the syntax).

Are there any downsides to Slim that I should be aware of?


Which is the best PLT-related Twitter novelty account?

