Hacker News | TylerJay's comments

Thirded. Can someone who doesn't like symbols help me understand their downsides? I really like how natural they make passing around params that can be one of {x1,x2,x3...,xn} different values. Accomplishing the same with strings just feels messier and more error-prone.


The only problem with Ruby's implementation of symbols was the possibility of DoS, which is resolved through garbage collection in the latest Ruby.

Something I found a bit ironic about the video linked in the article is at 5 minutes 30 seconds in, where he states, "Why not give the memory address a name that makes sense to a person?" Here he is referring to assembly, the abstraction it provides, and how this relates to the abstraction that is variables. Symbols are really a cross-system abstraction over an immutable memory address; I don't see how this is a bad thing.


See djur's response to me in the thread. Apparently the cross-process piece is not true. So now that symbols are garbage collected, and given the way "string".freeze works, the two are basically the same construct.


> Can someone who doesn't like symbols help me understand the downsides of them?

I wish I had been clearer in my talk but I only had 30 minutes and wanted to cover other topics. Here is a more comprehensive argument against symbols in Ruby:

In every instance where you use a literal symbol in your Ruby source code, you could replace it with the equivalent string (i.e. the result of calling Symbol#to_s on it) without changing the semantics of your program. Symbols exist purely as a performance optimization. Specifically, the optimization is: instead of allocating new memory every time a literal string is used, look up that symbol in a hash table, which can be done in constant time. There is also a memory savings from not having to re-allocate memory for existing symbols. As of Ruby 2.1.0, both of these benefits are redundant. You can get the same performance benefits by using frozen strings instead of symbols.

  "string".freeze.object_id == "string".freeze.object_id
Since this is now true, symbols have become a vestigial type. Their main function is maintaining backward compatibility with existing code. Here is a short benchmark:

  def measure
    t0 = Time.now
    yield
    t1 = Time.now
    return t1 - t0
  end

  N = 1_000_000

  puts measure { N.times { "string" } }
  puts measure { N.times { "string".freeze } }
  puts measure { N.times { :symbol } }
There are a few things to take away from this benchmark:

1. Symbols and frozen strings offer identical performance, as I claim above.

2. Allocating a million strings takes about twice as long as allocating one string, putting it into a hash table, and looking it up a million times.

3. You can allocate a million strings on your 2015 computer in about a tenth of a second.

If you’ve optimized your code to the point where string allocation is your bottleneck and you still need it to run faster, you probably shouldn’t be using Ruby.

With respect to memory consumption, at the time when Matz began working on Ruby, most laptops had 8 megabytes of memory. Today, I am typing this on a laptop with 8 gigabytes. Servers have terabytes. I’m not arguing that we shouldn’t be worried about memory consumption. I’m just pointing out that it is literally 1,000 times less important than it was when Ruby was designed.

Ruby was designed to be a high-level language, meaning that the programmer should be able to think about the program in human terms and not have to think about low-level computer concerns, like managing memory. This is why Ruby has a garbage collector. It trades off some memory efficiency and performance to make it easier for the programmer. New programmers don’t need to understand or perform memory management. They don’t need to know what memory is. They don’t even need to know that the garbage collector exists (let alone what it does or how it does it). This makes the language much easier to learn and allows programmers to be more productive, faster.

Symbols require the programmer to understand and think about memory all the time. This adds conceptual overhead, making the language harder to learn, and forcing programmers to make the following decision over and over again: Should I use a symbol or a string? The answer to this question is almost certainly inconsequential but, in the aggregate, it has consumed hours upon hours of my (and your) valuable time.

This has culminated in objects like Hashie, ActiveSupport’s HashWithIndifferentAccess, and extlib’s Mash, which exist to abstract away the difference between symbols and strings. If you search GitHub for "def stringify_keys" or "def symbolize_keys", you will find over 15,000 Ruby implementations (or copies) of these methods to convert back and forth between symbols and strings. Why? Because the vast majority of the time it doesn’t matter. Programmers just want to consistently use one or the other.
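The pattern those 15,000 implementations follow is simple enough to sketch. A minimal version of the two conversion methods described above (real implementations, e.g. ActiveSupport's, also handle nested hashes and keys that don't respond to to_sym) might look like:

```ruby
# Convert all top-level string keys of a hash to symbols.
def symbolize_keys(hash)
  hash.each_with_object({}) do |(key, value), result|
    result[key.to_sym] = value
  end
end

# The inverse: convert all top-level keys to strings.
def stringify_keys(hash)
  hash.each_with_object({}) do |(key, value), result|
    result[key.to_s] = value
  end
end

symbolize_keys({ "name" => "Ruby" })  # => { name: "Ruby" }
stringify_keys({ name: "Ruby" })      # => { "name" => "Ruby" }
```

The very existence of this boilerplate is the argument: the conversion carries no information, it just papers over the type split.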

Beyond questions of language design, symbols aren’t merely a harmless, vestigial appendage to Ruby. They have been a denial of service attack vector (e.g. CVE-2014-0082), since they weren’t garbage collected until Ruby 2.2. Now that they are garbage collected, their behavior is even closer to a frozen string. So, tell me: Why do we need symbols, again?

I should mention, I’d be okay with :foo being syntactic sugar for a frozen string, as long as :foo == "foo" is true. This would go a long way toward making existing code backward compatible (of course, this would cause some other code to break, so—like everything—it’s a tradeoff).
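For concreteness, this is the current behavior the proposed sugar would change:

```ruby
# Today, a symbol never equals its string counterpart; you must
# convert explicitly in one direction or the other.
:foo == "foo"          # => false
:foo.to_s == "foo"     # => true
"foo".to_sym == :foo   # => true
```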


I don't think so, but I'll second a request for that! I think it would be good to have on the Showcases page.

Here are some examples that have some UI widgets in common, like list elements/navigation, but they are clearly different apps so it's not blatantly obvious which differences are based only on the platform:

https://www.nativescript.org/showcases


I think what makes it "native" or not is how the app is rendered. You can basically take any web application and wrap it into an app, but it displays in a "web view" whereas a "native" app uses the built-in display components of the OS. For example, the little slider gray/green icon instead of radio buttons on iOS. Also, native apps have access to certain device APIs like the camera while non-native apps don't (at least I don't think so).


Cordova, which uses HTML in a WebView paradigm, lets you wrap native APIs as plugins, so you can access them from JavaScript. You can access most (all?) native functionality from within JS using either Cordova-supplied or third-party plugins. Honestly, Cordova + Ionic/Angular has been a great experience so far. Very few problems and a super quick time to market. I can't imagine iterating as quickly for two platforms at once (three if you care about Windows Mobile).


Yes, but you don't get a native UI with hybrid solutions like Cordova. Also, in order to use native APIs, you have to create special wrappers for them, whereas in NativeScript, it looks like you can call the APIs directly. (How this affects performance, I don't know.)


How does calling APIs directly work with different platforms using different names, calling conventions, etc.? I like that with Cordova someone already did the work of normalizing the native calls.

Agreed about the lack of native UI. Things like Ionic help, but it is not 100% perfect by any means.


So... You're right, of course, but I think the spirit of TC's comment was that he or she doesn't like the idea of nations and thinks people should just be able to go/live wherever they want. In that sense, the two of you actually seem to agree with the core sentiment that "where you're born shouldn't matter." I don't think anyone's trying to argue that it currently doesn't.


> Patients that switched from Bootstrap to Min reported up to a ninefold decrease in markup

hehe.

Actually though, as someone who learned CSS with the rule "Use Divs! No Tables!", what does a page with fewer divs even look like? What are the workhorses for page layout?


That rule is good. Using divs normally is fine. However, it is possible to overuse divs. Sometimes you'll have a div nested inside a div (and so on) six levels deep. That's considered bad.

For example:

  <div id="div-holder">
    <div id="inner-padding-div">
      <div id="right-align-div">
        <div id="centering-div">
          <div id="actual-content">

is bad; you could reduce that to one or two divs at most.


Gotcha. Thanks!


I think you hit the nail on the head here. That's one of the most interesting parts of thinking about GAI for me. Which parts will end up being top-down and which parts will end up being bottom-up? And even if we have evidence that a certain part is TD or BU in humans, do we even want machine intelligence to work the same way?

The article says something to the effect of "no matter how much you advance this strategy, you never get a toddler out of it." And that makes sense because, presumably, certain parts of the human brain exercise some sort of top-down control over the sensory-data-processing and other parts.

For example, it seems like the human mind is built to see things as things. Does the human mind really start off seeing "pixels" and then learn by itself to think of the world as solid, whole objects instead of collections of similarly-colored photons/pixels or atoms? It seems like this is a universal use-case, and it would make sense if our tendency to see the world in terms of "things" instead of patches of color is built-in (gestalt psychology seems to suggest this as well).

It sounds like the AI in the article starts off from pixels and then builds up some sort of model of the blocks, the ball, the paddle, game physics, etc. (but then again, maybe it doesn't have those models at all and is just doing statistical analysis on patterns of pixels). Either way, it likely doesn't have any higher, context-independent model of objects/things like humans do. I suspect this may be one of the hurdles in transfer learning. Humans think of objects as having certain properties. When other objects in other contexts appear to have similar properties, we guess that they may have other properties in common, which gives us at least a rough model of the new object.

So I guess what I'm trying to say is: Humans have hierarchical models of the world that let us think separately about patterns of light, atoms/molecules, whole physical objects/things, systems, etc. They are all first-class citizens and we ascribe properties to each of them. We already have a rough model of anything at the same level that appears in a different context but has similar-enough properties to something we already know. It seems to me like this is fundamentally connected to humans' ability to do transfer learning. Could this effect be achieved through bottom-up algorithms, or are we going to have to figure out some top-down way of developing transferable, generalizable, hierarchical models?


> It's not yet possible to get a hierarchical system to emerge from machine learning. Medium term planning as an emergent behavior is a near term big challenge for AI.

It's also a big challenge for AI safety / machine ethics / formal verification. It's notoriously hard to prove statements about emergent behavior in complex or dynamic systems.


This is the most fascinating part of mental illness. And it's a bittersweet thought that as horrible as mental illness is, it might be what allows us to really understand the human mind.

However, making evolutionary arguments for psychological traits is tricky business and while I'm not a professional evolutionary psychologist myself, I think the explanation you gave violates a fundamental principle of evolutionary arguments.

Imagine gene A confers a fitness advantage because it allows a person to better cope with a selection pressure X, and gene B confers an additional fitness advantage against X, but only if gene A is present, and does nothing otherwise. In this (common) case, gene B will not be selected for unless gene A is already universal in the population. Following the same rules, imagine we then get gene C which is dependent on B, then a variant of gene A called A* which is dependent on B and C, and so on. Eventually, if even one gene is removed (either by sexual reproduction with someone who doesn't have it or by mutation), the whole tower falls down and the entire piece of complex biological machinery is broken.

Basically, there's no way for selection pressure X on a significant chunk but not all of the population to produce a piece of complex machinery (read: involving 2+ interdependent genes) in the first place, and it would be broken beyond all repair in all offspring who didn't have both parents with the full genetic instructions. So the idea that "many humans were abused, enslaved, etc." only works if the selection pressure was on everyone and the adaptation is universal in the human population, unless it's attributed to a single mutation.

The rarity of this condition isn't consistent with it being a feature. Seems like a bug to me.

Hope this was helpful!


Isn't it more likely that they just sent the emails themselves but used his email as the "Reply To" address? A lot of software systems (SalesForce, Hubspot, etc.) do that too.


That would instantly get flagged as spam by virtually everybody if SPF is enabled on his domain. And yes, gmail.com seems to have SPF enabled.


Errm, no, a Reply-To header is not the same as the sender envelope. Spam filters will flag emails that fake the sender envelope, and SPF also only checks the sender envelope. A Reply-To can generally be whatever you want, same for the From header. So LinkedIn sends the email with the From and Reply-To headers set to your address, but the sender envelope is from their server. The email appears to come from you, but was sent from a LinkedIn server, which is set up to pass the SPF test and so does not get flagged by the spam filter. Check the headers in the email's raw source and you will see what I mean.
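To make the distinction concrete, here is a sketch separating the SMTP envelope sender (what SPF checks) from the message's From: and Reply-To: headers (what the recipient sees). All addresses and domains are made up for illustration:

```ruby
# The envelope sender is given to the SMTP server in the MAIL FROM
# command; it is not part of the message body below. SPF is evaluated
# against this address's domain.
envelope_sender = "bounces@linkedin.example"  # hypothetical address

# The headers inside the message can name someone else entirely.
message = <<~MAIL
  From: Alice User <alice@gmail.example>
  Reply-To: alice@gmail.example
  Subject: Invitation

  Alice invited you to connect.
MAIL

# The mail passes SPF for linkedin.example, yet every client displays
# it as coming from alice@gmail.example.
```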


When did I ever say anything about reply-to? Please do not put words in my mouth and then speak condescendingly to me. It's extremely irritating.


The post you were commenting on was talking about reply-to... Please practice your reading comprehension.


The post I was commenting on apparently got edited after I replied to it. And your condescension is not at all appreciated.


It appears the parent comment I was replying to got edited after I posted this. Thanks TylerJay for completely changing the meaning of your comment without any notice.

The original comment suggested that they sent the email themselves as if it had come from the user, not merely setting Reply-To.


That would be true for only some recipients (by far not "everybody"), and only if Google's SPF record forbade other SMTP servers with -all. It doesn't; it uses ~all soft-fail.

Why? Precisely because of this: there are lots of perfectly legitimate situations when a third party sends email on your behalf.

Moreover, if LinkedIn signs their outgoing emails with DKIM, that would be a positive signal for a spam filter (and e.g. Gmail would show such mail as "sent via LinkedIn" or something to that effect).


Sounds like you know more about this than I do. I will defer to your greater knowledge.

Although "there are lots of perfectly legitimate situations when a third party sends email on your behalf" strikes me as being rather wrong. I cannot think of a single reason why anyone else should be sending email that claims to be coming from my email address. Sending email that lists me as a reply-to, sure. But as the sender? Not a chance.


It's common in enterprise products where the user's action originates somewhere other than email.

Like: I've uploaded version 1 of the plans and added some notes, and the system needs to send out an email to everyone. I did the action; it's coming from me, not the system.

There's a reason it's part of the spec.


You did the action, but that does not ever justify sending the email with an envelope claiming it came from you. Because you did not send the email. It could certainly put you as a Reply-To on the email, and it might possibly justify putting your name on the From line, but actually claiming to have been sent from your email address is wrong.


Says you.

However, all the clients say, "Why does this email come from admin@thibgy.com? I want it to come from my email address; I'm sending it."


No, you didn't misread it. You can see the example for yourself in the paper. It's Section 2 here:

http://research.microsoft.com/en-us/um/people/lamport/pubs/p...

