Hacker News | 10-6's comments


Yes, true. The strongarming tactics of MS aren't referenced. While it brings up some interesting points, the piece paints the common revisionist view that MS was crushed by Google on search alone, with no other mitigating factors. A fairy tale of sorts. Google would not have risen to anything if MS had continued with the anti-competitive practices that were the go-to MS strategy for handling competition into the early 2000s. It was baked into the MS culture and common knowledge, maligned on message boards long after it had ceased. Under US Justice Dept. antitrust scrutiny, MS was forced to compete more fairly against superior talent and lower margins (aww, no more predatory acquisitions). With a giant, inflexible infrastructure and rigid payscales, it was a slow, inevitable decline.


This is exactly right. A lot of posts regarding loading times or bundle sizes for frameworks/libraries forget that there are trade-offs when it comes to building an application.

The correct way to frame this is: use a slow/clunky/large/etc framework vs. build everything from scratch (which comes with its own costs).

Sure, you can optimize parts of your application to speed up the JS portion or even remove it completely [1], but it's not always as simple as "your framework is making your application slow so you should think about ditching it."

This article actually ends with a very reasonable conclusion in the section "Are Frameworks Evil?" but I've seen plenty of articles where the author doesn't offer an alternative to some library/framework [2].

[1] https://twitter.com/netflixuie/status/923374215041912833?lan...

[2] https://dev.to/gypsydave5/why-you-shouldnt-use-a-web-framewo...


The author explains a few problems with the current state of the Web under the section The Five Lacks:

- "On the closed social web... We lack freedom, innovation, trust, respect, and transparency."

- "Innovation on these platforms is dying."

- "And there’s little transparency. All of the data is locked up or rate limited to a prohibitive degree."

While some of these may or may not even be true (innovation dying, really?), I think the author makes a large leap from his premises to the conclusion. Just because the author claims there are issues with the current state of the web, that doesn't mean the solution is to conclude "the tech giants in control suppress our freedom" and remove yourself from the current web platforms and applications (e.g. fb, google, etc.)

The best way to determine whether this is a viable and useful solution, and whether some of these apps are actually something people want and find useful, is to see how many people ditch the "tech giants'" applications and start using these new apps built for the social web.

A lot of ideas sound great in theory but don't hold up years down the line. Plenty of new applications and social networks have been created over the years with great explanations and an "Our Philosophy" section, but what actually matters is whether or not people change their habits and start using them.

The problems the author listed in The Five Lacks section are completely real problems on a lot of the applications on the Web, but I don't think any of these social web apps listed in the article are the solution.


It's easiest for me to approach these projects because I'm betting those developing such projects are willing to hear my user story and jam on healing-centered design for recovering information addicts such as myself.

My reason for mentioning these things is to help start the conversations around them.

I think any app with infinite scroll, which I think Patchwork might have, isn't respecting how repetitive motions like that lead to addictive behaviors and/or repetitive stress issues.

Also, I'd like to see other design patterns useful for addicting users to be publicly and loudly set aside.

Since these apps are open source, I can at least start the conversation & I can do it with pull requests.

I also think the metric of conversion is misguided for determining if it's successful. I think it's time to start measuring software design based on subjective well-being. Allow users to see metrics related to their well-being, like how much time is spent using the apps and in what ways.

I think people are first going to populate a new app ecosystem with iterations of what's popular outside the ecosystem before doing the serious work of addressing all the ways we do software wrong, beyond what's kept in mind when designing the ecosystem. Could that be what you're talking about?


It can't work. If you don't design for addictive behaviors, none of the 80% of users who aren't addicts will use your service more than once. It's a catch-22: damned if you do, damned if you don't.


This is a limiting belief. I no longer choose to hold such beliefs because they engage cognitive biases that steer me away from imagining ways it can happen.

I hypothesize an approach oriented around human needs can be immediately valuable and can grow in value for an individual user, even if they never connect to another user. Even if I haven't yet imagined it.

Choosing my beliefs intentionally is a skill I developed on my own and think a social app that simultaneously taught such a useful skill might be something people choose to learn. I've cultivated a set of skills I use to stay sane in the face of a weird world where accurately judging what's true is getting harder. I bet others could benefit from learning how to not be gaslit by politicians, for example. I think an app teaching such skills would go viral and spread as long as it remained useful.


Lots of people are heavy on beliefs and light on reality these days. I prefer to stay grounded and to study actual real world phenomena instead. It's better to use the best proven tools and methods to enact the change you want to see than to swim upstream with ineffective methods because reality makes you feel uncomfortable.



Are there any on the list here that are best for learning algorithms in a manner similar to a course curriculum, with explanations? I'm asking from the perspective of a self-taught web dev (django) with holes in my knowledge. Some algo books I have so far seem too mathy. I'm not afraid of math, but I understand it better when coding the algorithms out myself.

I wish projecteuler would provide more hints/explanations. Adventofcode problems are fun but random (not like a curriculum). Other websites are focused on competitive programming, which I'm not sure is comp-sci enough (correct me if I'm wrong).


So for beginner/intermediate alg/data structures challenges with explanations and solutions I would recommend the following resources:

1. Read the Algorithm Design Manual.

2. Practice coding simple and then more advanced algorithms on sites like Coderbyte (aimed at beginners -> intermediate) and HackerRank (a bit more mathy).

3. Read as many algorithm explanations and code examples as you can on GeeksforGeeks.

4. Try to implement basic algorithms yourself (shortest path, minimum spanning tree, DFS/BFS, tree traversals, different sorting algorithms, min/max heap, etc.) and learn their running times.

* Also this article may be helpful for you: https://medium.com/coderbyte/how-to-get-good-at-algorithms-d...
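As a concrete starting point for item 4, here's a minimal BFS sketch in Python (the graph representation and names are just illustrative, not from any of the sites above). BFS over an adjacency list runs in O(V + E):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search over an adjacency-list graph.

    Visits each vertex and edge once, so it runs in O(V + E) time.
    Returns the nodes in the order they were visited.
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO queue gives breadth-first order
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:  # mark on enqueue to avoid duplicates
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(graph, "a"))  # → ['a', 'b', 'c', 'd']
```

Swapping the deque for a stack (pop from the end) turns this into an iterative DFS, which is a nice exercise in itself.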


Not on the list, but you might find http://interviewbit.com more structured


Yeah, +1 for InterviewBit. The courses there are way more structured and gamified.


I can recommend 2 books, "Competitive Programmer's Handbook" and "Algorithms Unlocked" - neither of these books is too mathy


Leetcode seems like something you are looking for as well


http://www.hakank.org/picat/ has the most examples, good explanation of the problems, and elegant, short solutions.



halite.io reminds me of screeps on steam


Ah yeah I forgot to mention screeps which is awesome!


Some more I actually should have mentioned:

https://flexboxfroggy.com (CSS)

https://www.datacamp.com (data science challenges)

https://www.codingame.com/start (learn to code via fun games)

https://screeps.com (cool MMO AI game w/ JS coding)


Great list, thank you


This report by IEEE is actually a very good collection of topics and research currently being done in this field. They discuss specific problems and topics like the neocortex, IIT [1], neuromorphic engineering [2], pose cells, SLAM [3], and more.

For anyone interested in research being done in AI, ML, consciousness, etc., these are great articles written by actual scientists and researchers who are doing the work (as opposed to the hyperbolic articles or tweets you see online these days about AI).

[1] https://en.wikipedia.org/wiki/Integrated_information_theory

[2] https://spectrum.ieee.org/semiconductors/design/neuromorphic...

[3] https://spectrum.ieee.org/robotics/robotics-software/why-rat...


I'd like to add this article (with somewhat of a cheeky title): "The impossibility of intelligence explosion"

https://medium.com/@francois.chollet/the-impossibility-of-in...

It's written by François Chollet, creator of the Keras DL framework. The article shows how the environment and intelligence are interrelated. Some of the points are expressed in the IEEE Special Report as well (sensorimotor integration). There are many connections to the recent push towards simulation in AI - Atari, OpenAI Gym, AlphaGo, self-driving cars, etc. It's a new front of development, where simulation will create playgrounds for AI.

The main point is that intelligence develops in the environment, and is a function of the complexity of the environment and task at hand. There is no general intelligence, or intelligence in itself, only task-related intelligence. An intelligence explosion can't happen in the void (or in a brain in a vat, or in a supercomputer that has no interface to the world, and can't act on the world). The author concludes that AGI is impossible based on environment and task limitations.

An interesting take because we're focusing too much on reverse engineering "the brain" as if it exists in itself, outside the environment. We should learn about meaning and behaviour from the environment and the structure of the problems the agent faces. Meaning is not "secreted" in the brain.


A related 'trend' in Cognitive Science is called Embodied Cognition [1]. Intelligence develops together with the body that it inhabits, and, as you mention, the environment that this body is living in.

Maybe dolphins are as 'intelligent' as we are, but having fins instead of hands and living in a marine environment just makes it impossible for them to invent fire, printing presses, and automobiles.

[1] https://en.wikipedia.org/wiki/Embodied_cognition


Far-fetched conclusions based on a misinterpretation of the "no free lunch" theorem. The theorem doesn't forbid an intelligence that is universal only within our own universe, since our own universe doesn't present a uniform distribution of all possible problems.


I tend to believe that a hole in François' argument is that a sufficiently powerful computer could simulate an environment inside, where the AI could thrive.


A hole? In that Swiss cheese? It's hardly surprising. He uses Chomsky's hypothetical language device to support his idea that "there couldn't be general intelligence," while there's a provably optimal intelligent agent (AIXI) and its computable approximations. He uses the self-improvement trends established by entities which aren't intelligent agents (military empires, civilization, mechatronics, personal investing) to predict what the self-improvement of an intelligent agent will be like. It's a pure London-streets-overflowing-with-bullshit level of prediction.

I am not an extreme singularitarian. There are hard physical limits making exponential progress and singularity impossible. But bad arguments are bad arguments; it doesn't matter if the conclusions are appealing.


This reminds me of "What Is It Like to Be a Bat?" by Nagel.

https://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F


This is great; the docs/tutorial are pretty clear as well. Shameless plug: I made a JS library [0] for creating and playing board games a while ago. It allows you to focus on gameplay and logic without worrying about the user interface so much, and to create games like chess [1] and simulations like Game of Life [2] pretty easily.

[0] https://github.com/danielborowski/jsboard

[1] https://danielborowski.github.io/site/jsboard/demo/demo8/

[2] https://danielborowski.github.io/site/jsboard/demo/demo9/


Shameless plug: I loved Project Euler and TopCoder when I was in high school. In 2011, there weren't really any nice, easy-to-use, interactive websites that allowed me to solve coding/algorithms challenges online easily, so that winter break of my first year of college I made coderbyte.com for people to solve challenges online. Been running it ever since, but now there are like 20 similar websites as well.


Thank you! I love coderbyte. I especially love that you can immediately go back, correct your answer and get a perfect score. There is nothing more frustrating than a competitive screening tool that masquerades as an educational tool. Emphasizing mastery over getting it right on the first try is something that I deeply appreciate.


I haven't used Coderbyte yet, but I love the idea of going back and getting credit for showing what you learned / how you improved. (I mostly use Codewars which has a similar feature though.)

I've always wished academia worked this way. Seems logical to me. Everything else we do is iterative.


I really enjoyed your recap, nice job. I think vue.js, GraphQL, and wasm will become more popular in 2018. We'll see though :)


Agreed, I'm very excited to see how they grow.


"I understand that a machine could kill people. But will a machine want to kill people? That seems to go back to that philosophical notion of consciousness."

This is exactly the issue with a lot of journalists and people talking about AI/ML. There is no WANT or DESIRES from the programs, there is no self-awareness where the programs ask themselves if what they are doing is right or wrong. They are doing exactly what they were programmed to do.

With his example of adversarial networks, one network is learning to detect fake news and the other is generating fake news--they are working towards their reward functions and optimizing the weights to reach the goal. It's math, that's all it is. It's so silly to bring up consciousness, desires, awareness, fears, etc. in these AI programs.
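The "it's just math" point can be made concrete with a toy sketch (this is purely illustrative, not any real GAN or product: a scalar "reward" maximized by gradient ascent, which is the bare skeleton of what these training loops do):

```python
def reward(w):
    # Toy reward function: a single peak at w = 3.0.
    return -(w - 3.0) ** 2

def grad(w):
    # Analytic derivative of the reward with respect to the weight.
    return -2.0 * (w - 3.0)

# "Training": repeatedly nudge the weight uphill on the reward surface.
w = 0.0
for _ in range(100):
    w += 0.1 * grad(w)  # gradient ascent step with learning rate 0.1

print(round(w, 3))  # → 3.0
```

The program "wants" nothing; it just follows the update rule until the number stops changing. Real networks do the same thing with millions of weights and a learned (rather than hand-written) gradient.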


It's not silly, if the AI agent is in the real world. The limit to RL currently is simulation. If an AI could use model based RL to do reasoning and planning, it would surely have motivation and be capable of doing harm.

Its motivation could be programmed in by its creators, or it could just act to prolong its existence. After all, all human motivations stem from the same source - the will to live, which is the result of evolution, in our case. AI's trained with evolutionary techniques could have the same will to live. Evolution just means pruning those who are not competent enough or have enough will to live.


Exactly. AI could easily be programmed to "desperately" want resources or survival, and kill people as a means to an end.

Deception is an evolved trait as well.


Will to life can be hand programmed into AI, but it could also emerge on its own, if we train the agent with evolutionary techniques in the real world.


Don't you think that

> they are working towards their reward functions and optimizing the weights to reach the goal

applies to the human mind too? Consciousness, when you think about it, might simply be a layer of indirection that any sufficiently complex network is capable of, if you give it a notion of time and events.

Certainly it's not super hard to make reinforcement learning "bots" that appear as conscious as small animals are.


I agree! Especially if those agents are many and cooperate to solve tasks - that would require an ability to model the state of other agents, which in turn can be used to model its own state, gaining self awareness.


That statement I made may apply to the human mind and our thoughts, but some would argue it may not: https://en.wikipedia.org/wiki/Computational_theory_of_mind#C...

So if someone were to disagree with the view of CTOM, then they can argue that no, consciousness must be something else.


CTOMs suffer from the Russell's-teapot problem. All of them suppose magical stuff is there. Of course, none of it can be found in any inspection of the actual brain...


For most of the AI programs out there, what you write is true.

But there are a rapidly growing number of AI programs that are deliberately designed to be generally intelligent in one or more respects. For some of these AGI systems, some or many of the characteristics that you dismiss, such as awareness, fear, etc., are in fact present and functional aspects of the system. Do these systems so far achieve animal-like capabilities or complexity? Not really, that I know of. But with so much being invested, growing awareness of the field of AGI, and mainstream acknowledgement that it is a feasible goal, I believe we will see some impressive, somewhat animal-like AGI demos by the end of 2019.

'Consciousness' sadly is still usually used as a sort of fuzzy stand-in for some type of supposedly supernatural aspect of life, but it can be usefully defined as something like the present functional top-level most deliberate integrated awareness of senses and high-level thoughts. This type of thing is being designed into real systems.

For some examples, search for 'AGI' to learn about the field, or for example more specifically something like 'AGI NARS emotion'.


> There is no WANT or DESIRES from the programs, there is no self-awareness where the programs ask themselves if what they are doing is right or wrong.

But when left to their own devices, they do some pretty interesting things. Since the mid 1990s I've been somewhat obsessed with the Zodiac Killer. On the History Channel, they put together some of the best code breakers and started inputting data into a supercomputer they were programming to think like the Zodiac, to break several of his ciphers which have yet to be broken.

Turns out one night, it started writing poetry in the vein of the Zodiac:

http://www.history.com/news/this-supercomputer-was-programme...

At the age of love, a love deranged,

A beauty from romantic interest,

The thought of love, and love became estranged.

Then the words of love became obsessed.

Surrounded by the troubled, by the thieving,

Confused and bruised and poisoned by the master,

Confused and blinded by the helpless scheming,

Confused and blinded by the dreadful slander…


It seems most people have a very anthropocentric view of intelligence, where intelligence could be defined as "acting like a human." So an AI is only really considered intelligent if it has a bunch of human-like traits, which are unessential to intelligence.

A public image problem arises when people hear offhand about advancements in the field: their imaginations run wild with thoughts of omnipotent and vengeful machines, inspired by movies like Terminator. They assume all researchers are actively working toward creating a human-like entity, like Skynet.

Basically we have a looming public image crisis. Better informing the public about what real AIs are, and what they're designed to do, should help quell a lot of misplaced fears. When you remove the veneer of Hollywood, you're left with a machine that's just crunching numbers on a dataset, which isn't very glamorous or frightening.


"Doing what they were programmed to do" isn't really satisfying when nobody knows what the program is. When a Google self driving car stops suddenly, nobody can say "why." So nobody can know if and how a neural network has the "desire" to kill.


What exactly do you think is the difference between organic brains and silicon-based ones that result in so-called "WANT or DESIRES" as you put it? Is there something special in the material itself that enables this, or merely a difference in how the bits are moving around?


> They are doing exactly what they were programmed to do.

Just like you are doing exactly what you were programmed to do by your DNA.

The self-awareness is a strange loop where the program itself encodes a simplified model of itself within itself.

That's all self-awareness is. Meta-information. Humans just have REALLY GOOD meta-information that rolled off the evolutionary assembly line over billions of years.

It almost sounds like you're positing a magical `soul` that comes in at some point, and for some reason it will only inhabit squishy wet bio-machines like us but not hard metal-machines. You're not doing that, are you?


I'm not arguing for a soul or any sort of emergent properties of self-awareness or consciousness even, I just believe that in our current state of AI/ML/neural networks/deep learning, all of these techniques and algorithms simply optimize for some goal (or reward function like the author stated). Maybe one day we'll build a self-aware AI within a "hard metal-machine" but in my opinion we're far off from that.

If we build something one day that acts like a philosophical zombie [1] for example, then yes topics like self-awareness, consciousness, theory of mind will all be at the forefront of conversations regarding AI. But right now all we have are algorithms that optimize towards a goal and we have journalists jumping in on the hype talking about robots inventing languages, the desires these robots have, and robots teaming up to destroy humans [2].

[1] https://en.wikipedia.org/wiki/Philosophical_zombie

[2] https://www.maxim.com/news/elon-musk-artificial-intelligence...


Since you're someone that ponders philosophical problems, at least a little bit, then I ask you to consider this. Whether you come up with an answer or not is irrelevant, and I myself have no point other than to get you to question consciousness BEFORE we've built the thing ourselves, which is a very real potential in the coming decades, and something we should consider and take care to avoid pitfalls beforehand.

Here is my problem:

I am defining consciousness here as a feeling of existence. It does not necessarily entail strong self-awareness to the extent that humans have.

You have this kind of consciousness. Correct?

If you have consciousness, then we can assume I have consciousness as well. That is something I hope we can agree on, unless I am one of your philosophical zombies, in which case you are a solipsist and a true hardcore Empiricist, which is completely fine, but in that case we can unfortunately move no further.

Assuming I have consciousness, I ask you, does the average intelligent primate have consciousness?

Assuming so, does the average dolphin have consciousness?

Assuming so, does the average sheep have consciousness?

Assuming so, does the average rat have consciousness?

Assuming so, does the average lizard have consciousness? Do we lose consciousness here?

Does the average insect have consciousness?

Does a tree?

Does my houseplant?

Does a microbe?

Does my computer?

Does my thermostat?

Does a home?

Does a lake?

Does the universe?

Is there a line somewhere here that separates the conscious from the unconscious?

If there is, can we prove it, or is it as elusive as Bigfoot?

If we can't prove it, and the burden of proof lies on the thing being posited, then I have one final question: Why believe in the line at all?


I agree with what you said at the beginning of your comment, and what I think you're getting at is something like panpsychism [1], where consciousness is an inherent property in the universe, as opposed to an emergent view where at some point there is no consciousness in a given system and once it becomes complex enough consciousness emerges or develops.

I don't necessarily have an argument for either of them right now. But you asked me "why believe in the line at all?" -- I'm not making an argument for what consciousness is, I'm only claiming that the current state of the art algorithms we have developed and AI we currently build have nothing to do with topics like self-awareness, consciousness, theory of mind, etc. One day we'll get there I'm sure.

[1] https://en.wikipedia.org/wiki/Panpsychism


> There is no WANT or DESIRES from the programs, there is no self-awareness where the programs ask themselves if what they are doing is right or wrong. They are doing exactly what they were programmed to do.

It's unclear - are you talking about the current state of the art, or what you think is in-principle possible?


current state of the art

