It's advantageous to have both environments pumping out new technologies, obviously.
The environment at Google and MS is one of near-unlimited resources (legal, monetary, intelligence, computing, patents) for creating a working product that can be mass-produced quickly if it's a success.
A large company might have some red tape to go through, but it will also already have all of the HR and management systems set up, so project managers can focus on creating the product.
The Project Tango device, for example, went from concept to thousands of fully working devices in developers' hands in just 1.5 years. The low-level drivers are integrated with Android in a way that no third-party startup could manage. Acquiring that kind of talent (a former DARPA director, Kinect engineers) would not have been possible without large-company resources.
Ideas are not what's important; we all have brilliant ideas that millions of other people have had too. You have to be able to take those ideas and actually create a product or service that people can use.
This could be done without changing anything else on the site. Just create a new page, accessible via the options, that lists the last 10 votes with a delete button next to each.
This is awesome, great job, I think you've just given humans echolocation.
If someone were given a similar device at an early age, semi-permanently attached to them, would their brain be able to build a map of the room?
There have been previous attempts, but the Tango device didn't exist then, so the hardware was bulky and usually required a backpack.
I definitely think it would be possible. I find it interesting to think about eyesight in the same way -- even though an image is projected onto our retinas, there's not a little homunculus looking at our retinas to see the image; it gets translated into electrical signals that our brain interprets. There seems to be a great amount of plasticity in the brain that lets us remap senses and view tools as extensions of our bodies.
There has been some prior work on using depth cameras for navigation for the visually impaired. For example, a smart cane can detect objects beyond its reach and give haptic feedback. Microsoft Research did some work with putting the Kinect on a helmet and giving audio cues for navigation (http://research.microsoft.com/pubs/184208/VisionForTheBlind....). What I'm interested in is taking that sensory input and making it less immediate by giving it a memory -- letting it build up a picture of an environment rather than needing to point a device at something in order to know something about it.
One big issue is figuring out how to sonify depth information so it's useful. One simple approach is to do a sort of sweep across each frame from left to right, letting each row of the image correspond to a certain pitch. I don't think this is a good approach, as it seems very vision-oriented and is likely to sound just like noise. Maybe it would work for someone using it from birth, but for relatively fast training I doubt it. Other approaches do more interpretation -- Microsoft's work detected faces, walls, and floors, giving each a distinct sound for easier recognition.
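To make the sweep idea concrete, here's a minimal sketch of that naive mapping, assuming the depth frame arrives as a 2D numpy array of distances in meters (all names and parameters here are illustrative, not from any actual implementation):

    import numpy as np

    def sonify_depth_frame(depth, sample_rate=44100, sweep_secs=1.0,
                           f_lo=200.0, f_hi=4000.0, max_depth=5.0):
        # Naive left-to-right sweep: each image row gets a fixed pitch
        # (top rows high, bottom rows low), and nearer pixels play louder.
        rows, cols = depth.shape
        freqs = np.geomspace(f_hi, f_lo, rows)      # one pitch per row
        n = int(sample_rate * sweep_secs / cols)    # samples per column
        t = np.arange(n) / sample_rate
        chunks = []
        for c in range(cols):
            # Nearer objects (small depth values) get higher amplitude.
            amps = np.clip(1.0 - depth[:, c] / max_depth, 0.0, 1.0)
            col_tone = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
            chunks.append(col_tone)
        wave = np.concatenate(chunks)
        peak = np.abs(wave).max()
        return wave / peak if peak > 0 else wave    # mono float in [-1, 1]

Even in this toy form you can see the problem: every column plays dozens of simultaneous sine tones at once, which is exactly the wall of noise described above.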
> What I'm interested in is taking that sensory input and making it less immediate by giving it a memory -- letting it build up a picture of an environment rather than needing to point a device at something in order to know something about it.
Have you tested this approach with blind users? I think building a picture of an environment is a good task to offload to the brain and a good skill to have/develop for blind people.
> One big issue is figuring out how to sonify depth information so it's useful. One simple approach is to do a sort of sweep across each frame from left to right, letting each row of an image correspond to a certain pitch. I don't think this is a good approach, as it seems very vision-oriented and is likely to sound just like noise.
I think this is quite a good approach, but agree it has a high learning curve. However, that high learning curve might reward the end user with a system that is more flexible. By preprocessing the input and generating audio based on detected patterns, you limit the applicability of such a system. That being said, a generic system that gives "unfiltered" output, with additional cues you can set (for example, for fast-approaching objects), might be useful.
This is a great project. To address your last point, I don't think it would just be noise once the user habituated to it. Check out this project [0][1] that maps audible data to vibrations and seems to have successfully remapped sense data by taking advantage of the plasticity of the human brain.
Another similar project lets people "see" with their tongues [2]
I definitely think using binaural (3D) audio could give users a much more complete and useful sense of what they're "seeing", so I wish you luck. Great idea.
Humans can already learn echolocation [1]. Still, there are many possibilities for machine-assisted perception/translation. I think the post correctly identifies finding good ways to aurally represent the information to be one of the challenges.
Somewhat, but our brains haven't had millions of years to develop the ability to "see" sound like bats can.
A human generated "click" is much different than a computer generated series of sounds which represent an accurate scan of the objects in front of the user.
Changes in pitch representing changes in depth are much easier for the human brain to process than trying to hear how sound waves bounce off of objects.
Plus, this will work in public while I'd guess that the human clicking noises require a quiet environment and have significant limitations.
Imagine a blind person walking freely down the sidewalk: the device makes no sound until there's a sign or building within 20 feet, then emits a scanning (or single-point) tone that gets progressively louder as the object gets closer. Effortless echolocation.
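A rough sketch of how that volume ramp could be driven, assuming the device reports the nearest obstacle's distance each frame (names and thresholds are just illustrative):

    def proximity_gain(distance_ft, max_range_ft=20.0, min_range_ft=2.0):
        # Silent beyond max range, full volume at min range or closer,
        # linear ramp in between.
        if distance_ft >= max_range_ft:
            return 0.0
        if distance_ft <= min_range_ft:
            return 1.0
        return (max_range_ft - distance_ft) / (max_range_ft - min_range_ft)

Feed the nearest obstacle's distance from each depth frame into something like this and use the result to scale the tone's amplitude.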
The device also has GPS, WiFi, gyroscope, and cellular geolocation capabilities, so it would know when the user has reached the end of the sidewalk, if outside.
Restrict airspace access around densely tethered wind farms, and if they're needed closer to cities, we can figure that out later.
I was thinking of some sort of central tether connecting a large group of them, with the power either beamed directly down or daisy-chained between balloons high up in the air. It would be an amazing technical and engineering accomplishment, but it might provide energy with minimal airspace disruption, though you wouldn't really want planes flying under them either, just in case.
Maintenance could involve detachable cables (which would deploy a parachute and have bright lights in case a plane was passing by at that time) so that a cable falls gently against the central tether and can be reattached using a heavy-lifting drone or blimp/drone hybrid.
I block it using my hosts file and unblock it (and HN) about once a week. Then I read through the top content from that period all at once, bookmark the interesting items (in a folder), and usually never click the bookmarks again.
Facebook I'll check about once a month (also blocked in hosts).
I got into the habit of instinctively typing "news." or "redd" into my browser as soon as I sat down; the lost productivity was too much, and the hosts solution seems to work for me.
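For anyone who hasn't tried the hosts trick: you just point the domains at localhost in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows), e.g.:

    127.0.0.1  reddit.com
    127.0.0.1  www.reddit.com
    127.0.0.1  news.ycombinator.com

Comment the lines out with # when you want your weekly unblock.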
For PC games, I have to uninstall them. I'm actually glad games are 60 GB these days, because it means I have at least one day to reconsider playing a game: if at any point I decide I don't want to waste 50-100 hours playing, I can cancel the download and delete the files. Yes, I'm getting Google Fiber next year, and this method won't be available because a game will download in about 10-20 minutes. I'm looking into a time-delayed safe, or one of those cheap plastic tubs with a timer, for storing an SSD that holds those games.
My address bar knows me too well: "r" and "n" are enough for autocompletes. Reddit's blocked in hosts and HN is getting really close to going back.
Reddit pretty much ended the period of my life where I read books, because that's where my Reddit time came from. Since we're talking about years of my life, that's a pretty big regret.
Amy and Sarah were two of the most popular girls' names when I was growing up.
My wife dug in her heels pretty early in the pregnancy for a name that I later learned was a character on a TV show. Judging by the site, she's not the only one. The peak is still under a hundred, so not as embarrassing as, say, characters from 90210.
Bob Greene wrote years ago about going to the "Linda Hop", a convention of boomer women all named Linda. The article traced the name back to a song popular in the early 1950s. As far as male vs. female goes, it seems more common to name sons after fathers than daughters after mothers.
It's given out like candy; if not the 100k one, you can surely get several thousand dollars' worth of credits for server hosting, either on Azure or through Amazon.
Can confirm - with my MSDN license I get a $150/mo credit for any development/test instances. As long as I have a valid license (and they offer the deal) I'll get the credit.