Hi! Quick update on H-Matched, the website tracking AI's progress toward human-level performance. Since my last post, I've added 6 new benchmarks (now 20 total) and made the visualization interactive! The site shows how AI's 'catch-up time' has shrunk dramatically - from 6+ years with ImageNet to just months now. Explore the timeline with release dates, solve dates, and links to papers. Would love to hear your thoughts and whether there are any benchmarks I've missed!
Thank you for your reply. The time to human level is simply the time between the initial release of the benchmark and when an AI system reached human-level performance on it. :) I am in the process of adding sources for all "solved" dates... Here, for instance, is the source for the Winograd challenge human-level performance:
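To make the metric concrete, here is a minimal sketch of how catch-up time could be computed from a benchmark's release date and its solve date. The dates below are illustrative placeholders, not sourced values from the site:

```python
from datetime import date

# Illustrative (name, release date, human-level solve date) tuples.
# Exact dates are hypothetical; the real site sources each solve date.
benchmarks = [
    ("ImageNet", date(2009, 6, 20), date(2015, 12, 10)),
    ("RecentBenchmark", date(2023, 1, 1), date(2023, 9, 1)),
]

for name, released, solved in benchmarks:
    catch_up_days = (solved - released).days
    print(f"{name}: {catch_up_days / 365.25:.1f} years to human level")
```

This is just the subtraction of two dates; the interesting part is the trend you get when plotting it across many benchmarks.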
Hi! I wanted to share a website I made that tracks how quickly AI systems catch up to human-level performance on benchmarks. I noticed this 'catch-up time' has been shrinking dramatically - from 6+ years with ImageNet to just months with recent benchmarks. The site includes an interactive timeline of 14 major benchmarks with their release and solve dates, plus links to papers and source data.