Co-Founder of Geocodio here who designed the scoring system :)
I suppose you could call it inspired by Bayesian inference, since we're combining multiple pieces of independent evidence into a score, though that makes it sound a bit fancier than it is and we aren't actually using Bayes' theorem. But it's possible I had it in the back of my head from a game theory class I took long ago.
But for the fun of it, let's model it that way:
P(spam | disposable email domain, IP address, etc.) = [P(disposable email domain, IP address, etc. | spam) × P(spam)] / P(disposable email domain, IP address, etc.)
Or something like that.
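To make the framing concrete, here's a toy naive-Bayes-style sketch of combining independent signals into a spam probability. This is NOT Geocodio's actual code; the signal names, likelihoods, and prior are all made up for illustration.

```python
def spam_probability(signals, likelihoods, prior=0.05):
    """Combine independent evidence into P(spam | evidence), naive-Bayes style.

    signals:     dict of signal name -> bool (was the signal observed?)
    likelihoods: dict of signal name -> (P(signal | spam), P(signal | not spam))
    prior:       assumed base rate of spam signups (made-up number)
    """
    p_spam, p_ham = prior, 1.0 - prior
    for name, observed in signals.items():
        p_given_spam, p_given_ham = likelihoods[name]
        if observed:
            p_spam *= p_given_spam
            p_ham *= p_given_ham
        else:
            p_spam *= 1.0 - p_given_spam
            p_ham *= 1.0 - p_given_ham
    # Normalizing here plays the role of the denominator in Bayes' theorem.
    return p_spam / (p_spam + p_ham)

# Hypothetical likelihoods: (seen among spammers, seen among legit users)
likelihoods = {
    "disposable_email_domain": (0.60, 0.01),
    "suspicious_ip":           (0.40, 0.05),
}
score = spam_probability(
    {"disposable_email_domain": True, "suspicious_ip": False},
    likelihoods,
)
```

With those made-up numbers, one strong signal (a disposable email domain) pushes the score well above the 5% prior even though the IP looks clean.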
Also — it's a delight to have one of Patrick's articles mentioned in connection with this!
To make those tax cuts appear paid for, they changed how R&D expenditures -- which include software development work on software that has ALREADY been commercialized -- have to be amortized. It was a budgetary sleight-of-hand.
Most small companies are LLCs, not Corporations, and therefore pay personal tax rates.
As a result, software and other tech companies now have phantom profits because engineer salaries and other R&D expenses are no longer fully deductible in the year they're paid.
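A worked example of the "phantom profit" effect. The revenue and salary figures are made up; the 10% first-year deduction reflects my understanding of the post-2022 rule (5-year amortization of domestic R&D with a half-year convention), which you should verify against the actual statute.

```python
# Illustrative numbers only.
revenue = 1_000_000
engineer_salaries = 800_000

# Before the change: salaries fully deductible in the year paid.
taxable_profit_before = revenue - engineer_salaries   # 200,000

# After the change: assuming 5-year amortization with a half-year
# convention, only 10% of the salaries are deductible in year one.
year_one_deduction = int(engineer_salaries * 0.10)    # 80,000
taxable_profit_after = revenue - year_one_deduction   # 920,000
```

The company's cash profit is $200k, but it owes tax on $920k of "profit" in year one.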
This bill changes it so that the amortization requirement doesn't take effect until after 2025. (The fix is expected to be renewed at that point.)
This bill retroactively fixes it for 2022-2023 as well as covering 2024-2025. A permanent fix is too expensive from a Congressional Budget Office scoring perspective, but lots of initiatives are renewed every few years. It's just how Congress works.
A more precise term might be "computer vision," which is indeed a field of AI, but it is a less-known term, even among reasonably technical laypeople (such as might be reading a business publication like the MIT Tech Review).
There's a 200+ page Code of Points[1] that determines how much each skill is worth, as well as each error. At the elite level, there's a panel of 9 judges for each event, divided into judges who tally the execution score, judges who tally the difficulty score, and a reference panel that effectively audits the execution score.[2] The judges' marks within a panel are averaged, which is how you can get three-decimal scores. Here is an example of a judge's score sheet.[2]
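A simplified sketch of how averaging panel marks produces three-decimal scores. The marks are made up, and the real FIG procedure (dropping high/low marks, reference-panel corrections) is more involved than this.

```python
def panel_average(marks):
    """Average a judging panel's marks (simplified: no marks dropped)."""
    return sum(marks) / len(marks)

# Hypothetical execution marks from three E-panel judges (10.0 minus deductions).
execution_marks = [8.5, 8.3, 8.3]
# Hypothetical difficulty value from the D panel.
difficulty = 5.8

final_score = difficulty + panel_average(execution_marks)
# Averaging 25.1 / 3 = 8.3666..., so the final score lands on three
# decimals: 14.167 after rounding.
```

Because the divisor (the panel size) rarely divides the summed marks evenly, scores naturally come out to three decimal places.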
(Nevertheless, there are still issues with bias, where athletes or teams who benefit from a particular judge's biases might have their errors more gently scored. For example, on bars, gymnasts are required to hit a handstand position within a 10-degree range of vertical. But a judge who is affected by bias, or by a certain viewing position, may not notice that a particular handstand was at 15 degrees, and not deduct accordingly. This is where AI judging may be particularly helpful.)
This whole affair seems rather silly IMO. For all the hullabaloo about AI killing art, I feel like this kind of fine-grained scoring does more damage to art than AI ever will.
We have more about our data sources here: https://www.geocod.io/data-sources/