> And what is the parameter to brighten/darken/saturate/desaturate?
The parameter for brighten/darken is a positive/negative linear adjustment to the L(ightness) value of the color's Lab representation. (Spectra converts the color to Lab, makes the change, and then converts back to the original color space.)
Likewise, the parameter for saturate/desaturate is a positive/negative linear adjustment to the c(hroma) value of the color's Lch representation.
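The mechanics above can be sketched in pure Python, with no dependency on spectra or colormath. The sRGB↔Lab conversion constants (D65 white point) are standard, but the function names here are mine for illustration, not spectra's actual API:

```python
import math

# sRGB <-> CIE Lab (D65 white point), to illustrate how brighten/darken
# shifts L(ightness) and saturate/desaturate shifts c(hroma).

D65 = (0.95047, 1.0, 1.08883)

def _srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def rgb_to_lab(r, g, b):
    r, g, b = (_srgb_to_linear(v) for v in (r, g, b))
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / n) for v, n in zip((x, y, z), D65))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_rgb(L, a, b):
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    x, y, z = (finv(f) * n for f, n in zip((fx, fy, fz), D65))
    r = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    g = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    b2 = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    return tuple(_linear_to_srgb(v) for v in (r, g, b2))

def brighten(rgb, amount):
    """Add `amount` to L in Lab, then convert back (may leave the sRGB gamut)."""
    L, a, b = rgb_to_lab(*rgb)
    return lab_to_rgb(L + amount, a, b)

def saturate(rgb, amount):
    """Add `amount` to chroma c = hypot(a, b) in Lch, then convert back."""
    L, a, b = rgb_to_lab(*rgb)
    c = math.hypot(a, b)
    if c == 0:
        return rgb  # gray has no hue to saturate
    scale = (c + amount) / c
    return lab_to_rgb(L, a * scale, b * scale)
```

Brightening a mid-gray `(0.5, 0.5, 0.5)` by 10 lifts all three channels equally, while saturating a reddish color increases its chroma without touching its lightness.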
Thanks again for flagging, mark-r. I've updated Spectra to address this issue more explicitly. Per colormath[0], colors are allowed to go out of gamut. Colors now have `mycolor.rgb` and `mycolor.rgb_clamped` properties; `mycolor.hexcode` uses the clamped values. Are there other ways you'd like to see the out-of-gamut issue addressed? Open to suggestions.
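For anyone curious what the clamping amounts to, here's a minimal pure-Python sketch. The function names are illustrative stand-ins, not spectra's actual property implementations:

```python
def clamp_rgb(rgb):
    """Clamp each channel of a (possibly out-of-gamut) 0-1 RGB triple to [0, 1]."""
    return tuple(min(1.0, max(0.0, v)) for v in rgb)

def hexcode(rgb):
    """Hex string for a 0-1 RGB triple, clamping first so the code is always valid."""
    return "#" + "".join("{:02x}".format(round(v * 255)) for v in clamp_rgb(rgb))

# Brightening/saturating can push channels outside [0, 1]:
out_of_gamut = (1.08, 0.52, -0.03)
print(clamp_rgb(out_of_gamut))  # (1.0, 0.52, 0.0)
print(hexcode(out_of_gamut))    # #ff8500
```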
Thanks! Colormath does an amazing job handling the mathematical conversions between color spaces, but has (at least to me) a somewhat verbose API. Spectra wraps that API in something simpler, and adds scales, ranges, blending, brightening/darkening, and saturation/desaturation.
Spectra uses grapefruit for parsing HTML names and hexcodes.
I'm most excited about the color scales. It looks like a helpful tool for situations where the standard colormaps in matplotlib/pyplot don't work well. Is that an appropriate use case?
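For what it's worth, a color scale is conceptually just a function from a numeric domain to interpolated colors. A naive sketch of the idea in pure Python, interpolating per-channel in RGB between two stops (this is not spectra's implementation, which works through colormath's color spaces):

```python
def scale(stops, domain=(0.0, 1.0)):
    """Return a function mapping a number in `domain` to a color linearly
    interpolated (per RGB channel) between two 0-1 RGB `stops`."""
    (r0, g0, b0), (r1, g1, b1) = stops
    lo, hi = domain
    def at(x):
        t = (x - lo) / (hi - lo)
        t = min(1.0, max(0.0, t))  # clamp inputs outside the domain
        return (r0 + (r1 - r0) * t,
                g0 + (g1 - g0) * t,
                b0 + (b1 - b0) * t)
    return at

# Yellow-to-red "heat" scale over a 0-100 domain:
heat = scale([(1.0, 1.0, 0.0), (1.0, 0.0, 0.0)], domain=(0, 100))
print(heat(50))  # midpoint: (1.0, 0.5, 0.0)
```

Naive RGB interpolation can produce muddy midpoints; interpolating in a perceptual space like Lab generally gives smoother-looking ramps.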
I also like seeing nbviewer used for documentation; I haven't seen that very often.
> ‘Cloud Face’ is a collection of cloud images that are recognized as human face by a face-detection algorithm. It is a result of computer’s vision error, but they often look like faces to human eyes, too. This work attempts to examine the relation between computer vision and human vision.
Is it really computer error? Most of us probably saw the face of Einstein in the one cloud photo because there was, in fact, a strong resemblance. If an artist paints a caricature or an abstract image, we can often identify the original subject. That's not error. There is something else going on here.
It is an error in the sense that these are not in fact faces, but clouds. Humans can also see the shapes of faces in clouds, but we are capable of understanding that they are not real faces. The algorithm cannot recognize the difference, which results in misclassification; it is failing its intended purpose.
> Throughout the past six years, the project has been stuck in the beta phase. According to McKinsey, "for past 5 years, Release 1.0 is consistently projected to be 24-32 months away."
Also in there are companies like Dell, IBM, Accenture, Battelle, Deloitte, Honeywell, AT&T, Rockwell, GE...
Hardly companies run by blind immigrant union women. And lest you dismiss those top ones as "just" defense, note that I've seen presentations by both General Dynamics and Northrop Grumman talking about their work in the health sector. There's something especially strange about booths featuring pictures of fighter planes at epidemiology conferences.
How dare the government ask (and pay) companies to engage in ethical and socially conscious ways! They should be paying bottom dollar and making sure that the service workers make minimum wage, just like they do in the public sector!
I don't think you can extrapolate from a food court on a military base to a $300M project to rebuild Social Security IT. Don't let your political biases blind you to reality.
Structured archive: https://docs.google.com/spreadsheets/d/1wZhPLMCHKJvwOkP4jucl...
Most recent edition, sent this morning: http://tinyletter.com/data-is-plural/letters/data-is-plural-...