I'm glad to see the SIG improving the baseline audio codec. SBC is/was a weird little codec.
It's like a dumbed-down MPEG-1 Layer-1 audio, but with only 8 sub-bands. Like Layer-1, the filterbank isn't perfect-reconstruction, so the encode/decode cycle introduces aliasing noise even before any quantization takes place.
(At a previous job I had to make a proprietary extension that improved the stop-band attenuation of the prototype filter; otherwise the THD+N looked so bad that the speaker companies didn't even want to talk to us.)
The only psychoacoustic model is an optional static biasing of which sub-bands get bits allocated.
On the other hand, you can encode/decode entirely in software on a 24 MHz ARM7TDMI. (Confusingly, the ARM7TDMI implements the ARMv4T architecture, not ARMv7.)
I don't think most people literally believe every man is stronger than every woman.
I think it's much more common that someone will say "of course there are some women who are stronger than most men" and then go right back to, for example, tending to hire men over women for a job that requires lifting 60 lbs, without bothering to strength-test female applicants.
How do people rationalize this? When people start out by lying to prospective customers, do they have a plan for when they will start telling the truth instead?
Microsoft's Z3 is an SMT solver rather than just a SAT solver, and it has an option to minimize a quantity of interest. I haven't worked with it much, but it seemed faster than a binary search on 'k' when I was playing around with a non-TSP problem.
I've had the experience of talking with a friend and having a conversation along the lines of "Do you remember the guy that was in that movie?". If there's enough shared context, I might "know" exactly who they are talking about, but not the name of the movie or the name of the actor. I'm internally apprehending some kind of abstract "node" to which properties are attached, but not immediately available for recall.
I'm not thinking about the phrase 'that guy in that movie'. I'm not thinking about the name, because I don't (yet) recall it. I apprehend a connection between a person-node and perhaps also a recent-experience node, the latter being an unsymbolized apprehension of the recollection of having shared an experience.
If I focus on the apprehension, I can begin to recall its properties.
To abuse a computer science analogy, it's as though there's some kind of abstract associative cache between nodes, linking them to other nodes but referring only to their object-ids. To further abuse the analogy, raw object-ids are a private type that have very few public methods. Mostly:
- more_or_less_the_same_thing_as(oid1, oid2) returns bool
- randomly_select_a_few_related_oids(oid) returns set<oid>
- recall_concrete_properties(oid, timeout) returns maybe<propertyset>
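To stretch the analogy one step further, that tiny opaque-handle interface might look something like this. This is purely a playful sketch; none of these names describe any real system, and the internals are invented just to make the "private type with few public methods" idea concrete.

```python
# Playful sketch of the opaque object-id analogy: oids are handles
# whose internals nothing outside the cache can interpret.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Oid:
    _key: int  # "private": the rest of the brain never sees inside

class AssociativeCache:
    def __init__(self) -> None:
        self._links: dict[Oid, set[Oid]] = {}
        self._props: dict[Oid, dict] = {}

    def more_or_less_the_same_thing_as(self, a: Oid, b: Oid) -> bool:
        return a == b

    def randomly_select_a_few_related_oids(self, oid: Oid) -> set[Oid]:
        related = list(self._links.get(oid, set()))
        return set(random.sample(related, min(3, len(related))))

    def recall_concrete_properties(self, oid: Oid,
                                   timeout_s: float = 1.0) -> Optional[dict]:
        # Recall can fail (or "time out"): return None instead of raising.
        # The timeout is part of the analogy only; this sketch ignores it.
        return self._props.get(oid)
```

The key design point of the analogy: equality and association work directly on the handles, but concrete properties (a name, a face) require a separate, fallible recall step.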
These apprehensions don't have an appearance or a sound, but they have a... brain feel? They have connections between them, and they have rough quasi-shapes, and can "fit" or "not fit" into certain other apprehended "structures".
Depending on what mode my brain is running in, I can generally render these apprehensions into words. Sometimes I can't seem to get them to cross the idea->word barrier.
The half-remembered movie is a great example. I see an actor whose name I don't recall. I remember that I've previously seen him doing something in some other movie. I do not at any point think the words "he played that FBI agent who was a reformed alcoholic chasing down a serial killer who leaves little whiskey bottles at the scenes of his murders" but all that is suddenly right there in my mind. I didn't think of any of those words, but all that is right there in my head.
Is recalling memories thinking? If not, if I then act on those memories, is that thinking?
The unimproved value of land is still a function of its proximity to other desirable things. Barring remodeling, it's not generally the value of the building on a piece of land that goes up over time.
I understand what you're getting at with Uber and Airbnb. I'm not especially familiar with Zenefits. I'd thought that Theranos, though, was a major straight-up fraud.
Thank you, I remember using this at some point but forgot it. I'm still probably going to trash my account but this does make it much more tolerable. This and i.reddit.com make life on mobile a bit more pleasant.