I don’t think it necessitates panpsychism. You can still think of consciousness as a byproduct of a specific configuration, or just a sufficient quantity, of neurons or whatever.
How it emerges remains a mystery, but it’s possible that some configurations of them do not produce the effect of consciousness.
For example, a million neurons is not enough to turn the lights on, but X amount is.
Self-aware consciousness is a pretty high bar, but how about the ability to experience suffering? The ability to experience reward? That question very quickly turns into "what is experience", or rather, "what isn't experience".
I don't find any of this intuitive, nor do I have answers to the ethical questions that follow (besides not eating animals, which I already don't). But I find it revealing how most people will fight tooth and nail to maintain that there is some magical threshold separating dead matter from precious, conscious life, without having given the topic much thought at all.
The really nasty question is: do animals have a(n intuitive or other) concept of the future, and/or of causality? Do they feel dread? And when they experience suffering, do they understand what it means to them?
Of course, life in the wild is not pretty. Few wild animals are apex predators in their respective territories, so their deaths are usually not peaceful; being hunted, with close calls with death almost daily, is an integral part of their lives, so they probably have an evolved way to deal with the psychological burden of this.
> so they probably have an evolved way to deal with the psychological burden of this.
I like that idea, but I don't understand how natural selection would favour animals that are less stressed about predators. Are you implying that the gazelle actually enjoys the thrill as it is running away from the lion?
No, I'm most certainly not implying it enjoys it, but if something is not useful for survival and procreation, then it eventually gets pruned out, given enough selection pressure.
If dread, stress, trauma, shock, etc. lead to worse survival outcomes than a calmer baseline set-point, then the calmer disposition is likely to emerge.
For example, animals that form herds stick to the herd even when a family member is isolated from it by predators, because they gain nothing by trying to somehow save the isolated member. (That would probably just lead to more of them getting mauled to death.)
Sure, these animals are pretty defensive, but after a point they let it go. Do they suffer from it? Yeah, sure, they have a very similar stress response, but to my knowledge they are rarely traumatized by these encounters. (Probably because they don't ponder; they don't think about what that encounter meant. It meant nothing to them, it's just life in the wild.)
For example, Sapolsky spent years observing baboons, took a lot of blood samples, and measured cortisol levels, and those animals lead very stressful lives. ( https://news.stanford.edu/news/2007/march7/sapolskysr-030707... ) But humans and other primates are probably the exception: the more cognition one is capable of, the more things one can worry about.
Yeah, it is pretty remarkable. I consider it noncontroversial at this point that many of humans' favorite meals are certainly capable of a lot of suffering.
I think that was never a real question. Lobsters might or might not feel pain, okay, but bovines and other mammals certainly do. And birds are (can be) pretty intelligent (eg parrots).
So that's why they are supposed to be kept, raised, and slaughtered in relatively humane conditions (e.g. free range, and a painless death).
Indeed the recent corporate tax cuts didn't go nearly far enough. Cutting them to zero is pretty close to a free lunch in terms of economic growth, from what I've read.
Yes, and I believe that to resolve that morally and efficiently, the government should lower taxes on all. It's not Amazon that's getting the advantage; it's the government that can't exercise its power over it.
My issue here is that expiring JWTs involve adding state! The whole point of JWTs is stateless authentication, so I’ve never understood the advantage over sessions unless revoking tokens is never an option.
Invalidation of any sort, including token revocation, is fundamentally a stateful operation. Either you are deleting session state or statefully blacklisting something that's a packet of self-contained state (e.g. JWT by id). Heck, even expiration just reduces the revocation into the universally shared state that is time.
My point is, you always have state. If you care about that state being anything but _the current time_ (e.g. just letting tokens expire and not worrying about revocation), you need shared storage.
If you always have state, what's the point of making tradeoffs to get closer to "statelessness"?
I see clearly why some small subset of applications benefits from carefully minimizing shared state among components. It is not at all clear to me why pseudo-statelessness is a good default.
I'm not sure whether you're actually asking for my opinion or just making a point, but IMO the "statelessness" most people describe with respect to web services simply means that a single request already carries as much as possible of the information required (outside of the current time, securely fetched/validated public keys, etc.) to process the request.
The point is that it's easier for distributed systems as a whole when clients hold onto their client-specific state, rather than a minimal token that the server, likely being load balanced for availability, must exchange for that state with yet another service in yet another stateful/authenticated request/response manner.
My subtext is that gains from moving state from the server to the client are often swamped by the first serverside database round trip that has to be done to service requests. So, you see ActiveRecord Rails applications using JWT, and it's like: the benefits of "statelessness" are not rationally why this got deployed.
When are you expiring tokens ahead of their natural expiration date? When someone logs out?
Isn’t that state? If you add state to something stateless you are generally on your own.
[edit: also, I disagree with the notion that something with an expiration date is stateless. It has two states. It's just that they look like idempotence]
Perhaps revoking is the better term here. At least that's the use case I had in mind. If you want the ability to revoke a token, you have to have some list of black listed tokens somewhere.
They're supposed to be used once per request and to be short-lived. But that's not specified in the thread above.
The previous person mentioned logout, so I assumed we were talking about session tokens (which I understand is a misuse of JWTs), and that's why he/she mentioned it.
But if we're talking about one-time auth tokens, then yeah, you don't need to expire them ahead of their expiry time, and it's plainly a non-issue.
> My issue here is that expiring JWTs involve adding state!
I don't agree. The expiration timestamp is not state, nor is a nonce/token id. Moreover, the specs allow servers to arbitrarily reject tokens, which means servers can arbitrarily request token refreshes. This means that any argument about a nonce being state is entirely irrelevant and of no practical interest.
Why would you arbitrarily reject a token refresh? The question here is which token to reject. That's where the state comes in. If a token is compromised, you have to know which one. Hence, stateful.
> Why would you arbitrarily reject a token refresh?
You've misread what I've said. The server can trigger token refreshes by rejecting the request. According to the JWT workflow, that triggers the client to request a new token and retry the request.
> The question here is which token to reject.
That isn't much of a question, because servers are free to reject any token arbitrarily. They can, however, ignore specific tokens that cease to be valid, such as expired tokens or tokens which have already been used. None of those scenarios involves any change to the token's state.
I'm failing to see how this is complicated. Imagine the following:
1) A token gets compromised.
2) You know which token. You need to revoke access.
3) You introduce state by storing said token on disk / in memory somewhere.
Key takeaways:
1) The authentication system (not the token) is now stateful.
2) You now have to check this data store to properly allow authentication.
3) A core benefit of JWT (stateless auth) is gone.
Tokens are single-use and short-lived. Once a token is used it's revoked.
> 2) You know which token. You need to revoke access.
>
> 3) You introduce state by storing said token on disk / in memory somewhere.
You don't. You simply reject the token and let the client refresh its token. That's it. There is no state. Compliant clients already expect tokens to be rejected for no apparent reason. They are access tokens.
Why exactly are you assuming that an access token is not single-use or even short-lived, particularly in bearer token protocols specifically designed so that tokens are ephemeral and single-use?
> 1) The authentication system (not the token) is now stateful.
Even if you shoehorn in your definition of statefulness, that's entirely irrelevant. The whole point of an authentication system is, following your line of reasoning, to implement a stateful system. So not only is that line of reasoning absurd, it also completely misses the point of implementing an authentication system, not to mention that it ignores a whole class of attacks. And for what, exactly?
One way to look at it is, presumably you will have far fewer revocations than active sessions, so why optimize for that case?
Instead of an "active sessions" table you could just maintain a list of revoked sessions and check each incoming request against that list. You can make revocations expire shortly after the JWT was set to expire.
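A minimal sketch of that revocation-list approach (hypothetical in-memory version for illustration; in a load-balanced deployment the list would live in shared storage such as Redis so every server sees the same entries):

```python
import time

# Hypothetical in-memory revocation list: token id (jti) -> original expiry.
revoked = {}

def revoke(jti, exp):
    """Blacklist a token id; the entry only needs to outlive the token."""
    revoked[jti] = exp

def is_token_allowed(jti, exp, now=None):
    """Reject expired or revoked tokens, pruning entries once moot."""
    now = time.time() if now is None else now
    # Entries for tokens that have passed their natural expiry can be
    # dropped: the expiry check alone rejects them, so the list stays small.
    for j, e in list(revoked.items()):
        if e < now:
            del revoked[j]
    if exp < now:
        return False
    return jti not in revoked
```

The upside is that the extra storage scales with the (presumably small) number of revocations rather than with the number of active sessions.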
Dependent types are types which depend on values. As an example, think of the cons procedure: (List A, A) -> List A. It takes a List of A's and an A and returns a List of A's. With dependent types you can write this as (List n A, A) -> List n+1 A. This tells us that cons takes a List of A's with length n and an A returns a List of A's with length n+1.
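A rough sketch of that signature as a length-indexed list in Lean 4 (hypothetical names, relying on auto-bound implicit arguments for n):

```lean
-- A list whose length is part of its type.
inductive Vec (α : Type) : Nat → Type
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Consing onto a length-n vector yields a length-(n+1) vector,
-- and the compiler checks this: the annotation Vec Nat 3 below
-- would be a type error with any other number of elements.
def three : Vec Nat 3 := .cons 1 (.cons 2 (.cons 3 .nil))
```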
Edwin Brady shows off some examples in Idris here[1]. I thought the matrix example was really impressive.
When I see this definition, I scratch my head and wonder why this isn't just a different way to introduce object orientation, generics, C++ parameterized templates, and such.
The key difference from C++ templates is that in C++ you can't have a type parameterized by a value that's only known at runtime, e.g. a std::array<int, N> where N is read from stdin. In a dependently typed language, you can. Object orientation is a different matter entirely; in the sense that it implies Java-style class-based inheritance, it's almost the opposite in spirit to dependent types, since late binding (virtual methods) makes it possible to call methods such that the compiler has no idea which method will be invoked, making it hard to reason about the code at compile time.
Imagine that you take a type from C or Java or something and add the ability to convert it into another type. I'm not talking about casting an int to a float, but actually manipulating the type without doing anything on the data side.
Now allow arbitrary programs to be types. We'll call these type programs.
Now imagine a game where you take one type program and try to convert it into another while proving to the compiler that it still does the same thing.
For example, say you had a function foo (int x) { return x+1; }, and the type program A “foo(a+1)”, and the type program B “foo(foo(a))”. Your job would be to change A to look exactly like B, or vice versa, using commands built into the language itself.
Why would you want to do this? Because it's been shown using a certain type of mathematics, called constructive mathematics, you can create type programs to represent any hypothesis you would like: m + n = n + m, for example. Then you can use the built in commands to change a type into another type to represent a proof of the hypothesis.
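In Lean, for example, the hypothesis m + n = n + m is literally a type, and a proof is just a term (program) of that type; here one can reuse the library lemma Nat.add_comm:

```lean
-- The proposition is a type; the term on the right is a proof of it.
-- The compiler accepting this definition is the verification.
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```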
In this way you can prove hypotheses related to your program (or anything, really), as long as you construct the right types and use the built-in commands to prove the hypotheses true.
You really need to learn a lot of things to understand how this translation from types to proofs works, so I didn't try to explain that here.
I find the whole thing fascinating myself, but haven't gone too deeply into it.
(BTW, if anyone more knowledgeable sees something I've written that is wrong here, please correct me, for my own information more than anything else)
In a very abstract way, it is a way to encode relationships between types in the type system.
You can express things like how you're allowed to consume elements from a data structure, how many elements a specific data structure is allowed/expected to have, what their form looks like, and so forth, all at compile time.
These work as a kind of proof that certain functions can only be called if the conditions are met.
Most typed programming languages will allow you to form a pair of type (Integer, Integer), and reject programs which try to return something like (Integer, Float).
Dependently typed languages might allow you to declare a type (n, m) where n and m are integers, and n < m, and will reject programs which attempt to return pairs with n > m.
The key step here is that this rejection is done during compilation, and so while your program is running you can be absolutely sure that n < m.
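A sketch of such a pair type in Lean 4 (hypothetical names), using a subtype to bundle the two numbers with a proof that the first is smaller:

```lean
-- A pair of naturals carrying a compile-time proof that p.1 < p.2.
def OrderedPair := { p : Nat × Nat // p.1 < p.2 }

-- Accepted: the compiler can decide that 1 < 2.
def ok : OrderedPair := ⟨(1, 2), by decide⟩

-- Rejected at compile time: there is no proof of 3 < 2.
-- def bad : OrderedPair := ⟨(3, 2), by decide⟩
```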
Most typed languages have two separate levels: expressions (and statements in imperative languages) and types. Dependent types unify those two levels into one. This allows one to use values in types, or vice versa, effectively making types first class in the sense that functional programming makes functions first class.
As an example this is Haskell's core language:
data Expr b
= Var Id
| Lit Literal
| App (Expr b) (Arg b)
| Lam b (Expr b)
| Let (Bind b) (Expr b)
| Case (Expr b) b Type [Alt b]
| Tick (Tickish Id) (Expr b)
| Type Type
| Cast (Expr b) Coercion
| Coercion Coercion
data Type
= TyVarTy Var
| LitTy TyLit
| AppTy Type Type
| ForAllTy !TyCoVarBinder Type
| FunTy Type Type
| TyConApp TyCon [KindOrType]
| CastTy Type KindCoercion
| CoercionTy Coercion
If you look closely you'll notice quite a lot of duplication between the two (see the first 4 and the last 2 constructors). Haskell is effectively using the same language (lambda calculus) to describe types and expressions, though with different syntax.
And this is Lean's core language (a dependently typed language):
inductive expr
| var : nat → expr
| sort : level → expr
| const : name → list level → expr
| mvar : name → name → expr → expr
| local_const : name → name → binder_info → expr → expr
| app : expr → expr → expr
| lam : name → binder_info → expr → expr → expr
| pi : name → binder_info → expr → expr → expr
| elet : name → expr → expr → expr → expr
| macro : macro_def → list expr → expr
Imo dependent types provide a surprising answer to the question "What is the best type system?". Different languages have different type systems, and as they evolve to become more expressive, at a certain point the type system becomes Turing complete (e.g. C++, TypeScript, Haskell, ...). So we end up with two separate languages: the language itself to describe programs and the type system to describe types.
So what's the best type system? Dependent types' answer: the language itself. Instead of having two separate languages, have a single language that acts as its own type system, powerful enough to describe both programs and types. Then types become first class - you can pass them to a function as parameters, or return them as values, or use any function at compile time.
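For instance, in Lean a plain function can compute a type from a value (hypothetical example):

```lean
-- A function from a number to a Type: types are ordinary values here.
def Tuple (α : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => α × Tuple α n

-- Tuple Nat 2 unfolds to Nat × (Nat × Unit), so this typechecks.
def pairish : Tuple Nat 2 := (1, 2, ())
```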
In physics progress usually comes in the form of unification. Two seemingly separate phenomena (say electricity and magnetism) turn out to be described by a single theory. Dependent types provide such a unification between expressions and types in programming languages. Imo definitely a step forward in the noisy sea of programming languages.
That's not even touching the mathematical side of things, Curry-Howard correspondence, proof assistants and using dependent types for proving theorems.
When the type of a function mentions its parameter, it's a dependent type (just like a normal function that mentions its parameter is a "dependent value", so to speak).
Usually, though, one speaks of dependent types only when the parameter can range over values, not just types.
I don't see what the use case is then. Intermittent background jobs or something?